
Kubernetes security best practices

Implementing Kubernetes security best practices involves remediating known security vulnerabilities during the build phase, reconfiguring misconfigurations during the build/deploy phase, responding to threats at runtime, and securing the entire Kubernetes infrastructure. 

These priorities reflect the top security concerns gathered in the latest State of Kubernetes Security report, which found that more than 50% of respondents worry about misconfigurations and vulnerabilities because of the highly customizable nature of Kubernetes and the complexity of container security. To overcome these security challenges and avoid slowdowns in application deployment, organizations must make securing Kubernetes a priority throughout the full development life cycle.

Containers are everywhere

Kubernetes is an open source container orchestration platform used to manage hundreds (sometimes thousands) of Linux® containers batched into Kubernetes clusters. It relies heavily on application programming interfaces (APIs) connecting containerized microservices. This distributed nature makes it difficult to quickly investigate which containers might have vulnerabilities, may be misconfigured, or pose the greatest risks to your organization.

The solution is to develop a comprehensive view of container deployments that captures critical system-level events in each container.

Images and registries can be misused

Container images are immutable templates used to create new containers. New images are typically built from a base image and extended to serve distinct purposes.

The solution is to set up policies determining how images are built, and how they’re stored in image registries. Base images need to be regularly tested, approved, and scanned. And only images from allowed image registries should be used to launch containers in a Kubernetes environment.
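
One way to enforce the registry rule, sketched below under the assumption of a recent Kubernetes version, is a ValidatingAdmissionPolicy that rejects pods whose images do not come from an approved registry. The registry hostname is a placeholder, a ValidatingAdmissionPolicyBinding (not shown) is still needed to put the policy into effect, and dedicated admission controllers or your platform’s built-in image policies can achieve the same result.

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingAdmissionPolicy
    metadata:
      name: allowed-image-registries
    spec:
      failurePolicy: Fail
      matchConstraints:
        resourceRules:
          - apiGroups: [""]
            apiVersions: ["v1"]
            operations: ["CREATE", "UPDATE"]
            resources: ["pods"]
      validations:
        # Reject pods whose containers reference images outside the approved registry.
        # (Init containers would need a similar check.)
        - expression: "object.spec.containers.all(c, c.image.startsWith('registry.example.com/'))"
          message: "Container images must come from registry.example.com."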

Uninhibited container communication

Containers and pods need to talk to each other within deployments, as well as to other internal and external endpoints to properly function. If a container is breached, the ability for a hacker to move within the environment is directly related to how broadly that container can communicate with other containers and pods. In a sprawling container environment, implementing network segmentation can be prohibitively difficult given the complexity of configuring such policies manually.

The solution is to track traffic moving between namespaces, deployments, and pods, and determine how much of that traffic is actually allowed.
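
For example, a Kubernetes NetworkPolicy can restrict which pods are allowed to reach a given workload. The sketch below is illustrative: the namespace, labels, and port are placeholders, and it assumes a network plugin that actually enforces NetworkPolicy objects.

    # Allow ingress to the "api" pods only from "frontend" pods in the same namespace.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: api-allow-frontend
      namespace: prod
    spec:
      podSelector:
        matchLabels:
          app: api
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend
          ports:
            - protocol: TCP
              port: 8080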

Default container network policies

By default, Kubernetes deployments do not apply a network policy to a pod—the smallest unit of a Kubernetes application. These network policies behave like firewall rules. They control how pods communicate. Without network policies, any pod can talk to any other pod. 

The solution is to define network policies that limit pod communication to only defined assets, and to mount secrets in read-only volumes within containers instead of passing them as environment variables.
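
As a minimal sketch of the second point (the pod, secret, and image names are hypothetical), a secret can be mounted as a read-only volume rather than exposed through environment variables:

    apiVersion: v1
    kind: Pod
    metadata:
      name: payments-api
    spec:
      containers:
        - name: app
          image: registry.example.com/payments-api:1.4.2
          volumeMounts:
            - name: app-credentials        # secret contents appear as files under /etc/credentials
              mountPath: /etc/credentials
              readOnly: true
      volumes:
        - name: app-credentials
          secret:
            secretName: app-credentials    # Secret object assumed to be created separately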

Container and Kubernetes compliance

Cloud-native environments facilitated by Kubernetes should (like all other IT environments) comply with security best practices, industry standards, benchmarks, and internal organizational policies—and prove that compliance. Sometimes this means adapting compliance strategies so Kubernetes environments meet controls originally written for traditional application architectures.

The solution is to monitor for compliance adherence and automate audits.
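
One way to automate part of that work, sketched here with the open source kube-bench tool (which audits nodes against the CIS Kubernetes Benchmark), is to schedule recurring checks. A real deployment typically needs additional host mounts and permissions as described in the kube-bench documentation, and commercial compliance tooling can serve the same purpose.

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: cis-benchmark-audit
    spec:
      schedule: "0 3 * * 0"            # weekly, Sunday at 03:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              hostPID: true            # kube-bench inspects processes on the host
              containers:
                - name: kube-bench
                  image: docker.io/aquasec/kube-bench:latest
                  command: ["kube-bench"]
                  volumeMounts:
                    - name: var-lib-kubelet
                      mountPath: /var/lib/kubelet
                      readOnly: true
              volumes:
                - name: var-lib-kubelet
                  hostPath:
                    path: /var/lib/kubelet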

Runtime

Containers are treated as immutable infrastructure: they aren't patched at runtime. Instead, running containers must be destroyed and recreated from an updated image. Compromised containers can run malicious processes, such as cryptomining or port scanning.

The solution is to destroy any breached container, rebuild an uncompromised container image, and then relaunch it.

Kubernetes security begins in the build phase by creating strong base images and adopting vulnerability scanning processes.

  • Use minimal base images. Avoid images that include operating system (OS) package managers or shells (which could contain unknown vulnerabilities), or remove those components in a later build stage.
  • Use trusted sources. Only choose base images that come from a trusted source and are hosted in a reputable registry.
  • Don’t add unnecessary components. As a rule of thumb, tools that are handy for debugging (such as curl or a shell) can become security risks when left in production images.
  • Use up-to-date images only. Keep images, and the component versions they include, up to date.
  • Use an image scanner. Identify vulnerabilities within images—broken down by layer.
  • Integrate security into CI/CD pipelines. Make image scanning an automated, repeatable step that fails continuous integration builds and generates alerts when severe, fixable vulnerabilities are found (see the sketch after this list).
  • Label permanent vulnerabilities. Add known vulnerabilities that can’t be fixed, aren’t critical, or don’t need to be fixed right away to an allow list. 
  • Implement defense-in-depth. Standardize policy checks and remediation workflows to detect and update vulnerable images.
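
The sketch below illustrates the CI/CD point using a GitLab CI job and the open source Trivy scanner; the image reference and severity threshold are assumptions, and the same idea applies to whatever CI system and scanner you already use.

    # .gitlab-ci.yml (excerpt): fail the build when severe, fixable vulnerabilities are found
    image_scan:
      stage: test
      image:
        name: aquasec/trivy:latest
        entrypoint: [""]    # clear the image entrypoint so the script below runs the scanner explicitly
      script:
        - trivy image --exit-code 1 --severity HIGH,CRITICAL --ignore-unfixed "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"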

Configure Kubernetes infrastructure security before workloads are deployed. That begins by knowing as much as possible about the deployment process, such as what’s being deployed (image, components, pods), where it’s deployed (clusters, namespaces, and nodes), how it’s deployed (privileges, communication policies, and the security controls applied), what it can access (secrets, volumes), and which compliance standards it must meet.

  • Use namespaces. Separating workloads into namespaces can help contain attacks, and limit the impact of mistakes or destructive actions by authorized users.
  • Use network policies. Kubernetes allows every pod to contact every other pod by default, but network segmentation policies and plugins that control ingress and egress traffic from the application can override that default.
  • Restrict permissions to secrets. Only mount secrets that deployments require.
  • Assess container privileges. Provide only the capabilities, roles, and privileges that allow the container to perform its function. 
  • Assess image provenance. Use images from known registries.
  • Scan deployments. Enforce policies based on the scans’ results. 
  • Use labels and annotations. Label or annotate deployments with the contact information of the team responsible for a containerized application to streamline triage.
  • Enable role-based access control (RBAC). RBAC controls user and service account authorization to access a cluster’s Kubernetes API server.
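
As a minimal RBAC sketch for the last point (the namespace, role, and service account names are hypothetical), the Role below grants read-only access to pods in a single namespace and binds it to one service account instead of granting cluster-wide permissions:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: payments
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: pod-reader-binding
      namespace: payments
    subjects:
      - kind: ServiceAccount
        name: payments-ci
        namespace: payments
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io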

Security incidents are less common when best practices for securing Kubernetes are applied during the build and deploy phases, but identifying and responding to runtime threats requires continually monitoring process activity and network communications.

  • Use contextual information. Use the build and deploy time information in Kubernetes to evaluate observed vs. expected activity during runtime in order to detect suspicious activity.
  • Scan running deployments. Monitor running deployments for newly discovered vulnerabilities, not just the vulnerabilities found when their container images were first scanned.
  • Use built-in controls. Configure the security context for pods to limit their capabilities (see the sketch after this list).
  • Monitor network traffic. Observe and compare live network traffic to what Kubernetes network policies allow to identify unexpected communication.
  • Use allow lists. Identify processes executed during the normal course of the app’s runtime to create an allow list.
  • Compare runtime activity in similarly deployed pods. Replicas with significant deviations require investigation.
  • Scale suspicious pods to zero. Use Kubernetes native controls to contain breaches by automatically instructing Kubernetes to scale suspicious pods to zero, or destroy and restart instances.
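
To illustrate the built-in controls mentioned above (pod and image names are placeholders), the pod specification below uses a security context to run as a non-root user, drop all Linux capabilities, forbid privilege escalation, and keep the root filesystem read-only:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hardened-app
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        seccompProfile:
          type: RuntimeDefault        # apply the container runtime's default seccomp profile
      containers:
        - name: app
          image: registry.example.com/app:1.0.0
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]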

Kubernetes security extends beyond images and workloads. Security includes the entire Kubernetes infrastructure: clusters, nodes, the container engine, and even clouds.

  • Apply Kubernetes updates. Run a supported Kubernetes version and apply updates promptly to pick up security patches and new security features.
  • Secure the Kubernetes API server. The Kubernetes API server is the gateway to the Kubernetes control plane. Disable unauthenticated/anonymous access and use TLS encryption for connections between kubelets and the API server. Audit logging should also be enabled for visibility into atypical API calls.
  • Secure etcd. etcd is the key-value store where Kubernetes keeps cluster state, including secrets. Restrict access to etcd to the API server and encrypt etcd traffic with TLS.
  • Secure the kubelet. The kubelet is the agent that runs on each node and manages its containers; securing it minimizes the attack surface. Disable anonymous access by starting the kubelet with the --anonymous-auth=false flag, and use the NodeRestriction admission controller to limit what the kubelet can access.
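
A kubelet configuration along those lines might look like the sketch below (typical hardening settings, not a complete configuration); the NodeRestriction admission controller itself is enabled on the API server with --enable-admission-plugins=NodeRestriction.

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false        # equivalent to --anonymous-auth=false
      webhook:
        enabled: true         # authenticate kubelet API requests against the API server
    authorization:
      mode: Webhook           # authorize kubelet API requests through the API server
    readOnlyPort: 0           # disable the unauthenticated read-only port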

Cloud security

Regardless of what type of cloud (public cloud, private cloud, hybrid cloud, or multicloud) hosts the containers or runs Kubernetes, the cloud user—not the cloud provider—is always responsible for securing the Kubernetes workload, including:

  • Container images: Sources, contents, and vulnerabilities
  • Deployments: Network services, storage, and privileges
  • Configuration management: Roles, groups, role bindings, and service accounts
  • Application: Kubernetes secrets management, labels, and annotations
  • Network segmentation: Network policies in the Kubernetes cluster
  • Runtime: Threat detection and incident response

Using containers and Kubernetes doesn’t change your security goals: to minimize vulnerabilities and security risks.

  • Embed security best practices early into the container lifecycle. Kubernetes security should allow developers and DevOps teams to confidently build and deploy applications that are production-ready.
  • Use Kubernetes-native security controls. Native controls keep security measures from colliding with the orchestrator.
  • Let Kubernetes context drive remediation. Use the context Kubernetes provides about where and how workloads are deployed to prioritize remediation efforts.

Securing cloud-native applications and the underlying infrastructure requires significant changes to an organization’s security approach—organizations must apply controls earlier in the application development life cycle, use built-in controls to enforce policies that prevent operational and scalability issues, and keep up with increasingly rapid release schedules.

Red Hat® Advanced Cluster Security for Kubernetes is a Kubernetes-native security platform that equips organizations to more securely build, deploy, and run cloud-native applications anywhere. The solution helps improve the security of the application build process, protect the application platform and configurations, and detect and respond to runtime issues. 
