
Kubernetes Security: 6 Best Practices for the 4C Security Model


Kubernetes is the most popular orchestrator for deploying and scaling containerized applications in production. While Kubernetes makes it easy to start new workloads, this convenience comes at a cost: the system isn’t secure by default, so your clusters and users could be at risk.

Container attacks are on the rise, and insecure Kubernetes installations are a magnet for attackers. Fortunately, you can protect your clusters by adhering to a few best practices. You can run containers in production without security issues if you consciously implement protections around your deployments.

Kubernetes Security Best Practices

These steps help harden your environment to minimize your attack surface and defend against incoming threats. Implementing them all will give you the greatest protection by restricting network activity, encrypting data at rest, and preventing vulnerable workloads from reaching your cluster.

  1. Use RBAC
  2. Protect the Control Plane
  3. Harden Your Nodes
  4. Add Network Security Policies
  5. Use Pod-Level Security Features
  6. Harden Your Workloads

You can also take a look at how Spacelift helps you manage the complexities and compliance challenges of using Kubernetes. Anything that can be run via kubectl can be run within a Spacelift stack. Find out more about how Spacelift works with Kubernetes, and get started on your journey by creating a free trial account.

1. Use RBAC

Role-Based Access Control (RBAC) is a built-in Kubernetes feature. It lets you control what individual users and service accounts can do by assigning them one or more roles. Each role allows a combination of actions, such as creating and listing Pods but not deleting them.

You should use RBAC to assign appropriate roles to each user and service account that interacts with your cluster. Developers may not need as many roles as operators and administrators, while CI/CD systems and Pod service accounts can be granted the bare minimum permissions needed to run their jobs.

RBAC protects your cluster if credentials are lost or stolen. An attacker who acquires a token for an account will be restricted to just the roles you’ve specifically assigned.

Roles must be as granular as possible to have the greatest security effect. Over-privileged roles, configured with too many permissions, are a risk because they grant attackers extra capabilities without providing any benefit to the legitimate user.
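
As a minimal sketch, the Role and RoleBinding below grant a hypothetical ci-runner service account permission to read and create Pods in a single namespace, but nothing else (the names and namespace are examples, not part of any standard setup):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo-namespace
  name: pod-creator
rules:
  # Allow reading and creating Pods, but not deleting or modifying them
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: demo-namespace
  name: ci-runner-pod-creator
subjects:
  # Bind the Role to a single service account used by a CI job
  - kind: ServiceAccount
    name: ci-runner
    namespace: demo-namespace
roleRef:
  kind: Role
  name: pod-creator
  apiGroup: rbac.authorization.k8s.io

Provided no other bindings exist for the account, a stolen ci-runner token can’t be used to read Secrets, delete workloads, or touch resources in other namespaces.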

2. Protect the Control Plane

The Kubernetes control plane is responsible for managing all cluster-level operations. It exposes the API server, schedules new Pods onto Nodes, and stores the system’s current state. Breaching the control plane could give attackers control of your cluster.

Implementing these strategies will help lock down the control plane and limit the effects of any compromise that occurs.

  1. Restrict access to etcd. Kubernetes uses etcd to store your cluster’s data. This includes credentials, certificates, and the values of ConfigMaps and Secrets you create. etcd’s central role makes it an attractive target for attackers. You should isolate etcd behind a firewall so that only your Kubernetes components can reach it. You can achieve this by running etcd on a dedicated Node and using a network policy engine like Calico to enforce traffic rules.
  2. Enable etcd encryption. Data within etcd is not encrypted by default. This option can be turned on by specifying an encryption provider when you start the Kubernetes API server (a sample configuration is sketched after this list). If you’re using a managed Kubernetes service in the cloud, check with your provider to see if encryption is already active and whether you can enable it. Encryption will help protect the credentials, secrets, and other sensitive information within your cluster if the control plane is successfully compromised.
  3. Set up external API server authentication. The Kubernetes API server is usually configured with simple certificate-based authentication, which becomes difficult to configure and maintain for large groups of users. Integrating Kubernetes with your existing OAuth or LDAP provider tightens security by separating user management from the control plane itself. You can use your provider’s existing controls to block malicious authentication attempts and enforce login policies such as multi-factor authentication. Anonymous Kubelet authentication should be disabled too, so that unauthenticated requests to the Kubelet API are rejected and only clients that can authenticate, such as the Kubernetes API server, are accepted. Set the --anonymous-auth=false flag when you start Kubelet if you’re maintaining your own cluster.
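
If you manage your own control plane, the encryption provider mentioned in point 2 is configured with an EncryptionConfiguration file passed to the API server via its --encryption-provider-config flag. The sketch below is a minimal example using the aescbc provider; the key value is a placeholder you would generate yourself:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # Encrypt new writes to Secrets with AES-CBC using the key below
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>  # placeholder, not a real key
      # Allow existing, unencrypted data to still be read
      - identity: {}

Existing Secrets stay unencrypted until they’re rewritten, so the Kubernetes documentation recommends re-saving them once encryption is active.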

3. Harden Your Nodes

Don’t forget your Nodes when you’re securing your environment. In a worst-case scenario, a basic misconfiguration at the Node level could compromise your entire cluster. An attacker who gains access to a Node can abuse the Kubelet agent that every Node runs, together with the Node’s credentials, to reach the Kubernetes API server with elevated privileges.

Securing Nodes used for Kubernetes is no different from protecting any other production server. You should monitor system logs regularly and keep the OS updated with new security patches, kernel revisions, and CPU microcode packages as they become available.

It’s best to dedicate your Nodes to Kubernetes. Avoid running other workloads directly on a Node, particularly network-exposed software, which could give attackers a foothold. Lock down external access to sensitive protocols such as SSH.
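
If you administer your own Nodes, part of this hardening can be captured in the Kubelet’s configuration file instead of command-line flags. The following is a brief sketch, not a complete hardening profile, showing settings that disable anonymous access, delegate authorization to the API server, and turn off the legacy read-only port:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false   # equivalent to --anonymous-auth=false
  webhook:
    enabled: true    # authenticate Kubelet API requests via the API server
authorization:
  mode: Webhook      # ask the API server to authorize Kubelet API requests
readOnlyPort: 0      # disable the unauthenticated read-only port
protectKernelDefaults: true  # error if kernel tunables differ from Kubelet defaults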

4. Add Network Security Policies

Kubernetes defaults to allowing all Pods to communicate freely with each other. Compromising one Pod could let bad actors inspect its surroundings and then move laterally into other workloads. If the Pod that serves your website is breached, for example, an attacker can send requests directly to your database Pods.

Kubernetes Network Policies defend against this risk by giving you precise control over the situations when Pods are allowed to communicate. You can specify at the Pod-level whether Ingress and Egress are allowed based on the other Pod’s identity, namespace, and IP address range. This lets you prevent access to sensitive services from containers that shouldn’t need to reach them.

Network policies are a Kubernetes resource type that you can apply to your cluster using YAML files. Here’s a simple example Policy that targets Pods with an app-component: database label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-access
spec:
  podSelector:
    matchLabels:
      app-component: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app-component: api

This policy declares that incoming traffic to your database Pods can only originate from other Pods in the same namespace with the app-component: api label. Nefarious requests made from your frontend web server’s Pod labeled as app-component: frontend will be rejected at the network-level.

It’s possible to set up a default namespace-level network policy to guard against Pods being accidentally omitted from your rules. Using an empty podSelector field will apply the Policy to every Pod in the namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: demo-namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

This policy denies both Ingress and Egress traffic for every Pod in the namespace unless another policy explicitly allows it: network policies are additive, so a more specific allow rule like the database-access example above still takes effect. You can block incoming or outbound traffic without affecting the other by changing the types listed under spec.policyTypes.

5. Use Pod-Level Security Features

Besides using network policies to prevent unwanted traffic flows, Kubernetes has a few other Pod-level capabilities that protect your cluster and applications from each other’s vulnerabilities.

All Pods should be assigned a security context that defines their privileges. You can use this mechanism to require that containers run with restricted Linux capabilities, avoid the use of HostPorts, and run with AppArmor, SELinux, and Seccomp enabled, among other controls. The security context can also define the user and group to run containers as:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  containers:
    - name: nginx
      image: nginx:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true

This configuration makes containers in the Pod run as the user with UID 1000 while preventing privilege escalation. The runAsNonRoot: true declaration enforces that the container will run as a non-root user, while runAsUser: 1000 requires it to run with UID 1000. You can omit the UID and use runAsNonRoot alone in situations where it doesn’t matter which user runs the container, provided it’s not root.

The container’s security context also stipulates that it will run with a read-only root filesystem. Setting readOnlyRootFilesystem to true prevents workloads inside the container from writing to the container filesystem. It limits what attackers can achieve if the container is compromised, as they’ll be unable to persist malicious binaries or tamper with existing files.

As the example shows, security contexts can also be set at the container level, as spec.containers[].securityContext, overriding any overlapping constraints set on their Pod. Some fields, including allowPrivilegeEscalation and readOnlyRootFilesystem, are only valid at the container level. This lets you further harden individual containers, or relax the rules for administrative workloads.

Pod Security admission rules, a replacement for the older PodSecurityPolicy system, allow you to enforce minimum security standards for the Pods in your cluster. This mechanism will reject any Pods that violate the configured Pod security standard, such as by omitting securityContext settings, binding HostPorts, or using HostPath volume mounts.

Policies can be enabled at the namespace or cluster-level. It’s good practice to use this system across your clusters. It guarantees that Pods with potentially dangerous weaknesses are prevented from running until you address their policy issues. If you need to run a Pod with elevated capabilities, you can use the Privileged profile to make your intentions explicit.
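
Pod Security admission is configured with namespace labels. As a simple sketch, the hypothetical namespace below rejects Pods that violate the baseline standard while only warning about and auditing violations of the stricter restricted standard:

apiVersion: v1
kind: Namespace
metadata:
  name: demo-namespace
  labels:
    # Reject Pods that violate the "baseline" Pod security standard
    pod-security.kubernetes.io/enforce: baseline
    # Admit, but warn about and audit, Pods that violate "restricted"
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted

Once the warnings are under control, you can raise the enforce label to restricted, or configure cluster-wide defaults through the admission controller’s configuration file.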

6. Harden Your Workloads

Kubernetes security isn’t all about your cluster. Applications need to be secure before they’re deployed, which means taking some basic steps to protect your container images.

First, run automated security scanning tools to detect vulnerabilities in your code. Next, use a hardened base image and layer in your source. Finally, scan your built image with an analyzer like Clair or Trivy to identify outdated OS packages and known CVEs. You should rebuild the image to incorporate mitigations if issues are present.

In security-critical situations, think about building your images from the empty scratch base image so you can have certainty about what’s present in your containers. This lets you assemble your entire filesystem without relying on an upstream image that could contain lurking threats.

It’s also vital to use the security mechanisms that Kubernetes provides. Sensitive data such as database passwords, API keys, and certificates shouldn’t reside in plain-text ConfigMaps, or be hardcoded into container filesystems, for example. Kubernetes Secrets let you store these values securely, independently of your Pods. They don’t encrypt values by default, though, so it’s vital to enable etcd encryption wherever they’re used. Secrets can also integrate with external datastores to save credentials outside your cluster.
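
As a brief illustration (the names here are hypothetical), a database password can be stored in a Secret and injected into a container as an environment variable instead of being baked into an image or a plain-text ConfigMap:

apiVersion: v1
kind: Secret
metadata:
  name: database-credentials
type: Opaque
stringData:
  password: change-me   # placeholder; supply the real value out of band
---
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: registry.example.com/api:1.0.0   # hypothetical image
      env:
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-credentials
              key: password

Mounting the Secret as a file via a volume is an alternative if you’d rather keep credentials out of environment variables.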

How The "4C" Model Protects Your Cluster

Using these best practices helps you adhere to the 4C model of cloud-native security. This simple mental model sets out four layers of security to address, each named with a word beginning with “C”:

  • Cloud – Vulnerabilities in your cloud infrastructure, such as not enabling 2FA for your Azure, AWS, or Google Cloud accounts, give attackers a route to all your resources. Protect yourself by regularly auditing your environment and choosing a reputable provider with a good compliance record.
  • Cluster – Apply recommendations such as etcd encryption, RBAC controls, and Node isolation to protect your cluster from attack. Cluster-level compromise will expose all your applications and their data.
  • Container – Individual containers can be strengthened by using hardened base images, scanning for vulnerabilities, and avoiding use of privileged capabilities. Malware inside a single container might be able to break out to access other resources and the host Node.
  • Code – Code inside containers should be audited, scanned, and probed as it’s created to identify any weaknesses. Don’t underestimate attackers: while Kubernetes provides strong container isolation when configured correctly, weaknesses in your code could let intruders exploit zero-day vulnerabilities to escape the container and control your cluster.

Applying security measures across all four layers will give you the greatest protection. Only targeting a few areas could create weaknesses that let attackers move inward through the 4C layers and then laterally across your resources.

Read more about Container security best practices and solutions.

Key Points

Kubernetes makes it easy to start and run containers, but deploying unhardened images to a default cluster configuration is a security risk. Your workloads and clusters need to be hardened to make them safe for critical production environments. While it can be tempting to skip these steps, you’ll be vulnerable to exploitation if bad actors find your cluster.

The steps we’ve shared above will help you use Kubernetes securely by following the 4C model of Cloud, Cluster, Container, and Code. Attackers can manipulate weaknesses in any of these areas to cause a security incident.

Although the techniques listed here are good starting points, they’re not an exhaustive list of measures you can take. You can uncover additional improvement opportunities by using automated tools like Kubescape. This policy-based cluster scanner detects Kubernetes misconfigurations, security vulnerabilities, and container image risks in a single scan of your cluster.

There’s growing industry interest in strengthening Kubernetes deployments, including from U.S. government bodies. The NSA/CISA Kubernetes hardening guide and the Center for Internet Security’s Kubernetes security benchmark are two references you can use to find threats in security-critical situations. We’ve also got more posts on the Spacelift blog about Kubernetes security, such as how to use secrets to store sensitive data in your cluster.

The Most Flexible CI/CD Automation Tool

Spacelift is an alternative to using homegrown solutions on top of a generic CI. It helps overcome common state management issues and adds several must-have capabilities for infrastructure management.

Start free trial
