
What is a Kubernetes Workload? Resource Types & Examples

Kubernetes workloads provide a powerful and automated way to deploy, manage, and scale containerized applications. They handle the underlying complexities of Pod lifecycle management, allowing you to focus on your application logic.

In this article, we will explore Kubernetes (K8s) workloads and the types of workload resources available. We will also show how K8s workloads actually work and discuss their use cases. Lastly, we will cover security tips you'll want to apply to your deployments and some general workload best practices.

What we will cover:

  1. What is a Kubernetes workload?
  2. Types of Kubernetes workload resources
  3. How do Kubernetes workloads work?
  4. Workload security in Kubernetes
  5. Kubernetes workloads best practices

What is a Kubernetes workload?

A Kubernetes workload is an application or service that runs on the platform. It’s a higher-level abstraction that groups one or more containers and defines how they are packaged, deployed, managed, and scaled. 

Workloads are defined using declarative configurations, making it easy to manage and reproduce environments consistently across different stages of development and production. Each workload has its own characteristics and use cases. They can range from simple, stateless web applications to complex, stateful distributed systems.

Custom workload resources can also be created in Kubernetes using custom resource definitions (CRDs) and custom controllers.
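For illustration, a minimal CRD might look like the following sketch (the `example.com` group and `CronScaler` kind are hypothetical names, not part of any real project):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: cronscalers.example.com   # Must be <plural>.<group>
spec:
  group: example.com
  names:
    kind: CronScaler
    plural: cronscalers
    singular: cronscaler
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string   # e.g., a cron expression the custom controller acts on
```

A custom controller would then watch `CronScaler` objects and reconcile the cluster toward the state they describe.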

What is the difference between a Kubernetes workload and a service?

You will typically use workloads and services together on your Kubernetes cluster. Whereas a workload is concerned with the lifecycle management of Pods and how your application runs on Kubernetes, a service exposes Pods and manages network traffic, defining how your application is accessed within the cluster.

A service is a logical construct that exposes a set of Pods as a single unit for network access. It acts as an abstraction layer for your application, allowing other services or clients to discover and interact with your application without needing to know the specific Pods that make it up.
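As a sketch, a Service that exposes Pods labeled `app: web-app` (the label used in the Deployment example later in this article) might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app      # Routes traffic to Pods carrying this label
  ports:
    - port: 80        # Port the Service exposes inside the cluster
      targetPort: 80  # Port the container listens on
```

Clients inside the cluster can then reach the application via the stable `web-app` DNS name, regardless of which individual Pods are running.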

Types of Kubernetes workload resources

The most popular types of components that make up a Kubernetes workload include Pods, Deployments, StatefulSets, DaemonSets, Jobs, and CronJobs, each serving different purposes for managing and scaling applications. 

Let’s see some examples.

1. Pods

Pods are the fundamental unit of deployment in Kubernetes. A Pod represents a group of one or more tightly coupled containers that share networking and, optionally, storage volumes. Workloads typically consist of one or more Pods.

An example YAML configuration for a Pod to run a web app:

apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: web-server
    image: nginx:latest  # Replace with the image of your web application
    ports:
    - containerPort: 80  # Expose port 80 within the container

2. Deployment controllers

Deployment controllers manage the creation, deletion, and scaling of Pods. 

Popular examples include Deployments (for managing stateless applications) and ReplicaSets (for maintaining a desired number of Pod replicas). A Deployment is a higher-level abstraction that manages a ReplicaSet to ensure that a specified number of Pod replicas are running at any given time.

Below, you can see an example Deployment YAML configuration for a web app deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3  # Number of replicas (instances) of the application to run
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-server
        image: nginx:latest  # Replace with the image of your web application
        ports:
        - containerPort: 80  # Expose port 80 within the container

Deployments use cases

Deployments are well-suited for stateless applications that can be scaled horizontally. They manage a set of identical Pods, ensuring the desired number of replicas is running and handling updates seamlessly. Deployments are commonly used for web applications (front-end and back-end services), microservices-based architectures, and any application where horizontal scaling is required. They provide isolation and scalability for each microservice component.

ReplicaSets use cases

ReplicaSets maintain a stable set of identical Pods, but they do not provide rolling updates or rollbacks on their own. In practice, you rarely create ReplicaSets directly, because Deployments manage them for you. Using a ReplicaSet directly makes sense only for simpler cases where rolling updates are not a concern or where you need custom update orchestration.
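A minimal ReplicaSet, sketched here for illustration, looks much like the Pod template portion of a Deployment:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-app-rs
spec:
  replicas: 3  # Desired number of identical Pods to keep running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-server
        image: nginx:latest  # Replace with the image of your web application
```

If a Pod matching the selector fails or is deleted, the ReplicaSet controller creates a replacement to restore the desired count.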

3. Job controllers

Jobs run one or more Pods to completion, typically for batch processing tasks or one-off work such as data processing, backups, or database migrations.

Let’s see an example Job specification:

apiVersion: batch/v1
kind: Job
metadata:
  name: web-app-job
spec:
  template:
    metadata:
      name: web-app-pod
    spec:
      containers:
      - name: web-server
        image: nginx:latest  # Replace with the image of your batch task
        ports:
        - containerPort: 80  # Expose port 80 within the container
      restartPolicy: Never  # Job Pods are not restarted upon failure

Jobs use cases

Jobs are suitable for running batch processing tasks or one-off jobs that need to run to completion. They are commonly used for data processing (e.g., data analysis, video transcoding), backups, periodic tasks, and other batch-oriented workloads (e.g., database migrations, applying configuration changes).
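For batch workloads specifically, the `completions` and `parallelism` fields control how many Pods run in total and at once. A sketch (the image and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: data-processing-job
spec:
  completions: 5   # Run 5 Pods to successful completion in total
  parallelism: 2   # Run at most 2 Pods at the same time
  template:
    spec:
      containers:
      - name: worker
        image: busybox:latest  # Replace with your batch-processing image
        command: ["sh", "-c", "echo processing chunk"]
      restartPolicy: Never
```

Kubernetes tracks successful completions and stops creating Pods once the target is reached.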

4. CronJob controllers

Similar to Jobs, CronJobs schedule repeated executions based on a defined cron expression (e.g., every hour, daily, etc.).

Here is an example CronJob specification:

apiVersion: batch/v1  # batch/v1beta1 was removed in Kubernetes 1.25
kind: CronJob
metadata:
  name: web-app-cronjob
spec:
  schedule: "*/5 * * * *"  # Cron expression defining the schedule (every 5 minutes)
  jobTemplate:
    spec:
      template:
        metadata:
          name: web-app-pod
        spec:
          containers:
          - name: web-server
            image: nginx:latest  # Replace with the image of your web application
            ports:
            - containerPort: 80  # Expose port 80 within the container
          restartPolicy: OnFailure  # Job Pods are restarted only on failure

CronJobs use cases

CronJobs are specifically designed for scheduling tasks to run at specified intervals using cron expressions. They are commonly used for periodic maintenance tasks (e.g., periodic data backups, scheduled report generation), data processing, and other scheduled activities.

5. StatefulSets

StatefulSets are ideal for applications that require stable, unique identifiers and persistent storage. They ensure that each pod in the set maintains its identity and can be easily identified and addressed. To manage the network identities of StatefulSets, you must first create a headless service in Kubernetes.

StatefulSets are commonly used for deploying databases (e.g., MySQL, PostgreSQL), message queues (e.g., Kafka, RabbitMQ), and other stateful applications that require stable storage and identity.

Let’s see an example that sets up a basic StatefulSet object to run three replicas of an application:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app  # Headless Service that governs the Pods' network identity
  selector:
    matchLabels:
      app: app
  replicas: 3
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: nginx:latest
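As noted above, a StatefulSet needs a headless Service to manage network identity. A sketch of one for this example (setting `clusterIP: None` is what makes it headless):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  clusterIP: None  # Headless: no virtual IP; each Pod gets a stable DNS name
  selector:
    app: app
  ports:
    - port: 80
```

With this in place, the Pods become addressable as `app-0.app`, `app-1.app`, and `app-2.app` within the namespace.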

6. DaemonSets

DaemonSets are useful for deploying services or agents that need to run on every node in the cluster. They ensure that a copy of a specific pod runs on each node, providing cluster-level functionality such as logging (e.g., Fluentd), monitoring (e.g., Prometheus Node Exporter), or networking. 

They are often used to deploy infrastructure components such as storage drivers, monitoring agents, security agents for centralized logging and threat detection, or network plugins.

This DaemonSet example runs the Fluentd logging agent on each node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd-elasticsearch
          image: quay.io/fluentd_elasticsearch/fluentd:latest

How do Kubernetes workloads work?

Once the building blocks of the Kubernetes workload are decided, you define the desired state of your application in YAML files. These files will include details for the workload, including the container images to use, the number of Pod replicas (copies of your application), resource requirements (CPU, memory) for each Pod that will run on your cluster nodes, and the type of workload controller (Deployment, Job, etc.). Under the hood, each component is represented by various Kubernetes API objects.

Once your workload is defined, the deployment controller will create the specified number of Pods based on your configuration and ensure they remain running even if Pods fail or become unhealthy. Scaling up or down is achieved by adjusting the desired number of replicas. The job controllers will orchestrate the execution of Jobs.

The K8s scheduler takes the workload definition and Pod specifications into account, placing the Pods on appropriate worker nodes in the cluster based on resource availability and other scheduling policies. The control plane continuously monitors the Pods. If a Pod fails or becomes unhealthy, the corresponding controller recreates it to maintain the desired state defined in your workload configuration.

Workload security in Kubernetes

A Kubernetes workload is a collection of components that make up your application, and the threats facing your Kubernetes deployments largely fall into four categories: container vulnerabilities, misconfiguration, privilege escalation, and lateral movement.

The list below shows the measures to consider when securing each component in your workload.

Container vulnerabilities: Exploitable vulnerabilities in container images can be a gateway for attackers.
  • Regularly scan container images for vulnerabilities before deploying them to your cluster.
  • Implement multiple security layers (e.g., Pod Security Admission, network policies) to create a layered defense.
  • Integrate third-party security tools such as vulnerability scanners and intrusion detection systems (IDS) for enhanced protection.

Misconfigurations: Incorrect security settings within deployments, namespaces, or the cluster itself can create security gaps.
  • Set limits on resource consumption (CPU, memory) by Pods to prevent resource exhaustion.
  • Enable audit logging to track user activity, and analyze the logs for suspicious behavior and unauthorized access attempts.
  • Leverage workload identity to provide containerized applications with credentials for accessing cloud resources without embedding secrets.
  • Encrypt communication channels between Pods and other components using TLS.

Privilege escalation: Attackers might exploit flaws to gain elevated privileges within containers or the cluster.
  • Enforce baseline security requirements for Pods with Pod Security Admission, the successor to Pod Security Policies (PSPs), which were removed in Kubernetes 1.25.
  • Use role-based access control (RBAC) to define user permissions for accessing and modifying Kubernetes resources.
  • Grant only the minimum permissions required for Pods and service accounts to perform their tasks.
  • On OpenShift clusters, Security Context Constraints (SCCs) fill a similar role and may still be encountered in older deployments.

Lateral movement: Once inside, attackers might move laterally across containers or the cluster to access sensitive data.
  • Control network traffic flow between Pods and enforce communication restrictions using network policies.
  • Store sensitive information (passwords, tokens) securely using Secrets, and avoid embedding it in container images.
  • Continuously monitor your cluster for suspicious activity and resource usage anomalies. Tools like Prometheus and Grafana can be used for infrastructure monitoring.
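As a sketch of the network-restriction measure above, the following NetworkPolicy allows ingress to Pods labeled `app: web-app` only from Pods labeled `role: frontend` (both labels are illustrative assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-app-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: web-app       # Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # Only these Pods may connect
      ports:
        - protocol: TCP
          port: 80
```

Note that NetworkPolicies only take effect if your cluster runs a network plugin that enforces them.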

By implementing security measures, you minimize the potential entry points for attackers, safeguard sensitive data stored or processed within your applications, and meet industry regulations and compliance requirements.

Kubernetes workloads best practices

In addition to the security of your workload using the recommendations above, here are other best practices around Kubernetes workloads to consider:

  • Image immutability: Treat container images as immutable artifacts by avoiding changes post-deployment. For updates, build a new image with the necessary changes and redeploy it to ensure consistency and reliability across deployments.
  • Optimize image size: Smaller container images reduce the attack surface and improve deployment speed. Use tools like multi-stage builds and image layering to optimize image size.
  • Integrate Kubernetes deployments with CI/CD pipelines: Integrate your deployments with a CI/CD pipeline to automate builds, testing, and deployments. This helps ensure consistency and reduces the risk of errors.

For these integrations, consider leveraging tools like Spacelift. Spacelift brings the benefits of CI/CD to infrastructure management. Your team can collaborate on infrastructure changes right from your pull requests. Spacelift lets you visualize your resources, enable self-service access, and protect against configuration drift.

Use Spacelift to manage your Kubernetes clusters without directly interacting with your cloud providers or IaC tools like Terraform, OpenTofu, Pulumi, or CloudFormation. For example, you can create a Spacelift stack that provisions a new AWS EKS cluster with Terraform, giving team members the ability to safely test their changes on demand.

  • Rolling updates and rollbacks: Use rolling updates to deploy new application versions with minimal downtime. Additionally, have a rollback strategy in place in case of issues with new deployments.
  • Use declarative configuration: Manage workloads and cluster configurations using YAML files to promote version control and simplify infrastructure as code (IaC) practices. Declarative configurations ensure consistency and ease of maintenance.
  • Resource management and autoscaling: Specify CPU and memory resource requests and limits in your workload configurations to ensure efficient resource utilization and prevent resource contention. This helps Kubernetes make better scheduling decisions and avoids performance degradation due to resource starvation.

Implement Horizontal Pod Autoscaling (HPA) to automatically scale workloads based on resource utilization, ensuring optimal performance and resource efficiency.
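As a sketch, an HPA that scales the web-app Deployment shown earlier between 3 and 10 replicas based on CPU utilization might look like this (the thresholds are illustrative, not recommendations):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # Add replicas above 70% average CPU
```

For the utilization target to work, the Deployment's containers must declare CPU resource requests, and the metrics server must be running in the cluster.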

  • Disaster recovery and health monitoring: Develop a disaster recovery plan for your Kubernetes cluster. This plan should include procedures for backing up, restoring from backups, and resuming operations after an incident.

Define readiness and liveness probes in your workload configurations to enable Kubernetes to determine the health status of your pods. Readiness probes indicate when a pod is ready to serve traffic, while liveness probes detect when a pod is unhealthy and should be restarted.
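For illustration, readiness and liveness probes can be added to a container spec like this (the paths and timings are assumptions, not recommendations):

```yaml
# Container-spec fragment: health probes
readinessProbe:
  httpGet:
    path: /        # Endpoint that returns 200 when ready to serve traffic
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /        # Endpoint that returns 200 while the process is healthy
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 20
```

A Pod failing its readiness probe is removed from Service endpoints; a Pod failing its liveness probe is restarted.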

  • Pod affinity and anti-affinity: Use Pod affinity and anti-affinity rules to influence Pod scheduling decisions. Pod affinity co-locates Pods with other Pods or on nodes with certain characteristics, while anti-affinity prevents Pods from being scheduled together.

Key points

Kubernetes workloads refer to the various types of resources, such as Pods, Deployments, StatefulSets, DaemonSets, and Jobs, which are used to run and manage applications on a Kubernetes cluster. These workloads enable efficient scaling, optimal resource utilization, and high availability of applications. Proper management and configuration of Kubernetes workloads are essential for maintaining application performance, security, and resilience.

And if you want to learn more about Spacelift, create a free account today or book a demo with one of our engineers.
