
Kubernetes DaemonSet – What It Is & How to Use (Example)


A DaemonSet is a type of Kubernetes API object that replicates identical Pods across the Nodes in your cluster.

This article will go in-depth on what DaemonSets are, how they work, and when you should use them. We’ll also include a simple tutorial that shows how to deploy a DaemonSet in your own cluster.

What we will cover:

  1. What is a Kubernetes DaemonSet?
  2. What is DaemonSet used for?
  3. Examples of using a DaemonSet
  4. Scoping DaemonSets to specific Nodes
  5. How to scale a DaemonSet
  6. DaemonSet best practices

What is a Kubernetes DaemonSet?

Kubernetes DaemonSet is a workload resource that ensures a specific pod runs on all (or selected) nodes in a cluster. It’s commonly used for deploying node-level services like log collectors, monitoring agents, or network plugins. As nodes are added or removed, the DaemonSet automatically adds or removes the pod accordingly.

How do DaemonSets work?

DaemonSets are Kubernetes API objects that allow you to run Pods as a daemon on each of your Nodes. New Nodes that join the cluster will automatically start running Pods that are part of a DaemonSet. DaemonSets are often used to run long-lived background services such as Node monitoring systems and log collection agents. To ensure complete coverage, it’s important that these apps run a Pod on every Node in your cluster.

 


By default, Kubernetes manages your DaemonSets so that every Node is always running an instance of the Pod. You can optionally customize a DaemonSet’s configuration so that only a subset of your Nodes schedule a Pod.

When new Nodes join your cluster, they’ll automatically start running applicable Pods defined by DaemonSets. Similarly, when Nodes are deprovisioned, Kubernetes will deschedule those Pods and run garbage collection.

As DaemonSets are designed to run a Pod on every Node reliably, they come with default tolerations that allow them to schedule new Pods in situations that would normally be prevented. For example, DaemonSet Pods will still be scheduled even if a target Node is facing resource constraints or isn’t accepting new Pods.
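For illustration, here is a subset of the tolerations the DaemonSet controller adds to its Pods automatically (the exact list varies by Kubernetes version), rendered as they would appear in the Pod spec:

```yaml
# Tolerations added automatically to DaemonSet Pods (illustrative subset).
tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute      # stay scheduled on Nodes marked not ready
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
  - key: node.kubernetes.io/disk-pressure
    operator: Exists
    effect: NoSchedule     # still schedule despite resource-pressure taints
  - key: node.kubernetes.io/memory-pressure
    operator: Exists
    effect: NoSchedule
  - key: node.kubernetes.io/unschedulable
    operator: Exists
    effect: NoSchedule     # ignore cordoned (unschedulable) Nodes' taint
```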

What is the difference between Pods, ReplicaSets, Deployments, StatefulSets, and DaemonSets?

Pods are the fundamental unit in Kubernetes: they represent a collection of one or more containers running in your cluster.

ReplicaSets build upon this foundation by providing a construct that guarantees a specified number of Pod replicas will be running at a given time. Deployments implement declarative management of ReplicaSets, which is how most stateless apps are deployed in Kubernetes, while StatefulSets simplify the use of stateful workloads that require persistent data storage.

DaemonSets differ from these other Kubernetes workload types because they have unique scheduling behavior. Pods managed by ReplicaSets and Deployments are scheduled onto whichever Nodes have capacity until the requested number of replicas is running. Unless you set affinity rules, you can’t know which Nodes will be selected to run a Pod. DaemonSets, however, ensure every Node runs a replica of the Pod.

Let’s look at some of these differences in more detail:

What is the difference between Deployment and DaemonSet?

A Deployment manages a replicable set of Pods and ensures the desired number are running, typically across the cluster. It’s ideal for stateless applications, with built-in rollout and rollback capabilities.

A DaemonSet ensures a Pod runs on every node (or a subset via node selectors). It’s used for node-level tasks like log collection or monitoring agents. Unlike Deployments, it doesn’t scale by replicas but by node presence.
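To make the contrast concrete, here is a minimal sketch of a Deployment manifest (the app name and image are placeholders). Note the explicit replicas count, which a DaemonSet manifest omits because its Pod count follows the Node count:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # hypothetical app name
spec:
  replicas: 3            # scaled by an explicit count, not by Node presence
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```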


Read also Kubernetes StatefulSet vs. Deployment.

What is the difference between a static pod and a DaemonSet?

A static pod is managed directly by the kubelet on a specific node and defined via local manifest files. It’s independent of the Kubernetes API server and won’t be rescheduled if the node fails. 

In contrast, a DaemonSet is a Kubernetes object that ensures a copy of a pod runs on all (or selected) nodes, and is managed by the control plane. Static pods are node-scoped, while DaemonSets provide cluster-level control and lifecycle management.
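As a sketch, a static pod is defined by saving a plain Pod manifest on a specific Node in the kubelet's manifest directory (commonly /etc/kubernetes/manifests, though this depends on the kubelet's staticPodPath setting); the name and command below are illustrative:

```yaml
# Saved on one Node, e.g. in /etc/kubernetes/manifests/node-agent.yaml.
# The kubelet reads this directory and runs the Pod itself, without the
# API server scheduling it.
apiVersion: v1
kind: Pod
metadata:
  name: node-agent       # hypothetical name
spec:
  containers:
    - name: agent
      image: busybox:1.36
      command: ["sleep", "infinity"]
```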

What is the difference between a sidecar and a DaemonSet?

A sidecar is a container that runs alongside a primary container in the same Pod, while a DaemonSet ensures a specific Pod runs on every node in a Kubernetes cluster.

Sidecars extend or enhance the functionality of the main application container, often sharing resources like volumes and networks. For example, a logging sidecar might collect logs from the main container and forward them to a central system.

DaemonSets, on the other hand, deploy the same Pod across all (or selected) nodes, typically to provide system-wide services like monitoring agents, log collectors, or network plugins. These Pods run independently on each node and are not tied to any specific application.
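To illustrate the sidecar pattern, here's a hypothetical Pod in which a log-forwarding container shares a volume with the main application container (the image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app                       # main application container
      image: example.com/my-app:1.0   # hypothetical image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-forwarder             # sidecar: ships the app's log files
      image: fluent/fluent-bit:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}                    # shared between the two containers
```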

What is DaemonSet used for?

DaemonSets ensure that specific Pods run on every Node in a Kubernetes cluster, making them ideal for system-level operations that require full Node coverage.

  1. Running Node monitoring agents – In-cluster services that collect metrics data from your Nodes need to deploy a Pod on each one reliably. For maximal coverage, the deployment should occur immediately after the Node joins the cluster. DaemonSets implement this behavior without requiring any special configuration.
  2. Collecting logs from Nodes – Similarly, collecting the contents of Node-level logs (such as Kubelet and kernel logs) helps you audit your environments and troubleshoot problems. Deploying your logging service as a DaemonSet ensures that all your Nodes are included.
  3. Backing up Node data – Backups are another good candidate for DaemonSets. Using a DaemonSet ensures that all your Nodes will be included in your backups without making you scale or reconfigure your backup service when Nodes change. If some Nodes don’t need backups, you can customize your DaemonSet so that only relevant Nodes are covered.
  4. Security and compliance agents – Security tools like Falco, Sysdig Secure, or file integrity monitors are often deployed as DaemonSets to provide node-level visibility into potential threats. These agents continuously scan for malicious activity, privilege escalation attempts, or policy violations, helping enforce compliance across all nodes uniformly.
  5. Network and DNS infrastructure – DaemonSets are also used to deploy networking components that require node-level presence. For example, CoreDNS in host networking mode or CNI plugins like Calico and Flannel must operate on each node to manage pod network routing and name resolution. This setup ensures reliable service discovery and inter-pod communication throughout the cluster.
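As a sketch of the log-collection use case, a DaemonSet like the following mounts each Node's /var/log directory into the collector Pod via a hostPath volume (the image is illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      containers:
        - name: collector
          image: fluent/fluent-bit:latest
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true       # the collector only reads Node logs
      volumes:
        - name: varlog
          hostPath:
            path: /var/log         # the Node's own log directory
```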

Example of using a Kubernetes DaemonSet

Now we’ve covered the theory behind DaemonSets, let’s look at a simple example you can run in your own cluster.

Prerequisites

Because DaemonSets replicate Pods across multiple Nodes, you’ll need access to a multi-node Kubernetes cluster before you can follow this tutorial.

You can use Minikube to create a new local cluster on your own machine. Follow the guidance on the Minikube website to install Minikube, then run the following command to start a cluster with three virtual Nodes:

$ minikube start --nodes=3
...
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Wait while your cluster starts up; progress will be shown in your terminal.

Once you see the Done! kubectl is now configured message, run the following Kubectl command to check that your cluster’s Nodes are running:

$ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
minikube       Ready    control-plane   62s   v1.27.4
minikube-m02   Ready    <none>          45s   v1.27.4
minikube-m03   Ready    <none>          31s   v1.27.4

This confirms the three Nodes are operational. One is configured as the cluster control plane and the other two are workers.

How to create a DaemonSet

To create a DaemonSet in Kubernetes, define a YAML manifest that specifies a DaemonSet kind with a spec.template describing the Pod to run on each node. Apply it using kubectl.

Here’s a simple manifest for a DaemonSet that runs the Fluentd logging system on each of your cluster’s Nodes:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd-elasticsearch
          image: quay.io/fluentd_elasticsearch/fluentd:latest

The manifest’s spec.selector field must reference the labels assigned to the Pod template in spec.template. The template is a regular Kubernetes Pod spec that defines the containers the DaemonSet will run.

Copy the manifest to fluentd.yaml in your working directory, then use Kubectl to apply it to your cluster:

$ kubectl apply -f fluentd.yaml
daemonset.apps/fluentd created

Wait while the DaemonSet’s Pods start, then use the kubectl get pods command with the -o wide option to list the Pods and the Nodes that they’re scheduled to:

$ kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP           NODE
fluentd-jn24d   1/1     Running   0          2m10s   10.244.1.2   minikube-m02
fluentd-pzmjh   1/1     Running   0          2m10s   10.244.2.2   minikube-m03
fluentd-zcq57   1/1     Running   0          2m10s   10.244.0.3   minikube

You can see that Kubernetes has automatically scheduled a Fluentd Pod onto each of the three Nodes in your cluster.

The kubectl get daemonsets command will show you the status of the DaemonSet object. This includes the desired number of Pods to run, based on the current number of Nodes in your cluster, as well as the current number of Pods that are ready, available, and in the latest up-to-date configuration.

$ kubectl get daemonset
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   3         3         3       3            3           <none>          3m55s

How to update a DaemonSet

DaemonSets are updated in the same way as other Kubernetes objects. You can use the kubectl edit and kubectl patch commands, or you can take advantage of declarative updates by editing your YAML files and repeating the kubectl apply command.

However, not all DaemonSet fields are updatable. You’re prevented from changing the DaemonSet’s spec.selector because any modifications could result in existing Pods being orphaned.
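You can also control how updates roll out across your Nodes via the DaemonSet's spec.updateStrategy field. This excerpt shows the default RollingUpdate behavior made explicit, replacing one Node's Pod at a time:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate      # the default; OnDelete is the alternative
    rollingUpdate:
      maxUnavailable: 1      # replace one Node's Pod at a time (the default)
```

With type set to OnDelete instead, new Pods are only created after you manually delete the old ones.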

How to delete a DaemonSet

The standard Kubernetes deletion process applies to DaemonSets too. You can use the kubectl delete command to stop and remove all the Pods created by the DaemonSet, then delete the DaemonSet object itself:

$ kubectl delete daemonset/fluentd
daemonset.apps "fluentd" deleted

$ kubectl get daemonsets
No resources found in default namespace.

$ kubectl get pods
No resources found in default namespace.

Optionally, you can delete just the DaemonSet object, while leaving its Pods intact. To do this, you must specify --cascade=orphan when you issue your deletion command:

$ kubectl delete daemonset/fluentd --cascade=orphan
daemonset.apps "fluentd" deleted

The Pods will stay running on their existing Nodes. If you later create another DaemonSet with the same name, then it will automatically adopt the orphaned Pods.

Learn also how to delete a Deployment in Kubernetes.

Scoping DaemonSets to specific nodes

You can configure DaemonSets with a nodeSelector and affinity rules to run Pods on only some of your cluster’s Nodes. These constraints are set using the DaemonSet’s spec.template.spec.nodeSelector and spec.template.spec.affinity fields, respectively.

Here’s a modified version of the Fluentd DaemonSet manifest from above:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      nodeSelector:
        log-collection-enabled: "true"
      containers:
        - name: fluentd-elasticsearch
          image: quay.io/fluentd_elasticsearch/fluentd:latest

Before applying the manifest to your cluster, set the log-collection-enabled=true label on one of your Nodes:

$ kubectl label node minikube-m02 log-collection-enabled=true
node/minikube-m02 labeled

Then apply the updated DaemonSet manifest:

$ kubectl apply -f fluentd.yaml

Retrieve the DaemonSet’s details with Kubectl’s get daemonsets command:

$ kubectl get daemonsets
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
fluentd   1         1         1       1            1           log-collection-enabled=true   13s

This time, you can see that a node selector is applied to the DaemonSet. The DaemonSet’s desired, current, and available Pod counts show one because only one of your nodes has been assigned the label that matches the selector.

Viewing the Pod list will confirm that the Pod is running on the labelled Node—minikube-m02, in our example:

$ kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP           NODE
fluentd-dflnq   1/1     Running   0          8m55s   10.244.1.3   minikube-m02
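The same scoping can be expressed with node affinity instead of a nodeSelector. This excerpt of the DaemonSet's Pod template shows an equivalent required node-affinity rule:

```yaml
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: log-collection-enabled
                    operator: In         # match Nodes whose label value is listed
                    values: ["true"]
```

Affinity rules are more verbose, but operators such as In, NotIn, and Exists let you express constraints a plain nodeSelector can't.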

How to scale a DaemonSet

DaemonSets are scaled differently from other Kubernetes workload objects. They are automatically scaled based on the number of Nodes in your cluster that match the DaemonSet’s configuration.

Therefore, the way to scale a DaemonSet is simply to add or remove matching Nodes. Creating a new Node will deploy an additional replica of your Pod, while deprovisioning a Node effectively scales down the DaemonSet.

How do you scale a DaemonSet down to 0?

Sometimes you might want more control over DaemonSet scaling. For example, scaling down to 0 and then back up again is a common way to force Kubernetes to redeploy Pods as new instances.

To achieve this with a DaemonSet, patch the Pod template in the DaemonSet’s spec to apply a nodeSelector that doesn’t match any Nodes:

$ kubectl patch daemonset example-daemonset -p '{"spec": {"template": {"spec": {"nodeSelector": {"dummy-nodeselector": "foobar"}}}}}'

Removing the nodeSelector afterwards—or replacing it with the correct original one—will allow the DaemonSet to scale back up again.

DaemonSet best practices

Here are some best practices for using DaemonSets that will help you maximize performance and reliability.

  1. Only use DaemonSets when Pod scaling is coupled to Node count – DaemonSets are designed to scale Pods across your Nodes. Regular workload objects, such as ReplicaSets and Deployments, should be used when you scale Pod counts independently of your cluster’s Node count.
  2. Ensure all DaemonSet Pods have a correct restart policy – Pods in a DaemonSet must have their restartPolicy set to Always, if you choose to specify a value. This is so the Pods restart with the Node.
  3. Do not manually manage DaemonSet Pods – Pods created as part of a DaemonSet shouldn’t be manually edited or deleted. Making changes outside of the DaemonSet could result in Pods being orphaned.
  4. Use rollbacks to revert DaemonSet changes quickly – An advantage of using DaemonSets for your cluster’s background services is the ease with which you can roll back to earlier revisions if a problem occurs. Initiating a rollback is quicker and more reliable than manually reverting the change, then starting a new rollout.

DaemonSets are a good way to run any daemonized software in Kubernetes. However, standard Kubernetes best practices also apply to their use: for example, it’s important to configure proper resource constraints and security context settings for your DaemonSet Pods.
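As an illustrative excerpt, the Fluentd container from earlier could be given resource constraints and a hardened security context like this (the values shown are assumptions to tune for your workload, and some images may need a writable filesystem or root user):

```yaml
containers:
  - name: fluentd-elasticsearch
    image: quay.io/fluentd_elasticsearch/fluentd:latest
    resources:
      requests:
        cpu: 100m            # illustrative values; tune for your workload
        memory: 200Mi
      limits:
        memory: 200Mi
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
```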

Managing Kubernetes more easily and faster with Spacelift

If you need help managing your Kubernetes projects, consider Spacelift. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change. 

With Spacelift, you get:

  • Policies to control what kind of resources engineers can create, what parameters they can have, how many approvals you need for a run, what kind of task you execute, what happens when a pull request is open, and where to send your notifications
  • Stack dependencies to build multi-infrastructure automation workflows, combining Kubernetes with Terraform, Ansible, and other infrastructure-as-code (IaC) tools such as OpenTofu, Pulumi, and CloudFormation
  • Self-service infrastructure via Blueprints, or Spacelift’s Kubernetes operator, enabling your developers to do what matters – developing application code while not sacrificing control
  • Creature comforts such as contexts (reusable containers for your environment variables, files, and hooks), and the ability to run arbitrary code
  • Drift detection and optional remediation

If you want to learn more about Spacelift, create a free account today or book a demo with one of our engineers.

Key points

Kubernetes DaemonSets replicate Pods across the Nodes in your cluster. This functionality isn’t available in the default Kubernetes scheduling implementation used by other API objects such as ReplicaSets and Deployments.

We’ve seen how DaemonSets are an effective way to deploy global cluster services, including logging tools and backup agents. Any app that needs direct interaction with your cluster’s Nodes is a good candidate to run as a DaemonSet.

