Automation and containerization help keep your infrastructure consistent and portable across environments. Two key tools in this space are Ansible and Docker.
Ansible automates repetitive server tasks like installing software, configuring settings, managing users, and deploying code. Instead of using long bash scripts or manual steps, you define everything in a YAML playbook and target machines via an inventory file.
It connects over SSH without agents; you just need Python and SSH access. Ansible is idempotent, meaning it won’t repeat completed tasks, and it ensures consistency across dev, test, and production environments.
Docker packages your app and its dependencies into containers, ensuring they run in the same way everywhere — from your laptop to production. You define an image using a Dockerfile and use Docker Compose to manage multi-container apps with databases, ports, and environment variables. This simplifies deployment and boosts portability and speed across environments.
What we will cover in this article:
- Why use Ansible to manage Docker?
- Ansible modules for Docker
- How to install Docker using Ansible
- How to automate Docker container management with Ansible
- Using Ansible to build Docker Images
- Practical example: Deploying a sample web application using Docker and Ansible
- Best practices using Ansible with Docker
Using Docker to run your applications is great for consistency and portability. But setting up a Docker environment isn’t always straightforward. It still requires:
- Installing Docker
- Configuring the Docker daemon
- Setting up networking and permissions
- Managing dependencies and firewall rules
- Building and running Docker images
Completing these tasks manually might be fine for a few machines, but not if you’re working with a larger setup or scaling out.
That’s where Ansible comes in.
Using Ansible to manage Docker simplifies the automation of the container lifecycle, particularly across multiple hosts. Ansible can install Docker, deploy containers, manage images, configure networks, and handle orchestration tasks using YAML playbooks without requiring agents on the target systems.
This makes it useful for integrating Docker into broader CI/CD or infrastructure pipelines, especially when Docker is just one part of the overall stack.
Why not just use shell scripts?
You could write shell scripts instead, but they can be messy and harder to manage. They usually run line-by-line, and if something fails midway, you might end up with a half-configured server.
That doesn’t happen with Ansible. It’s declarative, so you just tell it how the final setup should look, and it figures out how to get there. It’ll also skip previously completed steps, which makes it safe to run again and again.
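To make that concrete, here is a minimal sketch of the contrast (the package is just an example): a raw shell command runs on every execution, while the equivalent Ansible task is skipped once the state already matches.
# Shell: "apt-get install -y nginx" runs (and can fail) every single time.
# The equivalent Ansible task is idempotent — it reports "ok" and changes
# nothing when nginx is already installed:
- name: Ensure nginx is installed
  apt:
    name: nginx
    state: present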
So, if you’re managing Docker environments across multiple machines, using Ansible can make the process smoother, safer, and easier to scale.
Example: Ansible playbook for Docker
The following demonstrates how we can use an Ansible playbook to install specific dependency packages on your Docker servers and install the Docker Engine.
This playbook can be used to spin up hundreds of Docker servers just by running the ansible-playbook command instead of running these steps individually on each server:
- name: Install Docker on Ubuntu
  hosts: docker_hosts
  become: true
  tasks:
    - name: Install dependencies
      apt:
        name: "{{ item }}"
        state: present
      loop:
        - apt-transport-https
        - ca-certificates
        - curl
        - gnupg
        - lsb-release

    - name: Add Docker GPG key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker APT repository
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable
        state: present

    - name: Install Docker Engine
      apt:
        name: docker-ce
        state: present
        update_cache: true
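Running it is then a single command, after which every host in the docker_hosts group converges to the same state (the playbook filename and inventory path here are our assumptions):
ansible-playbook -i inventory/hosts install_docker.yml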
Ansible also provides you with a set of modules that can be used to perform internal Docker tasks such as running Docker containers, building images, configuring networks, and more. In the next section, we will delve deeper into all the modules Ansible offers to manage Docker.
Below is a small example of how effective Ansible can be to manage specific Docker tasks:
- name: Start my web app
  hosts: docker_hosts
  become: true
  tasks:
    - name: Run container
      docker_container:
        name: myapp
        image: source/webapp:latest
        state: started
        ports:
          - "8080:80"
        env:
          APP_ENV: production
The Ansible modules for managing Docker are part of the community.docker collection, which is maintained and regularly updated. These modules help automate specific Docker-related tasks and promote idempotency across your Docker configurations.
Commonly used modules include:
- docker_container: Manages the lifecycle of Docker containers (start, stop, restart, etc.).
- docker_image: Pulls, builds, and manages Docker images.
- docker_network: Creates and manages Docker networks.
- docker_volume: Handles Docker volumes for persistent storage.
- docker_login: Manages Docker registry logins.
- docker_compose: Deploys applications using Docker Compose files.
- docker_prune: Removes unused Docker resources.
- docker_swarm, docker_service: For managing Docker Swarm and services.
To use the Docker modules, you will need to first install the collection on your Ansible control node by running the following:
ansible-galaxy collection install community.docker
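One easy-to-miss requirement: most modules in this collection drive Docker through the Docker SDK for Python, so that package must also be present on the hosts where the modules execute. A minimal sketch of installing it with the pip module:
- name: Install Docker SDK for Python
  pip:
    name: docker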
1. docker_container module
The docker_container module covers the same ground as commands such as docker run, docker stop, docker restart, and docker rm. It also lets you specify the ports, environment variables, and volumes you want to use for that container.
Example 1: Starting a Docker container (NGINX)
- name: Start Nginx container
  docker_container:
    name: nginx
    image: nginx:latest
    state: started
    restart_policy: always
    published_ports:
      - "80:80"
Example 2: Bind mounting volumes and setting environment variables
- name: Run app with volume and env var
  docker_container:
    name: myapp
    image: source/webapp:latest
    state: started
    env:
      NODE_ENV: production
    volumes:
      - /data/uploads:/app/uploads
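Example 3: Stopping and removing a container
The module also covers teardown. A minimal sketch, reusing the myapp container from above:
- name: Stop the app container
  docker_container:
    name: myapp
    state: stopped

- name: Remove the app container
  docker_container:
    name: myapp
    state: absent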
2. docker_image
The docker_image module lets you pull, build, and remove images on a Docker server. It can be very useful when you want to streamline your deployments through a CI/CD pipeline.
Example 1: Pulling the latest Redis image
- name: Pull the latest Redis image
  docker_image:
    name: redis
    source: pull
Example 2: Building an image from a local directory
- name: Build Docker image from Dockerfile
  docker_image:
    name: source/customapp
    source: build
    build:
      path: /opt/app
Example 3: Removing an image
- name: Remove unused image
  docker_image:
    name: old-image
    state: absent
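Related cleanup: while docker_image removes a specific image, the docker_prune module mentioned earlier can sweep up unused resources in bulk. A minimal sketch:
- name: Prune unused Docker images and networks
  docker_prune:
    images: true
    networks: true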
3. docker_network
The docker_network module allows you to create and manage Docker networks.
Example: Creating a bridge network and attaching a container to that network
- name: Create Docker network
  docker_network:
    name: myapp_network
    state: present

- name: Run container on custom network
  docker_container:
    name: web
    image: nginx
    networks:
      - name: myapp_network
4. docker_volume
In Docker, we often need to maintain some level of persistent data. The docker_volume module can create Docker volumes to store persistent data and remove volumes you no longer need.
Example 1: Creating and mounting a Docker volume
- name: Create volume for database
  docker_volume:
    name: pgdata
    state: present

- name: Run PostgreSQL with volume
  docker_container:
    name: postgres
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data
Example 2: Removing a Docker volume
- name: Remove old volume
  docker_volume:
    name: oldvolume
    state: absent
5. docker_login
The docker_login module logs you into private Docker registries so that you can pull images right from your Ansible playbook run:
- name: Log in to private registry
  docker_login:
    registry_url: registry.source.com
    username: myuser
    password: "{{ my_registry_password }}"
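Once the login task has run, a follow-up pull from that registry works like any other pull (the repository path below is illustrative):
- name: Pull image from the private registry
  docker_image:
    name: registry.source.com/myteam/webapp
    source: pull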
6. docker_compose
If your Docker environment runs on Docker Compose, Ansible supports that too. You just need to specify the directory that contains your docker-compose.yml file.
This approach can be useful for a multi-container environment.
- name: Deploy app with Docker Compose
  docker_compose:
    project_src: /opt/myapp
    state: present
You can also apply conditionals within Ansible, which can be beneficial for controlling when your workflows run. Conditionals let you react to the results of earlier tasks and avoid unexpected changes to running containers.
For example, you would typically want to recreate a Docker container only when a new image has actually been pulled. Here, you can apply a conditional that recreates the container whenever the image it uses changes:
- name: Pull latest nginx image
  docker_image:
    name: nginx
    source: pull
  register: nginx_image_result

- name: Recreate nginx container if image was updated
  docker_container:
    name: nginx
    image: nginx
    state: started
    recreate: yes
  when: nginx_image_result.changed
Example: Using Ansible modules to manage your Docker application
The following example shows a high-level application deployment using Docker with Ansible, demonstrating a real-world workflow from installing Docker and pulling the image to deploying the application container:
- name: Deploy my web app
  hosts: docker_hosts
  become: true
  tasks:
    - name: Ensure Docker is installed
      apt:
        name: docker.io
        state: present
      when: ansible_os_family == "Debian"

    - name: Pull latest app image
      docker_image:
        name: source/webapp:latest
        source: pull

    - name: Run the container
      docker_container:
        name: webapp
        image: source/webapp:latest
        state: started
        published_ports:
          - "8080:80"
        env:
          ENV: production
          API_URL: https://api.example.com
Overall, Ansible provides all these modules to manage your Docker application and automate the manual steps involved in your workflows.
In the previous examples, we installed Docker through Ansible simply by adding the Docker repository and installing docker-ce. However, in a real-world scenario, you will need to install a couple of prerequisites and also have other Docker components installed and configured.
In this section, we will explore a full Docker installation using Ansible.
Before continuing, it is important to have a separate inventory file that lists all the servers you want to deploy and configure Docker onto. The following is a sample inventory file:
[docker_hosts]
docker1.example.com
docker2.example.com
To install Docker, you will need the following prerequisites installed on the server before triggering a Docker install:
- apt-transport-https
- ca-certificates
- curl
- gnupg
- lsb-release
You can install all of these prerequisites in a single apt task by passing them as a list, as shown below:
- name: Install dependencies
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg
      - lsb-release
    state: present
    update_cache: yes
Once you have the prerequisites installed, you can add the Docker GPG keys for the Docker repository and the repository itself and trigger the install:
- name: Add Docker GPG key
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add Docker APT repository
  apt_repository:
    repo: >
      deb [arch=amd64] https://download.docker.com/linux/ubuntu
      {{ ansible_lsb.codename }} stable
    state: present
    filename: docker

- name: Install Docker Engine
  apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    state: present
    update_cache: yes
It is also best practice to make sure the Docker service is running and enabled at boot with the following task:
- name: Ensure Docker is started and enabled
  systemd:
    name: docker
    enabled: yes
    state: started
The following is optional, but it is recommended to add the users who need Docker access to the docker group so they can run Docker commands without sudo:
- name: Add users to Docker group
  user:
    name: "ubuntu"
    groups: docker
    append: yes
You can also utilize the task below to ensure the Docker service is fully available before continuing to any key Docker deployment step to avoid any playbook failures:
- name: Wait for Docker socket to become available
  wait_for:
    path: /var/run/docker.sock
    state: present
    timeout: 30
These steps automate the installation of Docker end to end, saving you from logging into multiple servers to install and configure it manually.
Installing Docker with Ansible roles
You can simplify this even further by converting these installation steps into an Ansible role. Create a role such as docker_install and list all the tasks under its tasks/main.yml. Here is a high-level structure of the role:
roles/
├── docker_install/
│   ├── tasks/
│   │   └── main.yml
│   └── defaults/
│       └── main.yml
Now, all you have to do in your playbooks is call the docker_install role as follows to install Docker:
roles:
  - docker_install
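Put together, a minimal playbook using the role might look like this (the host group is carried over from the earlier examples):
- name: Install Docker via role
  hosts: docker_hosts
  become: true
  roles:
    - docker_install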
In this section, we will show various ways you can use Ansible to manage your Docker environment.
1. Control Docker environment startup order
In a Docker environment where the database, cache, and application are containerized, it is important to ensure the database and other dependent containers are up and running before the application container gets triggered so that the application can run properly without any errors.
With Ansible, we can orchestrate this process and ensure there is proper timing in place for the services to start up before the application container is triggered.
In the following playbook, we are using:
- PostgreSQL database container
- Redis cache container
- Node.js application container.
We need to ensure each dependent service is actually ready before the next one starts:
- name: Create application network
  docker_network:
    name: app_net
    state: present

- name: Start and wait for PostgreSQL
  docker_container:
    name: db
    image: postgres:15
    env:
      POSTGRES_PASSWORD: db_secret_pass
    state: started
    restart_policy: always
    networks:
      - name: app_net
    published_ports:
      - "5432:5432"   # published so the wait_for check below can reach it

- name: Wait for PostgreSQL to accept connections
  wait_for:
    host: 127.0.0.1
    port: 5432
    delay: 5
    timeout: 30

- name: Start Redis
  docker_container:
    name: redis
    image: redis:latest
    state: started
    networks:
      - name: app_net
    published_ports:
      - "6379:6379"   # published so the wait_for check below can reach it

- name: Wait for Redis to be ready
  wait_for:
    port: 6379
    delay: 3
    timeout: 20

- name: Start Node.js app
  docker_container:
    name: app
    image: source/app:latest
    published_ports:
      - "80:3000"
    env:
      DATABASE_URL: postgres://postgres:db_secret_pass@db:5432/mydb
      REDIS_URL: redis://redis:6379
    state: started
    networks:
      - name: app_net   # shared network so the app can resolve the db and redis hostnames
2. Rolling updates with Ansible
To deploy a new Docker image to an existing containerized application, we need a rollout strategy that avoids downtime. Ansible can help us roll the new image out to all our containers in a controlled way.
In the following example, we have a set of Docker containers spread across five servers. To avoid bringing down the entire application, we will utilize Ansible’s serial feature, which performs a clean image rollout on one server at a time.
- name: Rolling update of app container
  hosts: my_app_servers
  become: true
  serial: 1
  tasks:
    - name: Pull latest image
      docker_image:
        name: source/webapp
        source: pull
      register: result

    - name: Recreate app container only if image changed
      docker_container:
        name: app
        image: source/webapp
        state: started
        recreate: yes
      when: result.changed
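A hedged extension: because serial: 1 finishes all tasks on one host before moving on, you can append a verification step so a bad image stops the rollout early. The /health endpoint and port below are our assumptions:
- name: Verify the app responds before moving to the next host
  uri:
    url: http://localhost:8080/health
    status_code: 200
  register: health
  retries: 5
  delay: 3
  until: health.status == 200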
3. Manage containers without Docker modules
Managing your Docker environment with Ansible doesn’t always have to go through the Docker module collection. If a module doesn’t cover the task you need, you can fall back to the shell module to run Docker commands directly on your servers.
In the following example, we are performing a one-time DB migration command within a container:
- name: Run DB migration job
  shell: docker run --rm source/app:latest npm run migrate
You can also use the shell module to run the Docker logs command to retrieve container logs:
- name: Get container logs
  shell: docker logs my_api > /tmp/mylogs.txt
This approach can also help extract a list of containers to a text file:
- name: List all containers
  shell: docker ps -a > /tmp/my_containers.txt
As you can see, this approach increases flexibility in managing your Docker environment.
4. Backup Docker volume
You can also use Ansible to back up Docker volumes by spinning up a separate Alpine container:
- name: Backup Docker volume using alpine container
  hosts: docker_hosts
  become: true
  tasks:
    - name: Run a temp container to backup volume
      docker_container:
        name: backup_volume
        image: alpine
        command: >
          sh -c "cd /data && tar czf /backup/volume_backup.tar.gz ."
        volumes:
          - my_named_volume:/data
          - /tmp:/backup
        state: started
        auto_remove: true
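The reverse direction works the same way. A minimal restore sketch, under the same volume and path assumptions as above:
- name: Restore volume from backup archive
  docker_container:
    name: restore_volume
    image: alpine
    command: >
      sh -c "cd /data && tar xzf /backup/volume_backup.tar.gz"
    volumes:
      - my_named_volume:/data
      - /tmp:/backup
    state: started
    auto_remove: true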
To build Docker images using Ansible and automate the docker build process, you can use the docker_image module.
In the following examples, we will go over different ways we can use the docker_image module:
- Building an image from a local Dockerfile:
- name: Build image from local Dockerfile
  hosts: docker_hosts
  become: true
  tasks:
    - name: Build myapp image
      docker_image:
        name: myapp
        tag: latest
        source: build
        build:
          path: /opt/myapp
- Building an image from a Git repository:
- name: Build image from Git repo
  hosts: docker_hosts
  become: true
  tasks:
    - name: Build image from Git
      docker_image:
        name: gitimage
        tag: latest
        source: build
        build:
          path: https://github.com/myorg/myapp.git
- Passing in build arguments (for example, environment variables or secrets):
- name: Build image using build arguments
  hosts: docker_hosts
  become: true
  tasks:
    - name: Build with args
      docker_image:
        name: flask-api
        tag: prod
        source: build
        build:
          path: /opt/flask-api
          args:
            API_KEY: "{{ lookup('env', 'MY_SECRET_API_KEY') }}"
            ENVIRONMENT: production
- Cleaning the image build without cache:
- name: Clean build without cache
  docker_image:
    name: webapp-clean
    tag: latest
    source: build
    build:
      path: /opt/webapp
      pull: true
      nocache: true
- Force image build even if the image exists:
- name: Rebuild image even if it already exists
  docker_image:
    name: myapp
    tag: latest
    source: build
    force_source: true
    build:
      path: /opt/myapp
- Build and push the image to the registry:
- name: Build and push Docker image
  docker_image:
    name: myappregistry.com/dept/myapp
    tag: "2025.05.25"
    source: build
    build:
      path: /opt/myapp
    push: true
In this section, we will demonstrate deploying a full web application on Docker using Ansible.
For this demo, we are hosting an Apache Airflow application, an open-source platform for authoring, scheduling, and monitoring data workflows and pipelines. The technology stack used in this deployment includes a PostgreSQL database, a Redis cache, and the Apache Airflow application itself.
We will place all our configurations in a docker-compose file based on the Apache Airflow documentation.
The following will be our file structure for this deployment:
airflow_ansible/
├── inventory/
│   └── hosts
├── docker-compose.yaml
├── .env
└── site.yml
Starting with the inventory file, which lists the server hostnames/IP addresses:
[airflow_servers]
airflow-node1 ansible_host=10.10.0.10 ansible_user=ubuntu
airflow-node2 ansible_host=10.10.0.11 ansible_user=ubuntu
We will be passing in our Airflow UID through environment variables listed in our .env file:
AIRFLOW_UID=50000
The docker-compose file will include all the containers we will be deploying, along with all the necessary configurations required for a full Airflow Application Deployment:
version: '3.8'

x-airflow-common:
  &airflow-common
  image: apache/airflow:3.0.1
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'false'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
    AIRFLOW__WEBSERVER__EXPOSE_CONFIG: 'true'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on:
    - redis
    - postgres

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 10s
      retries: 5
    restart: always

  redis:
    image: redis:7.2-bookworm
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      retries: 5
    restart: always

  airflow-webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 30s
      retries: 5
    restart: always

  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8974/health"]
      interval: 30s
      retries: 5
    restart: always

  airflow-worker:
    <<: *airflow-common
    command: celery worker
    environment:
      <<: *airflow-common-env
      DUMB_INIT_SETSID: "0"
    restart: always

  airflow-triggerer:
    <<: *airflow-common
    command: triggerer
    restart: always

  airflow-init:
    <<: *airflow-common
    command: >
      bash -c "airflow db migrate && airflow users create
      --username airflow --password airflow
      --firstname Admin --lastname User
      --role Admin --email airflow@example.com"
    user: "0:0"
    depends_on:
      - postgres
      - redis

  flower:
    <<: *airflow-common
    command: celery flower
    ports:
      - "5555:5555"
    restart: always

volumes:
  postgres-db-volume:
And, finally, here is the main playbook we will use to deploy our Docker environment for Airflow. It installs Docker, creates the necessary directories for Airflow, copies over the docker-compose file, and starts all the containers:
- name: Deploy Apache Airflow with Docker Compose
  hosts: airflow_servers
  become: true
  vars:
    airflow_dir: /opt/airflow
  tasks:
    - name: Install Docker and Docker Compose
      apt:
        name:
          - docker.io
          - docker-compose
        state: present
        update_cache: true

    - name: Create Airflow project directory
      file:
        path: "{{ airflow_dir }}"
        state: directory
        mode: '0755'

    - name: Create required Airflow subdirectories
      file:
        path: "{{ airflow_dir }}/{{ item }}"
        state: directory
        mode: '0755'
      loop:
        - dags
        - logs
        - plugins

    - name: Copy Docker Compose file
      copy:
        src: docker-compose.yaml
        dest: "{{ airflow_dir }}/docker-compose.yaml"
        mode: '0644'

    - name: Copy .env file
      copy:
        src: .env
        dest: "{{ airflow_dir }}/.env"
        mode: '0644'

    - name: Initialize Airflow database
      command: docker-compose run airflow-init
      args:
        chdir: "{{ airflow_dir }}"

    - name: Start Airflow containers
      docker_compose:
        project_src: "{{ airflow_dir }}"
        state: present
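With the file structure above in place, the deployment is a single command run from the project root:
ansible-playbook site.yml -i inventory/hosts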
If we have an updated image from Airflow, we can use Ansible to update all our existing Docker containers.
We can utilize Jinja templating to pass in image versions into a variable in our docker-compose.yml Jinja template, which can be copied over to our main docker-compose.yml file.
Let’s start by declaring the image tag in a group_vars file (group_vars/all.yml). We will be upgrading from Airflow image version 3.0.1 to 3.1.0:
airflow_image_tag: 3.1.0
We keep the entire docker-compose file in a Jinja template called docker-compose.yaml.j2 and change only the image lines so they pick up the Airflow image tag variable:
airflow-webserver:
  image: apache/airflow:{{ airflow_image_tag }}
  ...
airflow-scheduler:
  image: apache/airflow:{{ airflow_image_tag }}
  ...
airflow-worker:
  image: apache/airflow:{{ airflow_image_tag }}
  ...
Now we can create a new playbook for updating the Airflow version (airflow_version_update.yml), which will:
- Copy the Jinja template over our main docker-compose file
- Pull the new Airflow image (3.1.0)
- Restart all Docker containers using the new docker-compose file
- name: Roll out new Airflow image version
  hosts: airflow_servers
  become: true
  vars:
    airflow_dir: /opt/airflow
  tasks:
    - name: Apply updated docker-compose template
      template:
        src: docker-compose.yaml.j2
        dest: "{{ airflow_dir }}/docker-compose.yaml"
        mode: '0644'

    - name: Pull latest Apache Airflow image
      shell: docker compose pull
      args:
        chdir: "{{ airflow_dir }}"

    - name: Re-run airflow-init for DB migrations
      shell: docker compose run airflow-init
      args:
        chdir: "{{ airflow_dir }}"

    - name: Restart Airflow services using new image
      docker_compose:
        project_src: "{{ airflow_dir }}"
        state: present
        restarted: true
Now we can simply run the playbook against our Airflow servers:
ansible-playbook airflow_version_update.yml -i inventory/hosts
This approach can also be used with any other value that is in the docker-compose.yml file, increasing the flexibility of automating your docker-compose environment through Ansible.
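For instance, here is a hedged sketch of templating the webserver port the same way (the airflow_webserver_port variable name is our invention):
# group_vars/all.yml
airflow_webserver_port: 8080

# docker-compose.yaml.j2
airflow-webserver:
  ports:
    - "{{ airflow_webserver_port }}:8080"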
Following best practices when running your Docker environment through Ansible will keep your playbooks readable, reusable, secure, and easy to troubleshoot.
1. Clean your Ansible architecture
Having a clean, organized architecture for your Ansible files is important for your application to deploy successfully and be easy to manage.
In Ansible, you can create a role for each major function to spread out your tasks properly instead of having all the various tasks in one playbook.
For example, you can create a separate role for installing Docker, another for deploying the core application, and others for the databases/caches:
deploy/
├── inventory/
│   ├── dev.ini
│   ├── staging.ini
│   └── prod.ini
├── playbook.yml
├── roles/
│   ├── docker_install/
│   ├── app_deploy/
│   └── database/
---
- name: App Deployment
  hosts: docker_hosts
  become: true
  roles:
    - docker_install
    - database
    - app_deploy
2. Use variables in your playbooks
You might have various values you need to update in your Dockerfile or in your docker-compose.yml file if you are using Docker Compose. Incorporating variables across your playbooks and files can make your deployments more dynamic and easier to manage.
Instead of hard-coding values across multiple areas in your playbooks, you can set variables across your playbooks and manage all the variable values from one place. This approach simplifies managing your Ansible environment for a containerized application:
Instead of hard-coding image versions like this:
image: source/webapp:1.2.3
You can use variables as:
image: "{{ webapp_image }}"
Now all you have to do is update the variable file, where the image reference lives alongside your other variables:
webapp_image: source/webapp:latest
db_user: postgres
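A minimal sketch of a task consuming these variables (the container name is reused from earlier examples):
- name: Run web app container
  docker_container:
    name: webapp
    image: "{{ webapp_image }}"
    state: started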
3. Enforce idempotency across all your playbook tasks
In playbook tasks that use Docker modules such as docker_container, avoid setting state to restarted, as this forces the container to be recreated every time the playbook runs. Instead, set state to started and use the recreate flag together with conditionals.
Avoid using:
state: restarted
Use the following instead:
- name: Pull image
  docker_image:
    name: "{{ webapp_image }}"
    source: pull
  register: image_result

- name: Recreate container only if image changed
  docker_container:
    name: webapp
    image: "{{ webapp_image }}"
    recreate: yes
    state: started
  when: image_result.changed
4. Create an inventory file for each environment
When managing multi-environment applications through Ansible, keep a separate inventory file for each environment to prevent accidental deployments to the wrong one:
inventory/
├── dev.ini
├── staging.ini
└── prod.ini
ansible-playbook deploy.yml -i inventory/dev.ini
5. Use tags with your playbook tasks
Tags in Ansible are very powerful. They allow you to trigger certain tasks in your playbooks instead of running the entire playbook each time you run it. Combining tags with the Docker container module can give you more control over different phases in your application deployment lifecycle.
For example, if you want to start only the frontend containers in a Docker environment, set tags to that specific task and add them during the ansible-playbook command run:
- name: Start frontend container
  docker_container:
    name: frontend
    ...
  tags: ['frontend']
ansible-playbook deploy.yml --tags frontend
Spacelift’s vibrant ecosystem and excellent GitOps flow are helpful for managing and orchestrating Ansible. By introducing Spacelift on top of Ansible, you can easily create custom workflows based on pull requests and apply any necessary compliance checks for your organization.
Another advantage of using Spacelift is that you can manage infrastructure tools like Ansible, Terraform, Pulumi, AWS CloudFormation, and even Kubernetes from the same place and combine their stacks to build workflows across tools.
You can bring your own Docker image and use it as a runner to speed up deployments that leverage third-party tools. Spacelift’s official runner image can be found here.
Our latest Ansible enhancements solve three of the biggest challenges engineers face when they are using Ansible:
- Having a centralized place in which you can run your playbooks
- Combining IaC with configuration management to create a single workflow
- Getting insights into what ran and where
Provisioning, configuring, governing, and even orchestrating your containers can be performed with a single workflow, separating the elements into smaller chunks to identify issues more easily.
Would you like to see this in action, or just get an overview? Check out this video showing you Spacelift’s Ansible functionality:
If you want to learn more about using Spacelift with Ansible, check our documentation, read our Ansible guide, or book a demo with one of our engineers.
Using Ansible to manage Docker streamlines container operations by automating everything from image pulls and container runs to rolling updates and backups. With purpose-built Docker modules, Ansible helps define consistent, scalable deployments that reduce manual work and errors.
This article showed how to structure roles, apply conditionals, and follow best practices, giving you a clear path to repeatable, reliable infrastructure automation.
Solve your infrastructure challenges
Spacelift is a flexible orchestration solution for IaC development. It delivers enhanced collaboration, automation, and controls to simplify and accelerate the provisioning of cloud-based infrastructures.