
How to Deploy an AWS ECS Cluster with Terraform [Tutorial]

Deploying an AWS ECS Cluster of EC2 Instances with Terraform

Elastic Container Service (ECS) is a fully managed container orchestration service from Amazon Web Services (AWS). It runs Docker containers on a cluster of EC2 instances or, optionally, on Fargate.

The more ECS clusters we deploy, the more complex infrastructure management becomes. Deploying and scaling a large cluster in particular is time-consuming and involves many repetitive tasks, which invite human error, create configuration drift, and increase the risk of security breaches.

Terraform simplifies ECS cluster deployment by providing an automated, consistent, and repeatable process for managing the infrastructure.

In this post, we will focus on how to set up an ECS cluster of EC2 instances using Terraform.

We will cover

  1. How to deploy a service on ECS clusters
  2. ECS deployment with Terraform – Overview
  3. How to set up ECS with Terraform – Example

How to deploy a service on ECS clusters

There are two main ways to deploy a service on an ECS cluster: on Fargate or on EC2 instances. The choice determines the underlying infrastructure that runs the service's container workloads.

AWS Fargate is a more cloud-native approach where the compute instances are automatically managed by AWS.

Running the ECS service on EC2 instances provides more control over the infrastructure. This also requires additional steps to set up EC2 instances, an auto-scaling group, networking, etc.

An ECS service generally consists of the components shown in the diagram below.

[Diagram: components of an ECS service]

When an ECS cluster is created, we have to provide it with capacity providers before any service can be deployed.

In this blog post, the capacity providers are EC2 instances, with scaling managed by an auto-scaling group (ASG). The service can be deployed once these capacity-providing EC2 instances are registered with the ECS cluster.

An ECS service consists of multiple tasks. Each task is created based on the task definition provided by the service.

A task definition is a template that describes the source of the application image, resources required in terms of CPU and memory units, container and host port mapping, and other critical information.

In addition to the task definition, the ECS service also specifies the number of container instances to create. This automatically creates the tasks, which are then assigned a target EC2 instance infrastructure where the containers are run. The running containers serve the incoming requests from the application load balancer (ALB), which is deployed in front of the EC2 instances in a VPC.

ECS deployment with Terraform - Overview

The diagram below shows the outcome of ECS deployment using Terraform.

It shows how an ECS cluster is set up on EC2 instances spread across multiple availability zones within a VPC. It also includes several details, such as Application Load Balancers (ALB), auto-scaling groups (ASG), ECS Capacity Providers, ECS services, etc.

Note that the diagram does not include other aspects, such as ECR, Route tables, task definitions, etc., which we will also cover.

[Diagram: target architecture of the ECS deployment with Terraform]

We will break this infrastructure down into three parts. The list below provides a high-level overview of all the steps we will take to achieve our goal of hosting ECS clusters on EC2 instances. As we proceed, we will also write the corresponding Terraform configuration.

  1. VPC setup – In this part, we will explain and implement the basic VPC and networking setup: subnets, security groups, and route tables, so the hosted service is reachable from the internet and we can SSH into our EC2 instances.
  2. EC2 setup – In this part, we will explain and implement the auto-scaling group, application load balancer, and EC2 instances across two availability zones. We will also cover in detail the setup required to host ECS container instances on our EC2 machines.
  3. ECS setup – Finally, we create the ECS cluster step by step: the cluster itself, capacity providers, task definitions, and services. We will also look at the application image that will run on this cluster and show the Terraform resource attributes that enable the end-to-end deployment of the service.

Note: The final Terraform configuration is saved in this GitHub repository.

Before moving ahead, it is highly recommended to follow these steps hands-on.

How to set up ECS with Terraform - Example

Before we start, you should have Terraform installed locally. We will not go through the details of setting up the providers, AWS credentials, and AWS CLI.

Let’s go through the steps to deploy ECS using Terraform.
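Since we skip the provider setup details, here is a minimal provider configuration you can adapt. The version constraint is an assumption; the region matches the Frankfurt region (eu-central-1) used for the subnets later.

```hcl
# providers.tf — minimal sketch; the version constraint is an assumption.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  # Frankfurt, matching the availability zones used for the subnets.
  region = "eu-central-1"
}
```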

1. Setting up the VPC

To begin, let us establish our isolated network by defining the VPC and related components. This is important because we need to create multiple container instances of our application so the load balancer can evenly distribute requests.

The diagram below displays the target VPC design we would achieve using Terraform.

[Diagram: target VPC design]

Create a file named vpc.tf in the project repository and add the code below.

You can optionally include all the Terraform code in the same file (e.g., in main.tf). However, I prefer to keep separate files to manage the IaC easily. 

Step 1: Define our VPC

The code below creates a VPC with a given CIDR range; you are free to choose your own.

As an example, we use 10.0.0.0/16 and name the VPC “main”.

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  tags = {
    Name = "main"
  }
}
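The configuration references var.vpc_cidr here and var.key_name later in the launch template, so you will also need a variables file. A minimal sketch (the default mirrors the CIDR used in this post):

```hcl
# variables.tf — sketch; adjust to your environment.
variable "vpc_cidr" {
  description = "CIDR range for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "key_name" {
  description = "Name of an existing EC2 key pair used to SSH into the instances"
  type        = string
}
```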

Step 2: Add 2 subnets

Create two subnets in different availability zones to place our EC2 instances. We use the cidrsubnet Terraform function to derive each subnet's CIDR range from the VPC's CIDR.

Note that we place the two subnets in different availability zones of the Frankfurt region.

resource "aws_subnet" "subnet" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, 1)
  map_public_ip_on_launch = true
  availability_zone       = "eu-central-1a"
}

resource "aws_subnet" "subnet2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, 2)
  map_public_ip_on_launch = true
  availability_zone       = "eu-central-1b"
}
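To see what cidrsubnet computes here: it adds 8 bits to the VPC's /16 prefix, producing /24 subnets, and the netnum argument selects which one. An illustrative (not required) locals block:

```hcl
locals {
  # cidrsubnet(prefix, newbits, netnum): a /16 plus 8 new bits yields /24 subnets.
  subnet_a_cidr = cidrsubnet("10.0.0.0/16", 8, 1) # "10.0.1.0/24"
  subnet_b_cidr = cidrsubnet("10.0.0.0/16", 8, 2) # "10.0.2.0/24"
}
```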

Step 3: Create an internet gateway (IGW)

resource "aws_internet_gateway" "internet_gateway" {
  vpc_id = aws_vpc.main.id
  tags = {
    Name = "internet_gateway"
  }
}

Step 4: Create a route table and associate it with the subnets

This route table sends internet-bound traffic from both subnets to the internet gateway, making them public.

resource "aws_route_table" "route_table" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.internet_gateway.id
  }
}

resource "aws_route_table_association" "subnet_route" {
  subnet_id      = aws_subnet.subnet.id
  route_table_id = aws_route_table.route_table.id
}

resource "aws_route_table_association" "subnet2_route" {
  subnet_id      = aws_subnet.subnet2.id
  route_table_id = aws_route_table.route_table.id
}

Step 5: Create a security group along with ingress and egress rules

The security group's ingress and egress rules allow inbound and outbound access over any port and protocol. This is not a best practice and is only acceptable for working through this example; tighten the rules before using this in production.

resource "aws_security_group" "security_group" {
  name   = "ecs-security-group"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    self        = false
    cidr_blocks = ["0.0.0.0/0"]
    description = "any"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
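For reference, a tighter (hypothetical) alternative would allow only HTTP from anywhere and SSH from a trusted range; the SSH CIDR below is a placeholder you would replace with your own network.

```hcl
# Hypothetical tightened security group — not used in this tutorial.
resource "aws_security_group" "tightened" {
  name   = "ecs-security-group-tight"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTP from the internet"
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.0/24"] # placeholder: your trusted network
    description = "SSH from a trusted network"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```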

At this point, this is all we need for the VPC and networking. You can provision these resources now or proceed to the next section.

The VPC setup is quite straightforward. We will not go deep into exploring the VPC creation using Terraform, as that is not the topic of this blog post. For more information on that, check out How to Build AWS VPC using Terraform.

This gives us the networking basics to proceed to the next step.

2. Configuring the EC2 instances

In this section, we are focusing on provisioning the auto-scaling group, defining the EC2 instance template used to host the ECS containers, and provisioning the application load balancer in the VPC created in the previous section.

Below you can see the updated diagram.

 

[Diagram: updated architecture with the launch template, ASG, and ALB]

Create another file named ec2.tf, and follow the steps below. 

Step 1: Create an EC2 launch template

A launch template, as the name suggests, defines the template used by the auto-scaling group to provision and maintain a desired/required number of EC2 instances in a cluster.

Launch templates define various characteristics of the EC2 instance:

  1. Image: We use the ECS-optimized Amazon Linux 2 image for the x86_64 architecture, fetched dynamically via an SSM parameter.
  2. Type: Size of the instance. We use “t3.micro”. The instance size is determined by the resources the container consumes. In our case, we are hosting the “Docker Getting Started” image, which does not need many resources. More about the image later in the post.
  3. Key name: The name of the key pair used to SSH into these instances from our local machines. You can create a key pair with a separate Terraform resource block or, as here, use one you have already generated.
  4. Security group: The security group associated with the EC2 instances. We reuse the one created in the previous section.
  5. IAM instance profile: This is very important. Without it, the EC2 instances cannot register with the ECS service. “ecsInstanceRole” is the standard role AWS uses for ECS container instances; if you use a custom role instead, make sure it grants access to the ECS service.
  6. User data: This is also very important. The “ecs.sh” script writes an environment variable to the “/etc/ecs/ecs.config” file on each EC2 instance that is created. Without it, the ECS service cannot deploy and run containers on the instances.

data "aws_ssm_parameter" "ecs_ami" {
  name = "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
}

resource "aws_launch_template" "ecs_lt" {
  name_prefix   = "ecs-template"
  image_id      = data.aws_ssm_parameter.ecs_ami.value
  instance_type = "t3.micro"

  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.security_group.id]

  iam_instance_profile {
    name = "ecsInstanceRole"
  }

  block_device_mappings {
    device_name = "/dev/xvda"
    ebs {
      volume_size = 30
      volume_type = "gp2"
    }
  }

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name = "ecs-instance"
    }
  }

  user_data = filebase64("${path.module}/ecs.sh")
}
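If you prefer creating the key pair in Terraform rather than reusing an existing one, a sketch using the tls provider might look like the following (the resource and key names are hypothetical). You would then set key_name = aws_key_pair.ecs.key_name in the launch template.

```hcl
# Sketch: generate a key pair in Terraform instead of reusing an existing one.
resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "ecs" {
  key_name   = "ecs-demo-key" # hypothetical name
  public_key = tls_private_key.ssh.public_key_openssh
}
```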

The contents of the ecs.sh file are:

#!/bin/bash
echo ECS_CLUSTER=my-ecs-cluster >> /etc/ecs/ecs.config

Step 2: Create an auto-scaling group (ASG)

Create an ASG and associate it with the launch template created in the last step.

The ASG automatically manages horizontal scaling of the EC2 instances according to what the ECS service requires, within the limits defined in this resource block.

resource "aws_autoscaling_group" "ecs_asg" {
  vpc_zone_identifier = [aws_subnet.subnet.id, aws_subnet.subnet2.id]
  desired_capacity    = 2
  max_size            = 3
  min_size            = 1

  launch_template {
    id      = aws_launch_template.ecs_lt.id
    version = "$Latest"
  }

  tag {
    key                 = "AmazonECSManaged"
    value               = true
    propagate_at_launch = true
  }
}

Note that, apart from the desired, maximum, and minimum capacity, we have also specified the “vpc_zone_identifier” attribute. This restricts the ASG to launching instances in the two subnets — and therefore the two availability zones — we created earlier.

A region may have more than two availability zones. Since we are using only two subnets, the availability zones should ideally be selected dynamically rather than hard-coded.
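One way to do that — shown here as a sketch, not part of the tutorial configuration — is to look up the region's available zones and derive the subnets from them:

```hcl
# Sketch: derive subnets from the region's available AZs instead of hard-coding.
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_subnet" "dynamic" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 1)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true
}
```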

Step 3: Configure Application Load Balancer (ALB)

An ALB is required to test our implementation at the end. Strictly speaking, it is optional for this blog post, but there is no fun in doing the hard work without seeing it serve traffic in the real world.

See How to Manage Application Load Balancer (ALB) with Terraform.

Create the ALB, its listener, and the target group as defined in the code samples below.

resource "aws_lb" "ecs_alb" {
  name               = "ecs-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.security_group.id]
  subnets            = [aws_subnet.subnet.id, aws_subnet.subnet2.id]

  tags = {
    Name = "ecs-alb"
  }
}

resource "aws_lb_listener" "ecs_alb_listener" {
  load_balancer_arn = aws_lb.ecs_alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.ecs_tg.arn
  }
}

resource "aws_lb_target_group" "ecs_tg" {
  name        = "ecs-target-group"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = aws_vpc.main.id

  health_check {
    path = "/"
  }
}

Note that we are still using the same VPC, subnets, and security group we created in the previous section. The rest of the ALB configuration is straightforward.

Again, at this point you can create these resources with the terraform plan and apply commands, or move on to the final section, where we create the ECS cluster.

3. Configuring the ECS cluster

With the networking and EC2 infrastructure defined, we have finally arrived at provisioning an ECS cluster and hosting a web application on it.

In this section, we will achieve the target architecture represented in the target deployment section as well as below.

[Diagram: target ECS cluster architecture]

Step 1: Create and push the application image

We will deploy the application “docker/getting-started”, which usually ships with every Docker Desktop installation. Pull this image locally, then push it to any accessible image repository of your choice.

To keep things simple, I created a public Elastic Container Registry (ECR) repository, tagged the image appropriately, and pushed it there.

Note that if you build the image locally on an Apple silicon Mac (or any other ARM-based machine), its CPU architecture differs from the x86_64 Amazon Linux AMI used in the EC2 setup section, and the resulting image will not run there. As a workaround, build the image for the linux/amd64 platform (for example, with docker buildx) or build and push from an Amazon Linux AMI-based EC2 instance.
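If you want to create the public ECR repository with Terraform as well, a sketch might look like this. Public ECR repositories can only be created in us-east-1, hence the provider alias; the repository name is hypothetical.

```hcl
# Sketch: public ECR repository. Public ECR repositories live in us-east-1 only.
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_ecrpublic_repository" "app" {
  provider        = aws.us_east_1
  repository_name = "dgs" # hypothetical; the post pushes to an existing repo
}
```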

Step 2: Provision the ECS cluster

Optionally, create a new configuration file and add the code below. In my example, I have created a main.tf file to add the ECS-related configuration. We begin by provisioning an ECS cluster using the “aws_ecs_cluster” resource.

resource "aws_ecs_cluster" "ecs_cluster" {
  name = "my-ecs-cluster"
}

This is a simple resource block with just a name attribute. It does not do much by itself, but this is where provisioning an ECS cluster begins. Note that the name matches the ECS_CLUSTER value written by the ecs.sh user-data script, which is how the EC2 instances know which cluster to register with.

Step 3: Create capacity providers

Next, we create two resources to provision capacity providers for the ECS cluster created in the previous step.

The “aws_ecs_capacity_provider” resource turns the auto-scaling group into a capacity provider, and “aws_ecs_cluster_capacity_providers” attaches that capacity provider to the ECS cluster created in Step 2.

resource "aws_ecs_capacity_provider" "ecs_capacity_provider" {
  name = "test1"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.ecs_asg.arn

    managed_scaling {
      maximum_scaling_step_size = 1000
      minimum_scaling_step_size = 1
      status                    = "ENABLED"
      target_capacity           = 3
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "example" {
  cluster_name = aws_ecs_cluster.ecs_cluster.name

  capacity_providers = [aws_ecs_capacity_provider.ecs_capacity_provider.name]

  default_capacity_provider_strategy {
    base              = 1
    weight            = 100
    capacity_provider = aws_ecs_capacity_provider.ecs_capacity_provider.name
  }
}

Step 4: Create ECS task definition with Terraform

As described in the ECS overview section above, we now define the container task template to run on the ECS cluster using the image we pushed in Step 1.

Some of the important points to note here are:

  1. We have set the network mode to “awsvpc”. Each task then receives its own elastic network interface in the VPC we defined in the “VPC setup” section.
  2. We have provided the task definition with the ecsTaskExecutionRole.
  3. We have set the CPU requirement to 256 units (0.25 vCPU).
  4. The runtime platform is an important attribute. Since we are using an Amazon Linux AMI for our EC2 instances, the operating_system_family is “LINUX” and the cpu_architecture is “X86_64”. If this information is incorrect, the ECS tasks enter a constant restart loop.
  5. Container definitions: This is where we define the container to run in the task. We have provided the attributes below:
    • Name of the container instance
    • Image URL of the application image
    • CPU capacity units
    • Memory capacity units
    • Container and host port mappings

data "aws_iam_role" "ecs_task_execution_role" {
  name = "ecsTaskExecutionRole"
}

resource "aws_ecs_task_definition" "ecs_task_definition" {
  family             = "my-ecs-task"
  network_mode       = "awsvpc"
  execution_role_arn = data.aws_iam_role.ecs_task_execution_role.arn
  cpu                = 256

  runtime_platform {
    operating_system_family = "LINUX"
    cpu_architecture        = "X86_64"
  }

  container_definitions = jsonencode([
    {
      name      = "dockergs"
      image     = "public.ecr.aws/f9n5f1l7/dgs:latest"
      cpu       = 256
      memory    = 512
      essential = true
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
          protocol      = "tcp"
        }
      ]
    }
  ])
}

Step 5: Create the ECS service

This is the last step: provisioning the service that runs on the ECS cluster. This is where all the resources we created come together to run the application.

The attributes defined here are:

  1. Name: Name of the ECS service.
  2. Cluster: Reference to the ECS cluster we created in Step 2.
  3. Task definition: Reference to the task definition template created in Step 4.
  4. Desired count: We want to run two instances of this container image on our ECS cluster.
  5. Network configuration: The subnets and security group we created in the VPC setup section.
  6. Placement constraints: The container instances should run on distinct EC2 instances instead of using residual capacity on a single instance. This is not a best practice, just a way to demonstrate the concept.
  7. Capacity provider strategy: Reference to the capacity provider created in Step 3.
  8. Load balancer: Reference to the target group of the load balancer we created in the EC2 setup section.

resource "aws_ecs_service" "ecs_service" {
  name            = "my-ecs-service"
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.ecs_task_definition.arn
  desired_count   = 2

  network_configuration {
    subnets         = [aws_subnet.subnet.id, aws_subnet.subnet2.id]
    security_groups = [aws_security_group.security_group.id]
  }

  force_new_deployment = true

  placement_constraints {
    type = "distinctInstance"
  }

  triggers = {
    redeployment = timestamp()
  }

  capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.ecs_capacity_provider.name
    weight            = 100
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.ecs_tg.arn
    container_name   = "dockergs"
    container_port   = 80
  }

  depends_on = [aws_autoscaling_group.ecs_asg]
}

Run the terraform plan and apply commands to provision all the infrastructure defined so far.

In the next section, we will test whether the deployment was successful.
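Optionally, to avoid hunting through the AWS console for the ALB URL when testing, you can add an output for it:

```hcl
output "alb_dns_name" {
  description = "Public DNS name of the application load balancer"
  value       = aws_lb.ecs_alb.dns_name
}
```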

4. Testing the ECS deployment with Terraform

When the Terraform code is successfully applied, as confirmed from the terminal output, log in to the AWS account and check for the existence of the following resources.

It should have created the VPC, associated subnets, route table, and internet gateway, as shown in the resource map below.

[Screenshot: VPC resource map]

It should also have created two EC2 instances, as defined in the launch template, placed in the appropriate VPC subnets with the security group attached.

Confirm this by navigating to the EC2 dashboard.

[Screenshot: EC2 instances dashboard]

Navigate to EC2 > Load balancers. It should have created the Application load balancer (ALB), with appropriate VPC and subnet association, as shown in the screenshot below.

[Screenshot: Application Load Balancer configuration]

Navigate to EC2 > Autoscaling groups.

It should have created the autoscaling group with the attribute values we supplied in the resource configuration.

[Screenshot: Auto Scaling group]

Navigate to the Amazon ECS console. A cluster named “my-ecs-cluster” should have been created, as shown below.

[Screenshot: ECS cluster “my-ecs-cluster”]

This ECS cluster should have correctly associated capacity providers in the form of ASG and registered EC2 instances managed by that ASG, as shown in the screenshot below.

[Screenshot: ECS capacity provider with registered container instances]

Under the “Services” tab, a service with the configuration below should be created.

[Screenshot: ECS service configuration]

As defined in the configuration, the Service should trigger two tasks and thus create two container instances on separate EC2 instances, as shown in the screenshot below.

[Screenshot: two running tasks on separate EC2 instances]

Finally, to test that the application is being served as expected, navigate to EC2 > Load balancers.

Copy the ALB's DNS name and open it in a separate browser tab. It should serve the “Docker Getting Started” homepage, as shown in the screenshot below.

[Screenshot: Docker Getting Started homepage served via the ALB]

5. Cleaning up

The infrastructure deployed in this tutorial includes EC2 instances, an Application Load Balancer, and an Auto Scaling Group — all of which incur ongoing costs if left running.

Once you have finished testing, tear down all resources with a single command:

terraform destroy

Terraform will display a plan of everything it intends to remove and prompt for confirmation before proceeding. Review the list carefully, then type yes to confirm.

Note: If you receive an error during destroy related to the ECS capacity provider, you may need to deregister the container instances from the cluster manually in the AWS console before re-running terraform destroy. This is a known ECS/Terraform edge case when the ASG scales down before the ECS agent has deregistered the instances.

Why use ECS over EC2?

ECS and EC2 are both AWS services for running applications in the cloud. Although at a high level both host workloads, they serve different purposes and suit different types of workloads.

Some of the reasons for choosing ECS over EC2 are described below:

  • Containerization: ECS is designed for running containers either on Fargate or EC2 instances, which provide a scalable way to package and deploy applications. Containers offer better resource utilization, faster startup times, and improved isolation compared to traditional virtual machines used in EC2. If you’re using containerized applications, ECS provides a managed environment optimized for running and orchestrating containers.
  • Scalability and Elasticity: ECS allows you to easily scale your applications based on demand. It integrates with AWS Auto Scaling, which can automatically adjust the number of containers based on predefined scaling policies and placement constraints.
  • Orchestration: ECS provides built-in orchestration capabilities through integration with AWS Fargate or EC2 launch types. With Fargate, we don’t have to provision or manage any EC2 instances, as AWS takes care of the infrastructure for you. This is also a great option and allows us to focus solely on our applications. The EC2 launch type offers more control and flexibility if we prefer managing the underlying EC2 instances ourselves.
  • Monitoring: ECS provides a centralized management console and CLI tools for deploying and managing containers. It also integrates with AWS CloudWatch, allowing us to collect and analyze metrics, monitor logs, and set up alarms for our containerized applications. EC2 instances require separate management and monitoring configurations.
  • Cost Optimization: ECS can help optimize costs by allowing us to run containers on-demand without the need for permanent EC2 instances. With AWS Fargate, we pay only for the resources allocated to our containers while they are running, which is more cost-effective compared to maintaining a fleet of EC2 instances. Read more about AWS cost optimization.

It’s important to note that the choice between ECS and EC2 depends on our specific requirements, workload characteristics, and familiarity with containerization. While ECS offers benefits for container-based applications, EC2 still provides more flexibility for running traditional workloads that don’t require containerization.

How to manage Terraform resources with Spacelift

Terraform is powerful, but to build a truly secure, end-to-end infrastructure orchestration workflow, you need a platform purpose-built to run it. Spacelift takes infrastructure as code (IaC) management further by giving you a dedicated orchestration layer with governance, visibility, and developer self-service built in.

Key capabilities include:

  • Policy as code (Open Policy Agent) — Control how many approvals runs require, what kinds of resources engineers can create, what parameters those resources can have, and what happens when a pull request is opened or merged.
  • Multi-IaC orchestration — Combine Terraform with Kubernetes, Ansible, OpenTofu, Pulumi, and CloudFormation. Create dependencies between stacks and share outputs across them.
  • Governed developer self-service — Use Blueprints to build Golden Paths so teams can provision infrastructure through a simple, policy-backed form — no Terraform expertise required.
  • Third-party integrations — Integrate with your existing tools and build policies around them. Custom Inputs, for example, let you bring security scanners directly into your workflows.
  • AI-powered visibility — Spacelift Intelligence lets you query your infrastructure, stacks, and policies using natural language, so you can surface insights and troubleshoot faster without digging through dashboards.
  • Reusable configuration — Spacelift Templates let you define and share proven stack configurations across teams, reducing setup time and ensuring consistency at scale.
  • Two-path deployment model — Use IaC and GitOps for production workloads that demand rigor, and Intent for non-critical workloads like tests, POCs, and demos where speed matters more than ceremony.

Supply chain management platform Logixboard was a Terraform Cloud customer seeking a more reliable Terraform experience. By migrating from Terraform Cloud to Spacelift, they have slashed the time they spend troubleshooting deployments, freeing them for more productive work.


Spacelift also supports private workers, so you can execute workflows inside your own security perimeter. For configuration details, refer to the documentation.

Optionally, Spacelift can manage your Terraform state for you — offering a backend synchronized with the rest of the platform for convenience and security. You can also import existing state during stack creation, which makes migrating legacy configurations straightforward.

For a closer look at remote state management, see this blog post. To get started, create a free account or book a demo.

Key points

This article covered creating and managing an AWS ECS cluster with Terraform, including examples and Terraform ECS task definitions.

Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.


Frequently asked questions

  • What is the difference between ECS on EC2 and ECS on Fargate?

    With the EC2 launch type, you provision and manage the underlying EC2 instances yourself — giving you more control over instance type, pricing, and configuration. Fargate is serverless: AWS manages the compute entirely, and you only define CPU and memory at the task level. Fargate is simpler to operate; EC2 gives you more flexibility and is often more cost-effective at scale.

  • What is an ECS task definition in Terraform?

    A task definition is a blueprint for your container workload. It defines the container image, CPU and memory allocation, port mappings, IAM execution role, logging configuration, and the operating system platform. In Terraform, it is managed using the aws_ecs_task_definition resource. Every ECS service references a task definition to know what to run and how.

  • Do I need a load balancer to deploy ECS with Terraform?

    No, a load balancer is optional for getting an ECS service running. However, without one, you would need to access containers directly via EC2 instance IPs, which are not stable across deployments. For any real workload, an Application Load Balancer (ALB) is strongly recommended as it handles traffic distribution, health checks, and allows zero-downtime deployments.

