
What is AWS Fargate? Definition, Tutorial, Examples


With the growing popularity of containers and the increasing adoption of Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS) in the AWS ecosystem, Fargate has emerged as a game-changing service. Serverless computing is revolutionizing how applications are deployed and managed, and Fargate is at the forefront of this transformation, enabling us to focus on building our applications while it takes care of the underlying infrastructure for us.

In this article, we’ll dive into the capabilities, benefits, and security aspects of AWS Fargate and how it seamlessly integrates with EKS and ECS.

  1. What is AWS Fargate?
  2. What is serverless and why is it game-changing?
  3. AWS Fargate Features
  4. Benefits of AWS Fargate
  5. AWS Fargate Drawbacks
  6. AWS Fargate tutorial
  7. Resource right-sizing using AWS Fargate
  8. AWS Fargate security
  9. AWS Fargate pricing
  10. AWS Fargate vs. AWS EC2
  11. AWS Fargate vs. AWS Lambda

What is AWS Fargate?

AWS Fargate is a serverless compute engine offered by Amazon Web Services (AWS) for running containerized applications. It enables you to run containers without having to manage the underlying infrastructure, eliminating the complexity of administrative tasks.

Let’s understand that better.

Over the years, computing infrastructure has evolved significantly. We transitioned from using physical, bare metal servers to leveraging virtual machines in the cloud, managed by providers such as AWS. The next leap forward came with the introduction of containers, enabling us to run multiple applications in isolation on a single virtual server. Containers provided the benefit of creating applications once and deploying them anywhere. However, the challenge of managing, patching, and securing the underlying servers persisted.

Enter AWS Fargate. Fargate is a serverless containers-as-a-service (CaaS) offering from AWS. It functions as an abstraction layer above the hardware and operating system, allowing us to focus solely on developing and running our applications without the burden of handling operational tasks. That eliminates the associated overhead, freeing the user from the complexities of infrastructure management.

Fargate says: Focus on building your applications and let me worry about managing the infrastructure for you.

The introduction of AWS Fargate was game-changing, as it significantly lowered the barrier to entry for orchestration services like ECS and EKS. It eliminates the complexity of administrative tasks such as provisioning, configuring, patching, and securing the nodes that host our containers, making infrastructure management effortless, dare I say, a walk in the park 😉

Before we dive deeper into understanding even more about Fargate, let’s start with a quick introduction to serverless.

What is serverless and why is it game-changing?

Despite what the name suggests, “serverless” doesn’t mean there are no servers. Serverless still has servers, but they are abstracted from the user and managed by the provider. This abstraction allows users to concentrate on development rather than the routine tasks of administering server resources, managing operating systems, and scaling resources up and down.

The game-changing aspect of serverless technology lies in its ability to significantly reduce the barrier to deploying applications in the cloud. It achieves this by provisioning and managing resources on demand, eliminating the operational burden traditionally associated with managing servers.

An added benefit is that the provider takes care of provisioning the right-sized resources out of the box with on-demand auto-scaling of the resources.

Within the AWS ecosystem, AWS Fargate is simply one example of serverless. There are numerous others like AWS Lambda and even databases like Amazon Aurora. Later in the article, we will learn how AWS Fargate compares to AWS Lambda.

AWS Fargate Features

Here are some of the AWS Fargate key features:

  1. Flexible configurations
  2. Load Balancing
  3. Networking
  4. Resource-based pricing
  5. Monitoring and logging
  6. Auto-scaling
  7. Permission tiers

Benefits of AWS Fargate

AWS Fargate offers several advantages:

  1. Easy to get started
    Getting started with Fargate is simple, making it accessible for all levels of expertise. AWS handles most of the underlying infrastructure, allowing users to focus solely on application development.
  2. No operational overhead
    Fargate eliminates all overhead associated with infrastructure management on the data plane. Everything from setting up the right-sized servers to auto-scaling is taken care of by AWS.
  3. Autoscaling
    Fargate nodes automatically scale up or down in response to changes in deployed pods or tasks, ensuring optimal resource allocation.
  4. Security
    AWS Fargate is designed with a security-first approach. Pods or tasks are run in complete isolation which helps in eliminating threats and minimizing attack vectors.
  5. Resource right-sizing
    One of the major benefits of using AWS Fargate is that it right-sizes the resources required to run the pods automatically, so you only pay for the resources required.
  6. Cost
    AWS Fargate operates on the pay-as-you-go model, eliminating idle instances and reducing operational costs. Time and resources saved from not having to manage infrastructure further contribute to cost savings.
  7. Seamless monitoring
    Fargate provides seamless monitoring capabilities with integration to Amazon CloudWatch Container Insights. Third-party tools can also be used to gather metrics and logs.
  8. Integrates well with other AWS services
    Pods in AWS Fargate can seamlessly integrate and talk to other AWS services using Kubernetes service accounts.
  9. Guaranteed priority for pods on EKS
    Amazon EKS Fargate ensures that each pod runs on its dedicated node, avoiding pod eviction due to resource shortages. That guarantees priority for all pods.
  10. Compliance
    Managed services such as AWS Fargate shift much of the compliance burden to AWS, relieving users of responsibility for compliance work and documentation, which can save time and resources.

AWS Fargate Drawbacks

While AWS Fargate offers many benefits, it also has some limitations to consider:

  1. No DaemonSet support: DaemonSets are not supported on EKS Fargate, so functionality such as observability agents has to run in sidecar containers in each pod instead of directly on cluster nodes.
  2. Lack of image caching: Fargate does not support container image caching, which can result in longer pod startup times because images are not cached on the node. This can impact efficiency, especially for applications that create pods frequently. Follow the issue on GitHub for more updates.
  3. No privileged containers: Privileged containers are not supported on AWS Fargate. While this is a security measure, it can be limiting for use cases that require them, such as Docker in Docker.
  4. No host network access: Pods running on Fargate don’t have access to underlying host resources such as ports and the network, which means pods cannot specify hostPort or hostNetwork in the pod manifest.
  5. Limited configuration options: While Fargate’s zero-configuration approach is convenient, it can be a drawback when you require fine-grained control over node configuration.
  6. No GPU support: As of October 2023, AWS Fargate does not support GPU instances. Users with GPU-dependent workloads may need to explore alternative solutions. Follow the issue on GitHub for more updates.

It’s crucial to assess these drawbacks in the context of your specific use case and requirements to determine whether AWS Fargate is the right choice for your workload.

With this out of the way, let’s see AWS Fargate in action with ECS and EKS.

AWS Fargate tutorial

How to use Fargate with EKS?

Elastic Kubernetes Service (EKS) is a managed Kubernetes service to run Kubernetes on AWS. EKS automates the management of the Kubernetes control plane, with node creation, container runtime installation, and more, whilst also ensuring its availability and scalability.

However, before Fargate, users had no option but to take responsibility for setting up, configuring, and overseeing the worker nodes in the data plane, which comes with the operational overhead of updating, patching, and securing these nodes. AWS Fargate does a great job of addressing these challenges. Let’s look at how Fargate compares to the other data plane options available on EKS.

Self-managed nodes vs Managed node groups vs AWS Fargate

  1. Self-managed nodes
    The learning curve for setting up self-managed nodes is steep, as it requires the expertise to choose the right server type, number of CPUs, RAM, and other configurations. Moreover, users are burdened with securing, maintaining, and patching the operating system of the Amazon EC2 instances.
  2. Managed node groups
    While autoscaling and managed groups can notably reduce operational overhead, they still require users to decide the server type, size, and the number of instances upfront. Although AWS streamlines server management in this scenario, the responsibility still lies with the users.
  3. AWS Fargate
    Fargate, in contrast, eliminates the need for infrastructure configuration and management. It takes care of provisioning resources dynamically, incorporating auto-scaling capabilities. Fargate takes charge of every aspect, from creating worker nodes to the routine tasks that accompany them.

Here is a quick comparison between the options.

(Figure: comparison of self-managed nodes, managed node groups, and AWS Fargate. Source: AWS re:Invent)

Refer to the AWS page to learn more about the differences.

Using Fargate with EKS is pretty straightforward. It involves the creation of a new profile to configure AWS Fargate for running pods within the EKS cluster.

Fargate Profile

A Fargate profile is essentially a configuration that determines which pods should run on Fargate infrastructure based on the selectors configured within the profile. We have the flexibility to configure a profile to run all pods from a specific namespace or only the ones having specific labels.
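As an illustration, a profile selecting pods in the default namespace could be declared with an eksctl ClusterConfig like the sketch below. The cluster name, region, and label are placeholders, not values from this article.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster      # placeholder cluster name
  region: us-east-1     # placeholder region
fargateProfiles:
  - name: fp-default
    selectors:
      # Pods in the "default" namespace carrying this label run on Fargate
      - namespace: default
        labels:
          env: fargate   # optional; omit to match the entire namespace
```

Pods that don’t match any profile selector fall through to the other data plane options configured for the cluster.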

The picture below depicts examples of profiles that run pods within the default and kube-system namespaces, respectively.

(Figure: example Fargate profiles selecting pods from the default and kube-system namespaces)

Fargate profiles are crucial as they make it possible for Fargate to be used in a mixed mode alongside other options of self-managed or managed nodes. They provide us with the control to choose which pods to run on the Fargate infrastructure and which ones to direct to other options when using multiple options concurrently.

EKS leverages selectors in the Fargate profile to decide between the fargate-scheduler and the default-scheduler for scheduling pods.

(Figure: pod scheduling workflow with the fargate-scheduler. Source: AWS re:Invent)

It is possible to check which scheduler handled a pod under the Events section of the kubectl describe output.

kubectl describe -n <namespace> pods <pod-name>
(Screenshot: Events section of the kubectl describe output)

The above output shows that the given pod was scheduled by the fargate-scheduler, which means it will run on the Fargate infrastructure.

Running pods on AWS Fargate infrastructure

If you’re keen on configuring EKS with Fargate and experiencing it firsthand, you can begin by following the AWS guide to kickstart your setup. Remember to set up a Fargate profile before deploying pods.

The YAML snippet provided below defines an nginx deployment with three replicas in the default namespace. As we deploy it, we expect Fargate to take over the creation of nodes to execute these pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80

We can deploy the nginx deployment using the following command:

kubectl apply -f nginx-deployment.yaml

Note: In case you are following along, remember to set up a Fargate profile to schedule pods from the default namespace before applying the deployment.

Let’s observe the rollout status and wait for the deployment to be updated.

kubectl rollout status deployment/nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "nginx-deployment" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "nginx-deployment" rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx-deployment" successfully rolled out

Once the deployment is rolled out successfully, let’s verify whether Fargate provisioned nodes for running these pods. We can check for existing worker nodes with the following command.

$ kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
fargate-    Ready    <none>   4m24s   v1.27.1-eks-2f008fe
fargate-    Ready    <none>   4m21s   v1.27.1-eks-2f008fe
fargate-    Ready    <none>   4m19s   v1.27.1-eks-2f008fe

Great! Fargate successfully provisioned new nodes to run the deployed nginx workload. You can also confirm this in the EKS console by navigating to the “Compute” tab.

(Screenshot: Fargate nodes under the Compute tab in the EKS console)

An intriguing detail worth noting here is that Fargate assigns a dedicated node for each pod. This is important for security reasons and we will learn more about this later in the article.

Another detail to note here is that all nodes created by Fargate share the prefix “fargate-” followed by their IP addresses. As these nodes are internally managed by AWS, they do not appear under instances in the EC2 console.


Next, let’s explore how Fargate handles autoscaling.

How to use AWS Fargate autoscaling?

As per the official Kubernetes documentation, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), to automatically scale the workload to match demand.

It’s essential to understand that Fargate is an on-demand serverless compute and not a container orchestrator. As a result, it does not directly influence the scaling of pods. Instead, it acts in response to changes in the number of pod instances made by HorizontalPodAutoscaler to create or delete nodes.
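For illustration, a HorizontalPodAutoscaler targeting the nginx-deployment might look like the sketch below; the CPU threshold and replica bounds are arbitrary values, not taken from this article.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10          # illustrative upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # illustrative threshold
```

As the HPA adds or removes pods, Fargate creates or deletes the corresponding nodes behind the scenes.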

Let’s explore how Fargate responds when pods need to scale up or down. We’ll simulate this using the kubectl scale command to increase the number of pod instances in the nginx-deployment from three to five.

kubectl scale -n default deployment/nginx-deployment --replicas=5

Observe the rollout status and wait for the deployment to be updated.

kubectl rollout status deployment/nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 3 of 5 updated replicas are available...
Waiting for deployment "nginx-deployment" rollout to finish: 4 of 5 updated replicas are available...
deployment "nginx-deployment" successfully rolled out

Next, we’ll verify if AWS Fargate responded to the pod scaling by creating new nodes.

$ kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
fargate-    Ready    <none>   1m25s   v1.27.1-eks-2f008fe
fargate-    Ready    <none>   1m40s   v1.27.1-eks-2f008fe
fargate-    Ready    <none>   11m     v1.27.1-eks-2f008fe
fargate-    Ready    <none>   11m     v1.27.1-eks-2f008fe
fargate-    Ready    <none>   11m     v1.27.1-eks-2f008fe

We observe that Fargate automatically scaled up the number of nodes to accommodate new pods. What happens when we delete this deployment?

$ kubectl delete deployment nginx-deployment
deployment.apps "nginx-deployment" deleted

$ kubectl get nodes

Fargate automatically cleans up all nodes running the deployment once the deployment is deleted.

In this section, we learned about Fargate’s automatic resource provisioning in response to pod autoscaling.

Next, let’s look into how to use Fargate with ECS.

How to use AWS Fargate with ECS?

ECS is a fully managed container orchestration service designed to handle the complete lifecycle of containerized applications, including deployment, management, and scaling. It seamlessly integrates with AWS Fargate right out of the box.

We will get started by creating an ECS cluster on the AWS Fargate infrastructure.

When creating a new cluster, we have three options to choose from, including AWS Fargate for serverless deployments, Amazon EC2 instances, or External instances using ECS Anywhere. Additionally, we have the flexibility to customize our infrastructure by selecting multiple options simultaneously. In our case, we will opt for the AWS Fargate option.

The AWS Fargate option comes with the pay-as-you-go billing model and requires zero maintenance overhead.

(Screenshot: ECS cluster creation with the AWS Fargate infrastructure option)

After successfully creating the cluster, you’ll observe that the count of registered container instances is 0, signifying no explicitly provisioned EC2 instances are associated with this cluster.

(Screenshot: ECS cluster overview showing 0 registered container instances)

We’ll put the newly created Fargate cluster to the test by creating a new task definition. In this task definition, we’ll set the launch type to AWS Fargate and run a basic nginx task. Refer to the AWS guide to learn more about creating task definitions on ECS.
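As a sketch of what such a task definition could contain, the JSON below registers a minimal nginx task compatible with Fargate; the family name and sizing are placeholders, and in practice an execution role is typically also attached for image pulls and logging.

```json
{
  "family": "nginx-fargate",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx:1.14.2",
      "essential": true,
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```

Note that Fargate tasks must use the awsvpc network mode and declare task-level cpu and memory values.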

(Screenshot: ECS task definition with the AWS Fargate launch type)

Next, we will create a service with Fargate as the capacity provider. This service will be responsible for running two instances of the task definition on the Fargate ECS cluster.

(Screenshot: ECS service creation with Fargate as the capacity provider)

Note: We have the option to choose Fargate Spot or a combination of both Fargate and Fargate Spot. Fargate Spot is a feature within AWS Fargate that allows you to run tasks that can tolerate interruptions at a discounted rate of up to 70% compared to standard Fargate pricing.
It’s worth mentioning that as of October 2023, Fargate Spot is available for use with ECS but is not yet supported within EKS. For further updates regarding the inclusion of this feature in EKS, you can refer to the issue tracking page.

If things go as expected, we will observe that the tasks have been successfully picked up and deployed on the resources provisioned by AWS Fargate.

(Screenshot: ECS tasks running on Fargate-provisioned resources)

When using AWS Fargate with ECS and EKS, we noticed that instances of the worker nodes are spun up on demand. Regardless of the number of pods or tasks, Fargate adeptly manages the creation of worker nodes.

AWS Fargate takes care of spinning instances on demand, but how does it allocate the right-sized resources? Let’s explore this in detail.

Resource right-sizing using AWS Fargate

One of the major benefits of using AWS Fargate is that it right-sizes the resources the pods or tasks run on automatically, ensuring we only pay for the resources we actually require.

But how does AWS Fargate know what is the right size?

Let’s specifically examine the scenario of using AWS Fargate with EKS.

Kubernetes provides the ability to define resource requests and limits for pods. AWS Fargate leverages these specifications to create instances of the right size that the pods will run on. This dynamic resource allocation ensures optimal utilization and cost-efficiency.

(Figure: resource right-sizing based on pod resource requests. Source: AWS re:Invent)

Fargate calculates the resource size based on the resource requests for init and long-running containers. The vCPU and memory are calculated separately as follows:

For vCPU (CPU units) calculation:

  • Determine the maximum request among all init containers for vCPU.
  • Calculate the sum of requests for long-running containers for vCPU.
  • Select the larger value between the two as the vCPU allocation.

For memory calculation:

  • Find the maximum request among all init containers for memory.
  • Calculate the sum of requests for long-running containers for memory.
  • Choose the larger value between the two as the memory allocation.
  • Fargate additionally adds 256MB to each pod for Kubernetes components (kubelet, kube-proxy, and containerd)

It’s important to note that Fargate considers resource requests but not limits in this calculation. The reason behind this is that Amazon EKS Fargate runs only one pod per node, eliminating the need for eviction scenarios due to insufficient resources. All Amazon EKS Fargate pods operate with guaranteed priority, so the requested CPU and memory must be equal to the limit for all of the containers.

Lastly, Fargate adjusts the calculated resource configuration to closely match one of the available combinations for running pods on Fargate, ensuring efficient resource utilization.

(Table: available AWS Fargate vCPU and memory combinations)

Note: If no resource configurations are provided, the smallest combination (0.25 vCPU and 0.5 GB memory) is used.
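Putting the rules above together, here is a quick sketch of the calculation for a hypothetical pod with one init container requesting 500m CPU / 1024Mi memory and two long-running containers each requesting 250m CPU / 512Mi memory (all numbers are made up for illustration):

```shell
# Hypothetical requests, in millicores and MiB
init_cpu=500;  app_cpu=$((250 + 250))
init_mem=1024; app_mem=$((512 + 512))

# vCPU: the larger of (max init request) and (sum of long-running requests)
cpu=$(( init_cpu > app_cpu ? init_cpu : app_cpu ))

# Memory: same rule, plus 256 MiB for Kubernetes components
mem=$(( (init_mem > app_mem ? init_mem : app_mem) + 256 ))

echo "calculated: ${cpu}m CPU, ${mem}Mi memory"
```

Fargate would then round this result up to the nearest supported combination, here presumably 0.5 vCPU with 2 GB of memory.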

At this point, we are in the right position to talk about one of the most important factors to consider when choosing to use any infrastructure: SECURITY!

AWS Fargate security

AWS Fargate has been designed with security in mind. It can be argued that using AWS Fargate is more secure than managing your own nodes, especially if you lack the expertise required to manage nodes and handle the routine tasks that come with them, such as patching, security upgrades, and more.


By default, all workloads on AWS Fargate run in an isolated virtual environment. That means resources such as kernel, network interfaces, ephemeral storage, CPU, and memory are not shared with other pods or tasks.

It’s worth noting that containers within the same pod or task run on the same host and do share underlying resources such as the kernel, network interfaces and more.

This pod isolation plays a pivotal role in security by containing potential attacks. It limits any lateral movement beyond the boundaries of the pod, effectively reducing the scope of any security breach. This approach ensures that an attacker who gains access to one compromised pod is unable to traverse to other pods, enhancing overall security and mitigating potential threats.

AWS prioritizes security and employs various comprehensive measures to harden the AWS Fargate infrastructure, making it a robust choice for secure container deployments:

  • No privileged containers or access: AWS Fargate does not allow privileged containers, which possess root capabilities on the host machine. This restriction helps prevent attackers from gaining complete access to the host machine.
  • Limited access to Linux capabilities: Fargate restricts certain Linux capabilities that could potentially lead to container breakout. It carefully controls which Linux capabilities can be used by containers running on the Fargate infrastructure, enhancing overall security. Here are the ones that are available to use.
  • No access to underlying host: Pods or tasks running on Fargate are not allowed to access the underlying host’s resources, including the filesystem, devices, networking, and container runtime. Furthermore, it is not even possible to directly connect to any of the hosts created by Fargate as they are managed internally by AWS.
  • Networking security: AWS Fargate offers network security features such as security groups and network ACLs to manage inbound and outbound traffic for pods or tasks. For additional security, EKS network segmentation at the pod level can be done using security groups for pods, allowing fine-grained control over traffic to and from pods.
  • Patching and security updates: AWS Fargate assumes responsibility for the nodes it manages, including the automated patching and management of security updates. This shift in responsibility from the consumer to AWS ensures that nodes remain up-to-date and secure.
  • Storage security
    AWS Fargate supports two types of storage:
    • Ephemeral storage
    • Amazon EFS volumes
    Pods or tasks launched on ephemeral storage have server-side encryption by default, starting from platform version 1.4, and EFS allows encrypting the volumes to secure the data.
  • Enhancing Fargate security: AWS Fargate offers integration with third-party security tools to enhance security further:
    • Aqua Security: The Aqua Cloud Native Security platform supports container security for AWS Fargate and provides automated security and compliance solutions, along with real-time threat prevention.
    • Palo Alto Networks: Palo Alto Networks offers features like vulnerability scanning and runtime scanning for tasks, contributing to improved security posture.
    • Sysdig: Sysdig provides real-time container visibility and runtime security, featuring swift threat detection and response strategies. Furthermore, it enhances observability through performance and health monitoring, contributing to a comprehensive view of your environment.
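As a sketch of the pod-level segmentation mentioned above, a SecurityGroupPolicy from the EKS security-groups-for-pods feature can attach a security group to labeled pods; the namespace, label, and group ID below are placeholders.

```yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: nginx-sg-policy
  namespace: default
spec:
  # Applies to pods matching this label selector
  podSelector:
    matchLabels:
      app: nginx
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0   # placeholder security group ID
```

The referenced security group’s inbound and outbound rules then govern traffic to and from the matching pods.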

AWS Fargate pricing

AWS Fargate operates on the pay-as-you-go model, meaning there are no upfront costs, and you are billed solely for the compute resources your workload consumes, not for any idle time.

As we saw earlier, AWS Fargate optimizes resource provisioning to eliminate over or undersized resources, thus contributing to cost savings.

The pricing is determined based on the vCPU, memory, operating system, CPU architecture, and storage resources utilized, starting from the moment you begin downloading your container image until the tasks or pods terminate. Billing is rounded up to the nearest second.

Let’s compare on-demand pricing for a t2.large Linux instance (two vCPUs and 8 GB of memory) in us-east-1 against equivalent Fargate capacity in the same region.

As of October 2023, the cost of a t2.large EC2 instance in us-east-1 stands at $0.0928 per hour. For an equivalent Fargate task in the same region, the calculation is as follows: ($0.04048 x 2 vCPU) + ($0.004445 x 8GB) = $0.08096 + $0.03556 = $0.11652 per hour.
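That arithmetic can be sanity-checked with a short script. The rates are the October 2023 figures quoted above; check the current AWS pricing pages before relying on them.

```shell
# On-demand rates, us-east-1, Linux/x86 (October 2023):
# EC2 t2.large: $0.0928/hr; Fargate: $0.04048 per vCPU-hr, $0.004445 per GB-hr
ec2_hourly=0.0928
fargate_hourly=$(awk 'BEGIN { printf "%.5f", 0.04048 * 2 + 0.004445 * 8 }')
echo "EC2: \$${ec2_hourly}/hr  Fargate: \$${fargate_hourly}/hr"
```

Dividing the two rates gives the roughly 25% premium discussed below, which applies only to always-on, fully utilized capacity.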

In this example, Fargate costs 25% more than EC2. However, it’s important to note that cost comparisons can vary depending on the generations of compute used.

Additionally, Fargate offers savings by not charging for idle capacity, a cost incurred with EC2. Savings also manifest in the form of time saved by eliminating operational burden and the associated “cost of ownership” tied to resources. Routine management tasks require human resources, which can be costly.

A case study featuring Samsung demonstrates the effectiveness of AWS Fargate, achieving the required availability and reliability for their portal while reducing monthly costs by approximately 44.5% (compute cost only). That showcases the potential for substantial savings when adopting Fargate.

Let’s now explore how AWS Fargate compares to AWS EC2 and AWS Lambda.

AWS Fargate vs. AWS EC2

Even though AWS Fargate and EC2 are both compute services, they differ significantly. Here’s a quick comparison between them:


Definition

AWS EC2 instances are virtual machines on AWS that offer secure and resizable compute capacity for virtually any workload.

AWS Fargate is a containers-as-a-service infrastructure that runs containerized applications within pods or tasks.

AWS EC2 is an independent service whereas Fargate is used in conjunction with services like ECS, EKS or AWS Batch.

Abstraction Level

Fargate is a serverless compute engine dedicated to running containers, abstracting away all infrastructure management. EC2 provides virtual machines with complete control over the underlying infrastructure and operating system.


Flexibility

EC2 allows complete control over the infrastructure, letting you choose the number of CPUs, memory, network interfaces, operating system, and more. AWS Fargate offers limited flexibility and lacks user control over the underlying infrastructure and operating system.

Pricing Model

Both AWS Fargate and AWS EC2 charge based on the number of vCPU and memory.

AWS Fargate charges only for the time the resources are used and not for any idle time. Conversely, AWS EC2 charges for all the time the instances are running even when the instances are idle and not running any workload.

Operational Burden

EC2 instances come with a lot of responsibilities like provisioning, configuration, patching, and more.

Fargate trades flexibility with a reduced operational burden. Operational tasks such as provisioning, scaling, and maintenance are taken care of by AWS.


Security

EC2 requires you to be responsible for securing, upgrading, and patching instances on a routine basis. On the other hand, AWS is responsible for the security of the Fargate infrastructure.


Scalability

Fargate automatically scales the infrastructure in response to the number of pods or tasks, making it ideal for variable workloads. EC2 relies on the manual configuration of auto-scaling groups for scaling based on the demand.

Use Cases

Fargate is well-suited for containerized workloads, offering ease of use and efficient resource management. EC2 accommodates a broad range of workloads, making it suitable for use cases requiring more control and configuration.

AWS Fargate vs. AWS Lambda

While both AWS Fargate and AWS Lambda fall under the category of serverless services, they differ significantly from each other in various aspects.


AWS Fargate is a container as a service (CaaS), whereas AWS Lambda is a function as a service (FaaS).

CaaS services, as the name suggests, provide container management as a service, encompassing the deployment, creation, and lifecycle management of containers and containerized workloads.

FaaS services run and manage applications as functions, abstracting away the underlying hardware. Functions are event-driven and triggered in response to events.

(Figure: CaaS vs. FaaS abstraction levels)

CaaS services offer a higher level of control over containers and runtime, as they abstract over the hardware and operating system.

FaaS services typically limit users’ ability to manage the runtime of functions. However, AWS Lambda stands out by offering flexibility that enables users to create custom lambda runtimes.

Direct comparison

AWS Fargate falls somewhere between EC2 and AWS Lambda. EC2 offers the highest level of configurability but carries the most operational burden, while AWS Lambda provides the least configurability but the least operational burden.

AWS Fargate leans more towards AWS Lambda, as both are serverless.

(Figure: configurability vs. operational burden across EC2, Fargate, and Lambda)

The additional complexity with AWS Fargate, when compared to AWS Lambda, arises from the need to customize the runtime by creating container images for the workloads. In contrast, AWS Lambda offers a wide array of runtime options to directly pick from when deploying applications.

It’s important to note that AWS Lambda is a standalone AWS service, whereas AWS Fargate cannot be used independently and must be integrated with services like ECS, EKS, or AWS Batch.

Use cases

AWS Lambda is straightforward to use due to minimal configuration options, resulting in a lower operational burden. It excels in scalability and event-driven scenarios, making it an ideal choice for asynchronous tasks. 

However, Lambdas are not suitable for long-running background tasks and have a maximum timeout of 900 seconds (15 minutes) as of October 2023.

AWS Fargate, in contrast, supports both long-running and short-running use cases. Applications deployed on AWS Fargate can be event-driven or run continuously in the background. It offers flexibility for various workload types.

A great use case for AWS Lambda could be image compression before storing images in Amazon S3.

On the other hand, AWS Fargate is well-suited for running a high request frequency HTTP server for an e-commerce application.

It’s important to note that these examples are not mutually exclusive, and can be implemented using either AWS Lambda or AWS Fargate based on the specific requirements and use case.


Latency

Latency is a critical factor to take into account when deploying any application.

When deploying an application on AWS Lambda, it’s essential to consider and plan for cold starts. Lambdas typically start quickly, within a few seconds.

If low latency is crucial and the function execution frequency is relatively low, you can use provisioned concurrency to maintain some instances of the function in a warm state. Keep in mind that provisioned concurrency comes with an additional cost.
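As a hedged sketch, provisioned concurrency is configured per published version or alias (never `$LATEST`). The function and alias names below are illustrative; the helper only builds the request parameters, which you would pass to boto3's `put_provisioned_concurrency_config`:

```python
# Sketch: keep N warm instances of a function's alias.
# Function and alias names are illustrative placeholders.

def provisioned_concurrency_params(function_name: str,
                                   alias: str,
                                   warm_instances: int) -> dict:
    # Provisioned concurrency applies to a published version or
    # alias, never to $LATEST.
    return {
        "FunctionName": function_name,
        "Qualifier": alias,
        "ProvisionedConcurrentExecutions": warm_instances,
    }

# With AWS credentials configured, you would then call:
#   boto3.client("lambda").put_provisioned_concurrency_config(**params)
params = provisioned_concurrency_params("checkout-api", "live", 5)
```

Keeping the parameter construction separate makes it easy to review exactly how many warm instances (and therefore how much extra cost) you are committing to.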

In the case of AWS Fargate, launching a new Fargate task or pod takes longer than starting a Lambda execution environment. However, once it’s up and running, it serves requests continuously, so per-request cold starts are not a concern the way they are with Lambda.

Scalability
Both AWS Lambda and AWS Fargate scale automatically.

AWS Lambda manages availability and scaling implicitly: by default a function can scale up to 1,000 concurrent executions per region (a soft limit that can be raised) and scales back down to zero within seconds when traffic stops.

AWS Fargate scales by adding or removing ECS tasks or EKS pods, typically driven by ECS Service Auto Scaling or the Kubernetes Horizontal Pod Autoscaler, making it suitable for dynamic workloads.

Pricing
AWS Fargate pricing is determined by the vCPU, memory, operating system, CPU architecture, and storage resources used, metered from the moment you begin downloading your container image until the Amazon ECS task or Amazon EKS pod terminates. Pricing is rounded up to the nearest second, with a one-minute minimum charge.

AWS Lambda pricing is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest millisecond. The price depends on the amount of memory you allocate to your function.

A comprehensive cost comparison is beyond the scope of this article. In general, AWS Lambda provides better cost per request when the load is low with less frequent requests. On the other hand, AWS Fargate becomes cost-effective when dealing with high loads and frequent requests. The choice depends on your specific workload and usage patterns.
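To make that trade-off concrete, here is a rough back-of-the-envelope model. The prices are illustrative us-east-1 figures at the time of writing (Lambda: $0.20 per million requests plus roughly $0.0000166667 per GB-second; Fargate Linux/x86: roughly $0.04048 per vCPU-hour plus $0.004445 per GB-hour), and the model ignores the free tier; always check the current AWS pricing pages:

```python
# Back-of-the-envelope monthly cost comparison.
# Prices are illustrative us-east-1 figures and will drift over time;
# the point is the shape of the curves, not the exact numbers.

LAMBDA_PER_MILLION_REQS = 0.20        # USD per 1M requests
LAMBDA_PER_GB_SECOND = 0.0000166667   # USD per GB-second
FARGATE_PER_VCPU_HOUR = 0.04048       # USD per vCPU-hour
FARGATE_PER_GB_HOUR = 0.004445       # USD per GB-hour

def lambda_monthly_cost(reqs_per_month, avg_duration_ms, mem_gb):
    # Lambda cost grows linearly with request volume.
    gb_seconds = reqs_per_month * (avg_duration_ms / 1000.0) * mem_gb
    return (reqs_per_month / 1_000_000 * LAMBDA_PER_MILLION_REQS
            + gb_seconds * LAMBDA_PER_GB_SECOND)

def fargate_monthly_cost(vcpu, mem_gb, hours=730):
    # One always-on task costs a flat amount per month,
    # regardless of request volume.
    return hours * (vcpu * FARGATE_PER_VCPU_HOUR
                    + mem_gb * FARGATE_PER_GB_HOUR)

flat = fargate_monthly_cost(0.25, 0.5)              # smallest task size
low = lambda_monthly_cost(100_000, 100, 0.5)        # light traffic
high = lambda_monthly_cost(100_000_000, 100, 0.5)   # heavy traffic
```

Under these assumptions, Lambda is cheaper than the flat Fargate cost at light traffic and overtakes it at heavy traffic, which is the intuition behind the break-even reasoning above.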

Integration with other AWS services
Both AWS Lambda and AWS Fargate integrate seamlessly with other AWS services.

Applications running within Lambda functions can communicate with other AWS services using the Lambda execution role.

Applications running on Fargate can assume an AWS IAM role that grants the necessary permissions: on Amazon ECS this is the task IAM role, while on Amazon EKS a Kubernetes service account can be associated with an IAM role (IAM Roles for Service Accounts). This ensures smooth interoperability with various AWS services.
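On EKS, associating a service account with an IAM role is done with an annotation on the service account. A sketch, where the name, namespace, and role ARN are placeholders:

```yaml
# Illustrative only: name, namespace, and role ARN are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-pod-role
```

Pods that use this service account (with the cluster’s OIDC provider configured) receive temporary credentials for the annotated role.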

Monitoring and observability

AWS Lambda automatically sends invocation metrics, errors, and logs to Amazon CloudWatch by default. For deeper per-function system metrics and dashboards, you can additionally enable CloudWatch Lambda Insights.

For applications running on Amazon EKS or Amazon ECS with AWS Fargate, you can utilize AWS Distro for OpenTelemetry (ADOT). ADOT collects system metrics and transmits them to CloudWatch Container Insights dashboards, providing comprehensive monitoring and observability capabilities.

Key Points

AWS Fargate stands out as a powerful serverless compute service that simplifies the deployment and management of containerized applications, enabling teams to focus solely on their application logic.

Its benefits, including ease of use, automatic resource provisioning, scalability, security, and cost-effective pay-as-you-go model, make it a great choice for a wide range of use cases.

Here you can learn more about self-hosting Spacelift in AWS to ensure your organization’s compliance, control ingress, egress, internal traffic, and certificates, and gain the flexibility to run within GovCloud. You can also explore Spacelift’s integration with AWS in our Cloud Integrations section, along with our update to support account-level AWS integrations.

The Most Flexible CI/CD Automation Tool

Spacelift is an alternative to using homegrown solutions on top of a generic CI. It helps overcome common state management issues and adds several must-have capabilities for infrastructure management.

