The cloud has changed how organizations build and deploy applications and introduced new security challenges. While Amazon Web Services (AWS) provides powerful security capabilities, it’s difficult to implement them effectively, especially when transitioning from traditional on-premises environments where effective security can look quite different.
This guide aims to provide a focused introduction to AWS security and give you actionable steps for protecting your cloud resources. You’ll learn how AWS’s shared responsibility model fundamentally changes your security approach, discover the most critical risks facing AWS cloud environments today, and get practical strategies to address them.
What we’ll cover:
- AWS cloud security basics
- Common AWS security risks
- AWS cloud security best practices
- AWS security tools and services
As organizations move their operations and workflows to major cloud providers like AWS, they often struggle to shift their mindset and security approaches from traditional infrastructure to cloud-native environments.
This creates significant security risks in cloud environments, which differ fundamentally from traditional setups — a single security misconfiguration can expose systems that would normally be protected by multiple security layers and private networks.
In the cloud, security problems can quickly affect multiple interconnected services, making incidents potentially more serious than traditional local area network breaches.
The costs add up quickly, too, from handling the incident and paying fines (like GDPR penalties that can reach 4% of global revenue) to dealing with legal issues and lost business due to damaged reputation.
Recent data shows that the average breach costs $4.45 million, up 10% from the previous year, and regulated industries face even higher costs. Understanding cloud security is essential if you're planning to migrate your workloads to AWS or expand existing ones.
On that note, let’s talk more about how AWS security is different from traditional approaches, which you might be more familiar with.
What is AWS cloud security?
AWS cloud security is a set of tools, practices, and policies provided by Amazon Web Services to protect cloud-based infrastructure, data, and applications. It includes features like identity and access management (IAM), encryption, network security, threat detection, and compliance monitoring. AWS offers shared responsibility, where AWS secures the infrastructure, and customers are responsible for securing their data and workloads.
Effective cloud security requires implementing multiple defense layers through native AWS security features alongside complementary third-party solutions to protect your data, applications, and infrastructure.
These defense layers can include anything from network security controls to workload monitoring, data encryption, access management policies, and even regulatory compliance frameworks.
To understand which layers we need to protect, we first need to explore arguably the most important cloud concept (and the aspect that initially trips up most organizations): how security responsibilities are shared between AWS and its customers.
Understanding the shared responsibility model
AWS security works through what’s called a shared responsibility model. This means both AWS and you (the customer) have specific security responsibilities. While this might seem complex at first, it’s actually straightforward.
Think of it this way: AWS handles security “of” the cloud (the basic infrastructure), while you manage security “in” the cloud (how you use AWS services).
What is the difference between traditional on-premise IT security and AWS security?
The primary difference between traditional on-premise IT security and AWS security lies in responsibility ownership. In an on-premise setup, the organization is fully responsible for all aspects of security, including physical infrastructure, networking, software, and data protection.
With AWS, security responsibilities are shared between the customer and AWS, and the division depends on the type of service used: infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS).
Understanding the following primary threats can help you prioritize your security efforts effectively and better understand cloud-related risks.
- Compromised access credentials
- Excessive access permissions
- Public or misconfigured S3 buckets
- Firewall and networking misconfigurations
- Poor encryption practices
- Inadequate logging, monitoring, and threat detection
- Outdated systems and software
- Shadow resources
- Lack of backup and recovery planning
- Third-party security risks
Stolen access credentials continue to pose a major security risk in AWS environments. When threat actors obtain passwords or long-term access keys that are used to authenticate against the AWS API, they can access whatever those credentials provide permissions for. This can lead to:
- Unauthorized access to sensitive data and resources
- Unexpected costs from the misuse of resources
- Disruption of critical business services
How does that happen, though?
Some of the most common ways credentials get stolen include:
- Phishing attacks
- Malware that steals login information
- Accidentally exposed credentials in code
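That last failure mode is preventable with automated scanning. Below is a minimal sketch of the kind of check that pre-commit tools such as git-secrets perform: the access key ID pattern (`AKIA`/`ASIA` plus 16 characters) is documented by AWS, while the secret key pattern is a heuristic that can produce false positives.

```python
import re

# AKIA/ASIA-prefixed access key IDs follow a documented format; the
# secret key pattern (40 base64-ish characters) is only a heuristic.
ACCESS_KEY_RE = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")
SECRET_KEY_RE = re.compile(r"(?i)aws_secret_access_key\s*[=:]\s*['\"]?([A-Za-z0-9/+=]{40})")

def find_exposed_credentials(source: str) -> list:
    """Return a list of findings for AWS credentials hardcoded in source text."""
    findings = []
    for match in ACCESS_KEY_RE.finditer(source):
        findings.append(f"access key ID at offset {match.start()}")
    for match in SECRET_KEY_RE.finditer(source):
        findings.append(f"secret access key at offset {match.start()}")
    return findings

snippet = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"\n'  # AWS's documented example key
print(find_exposed_credentials(snippet))  # one finding: the access key ID
```

Running a check like this in CI or as a pre-commit hook catches credentials before they ever reach a repository, which is far cheaper than rotating keys after exposure.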
The principle of least privilege (PoLP) is essential for cloud security. This principle states that users and services should only have the minimum permissions to do their jobs.
Using least privilege IAM policies helps block unauthorized access and reduces potential damage if there’s a security breach, but it’s challenging to set up proper IAM permissions.
Because fine-tuning access rights is complex, it's easy to take risky shortcuts, like giving users too much access or attaching the AdministratorAccess policy. Worse, the person granting permissions may not fully understand the access they're providing and believe they're still following least privilege.
All of this creates two main risks:
- Insider threats from users who have more access than they need
- Greater potential damage if threat actors steal credentials with high levels of access
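To make the contrast concrete, here's a minimal sketch comparing an overly broad policy with a scoped one, written as Python dicts in the IAM policy document shape. The bucket name is hypothetical, and the checker is a toy heuristic, not a real IAM policy evaluator:

```python
# An admin-like policy: any action on any resource.
ADMIN_LIKE = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# A least-privilege policy: two S3 actions on a single (hypothetical) bucket.
LEAST_PRIVILEGE = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/*",
    }],
}

def has_wildcard_grant(policy: dict) -> bool:
    """Flag Allow statements that grant every action on every resource."""
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        actions = stmt["Action"]
        if isinstance(actions, str):
            actions = [actions]
        if "*" in actions and stmt.get("Resource") == "*":
            return True
    return False

print(has_wildcard_grant(ADMIN_LIKE))       # True
print(has_wildcard_grant(LEAST_PRIVILEGE))  # False
```

Services like IAM Access Analyzer do this kind of analysis properly and at scale, but even a simple lint like this in your IaC pipeline catches the worst shortcuts.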
Many of the news articles you’ll see about AWS breaches will be from these first two risks, and they’ll go something like this:
- A threat actor gets an access key or login credentials.
- They discover excessive access permissions and a privilege escalation path.
- They elevate privileges (if even needed) and create a back door.
- They execute their attack.
Amazon S3 is one of AWS’s best-known services. It stores all sorts of data, including critical and sensitive data, which makes it a juicy target for threat actors.
While storage buckets start out private when you first create them, misconfigurations can make them publicly accessible. This typically happens when:
- Bucket permissions are set too broadly
- Access control settings aren’t properly configured or are disabled
When storage buckets aren’t properly secured, data breaches occur. Unauthorized users can then access private information and potentially exfiltrate it or hold it for ransom.
In some cases, threat actors can even use an organization’s buckets to store malware that they can then distribute. For example, if you store executables in your buckets and someone can access them, they could replace them with a version infected by their malware and continue serving those to your customers.
Despite AWS making storage buckets more secure by default, data leaks still happen because of poor security practices. It’s still far too common for companies to face serious security incidents after leaving their S3 data exposed.
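One common cause of such exposure is a bucket policy that grants access to any principal. The sketch below shows the pattern to look for; the bucket name is hypothetical, and a real audit would also need to consider ACLs, conditions, and access points:

```python
def allows_public_access(bucket_policy: dict) -> bool:
    """Flag Allow statements that grant access to any principal ("*")."""
    for stmt in bucket_policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        if principal == "*" or (isinstance(principal, dict) and principal.get("AWS") == "*"):
            return True
    return False

public_policy = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",  # anyone on the Internet, authenticated or not
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }]
}
print(allows_public_access(public_policy))  # True
```

This is essentially what S3 Block Public Access and IAM Access Analyzer check for you automatically, which is why they appear in the best practices later in this article.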
Firewalls can work differently in the cloud than on-premises, and this can take some getting used to. After all, you can't physically access a firewall device, and AWS abstracts away much of the configuration.
Alongside AWS firewall services such as AWS Network Firewall and AWS Web Application Firewall (WAF), there's also the concept of Network ACLs (NACLs) and Security Groups (SGs). NACLs and SGs are used with Virtual Private Clouds (VPCs) to control what traffic is allowed in and out of subnets (subnetworks) within your VPCs. In a nutshell, they control traffic from/to the open Internet and from/to other private networks in your VPC.
If not set up correctly, they can leave your cloud instances open to attack because they’ll allow traffic from sources you don’t want.
Common misconfigurations include:
- Leaving ports open after testing or troubleshooting
- Setting rules that allow access from any IP address (0.0.0.0/0), especially for SSH
- Not checking and updating security settings regularly, particularly as services change
Consider a database you might be hosting physically on-prem, which is hidden behind many layers of local area networks. Even with ports locked down as securely as possible, making a mistake would require a threat actor to laterally move within multiple security layers in your private networks to reach open ports.
If you’re not careful, your database port on the cloud can be completely exposed to the open Internet.
These mistakes can be dangerous. Threat actors can use freely available tools like Shodan and Nmap to scan quickly for open ports and vulnerable systems, making exposed instances easy targets for attack.
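The worst of these misconfigurations can be caught mechanically. Here's a minimal sketch of an ingress-rule audit using Python's standard `ipaddress` module; the port list and rule shape are simplified assumptions, not the actual EC2 security group API format:

```python
import ipaddress

SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

def risky_rules(rules):
    """Flag ingress rules that expose sensitive ports to the whole Internet."""
    findings = []
    for rule in rules:
        network = ipaddress.ip_network(rule["cidr"])
        # A /0 prefix means "any source address" -- the open Internet.
        if network.prefixlen == 0 and rule["port"] in SENSITIVE_PORTS:
            findings.append(rule)
    return findings

ingress = [
    {"port": 22, "cidr": "0.0.0.0/0"},      # SSH open to the world -- risky
    {"port": 443, "cidr": "0.0.0.0/0"},     # HTTPS, usually intentional
    {"port": 5432, "cidr": "10.0.0.0/16"},  # database restricted to the VPC
]
print([r["port"] for r in risky_rules(ingress)])  # [22]
```

AWS Config managed rules such as the restricted-SSH check implement the same idea continuously against your real security groups.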
Encryption in the cloud is often misunderstood, so let’s talk about it.
Broadly speaking, data encryption protects information in two states:
- At rest: think S3 buckets or EBS volumes (storage volumes for EC2 instances)
- In transit: traffic flowing between AWS services, apps, or external systems
Data protected at rest is only protected from unauthorized physical access (e.g., a rogue AWS employee or contractor). It is not protected from other means of access.
If someone steals your access keys and gains access to sensitive S3 data, even if it’s encrypted at rest with the best encryption available, the threat actor will be able to see your data in plaintext.
You also need to consider encryption of data as it flows from various locations:
- From end-users to your cloud apps
- From on-prem to your cloud resources
- Within AWS itself
Many vendors address data encryption, and AWS-native solutions are also available.
A significant risk in AWS environments is not having proper visibility into what's happening across your accounts and resources. This goes beyond just collecting logs; it's about having the right combination of logging, monitoring, and threat detection working together.
Let’s break this down with a real-world scenario: imagine running a large office building. You wouldn’t just install security cameras (logging) — you’d also have security guards watching those camera feeds (monitoring), as well as smart detection systems that can alert you when something suspicious is happening (threat detection).
The same concept applies to your AWS infrastructure.
The challenge with cloud environments is that threats can move quickly. An attacker who gains access to your environment might try to:
- Spin up resources for cryptocurrency mining
- Access sensitive data in your S3 buckets
- Move laterally between networks or accounts
- Deploy malware in your EC2 instances
This can all happen in a matter of minutes.
Without proper logging, monitoring, and threat detection, these activities might go unnoticed until you have an unexpected bill, a data breach, or service disruptions.
Running outdated systems and software in AWS creates known security gaps that attackers actively seek out and exploit. Apart from zero-days, vulnerabilities are often well-documented, which usually means proof-of-concept exploit code is available and the vulnerabilities are frequently targeted by automated attacks.
This can be especially problematic if you take a “lift and shift” approach and deploy existing apps as-is to AWS. Organizations then face a challenging decision between continuing to maintain vulnerable legacy systems or investing in comprehensive modernization — a choice that becomes more expensive and complex the longer it’s delayed.
In AWS environments, this risk is amplified because cloud resources are typically more exposed than traditional on-premises systems. Your EC2 instances, containers, and applications are constantly being probed by automated scanning tools seeking known vulnerabilities to exploit. Attacks can happen within minutes of a vulnerability being found, which is why this risk is especially important in cloud environments.
One of the benefits of the cloud is that it enables innovation at a rapid pace. You can deploy services in minutes with just a few lines of code, which enables developers to push out new features and products very quickly.
This benefit can quickly turn into a security management nightmare, however, because it enables the proliferation of resources that no one knew existed or remembers why they were deployed. In many cases, these resources were created for testing or temporary purposes but were never cleaned up afterwards.
This kind of sprawl leads to “shadow resources” and can happen very quickly when you’re dealing with hundreds of thousands of containers, instances, or serverless functions. It’s especially common in development and staging environments, where teams might be experimenting with different configurations or testing new features.
However, because no one is monitoring them or knows what they're for, no one ensures they're either kept up to date or removed from your environments entirely. These forgotten resources often run outdated versions with known vulnerabilities, making them perfect targets for compromise and enabling an attacker to move laterally through your cloud resources and potentially reach your production environments.
The problem becomes particularly acute in organizations with multiple AWS accounts and development teams, where tracking and managing resources across environments becomes increasingly complex without proper controls and visibility.
While AWS provides highly available infrastructure, it doesn’t automatically protect you from data loss, ransomware attacks, or accidental deletions. Let’s talk about these three risks in more detail.
A challenge that organizations migrating from on-prem face is the need to reevaluate their applications to fit into this flexible infrastructure model. A common approach is to “lift and shift” existing applications into the cloud and then worry about building high availability and fault tolerance later. This approach has the benefit of helping the team get familiar with how AWS operates before making architectural changes that can take time to implement.
However, if the region you’ve deployed your resources into goes down during an outage, your production application will go down with it.
When it comes to ransomware, you need to plan for the worst-case scenario. Traditional thinking assumes that having backups will protect you from ransomware, but that’s no longer the case. Modern ransomware attacks are becoming increasingly sophisticated, with many variants specifically designed to target cloud environments.
These attacks don’t just encrypt your primary data — they actively seek out and compromise backup files, snapshots, and even your disaster recovery systems. This means that organizations without proper backup isolation and protection can find their entire recovery strategy destroyed in a single attack.
This situation leaves businesses with an impossible choice: lose their data or pay a ransom with no guarantee of recovery.
Beyond ransomware, there are several other risks that inadequate backup and recovery planning can expose:
- Accidental deletion of resources by team members
- Regional service disruptions
- Compliance violations in regulated industries that require specific backup retention periods
- Loss of historical data needed for auditing or business intelligence
- Extended downtime during recovery attempts due to a lack of tested recovery procedures
With that in mind, let’s talk about best practices to address these risks.
Third-party security, or supply chain security (SCS), is becoming an increasing concern. We all rely on third-party solutions because they enable us to move faster and focus on our core competencies. Instead of building everything from scratch, we can leverage existing solutions for monitoring, logging, automation, and much more.
Attackers know this too. Instead of trying to breach well-protected cloud environments directly, they often target the weakest links in the supply chain — the third-party services and integrations we trust. We’ve seen this strategy succeed with incidents like SolarWinds, where attackers compromised a trusted third-party service to gain access to thousands of customer environments.
The challenge is that we often have limited visibility into these services’ security practices. When we integrate a third-party service with our AWS environment, we’re essentially extending our trust boundary to include their security practices – whether we want to or not.
Once our data leaves our AWS environment and enters their systems, we often lose direct control over how it’s stored and protected. This can create significant compliance challenges, especially with regulations like GDPR or HIPAA that have strict requirements about data handling.
The complexity only grows as we add more integrations. Each third-party service needs to be monitored, maintained, and updated. Their permissions need to be reviewed and adjusted regularly, and their security practices need to be assessed and validated.
AWS cloud security best practices are a set of recommended actions to help secure your cloud infrastructure and data. These practices are designed to reduce risk, ensure compliance, and maintain the integrity of systems deployed on AWS.
AWS cloud security best practices include:
- Deploy AWS Identity Center
- Enforce least privilege and add guardrails
- Secure the S3 data
- Follow best practices for networking security
- Always use encryption
- Implement logging, monitoring, and threat detection
- Address outdated systems and software in AWS
- Use resource tagging strategies
- Enable self-service with AWS Service Catalog
- Automate resource management
- Backup data regularly
- Limit access for third-party solutions
- Limit blast radius
Instead of creating IAM Users in each of your AWS accounts for each of your employees or contractors, you can deploy a service called Identity Center. Identity Center can connect to your existing identity provider (IdP) to enable SSO, which helps prevent the need for additional users.
This approach also eliminates the need for long-term access keys and console login passwords because the Identity Center lets you deploy permission sets across your AWS accounts.
These permission sets deploy what AWS calls IAM roles. These roles replace long-term credentials with short-term credentials that automatically expire after a set amount of time.
IAM roles are also how you grant AWS services access and permissions to your other AWS services, resources, and data. They can even be used outside of AWS in your CI/CD pipelines, for example, with something called IAM Roles Anywhere, replacing the need to hardcode AWS credentials throughout your DevOps lifecycle.
For other types of secrets your code still requires, you can use secrets management solutions to prevent accidental exposure of credentials (like API keys) in code.
Two commonly used AWS-native options are AWS Secrets Manager and AWS Systems Manager Parameter Store. Popular third-party options include HashiCorp Vault and SOPS.
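Whichever store you choose, the application-side pattern is the same: resolve the secret at runtime instead of baking it into source. The sketch below uses an environment variable for illustration (the variable name is hypothetical); in AWS you would typically inject the value from Secrets Manager or Parameter Store at deploy time:

```python
import os

def get_api_key(name: str = "PAYMENTS_API_KEY") -> str:
    """Read a secret from the environment instead of hardcoding it.

    In AWS, a deployment pipeline or the application itself would fetch
    this from Secrets Manager or SSM Parameter Store; the point is that
    the value never appears in source control.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} is not configured")
    return value

os.environ["PAYMENTS_API_KEY"] = "example-not-a-real-key"  # simulated injection
print(get_api_key())  # example-not-a-real-key
```

Failing loudly when the secret is missing is deliberate: a misconfigured deployment should stop at startup rather than run with an empty credential.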
Services like the IAM Access Analyzer, IAM Permission Boundaries, Service Control Policies (SCPs), and Resource Control Policies (RCPs) are useful for enforcing least privilege and adding guardrails.
- IAM Access Analyzer can help detect unwanted permissions within your own accounts and for external accounts. This is a great starting point for detecting excessive privileges and correcting them.
- IAM Permission Boundaries are a powerful option for delegating permissions without exposing yourself to privilege escalation and without requiring constant approval from the security team. For example, if a manager needs to grant additional permissions to their team, the security team can use permission boundaries to provide that ability while enforcing limits.
- Service Control Policies (SCPs) give you central control over the maximum available permissions for identities in your accounts. They don't grant permissions themselves; instead, they cap what IAM or resource-based policies in member accounts can allow, which means you can use them to establish guardrails for your IAM users and roles across entire accounts and Organizational Units (OUs), which are account groupings.
- Resource Control Policies (RCPs), by contrast, control the maximum available permissions for resources in your organization. For example, if you have secrets in Secrets Manager that should never be accessible by external AWS accounts, you can enforce that with RCPs.
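A classic guardrail is an SCP that denies activity outside your approved regions. The sketch below expresses one as a Python dict in the policy document shape, with a toy evaluator to illustrate the deny-overrides semantics; real SCP evaluation is performed by AWS and is considerably richer:

```python
# A region guardrail: deny any action whose requested region is not approved.
REGION_GUARDRAIL = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
            }
        },
    }],
}

def denied_by_guardrail(policy: dict, requested_region: str) -> bool:
    """Toy check: does a Deny statement's region condition match this request?"""
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Deny":
            continue
        cond = stmt.get("Condition", {}).get("StringNotEquals", {})
        allowed = cond.get("aws:RequestedRegion", [])
        if allowed and requested_region not in allowed:
            return True
    return False

print(denied_by_guardrail(REGION_GUARDRAIL, "us-east-1"))  # True: blocked
print(denied_by_guardrail(REGION_GUARDRAIL, "eu-west-1"))  # False: permitted
```

Because an explicit Deny in an SCP cannot be overridden by any Allow in a member account, guardrails like this hold even for account administrators.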
Use the following AWS cloud services to secure S3 data:
- S3 Block Public Access — A security feature enabled by default on new buckets that should rarely be disabled; you may need to enable it manually on older buckets
- Amazon Macie — A service that can discover sensitive data, alert on exposed buckets, and monitor access patterns. It can also enable automated remediation by working with other AWS services, such as Amazon EventBridge and AWS Lambda
- AWS Config — A tool that helps you assess, audit, and evaluate configurations of AWS resources, including S3 buckets. You can use it to create rules and look for non-compliant resources on a continuous basis
- AWS Lambda — A serverless service that can work in conjunction with most AWS cloud services for automated remediation, enrichment, or other quick compute tasks
- IAM Access Analyzer — This service can help identify buckets that are accessible outside of the intended scope.
- AWS KMS — AWS's key management and encryption service, which not only encrypts your data with the key of your choice but also adds a layer of access control by letting you configure exactly who can use your encryption keys
- AWS Backup — As you consider your backup strategy for data stored in S3, AWS Backup can help.
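The Block Public Access feature mentioned above consists of four settings, which the S3 API returns in its PublicAccessBlockConfiguration shape. A simple audit sketch, assuming you've already fetched that configuration for each bucket:

```python
# The four S3 Block Public Access settings, using the field names from the
# S3 PublicAccessBlockConfiguration shape. All four should normally be on.
RECOMMENDED = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

def public_access_gaps(config: dict) -> list:
    """Return the names of any Block Public Access settings left disabled."""
    return [key for key, wanted in RECOMMENDED.items() if not config.get(key, False)]

# An older bucket created before the secure defaults existed:
legacy_bucket = {"BlockPublicAcls": True, "IgnorePublicAcls": False}
print(public_access_gaps(legacy_bucket))
# ['IgnorePublicAcls', 'BlockPublicPolicy', 'RestrictPublicBuckets']
```

AWS Config's managed rules can run this exact check continuously and flag (or auto-remediate) any bucket that drifts.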
Beyond making sure that you follow best practices when creating your VPCs, NACLs, and SGs, consider the following services:
- AWS WAF — For your web applications, the AWS WAF can sit in front and filter traffic for known malicious requests.
- AWS Shield — DDoS protection is a must, and Shield is AWS’ built-in protection service.
- Network Firewall — Once traffic enters your VPC, you can deploy the AWS Network Firewall to create firewall rules that will inspect and control traffic flowing across your VPCs and subnets.
- AWS Firewall Manager — Keeping track of all of those NACLs, SGs, and firewall rules across all of your AWS accounts is a difficult task, so AWS launched the Firewall Manager to enable centralized management.
- VPC Flow Logs — To see what traffic is flowing in your VPCs, you can enable VPC Flow Logs and feed those logs into a service like CloudWatch in order to monitor, analyze, and detect threats or troubleshoot connectivity issues.
- Amazon GuardDuty — While this service is not just for networking security, it uses intelligent threat detection and can look at networking events.
For secure connectivity, you can look at:
- AWS Transit Gateway — If you need to connect your VPCs with on-premises networks through a central hub, AWS Transit Gateway can act as your cloud router.
- AWS Site-to-Site VPN — If you’d like to create an IPsec VPN connection between your remote networks and Amazon VPC over the Internet, you can use Site-to-Site VPN.
- AWS Direct Connect — If you need to take it up a notch and directly connect your on-prem networks to your AWS resources while remaining on the AWS global network, you can deploy AWS Direct Connect.
- AWS PrivateLink — When you need to connect resources in VPCs to AWS services outside of the VPCs without using the open Internet, you can use AWS PrivateLink.
The main AWS services aimed at encrypting data include:
- AWS KMS — which lets you create, manage, and control encryption keys to encrypt or digitally sign data
- AWS CloudHSM — provides dedicated hardware security modules (HSMs) for customers requiring complete control over their encryption keys and cryptographic operations
- AWS Certificate Manager — for when you need to create and manage website TLS certificates
- AWS Private Certificate Authority — if you need to issue your own internal security certificates
We also previously discussed network security options like Site-to-Site VPN, Transit Gateway, and PrivateLink.
To address this risk, we need a layered approach that we can deploy in this order:
- Start with comprehensive logging
  - Enable CloudTrail across all accounts and regions, and understand what's logged by default by looking at the control plane and the data plane.
  - Use CloudWatch Logs for application-level logging.
  - Consider what other log sources you need based on your workloads (e.g., S3, VPCs, DNS).
- Implement proper monitoring
  - Set up CloudWatch metrics and alarms.
  - Deploy Security Hub for security posture monitoring.
  - Configure AWS Config for resource and configuration monitoring.
- Enable threat detection
  - Turn on GuardDuty for intelligent threat detection.
  - Use Amazon Detective to investigate security events.
- Centralize security visibility
  - Use a service like Security Lake to aggregate all security data.
  - Consider using a dedicated cloud SIEM or your existing SIEM.
These services are all designed to work together in layers. Without logging as your base layer, none of the other layers will work effectively. Finally, by centralizing security visibility, you make massive amounts of data actionable.
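To ground the layering, here's a toy triage step over a CloudTrail record: the logging layer produces the JSON event, and a detection layer matches it against known-bad patterns. In practice GuardDuty, EventBridge rules, or your SIEM do this at scale; the two patterns shown are real CloudTrail event shapes, but the function itself is illustrative:

```python
import json

def triage_event(raw_event: str):
    """Flag a couple of suspicious patterns in a CloudTrail record.

    A failed console login may indicate credential stuffing; StopLogging
    or DeleteTrail calls are a classic attempt to blind your logging.
    """
    event = json.loads(raw_event)
    name = event.get("eventName")
    if name == "ConsoleLogin" and event.get("responseElements", {}).get("ConsoleLogin") == "Failure":
        return "failed console login"
    if name in {"StopLogging", "DeleteTrail"}:
        return "attempt to disable CloudTrail"
    return None  # nothing suspicious in this record

record = json.dumps({
    "eventName": "ConsoleLogin",
    "responseElements": {"ConsoleLogin": "Failure"},
})
print(triage_event(record))  # failed console login
```

The point isn't the specific rules but the pipeline: without the CloudTrail layer underneath, there is nothing for detection logic like this to run against.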
AWS provides several native tools that work together to help maintain up-to-date and secure systems. Amazon Inspector, AWS Config, and AWS Systems Manager (SSM) are particularly effective when used as part of a comprehensive update strategy.
Amazon Inspector
Amazon Inspector continuously scans your EC2 instances, container images, and Lambda functions for vulnerabilities.
It maintains an updated database of Common Vulnerabilities and Exposures (CVEs) and provides detailed information about what needs to be patched. It then prioritizes these vulnerabilities based on their severity and exploitability, helping you focus on what matters most.
AWS Systems Manager
AWS Systems Manager serves as your central platform for operational management, including patch management. Through SSM Patch Manager, you can automate the process of deploying security updates across your entire fleet of instances.
You can define patch baselines that specify which updates should be applied, set up maintenance windows to minimize disruption, and automatically track patch compliance across your environment.
AWS Config
AWS Config complements these tools by tracking your resource configurations over time and evaluating them against defined rules. You can create rules to check for outdated AMIs, flag instances running deprecated instance types, and ensure systems maintain compliance with your patch baselines.
Config can automatically trigger remediation actions when it detects non-compliant resources, helping maintain your desired state of system updates.
By enforcing tagging strategies across your accounts, including development and staging accounts, you can require certain tags to be attached to resources before they can be created.
These tags can then be used for multiple purposes: to track resources created, for cost management, and for attribute-based access control (ABAC).
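Enforcement is straightforward once you decide on required tag keys. A minimal sketch (the tag keys themselves are an example policy, not an AWS requirement) of the check a deployment pipeline or Config rule would apply:

```python
REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}  # example org policy

def missing_tags(resource_tags: dict) -> set:
    """Return required tag keys that a resource is missing."""
    return REQUIRED_TAGS - resource_tags.keys()

instance_tags = {"Owner": "data-team", "Environment": "staging"}
print(missing_tags(instance_tags))  # {'CostCenter'}
```

In AWS itself, the same requirement can be enforced declaratively with tag policies in AWS Organizations or a required-tags AWS Config rule, so non-compliant resources are blocked or flagged automatically.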
AWS Service Catalog lets you create a catalog of pre-approved IT services and resources your teams can then deploy. This is a sort of self-service portal where users can deploy the resources they need, while following your security requirements and best practices.
For example, you can create templates for:
- EC2 instances with proper security groups, IAM roles, and tags already configured
- S3 buckets with encryption and access controls pre-configured
- Database deployments that follow your organization’s security requirements
This helps prevent shadow resources because teams no longer need to circumvent security to get their work done. Instead, they can quickly deploy what they need through an approved channel that has security built in by default.
Service Catalog helps prevent shadow resources from being created, but we still need automated solutions to detect and manage resources that might slip through the cracks. By using services like AWS Config and Systems Manager, we can deploy an automated approach to both prevent and detect shadow resources.
We can also introduce the Amazon EventBridge service by creating an automated response to specific AWS events. We can use it to trigger notifications, connect services for automated remediation, and run regular compliance checks.
The key is to make this process as automated as possible. You could track and clean up resources manually, but that approach doesn’t scale and is prone to human error. Instead, build automation that continuously monitors your environment and takes appropriate action based on your organization’s policies.
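One simple heuristic such automation can apply: resources that are both untagged and old are likely forgotten. The sketch below assumes you've already pulled an inventory (e.g., from AWS Config); the resource IDs and age threshold are illustrative:

```python
from datetime import datetime, timedelta, timezone

def shadow_candidates(resources, max_age_days=30):
    """Flag resources that are untagged and older than a cutoff.

    "Untagged and old" is a common heuristic for forgotten test
    resources; real automation would feed this from AWS Config
    inventory data and route findings to EventBridge for follow-up.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [
        r["id"] for r in resources
        if not r.get("tags") and r["created"] < cutoff
    ]

now = datetime.now(timezone.utc)
inventory = [
    {"id": "i-0abc", "tags": {}, "created": now - timedelta(days=90)},                 # forgotten?
    {"id": "i-0def", "tags": {"Owner": "web-team"}, "created": now - timedelta(days=90)},  # owned
    {"id": "i-0ghi", "tags": {}, "created": now - timedelta(days=2)},                  # too new to flag
]
print(shadow_candidates(inventory))  # ['i-0abc']
```

Findings like these are best routed to the owning team for confirmation before deletion; fully automatic teardown is usually reserved for sandbox accounts.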
The good news is that AWS provides several native capabilities for implementing a robust backup and recovery strategy. They include:
- Multi-region and multi-account strategies
- Protecting against ransomware attacks
- Automating backups
The cloud provides fantastic multi-region and multi-account deployment strategies, from elastic load balancers to cross availability zone (AZ), cross-region, and cross-account functionality.
AWS also provides numerous backup strategies for S3, database services, and instance volumes. We can use those for cross-region, cross-account, and cross-storage-type purposes.
Some services also have built-in protection against ransomware and accidental deletion. For example, S3 offers Object Lock and versioning with MFA delete.
The key here is to make your backups immutable, so that even if ransomware infiltrates your environment, it can’t modify or delete your backup files.
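Immutability is checkable. The configuration below uses the field names from S3's ObjectLockConfiguration shape; the 30-day minimum is an example retention policy, and a real audit would fetch this per bucket via the S3 API:

```python
# An S3 Object Lock configuration in COMPLIANCE mode: the retention
# period cannot be shortened or removed by any user, which is what
# keeps a backup immutable even if ransomware obtains credentials.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 35}},
}

def is_immutable_backup(config: dict, min_days: int = 30) -> bool:
    """Check that Object Lock is on, in COMPLIANCE mode, with enough retention."""
    if config.get("ObjectLockEnabled") != "Enabled":
        return False
    retention = config.get("Rule", {}).get("DefaultRetention", {})
    return retention.get("Mode") == "COMPLIANCE" and retention.get("Days", 0) >= min_days

print(is_immutable_backup(object_lock_config))  # True
```

GOVERNANCE mode, by contrast, allows specially-permissioned users to bypass retention, so COMPLIANCE mode is the stricter choice for ransomware protection.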
Finally, we can use various services to back up data automatically.
Of course, a backup strategy requires regular testing. Whether your backups actually work is not something you want to discover only when you need them.
Before rushing to grant access to a third-party solution in your AWS accounts, consider what IAM permissions they actually need. Some vendors may request more permissions than are strictly necessary to make integrations easier and reduce friction. You also need to be aware of the confused deputy problem.
Enforce boundaries and least privilege by setting guardrails with Organizations Service Control Policies (SCPs), Resource Control Policies (RCPs), and identity and resource-based policies. Then, use AWS IAM Access Analyzer to continuously monitor for unintended access and refer to our threat detection recommendations earlier in the article to identify anomalous behavior with external access.
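The standard mitigation for the confused deputy problem is an ExternalId condition in the role's trust policy. Here's a sketch that builds one as a Python dict in the trust policy document format; the account ID and external ID are placeholders:

```python
def vendor_trust_policy(vendor_account_id: str, external_id: str) -> dict:
    """Build a role trust policy for a third-party integration.

    The sts:ExternalId condition mitigates the confused deputy problem:
    the vendor must present the unique ID you generated, so another of
    their customers cannot trick them into assuming *your* role.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{vendor_account_id}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
        }],
    }

policy = vendor_trust_policy("111122223333", "a-long-random-value")  # placeholders
print(policy["Statement"][0]["Condition"]["StringEquals"]["sts:ExternalId"])
```

Pair this with a tightly scoped permissions policy on the role itself: the trust policy controls *who* can assume it, while the permissions policy controls *what* they can do once they have.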
SCS issues can affect everything from cloud dependencies to software dependencies. Compromised third-party components in software running in your Lambda functions, containers, and EC2 instances can lead to privilege escalation.
Apply software SCS security strategies accordingly, and leverage AWS solutions such as AWS Signer to ensure code integrity, Amazon ECR to scan for container image vulnerabilities, and AWS CodeArtifact to maintain private code artifact repositories. Deploy Amazon Inspector for continuous software vulnerability scans.
To limit the blast radius of the threats we've just discussed, consider the benefits of using multiple AWS accounts for security purposes.
Even if you are extremely diligent in granting only the least privilege and implement all of the other security controls we’ve discussed, the ultimate boundary in AWS is the account boundary.
Accounts are completely separate unless you connect them, and you can lock down those connections, making it extremely difficult for a threat actor to move between accounts when properly configured.
Even if they find a vulnerability in one of your container images and figure out how to break out of the container, they won’t be able to access your most sensitive data because it’s stored in a completely different AWS account.
Running multiple accounts is the solution, but it also introduces additional complexity. You can manage this complexity with infrastructure as code (IaC), AWS Organizations, and AWS Control Tower, AWS’s native service for setting up and governing a multi-account environment.
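A minimal Terraform sketch of this pattern (run from the management account; the names and root ID are placeholder assumptions) carves out an organizational unit for sensitive workloads and attaches an SCP guardrail to it:

```hcl
# Sketch: an OU for accounts holding sensitive data.
resource "aws_organizations_organizational_unit" "sensitive" {
  name      = "sensitive-data"
  parent_id = "r-examplerootid" # placeholder: your organization root ID
}

# Example guardrail: member accounts in the OU cannot leave the organization.
resource "aws_organizations_policy" "deny_leave_org" {
  name = "deny-leave-organization"
  content = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Deny"
      Action   = "organizations:LeaveOrganization"
      Resource = "*"
    }]
  })
}

resource "aws_organizations_policy_attachment" "sensitive_guardrail" {
  policy_id = aws_organizations_policy.deny_leave_org.id
  target_id = aws_organizations_organizational_unit.sensitive.id
}
```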

1Password, a global leader in identity security, used to rely on a small team of cloud platform engineers to manage infrastructure-as-code (IaC) operations for the entire organization. However, with Spacelift’s guardrails and security in place, much of that IaC management is delegated to the teams that own it, while the cloud platform engineering team gets on with the business of providing expertise.
We’ve already mentioned a few AWS services we can deploy to address the ten risks we’ve discussed, but it can be helpful to view them broken down by category.
Here are the key security tools AWS provides to help protect your systems and data:
| Category | Service | Description |
|---|---|---|
| Identity and access management | AWS Identity and Access Management (IAM) | Controls who can access what in your AWS accounts |
| | AWS IAM Identity Center | Central hub for managing user access and SSO (e.g., connect your existing IdP) |
| | Amazon Cognito | Handles user login and permissions for web/mobile apps |
| | AWS Directory Service | Managed Microsoft AD that can connect to Identity Center |
| | IAM Access Analyzer | Finds security gaps in permissions, including external access |
| | AWS RAM | Securely shares resources across multiple accounts |
| Network and application security | AWS Firewall Manager | Manages firewall rules across AWS accounts |
| | AWS Network Firewall | Filters network traffic for threats |
| | AWS Shield | Protects against DDoS attacks |
| | AWS Verified Access | Validates application requests, replacing the need for a VPN |
| | AWS WAF | Blocks web attacks (SQLi, XSS) and bot traffic |
| Data protection | Amazon Macie | Finds and protects sensitive data using machine learning |
| | AWS KMS | Creates and manages encryption keys |
| | AWS Secrets Manager | Stores and protects secrets such as passwords and API keys |
| | AWS Certificate Manager | Manages SSL/TLS certificates |
| | AWS Private Certificate Authority | Manages internal security certificates |
| Threat detection and response | AWS Config | Monitors and remediates configuration issues |
| | AWS CloudTrail | Records and audits account activity and API calls |
| | Amazon CloudWatch | Monitors AWS and on-prem resources |
| | Amazon Inspector | Scans for vulnerabilities and risks |
| | AWS Security Hub | Centralizes security findings and enforces best practices |
| | Amazon Detective | Visualizes data to investigate threats |
| | Amazon GuardDuty | ML-powered detection of account and workload threats |
| | Amazon Athena | Analyzes data with SQL to support threat response |
| | Amazon Security Lake | Centralizes security data for analysis with Athena, OpenSearch, etc. |
| Governance and compliance | AWS Organizations | Manages multiple AWS accounts centrally |
| | AWS Control Tower | Enforces best practices and controls across accounts |
| | AWS Audit Manager | Monitors and assesses compliance and security risks |
| | AWS Systems Manager | Manages resources and operations; integrates with AWS Config |
Spacelift is not exactly a cloud automation tool, but it takes cloud automation and orchestration to the next level. It is a platform designed to manage infrastructure-as-code tools such as OpenTofu, Terraform, CloudFormation, Kubernetes, Pulumi, Ansible, and Terragrunt, allowing teams to use their favorite tools without compromising functionality or efficiency.
Spacelift provides a unified interface for deploying, managing, and controlling cloud resources across various providers. It is API-first, so whatever you can do in the interface, you could do via the API, the CLI it offers, or even the OpenTofu/Terraform provider.
The platform enhances collaboration among DevOps teams, streamlines workflow management, and enforces governance across all infrastructure deployments. Spacelift’s dashboard provides visibility into the state of your infrastructure, enabling real-time monitoring and decision-making. It can also detect and remediate drift.
You can leverage your favorite VCS (GitHub/GitLab/Bitbucket/Azure DevOps), and executing multi-IaC workflows is simply a matter of defining dependencies and sharing outputs between your configurations.
With Spacelift, you get:
- Policies to control what kind of resources engineers can create, what parameters they can have, how many approvals a run needs, what kind of tasks you execute, what happens when a pull request is opened, and where to send your notifications
- Stack dependencies to build multi-infrastructure automation workflows with dependencies, having the ability to build a workflow that, for example, generates your EC2 instances using Terraform and combines it with Ansible to configure them
- Self-service infrastructure via Blueprints, or Spacelift’s Kubernetes operator, enabling your developers to do what matters (developing application code) without sacrificing control
- Creature comforts such as contexts (reusable containers for your environment variables, files, and hooks), and the ability to run arbitrary code
- Drift detection and optional remediation
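Because Spacelift is API-first, these capabilities can themselves be managed as code. A rough sketch using the Spacelift Terraform provider — the stack name, repository, and policy file path are illustrative assumptions, so check the provider documentation for your setup:

```hcl
# Sketch: a stack plus an approval policy, defined via the
# Spacelift Terraform provider. Names and paths are placeholders.
resource "spacelift_stack" "networking" {
  name       = "networking"
  repository = "infra" # placeholder repository
  branch     = "main"
}

resource "spacelift_policy" "require_approval" {
  name = "require-one-approval"
  type = "APPROVAL"
  body = file("${path.module}/policies/approval.rego") # Rego policy body
}

resource "spacelift_policy_attachment" "networking_approval" {
  policy_id = spacelift_policy.require_approval.id
  stack_id  = spacelift_stack.networking.id
}
```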
If you want to learn more about Spacelift, create a free account today or book a demo with one of our engineers.
In this article, we’ve explored the risks organizations face in the cloud and some best-practice solutions.
The key takeaways are:
- Many traditional security risks also exist in the cloud, but they manifest differently and require cloud-native solutions.
- AWS provides powerful native security services, but they need to be properly configured and used together for maximum effectiveness.
- Account boundaries are your strongest security control — use multiple AWS accounts to limit blast radius, and then implement least privilege within those accounts.
- Create a crawl/walk/run model — start with the basics and build up your security posture over time.
Remember, the goal isn’t to implement everything at once. Start by understanding your current security posture, identifying your most critical risks, and addressing them systematically. Use AWS’s native security services as your foundation, and build upon them with in-house or third-party solutions based on your organization’s specific needs and compliance requirements.
Whether you’re just starting your cloud journey or looking to enhance your existing AWS security, the most important step is to begin. Every security improvement, no matter how small, helps reduce your overall risk and better protect your cloud environment.
Solve your infrastructure challenges
Spacelift is an alternative to using homegrown solutions on top of a generic CI. It helps overcome common state management issues and adds several must-have capabilities for infrastructure management.