How to Create an AWS RDS Instance Using Terraform


Managing relational databases across cloud platforms becomes much simpler with Terraform’s support for AWS RDS. Instead of handling complex manual setups, you can automate the provisioning, scaling, and configuration of RDS instances and clusters with ease. Using Terraform provides a secure, consistent, and declarative way to manage the full database lifecycle.

In this post, we’ll discuss key configuration options that teams and enterprises commonly need when deploying RDS with Terraform. Let’s explore the process step by step.

  1. What is AWS RDS?
  2. Configuring a basic RDS instance
  3. Provisioning an RDS instance in a VPC network
  4. Configuring backup and maintenance settings
  5. Configuring monitoring and performance insights
  6. Managing parameter groups
  7. Setting up access and security
  8. Managing HA and replication
  9. What is the AWS RDS Terraform module?
  10. Best practices for configuring AWS RDS with Terraform

Note: All code examples discussed here are available in this GitHub repository.

What is AWS RDS?

AWS RDS (Relational Database Service) is a managed service that simplifies setting up, operating, and scaling relational databases in the cloud. It supports engines like MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server and handles routine tasks such as backups, patching, and high availability.

Terraform integrates with RDS by using the AWS provider to define infrastructure as code. You can declare aws_db_instance or aws_rds_cluster resources in Terraform to create and manage RDS instances or clusters. 

Terraform plans changes, applies them consistently, and tracks the state of your RDS configurations, making infrastructure deployment repeatable and auditable.

Using Terraform with RDS ensures environment consistency, improves version control of database configurations, and enables automation in CI/CD pipelines.
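
The examples in this post assume the AWS provider is already configured. A minimal sketch is shown below; the version constraint and the region are assumptions (the region matches the eu-central-1 region used later in this post):

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}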

Step 1. Configure a basic RDS instance

Creating an RDS database instance in AWS using Terraform is quite easy. The aws_db_instance resource block takes a few required parameters, which define the essential characteristics of the database to be provisioned.

An example is shown below. In the upcoming sections, we will build on the same configuration.

resource "aws_db_instance" "default" {
  allocated_storage = 10
  engine = "mysql"
  instance_class = "db.t3.micro"
  username = "foo"
  password = "foobarbaz"
  skip_final_snapshot = true // required to destroy
}

Let’s examine the attributes:

  1. allocated_storage: Storage allocated in GB for this database instance.
  2. engine: Choice of database engine. We have selected MySQL as the desired database engine. We can also select Postgres, MariaDB, Oracle, etc. You can refer to this for all possible values.
  3. instance_class: Defines the size of the instance to provision, based on factors like CPU, memory, networking, and storage. More options.
  4. username: The username of the main database user (admin).
  5. password: Password for the main database user (admin). This should be supplied in a secure way, for example, via environment variables or a secret store (see the sketch after this list).
  6. skip_final_snapshot: Determines whether the final snapshot is skipped before deleting the database. The instance can be created without this attribute; however, when destroying it with Terraform, the destroy fails unless this value is set to true or a final_snapshot_identifier is provided.
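
As a minimal sketch of supplying the password securely, you could declare a sensitive variable and reference it in the same aws_db_instance block. The variable name db_password is an illustrative assumption; its value can be provided via the TF_VAR_db_password environment variable or a secret store:

variable "db_password" {
  description = "Master password for the RDS instance"
  type = string
  sensitive = true # prevents the value from being printed in plan/apply output
}

resource "aws_db_instance" "default" {
  allocated_storage = 10
  engine = "mysql"
  instance_class = "db.t3.micro"
  username = "foo"
  password = var.db_password # supplied via TF_VAR_db_password, not hardcoded
  skip_final_snapshot = true
}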

Example source directory

Step 2. Provision an RDS instance in a VPC network

Often, databases are placed in a private subnet with secure network access defined in security groups and network ACLs.

In this section, we first create the VPC and then update the basic configuration for RDS, which was introduced in the previous section.

The config for the VPC is found here; a minimal sketch is also included after this list. It creates:

  1. A VPC
  2. Two subnets
  3. A security group with public ingress settings
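
The full VPC configuration lives in the linked repository and is not reproduced here. A minimal sketch could look like the following; the VPC resource name, CIDR ranges, and availability zones are illustrative assumptions, while the subnet_a, subnet_b, and rds_sg resource names match the references used in the rest of this post:

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "subnet_a" {
  vpc_id = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
  availability_zone = "eu-central-1a"
}

resource "aws_subnet" "subnet_b" {
  vpc_id = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
  availability_zone = "eu-central-1b"
}

resource "aws_security_group" "rds_sg" {
  name = "rds-sg"
  vpc_id = aws_vpc.main.id

  # Public MySQL ingress as described above; restrict this in production
  ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}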

To place RDS instances in VPC subnets in AWS, you need to create an RDS subnet group. A subnet group is a group of subnets in which we can create and manage database instances.

The Terraform configuration below does exactly that: it creates a subnet group that includes the subnets created with the VPC earlier.

resource "aws_db_subnet_group" "my_db_subnet_group" {
  name = "my-db-subnet-group"
  subnet_ids = [aws_subnet.subnet_a.id, aws_subnet.subnet_b.id]

  tags = {
    Name = "My DB Subnet Group"
  }
}

Next, update the configuration to associate this subnet group with the database instance created in the previous section, as shown below.

resource "aws_db_instance" "default" {
  allocated_storage = 10
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t2.micro"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  skip_final_snapshot = true
}

Here, we have also associated the database instance with a security group created while creating the VPC and added the engine_version attribute to use a specific version of MySQL for our database.

Run terraform plan and terraform apply to provision this database instance with a VPC.

Navigate to the AWS console > RDS > Databases > mydb > Connectivity & Security, and verify that the database instance is placed in the VPC subnets and associated with the security group configured above, as shown in the screenshot below.
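
You can also expose the instance endpoint as a Terraform output for a quick check without opening the console; this small addition is not part of the linked example:

output "rds_endpoint" {
  description = "Connection endpoint of the RDS instance"
  value = aws_db_instance.default.endpoint
}

Running terraform output rds_endpoint after the apply prints the instance hostname and port.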


Step 3. Configure backup and maintenance settings

In Amazon RDS, backup and maintenance are essential for keeping database instances highly available, reliable, and resilient. RDS simplifies this by offering built-in automation for both. 

Automated backups create regular snapshots, enabling point-in-time recovery and protection against data loss. At the same time, RDS handles key maintenance tasks, like applying OS and engine updates, to keep systems secure and compliant.

You can customize backup retention and maintenance windows to align with your workload needs, reducing disruption and improving control. This automation eliminates the overhead of manual processes, letting teams focus on building applications while RDS handles the heavy lifting in the background.

To activate these features in an RDS MySQL instance, you can include specific attributes in your Terraform configuration, as shown below.

resource "aws_db_instance" "default" {
  allocated_storage = 10
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t2.micro"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  backup_retention_period = 7 # Number of days to retain automated backups
  backup_window = "03:00-04:00" # Preferred UTC backup window (hh24:mi-hh24:mi format)
  maintenance_window = "mon:04:00-mon:04:30" # Preferred UTC maintenance window

  # Enable automated backups
  skip_final_snapshot = false
  final_snapshot_identifier = "db-snap"

}

The inline comments for the additional attributes in the code above explain the purpose.

Note that we have now set skip_final_snapshot to false so that a final snapshot is taken when the instance is deleted. The final_snapshot_identifier attribute is required when skip_final_snapshot is set to false.

Once the Terraform configuration is provisioned, navigate to the database and select the Maintenance & Backup tab.

Verify that the maintenance window is set for Monday at 04:00 UTC and that the backup window and retention period match the values specified in the configuration.

The configuration above results in the setting below.


Step 4. Configure monitoring and performance insights

Monitoring and Performance Insights are key to efficiently managing Amazon RDS database instances. RDS integrates with Amazon CloudWatch to provide real-time metrics like CPU usage, memory consumption, disk I/O, and storage capacity, helping track overall system health.

For deeper analysis, Performance Insights highlights the most resource-intensive SQL queries, offering a visual breakdown of query load and wait events. This enables precise tuning and faster resolution of performance bottlenecks.

Together, these tools support proactive monitoring, faster troubleshooting, and data-driven optimization, ensuring your RDS workloads remain performant and responsive to application demands.

To enable monitoring and performance insights in our RDS database instance, we need to provide a couple of attributes:

  1. monitoring_interval – to specify the interval, in seconds, at which Enhanced Monitoring metrics are collected
  2. performance_insights_enabled – to be set to true to enable performance insights

Enabling monitoring creates a CloudWatch log group where all the logs are collected. This requires an IAM role with appropriate access to CloudWatch.

First, we create the config for the IAM role, as shown below. Here, we are creating a role and attaching a policy to access CloudWatch to create monitoring logs.

resource "aws_iam_role" "rds_monitoring_role" {
  name = "rds-monitoring-role"

  assume_role_policy = jsonencode({
  Version = "2012-10-17",
  Statement = [
      {
        Action = "sts:AssumeRole",
        Effect = "Allow",
        Principal = {
        Service = "monitoring.rds.amazonaws.com"
      }
    }
  ]
})
}

resource "aws_iam_policy_attachment" "rds_monitoring_attachment" {
  name = "rds-monitoring-attachment"
  roles = [aws_iam_role.rds_monitoring_role.name]
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole"
}

Update the RDS configuration as shown below.

The comments explain why we are setting the additional attributes to enable monitoring and performance insights for our database instance.

Also, note that we have updated the allocated_storage and instance_class attributes to higher values, because Performance Insights is not supported on smaller instance classes such as db.t2.micro.

resource "aws_db_instance" "default" {
  allocated_storage = 20
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t3.medium"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  backup_retention_period = 7
  backup_window = "03:00-04:00"
  maintenance_window = "mon:04:00-mon:04:30"

  skip_final_snapshot = false
  final_snapshot_identifier = "my-db"

  # Enable enhanced monitoring
  monitoring_interval = 60 # Interval in seconds (minimum 60 seconds)
  monitoring_role_arn = aws_iam_role.rds_monitoring_role.arn

  # Enable performance insights
  performance_insights_enabled = true

}

After provisioning the above Terraform configuration, navigate to the RDS database and click on the Monitoring tab. It should show various CloudWatch monitoring metrics in the form of graphs, as shown below.


As shown below, click on the “Monitoring” dropdown at the top right to access the Performance Insights dashboard.


Clicking on Performance Insights opens up a new tab/window, as shown below.


Thus, we have successfully configured monitoring and performance insights for the database instance using Terraform.

Example source directory

Step 5. Manage parameter groups

Until now, we have configured various database features, such as monitoring, performance insights, backup, and maintenance, directly in the aws_db_instance resource.

However, database management covers a vast range of settings, and as the Terraform documentation shows, not all of them can be configured in the aws_db_instance resource. Settings like query optimization, caching, and memory allocation need to be managed in a separate resource block.

Amazon RDS implements these features in parameter groups and associates them with relevant databases.

A parameter group is a configuration setting that lets us tailor the behavior of our RDS database instances. It includes a collection of database engine parameters that affect various aspects of performance, behavior, and functionality. Using parameter groups, we can adjust settings like memory allocation, caching, query optimization, and replication to suit the specific needs of our application.

These groups play a key role in optimizing database performance and aligning the database with application requirements. They can be linked to RDS instances either during setup or afterward, giving us the flexibility to change settings without any downtime. This makes it easier to fine-tune the environment for better performance, reliability, and scalability based on the workload.

Different sets of parameters are available depending on the type of database instance being used, and they can be applied at both the cluster and instance levels.
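
For the instance-level case, the aws_db_parameter_group resource used below applies. For Aurora clusters, the cluster-level counterpart is aws_rds_cluster_parameter_group; a minimal sketch, where the resource name and the parameter chosen are illustrative assumptions, looks like this:

resource "aws_rds_cluster_parameter_group" "example" {
  name = "my-cluster-pg"
  family = "aurora-mysql5.7"

  parameter {
    name = "character_set_server"
    value = "utf8"
  }
}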

For the complete list of parameters for various kinds of databases, refer to this document. For the MySQL database instance provisioned in this example, we want to update the default value of the connect_timeout parameter from 10 seconds to 15 seconds.

Note: The links provided in this document for MySQL might cause some confusion. You can either use the link for “Aurora MySQL parameters” while working with RDS MySQL databases or directly refer to the MySQL documentation to configure the desired parameters.

We begin by creating a resource for a parameter group, which includes all the parameters we wish to configure. The resource block aws_db_parameter_group defines the parameter group in Terraform IaC.

It has to be named and associated with a specific database family, as shown below.

resource "aws_db_parameter_group" "my_db_pmg" {
  name = "my-db-pg"
  family = "mysql5.7"

  parameter {
    name = "connect_timeout"
    value = "15"
  }

  # more parameters...
  # parameter {
    # name = "<parameter name>"
    # value = "<valid value>"
  # }
}

Specifying the family ensures that the parameters being configured using this resource block exist or are valid for the given family of databases, avoiding confusion. The parameters are added in a nested block in a “name-value” format, which is straightforward. 

Depending on the level of customization desired, the parameter group resource block can grow to many lines of code (see the sketch below for one way to keep it manageable).
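
One option is to drive the nested parameter blocks from a map using a dynamic block. This is a sketch; the local map name and the extra max_connections entry are illustrative assumptions:

locals {
  db_parameters = {
    connect_timeout = "15"
    max_connections = "200"
  }
}

resource "aws_db_parameter_group" "my_db_pmg" {
  name = "my-db-pg"
  family = "mysql5.7"

  # One parameter block is generated per map entry
  dynamic "parameter" {
    for_each = local.db_parameters
    content {
      name = parameter.key
      value = parameter.value
    }
  }
}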

Finally, we update the configuration for the database instance itself to associate it with the parameter group defined above. The last line in the configuration below refers to the parameter group to be associated with.

resource "aws_db_instance" "default" {
  allocated_storage = 20
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t3.medium"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  backup_retention_period = 7
  backup_window = "03:00-04:00"
  maintenance_window = "mon:04:00-mon:04:30"
  skip_final_snapshot = false
  final_snapshot_identifier = "my-db"
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring_role.arn
  performance_insights_enabled = true

  # Associate with parameter group
  parameter_group_name = aws_db_parameter_group.my_db_pmg.name
}

Apply this updated Terraform configuration and check if the parameter group is created with the desired values and is associated with the database instance.

Parameter group with updated connect_timeout value.


Navigate to the Configuration tab of the database in the AWS Management Console and verify that the parameter group is associated with it.


Step 6. Set up access and security

Access and security are top considerations for safeguarding sensitive data and maintaining the integrity of our RDS database infrastructure. 

To ensure a robust security posture, it’s crucial to implement multi-factor authentication (MFA) for AWS accounts, utilize IAM roles and policies for controlled access, and adhere to network security practices by placing RDS instances within a VPC and configuring security groups and NACLs. 

We have already placed our RDS database instance in VPC subnets with basic network settings, which on their own do not satisfy production security requirements. Read more about how to configure a VPC using Terraform IaC.

Encryption

Regular updates, database user privilege management, auditing, and monitoring mechanisms contribute to early threat detection and mitigation. Encryption of data at rest and in transit should be enforced using AWS KMS encryption and SSL/TLS protocols.

To enable encryption for our example RDS database, first create the KMS key using the resource configuration below.

resource "aws_kms_key" "my_kms_key" {
  description = "My KMS Key for RDS Encryption"
  deletion_window_in_days = 30

  tags = {
    Name = "MyKMSKey"
  }
}

Update the RDS database resource configuration to enable encryption and provide the key from the above resource, as shown below. Comments indicate the additional attributes used for this purpose.

resource "aws_db_instance" "default" {
  allocated_storage = 20
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t3.medium"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  backup_retention_period = 7
  backup_window = "03:00-04:00"
  maintenance_window = "mon:04:00-mon:04:30"
  skip_final_snapshot = false
  final_snapshot_identifier = "my-db"
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring_role.arn
  performance_insights_enabled = true
  # Enable storage encryption
  storage_encrypted = true
  # Specify the KMS key ID for encryption (replace with your own KMS key ARN)
  kms_key_id = aws_kms_key.my_kms_key.arn

  parameter_group_name = aws_db_parameter_group.my_db_pmg.name
}

Apply the Terraform configuration and verify that encryption is enabled on the RDS database and associated with the KMS key.


AWS RDS security is an ongoing commitment that necessitates a holistic approach, incorporating industry best practices and vigilant adaptation to evolving security threats.

Example source directory

Step 7. Manage HA and replication

AWS RDS delivers strong high availability (HA) and replication features to keep our database workloads both reliable and scalable. With Multi-AZ (Availability Zone) deployment, the system automatically fails over to a standby instance in another AZ if the primary one goes down. This seamless switch helps minimize any disruption.

RDS also supports read replicas, which let you offload read operations from the primary instance. Spreading the load across replicas not only improves performance but also supports disaster recovery when configured across regions. By setting up HA and replication wisely, we can improve both uptime and performance, meeting application needs while preserving data integrity.

For our example database, we’ll start by enabling Multi-AZ to boost fault tolerance. To do this, add the “multi_az” attribute to the database resource block and set its value to true. 

The updated configuration looks like this:

resource "aws_db_instance" "default" {
  allocated_storage = 20
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t3.medium"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  backup_retention_period = 7
  backup_window = "03:00-04:00"
  maintenance_window = "mon:04:00-mon:04:30"
  skip_final_snapshot = false
  final_snapshot_identifier = "my-db"
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring_role.arn
  performance_insights_enabled = true
  storage_encrypted = true
  kms_key_id = aws_kms_key.my_kms_key.arn

  parameter_group_name = aws_db_parameter_group.my_db_pmg.name

  # Enable Multi-AZ deployment for high availability
  multi_az = true
}

Next, we have to create a replica to improve database performance during high-traffic periods.

Since the replica is essentially another database instance, we create a second aws_db_instance resource block similar to the one defined above. The main difference is that the basic parameters are inherited from the source database, which is referenced via the replicate_source_db attribute.

The resource block for the read replica database is defined below.

resource "aws_db_instance" "replica" {
  replicate_source_db = aws_db_instance.default.identifier
  instance_class = "db.t3.medium"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]

  backup_retention_period = 7
  backup_window = "03:00-04:00"
  maintenance_window = "mon:04:00-mon:04:30"
  # Final snapshots can't be created when deleting a read replica
  skip_final_snapshot = true
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring_role.arn
  performance_insights_enabled = true
  storage_encrypted = true
  kms_key_id = aws_kms_key.my_kms_key.arn

  parameter_group_name = aws_db_parameter_group.my_db_pmg.name

  # Enable Multi-AZ deployment for high availability
  multi_az = true
}

Notice that we have replaced several attributes at the beginning with a single replicate_source_db attribute, since the replica inherits that information from the source database instance. We also skip the final snapshot, because final snapshots are not supported when deleting a read replica; most of the other parameters are identical.

Applying this updated Terraform configuration creates the source database instance and a replica instance, and enables Multi-AZ on both of them, as seen in the screenshot below. It may take a while for the databases to be fully provisioned, as the source database is created first and the replica afterward.


The database instances thus provisioned, both source and replica, are placed in the same region, eu-central-1. However, it is also possible to replicate automated backups to a different region altogether, using the aws_db_instance_automated_backups_replication resource shown below.

The entire Terraform configuration so far has provisioned all resources in the eu-central-1 region. To replicate backups to another region, we have to explicitly define a provider alias, as shown in the code below.

provider "aws" {
  region = "us-west-2"
  alias = "replica"
}

resource "aws_db_instance_automated_backups_replication" "default" {
  source_db_instance_arn = aws_db_instance.default.arn
  retention_period = 14
  kms_key_id = aws_kms_key.my_kms_key_us_west.arn

  provider = aws.replica
}

resource "aws_kms_key" "my_kms_key_us_west" {
  description = "My KMS Key for RDS Encryption"
  deletion_window_in_days = 30

  tags = {
    Name = "MyKMSKey"
  }

  provider = aws.replica
}

Apply the additional configuration above, and verify that automated backups of the source database are replicated to the us-west-2 region. This adds another layer of protection against regional failures and data loss.


What is the AWS RDS Terraform module?

Optionally, you can also use this AWS RDS module published on the Terraform registry.

The AWS RDS Terraform module is a prebuilt, reusable set of Terraform configurations that simplifies provisioning of Amazon RDS instances. It abstracts common RDS setup patterns, including engine selection, instance class, storage, multi-AZ deployment, and security groups, allowing consistent and maintainable deployments with minimal code. 

Popular community modules like terraform-aws-modules/rds/aws support PostgreSQL, MySQL, MariaDB, Oracle, and SQL Server and efficiently handle encryption, backups, and monitoring configurations.

Let’s look at an example below:

module "rds_instance" {
  source = "terraform-aws-modules/rds/aws"
  version = "6.1.1"  # Specify the version of the module you want to use

  identifier = "my-db"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t2.micro"
  allocated_storage = 20
  name = "mydb"
  username = "dbuser"
  password = "dbpassword"
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot = true

  // Other configuration options as needed
}

However, note that to enable specific RDS database configurations via this module, you still need to understand the underlying parameter group configurations and other attribute settings.

Best practices for configuring AWS RDS with Terraform

To effectively configure AWS RDS with Terraform, define modular, secure, and scalable infrastructure using Terraform’s resource and data structures.

Best practices include:

  • Use modules to encapsulate RDS configuration, including parameter groups, subnet groups, and security groups, for reuse and clarity.
  • Separate environments using workspaces or directory structures to manage dev, staging, and production independently.
  • Enable encryption by setting storage_encrypted = true and specifying a KMS key with kms_key_id for data at rest.
  • Apply multi-AZ deployments with multi_az = true to ensure high availability for production environments.
  • Restrict access using Security Groups and avoid public accessibility (publicly_accessible = false) unless explicitly required.
  • Use parameter groups and option groups to customize DB behavior and configurations consistently.
  • Store sensitive credentials like DB usernames and passwords in AWS Secrets Manager or SSM Parameter Store, and reference them securely via Terraform data sources (see the sketch after this list).
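
As a minimal sketch of the last point, assuming a secret already exists in AWS Secrets Manager under a hypothetical name such as prod/mydb/password:

data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/mydb/password" # hypothetical secret name
}

resource "aws_db_instance" "default" {
  # ...other attributes as shown in the earlier examples...
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}

Keep in mind that values read this way still end up in the Terraform state file, so the state itself should be encrypted and access-controlled.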

Deploying Terraform resources with Spacelift

Terraform is really powerful, but to achieve an end-to-end secure GitOps approach, you need to use a product that can run your Terraform workflows. Spacelift takes managing Terraform to the next level by giving you access to a powerful CI/CD workflow and unlocking features such as:

  • Policies (based on Open Policy Agent) – You can control how many approvals you need for runs, what kinds of resources you can create, and what parameters these resources can have. You can also control the behavior when a pull request is opened or merged.
  • Multi-IaC workflows – Combine Terraform with Kubernetes, Ansible, and other infrastructure-as-code (IaC) tools such as OpenTofu, Pulumi, and CloudFormation, create dependencies among them, and share outputs.
  • Build self-service infrastructure – You can use Blueprints to build self-service infrastructure; simply complete a form to provision infrastructure based on Terraform and other supported tools.
  • Integrations with any third-party tools – You can integrate with your favorite third-party tools and even build policies for them. For example, see how to integrate security tools in your workflows using Custom Inputs.

Spacelift enables you to create private workers inside your infrastructure, which helps you execute Spacelift-related workflows on your end. Read the documentation for more information on configuring private workers.

You can check it for free by creating a trial account or requesting a demo with one of our engineers.

Key points

Managing AWS RDS database instances with Terraform brings consistency, flexibility, and ease. With Terraform, you can create, adjust, and control RDS databases declaratively and repeatably, configuring details such as how often backups happen, who can access the database, and where replicas are placed.

You can also automate tasks like engine updates and scaling storage when needed. This combination keeps your databases flexible and resilient, adapting to what your business needs.

Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.

