How to Automate Terraform Deployment using GitLab CI/CD
This documentation outlines the steps to automate Terraform deployments using GitLab CI/CD. The process involves pushing Terraform code to a GitLab repository, creating a CI/CD pipeline, and configuring it to validate, plan, apply, and destroy infrastructure changes.
Prerequisites
GitLab Account: You need an active GitLab account.
Terraform Installed: Ensure Terraform is installed on your local machine.
AWS Account: An AWS account to create and manage infrastructure.
Git Installed: Git should be installed on your local machine.
Steps to Automate Terraform Deployment
i) Directory Structure
Create the following directory structure on your local machine (inside the project's root directory):
```
.
├── backend.tf
├── ec2
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── main.tf
├── provider.tf
├── variables.tf
└── vpc
    ├── main.tf
    ├── output.tf
    └── variables.tf
```
ii) Write the Terraform Code
./provider.tf:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.55.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
```
The provided code snippet configures Terraform to use the AWS provider with version 5.55.0 and sets the AWS region to "us-east-1".
./vpc/main.tf
resource "aws_vpc" "myvpc" { cidr_block = "10.0.0.0/16" enable_dns_hostnames = true enable_dns_support = true tags = { Name = "myvpc" } } resource "aws_subnet" "pub_sub1" { vpc_id = aws_vpc.myvpc.id cidr_block = "10.0.0.0/24" map_public_ip_on_launch = true availability_zone = "us-east-1" tags = { Name = "pub_sub1" } } resource "aws_security_group" "sg" { vpc_id = aws_vpc.myvpc.id name = "my_sg" description = "public Security" ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } }
This will define three AWS resources: a VPC with DNS support enabled and a specific CIDR block, a public subnet within that VPC, and a security group allowing inbound SSH access and all outbound traffic.
./vpc/output.tf
output "pub_sub1" { value = aws_subnet.pub_sub1.id } output "sg" { value = aws_security_group.sg.id }
This configuration defines two output values: the ID of the public subnet (`pub_sub1`) and the ID of the security group (`sg`). These outputs export the values for use in the ec2 module, allowing the subnet and security group IDs to be referenced when creating EC2 instances.
./ec2/main.tf
resource "aws_instance" "server1" { ami = "ami-01b799c439fd5516a" instance_type = "t2.micro" subnet_id = var.sn security_groups = [var.sg] tags = { Name = "myserver" } }
This resource configuration creates an AWS EC2 instance from the specified AMI and instance type. The instance is placed in a subnet and associated with a security group, both specified through variables (`var.sn` for the subnet ID and `var.sg` for the security group ID). The instance is tagged with the name "myserver".
./ec2/variables.tf
variable "sg" { } variable "sn" { }
Leaving the variable definitions empty in `variables.tf` indicates that the values for these variables (`sg` and `sn`) will be provided elsewhere in the Terraform configuration; here, they are sourced from the outputs defined in `./vpc/output.tf`.
./main.tf
module "vpc" { source = "./vpc" } module "ec2" { source = "./ec2" sn = module.vpc.pub_sub1 sg = module.vpc.sg }
The `main.tf` file in the root directory orchestrates two modules: `vpc` and `ec2`. It calls the VPC module (`./vpc`) to manage networking components like the subnet and security group. The EC2 module (`./ec2`) is configured to deploy instances using outputs from the VPC module, placing the computing resources within the defined network infrastructure.
Now run `terraform init` to initialize the working directory. Then run `terraform validate` and `terraform plan` in the root directory to ensure there are no inconsistencies or syntax errors in the code.
Analyze the output of `terraform plan` and check whether all the resources will be created (a total of 4 resources: the VPC, the subnet, the security group, and the EC2 instance).
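Put together, the local checks look like this (the expected summary line is a sketch of standard Terraform output):

```bash
terraform init       # download the AWS provider and set up modules and the working directory
terraform validate   # check the configuration for syntax errors
terraform plan       # preview the changes
# expect a summary like: Plan: 4 to add, 0 to change, 0 to destroy.
```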
Create an S3 bucket in AWS, specifying any globally unique name, using the console UI (a CLI alternative is sketched further below). Then create a DynamoDB table using the following command:
```bash
aws dynamodb create-table \
  --region us-east-1 \
  --table-name terraform-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
```
This creates a DynamoDB table named terraform-lock. (DynamoDB locks the state file stored in S3 and prevents concurrent modifications of the state file by multiple users.)
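If you prefer the CLI for the bucket as well, the following is a minimal sketch; `<bucket_name>` is a placeholder for your own globally unique bucket name, and the versioning step is an optional safeguard rather than part of the original walkthrough:

```bash
# create the state bucket
aws s3api create-bucket --bucket <bucket_name> --region us-east-1

# optional: keep old state versions recoverable
aws s3api put-bucket-versioning --bucket <bucket_name> \
  --versioning-configuration Status=Enabled
```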
Inside `./backend.tf`, write the backend configuration to store the state file in S3 and lock it with DynamoDB:

```hcl
terraform {
  backend "s3" {
    bucket         = "<bucket_name>"
    key            = "state"
    region         = "us-east-1"
    dynamodb_table = "terraform-lock"
    encrypt        = true
  }
}
```
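Because the backend changed, re-initialize Terraform before the next apply; if a local state file already exists, Terraform will offer to migrate it to S3:

```bash
terraform init                  # re-initialize so state is stored in the S3 backend
# or, if a local terraform.tfstate already exists:
terraform init -migrate-state
```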
Use `terraform apply -auto-approve` and check whether the state file gets created inside the S3 bucket. If everything works, create a `.gitignore` file in the root directory, since we are now ready to push the code to a GitLab repository. (For the contents of the `.gitignore` file, search 'terraform .gitignore' in your browser and use the GitHub template that comes up first; a condensed version is sketched below.)
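For reference, the widely used GitHub Terraform template boils down to entries like these (a condensed sketch; defer to the upstream template for the full list):

```
# Local .terraform directories
**/.terraform/*

# State files (these now live in S3)
*.tfstate
*.tfstate.*

# Crash log files
crash.log

# Variable files that may contain secrets
*.tfvars
*.tfvars.json
```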
iii) Pushing the Code to GitLab
In the root directory, initialize a git repository using the `git init` command. Log in to your GitLab account and create a new repository. Copy the commands to set that repository as the origin remote and run them in your root directory.
Use a new branch to push the code (don't push directly to the main branch; it is not a best practice). You can create and switch to a new branch using `git checkout -b dev`.
Push the code using `git add .`, `git commit -m "1st commit"`, and `git push -u origin dev` (you will be asked to provide your GitLab username and password). This pushes the whole codebase to the GitLab repository on the dev branch; the full sequence is sketched below.
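Put together (the remote URL is a placeholder for your own repository):

```bash
git init
git remote add origin https://gitlab.com/<your_user>/<your_repo>.git   # placeholder URL
git checkout -b dev
git add .
git commit -m "1st commit"
git push -u origin dev   # prompts for your GitLab credentials
```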
iv) Creating GitLab CI
On the dev branch, create a new file in the root directory named `.gitlab-ci.yml` (we use this name because GitLab CI/CD automatically recognizes this specific filename as the pipeline configuration). Give the `.gitlab-ci.yml` file the following contents:

```yaml
image:
  name: registry.gitlab.com/gitlab-org/gitlab-build-images:terraform
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  AWS_ACCESS_KEY_ID: ${MY_AWS_KEY}
  AWS_SECRET_ACCESS_KEY: ${MY_AWS_ACCESS_KEY}
  AWS_DEFAULT_REGION: "us-east-1"

before_script:
  - terraform --version
  - terraform init

stages:
  - validate
  - plan
  - apply
  - destroy

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  dependencies:
    - validate
  script:
    - terraform plan --out="planfile"
  artifacts:
    paths:
      - planfile

apply:
  stage: apply
  dependencies:
    - plan
  script:
    - terraform apply -input=false "planfile"
  when: manual

destroy:
  stage: destroy
  script:
    - terraform destroy -auto-approve
  when: manual
```
This configuration specifies the Docker image for running jobs (`registry.gitlab.com/gitlab-org/gitlab-build-images:terraform`) and sets the entrypoint with a clean PATH. It then defines environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`) for the AWS credentials and region, sourced securely from GitLab CI/CD variables. The `before_script` prepares each job by printing Terraform's version and initializing the configuration with `terraform init`. The pipeline then executes Terraform commands through four stages: `validate`, `plan`, `apply`, and `destroy`.
Configure AWS Access Keys in GitLab CI/CD
-> Go to your GitLab repository.
-> Navigate to Settings > CI/CD.
-> Expand the Variables section.
-> Add the following variables:
-- `MY_AWS_KEY` (AWS Access Key ID)
-- `MY_AWS_ACCESS_KEY` (AWS Secret Access Key)
v) Trigger the Pipeline
Create a merge request to merge the `dev` branch into the `main` branch. Review and merge the request.
The GitLab CI/CD pipeline will automatically trigger and perform the following steps:
Validate: Runs `terraform validate` to check the configuration.
Plan: Runs `terraform plan` to create a plan file.
Apply: Manually triggered to run `terraform apply` and deploy the infrastructure.
Destroy: Manually triggered to run `terraform destroy` and tear down the infrastructure.
vi) Verify the Deployment
Check the AWS Management Console to verify that the resources have been created as specified in your Terraform configuration.
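You can also spot-check from the CLI; this is a quick sketch that assumes the Name tag set in ./ec2/main.tf:

```bash
# list the instance created by the pipeline, with its state and public IP
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=myserver" \
  --query "Reservations[].Instances[].[InstanceId,State.Name,PublicIpAddress]" \
  --output table
```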
vii) Clean Up
To clean up resources, manually trigger the `destroy` stage from the GitLab CI/CD pipeline.
Summary
This guide demonstrates how to automate Terraform deployments using GitLab CI/CD by creating a pipeline that validates, plans, applies, and destroys infrastructure changes. The setup validates and plans every pushed change automatically, while apply and destroy remain deliberate manual steps, streamlining the deployment process while following industry best practices.
NOTE: I do not own this project; it is an adaptation of the original project by Cloud Champ. You can view the full video of this project at the following link: youtube.com/watch?v=oqOzM_WBqZc