Continuous Integration and Continuous Deployment with Terraform Cloud

Katoria Henry · Published in Level Up Coding · Feb 18, 2023

This week we’re coming right back at you with more Terraform, and this time we’ll be leveraging Terraform Cloud as our CI/CD tool to verify our build. If you’ve never heard of Terraform Cloud, one of the top reasons it’s used in DevOps is AUTOMATION — from compliance, to management of various cloud service providers and data centers, to automated infrastructure provisioning, Terraform Cloud handles it for you. It’s essentially a SaaS platform that Terraform can use to retrieve and save remote state files.
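As a quick illustration, pointing a configuration at Terraform Cloud for remote state and runs is just a small block in your Terraform code — the organization and workspace names below are placeholders, so swap in your own:

```hcl
terraform {
  # The `cloud` block (Terraform v1.1+) tells Terraform to store state
  # and execute runs in Terraform Cloud instead of on your local machine.
  cloud {
    organization = "my-org-name" # placeholder — your TFC organization

    workspaces {
      name = "two-tier-aws" # placeholder — your workspace name
    }
  }
}
```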

Additional fun facts about Terraform Cloud are as follows:

  • You have visibility into state files that are being accessed and by whom
  • Versioning and backing up state files is as easy as 1,2,3
  • You can talk directly to various CSPs (remote state)
  • You can set up your runs so that they require approval before deployment; only one run is applied at a time
  • And of course, it improves overall collaboration

*I would highly recommend that you check out some of the tutorials for Terraform Cloud to become more familiar with it. They can be found here.

Today we’ll be deploying multiple AWS resources via Terraform Cloud to see some of the bells and whistles it has to offer. For starters, it’s best to get your GitHub repo all set up, as that will be needed for this walkthrough. Similar to what we did last week, we will be leveraging modules this week as well, and this is the scenario we will attempt to execute:

  • Create a highly available two-tier AWS architecture containing the following:
  • Custom VPC with:

2 Public Subnets for the Web Server Tier

2 Private Subnets for the RDS Tier

  • Launch an EC2 Instance with your choice of Web Server in each public web tier subnet (Apache, NGINX, etc.)
  • One RDS MySQL Instance (micro) in the private RDS subnets
  • Security Groups properly configured for resources (Web Servers, RDS)
  • Use module blocks for ease of use and re-usability
  • Replace the EC2 Instances with an Auto Scaling Group for the Web Server (in the Public Web Server subnets) with min of 2 and max of 5 spread across the 2 subnets
  • Internet-facing Application Load Balancer targeting the Web Server Auto Scaling Group.
  • ALB Security Group with the required permissions, plus modifications to the Web Servers’ SG to reflect the new architecture
  • Use Terraform Cloud as the CI/CD to check the build
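To give a feel for the module-based layout described above, the root main.tf might wire the tiers together roughly like this — module paths, names, and variables here are illustrative, so refer to the repo linked later for the actual code:

```hcl
module "vpc" {
  source          = "./modules/vpc" # illustrative path
  public_subnets  = 2               # Web Server tier
  private_subnets = 2               # RDS tier
}

module "compute" {
  source     = "./modules/compute"
  subnet_ids = module.vpc.public_subnet_ids
  min_size   = 2 # Auto Scaling Group minimum
  max_size   = 5 # Auto Scaling Group maximum
}

module "database" {
  source         = "./modules/database"
  instance_class = "db.t3.micro" # illustrative MySQL micro instance
  subnet_ids     = module.vpc.private_subnet_ids
}

module "loadbalancing" {
  source   = "./modules/loadbalancing"
  vpc_id   = module.vpc.vpc_id
  asg_name = module.compute.asg_name # ALB targets the web ASG
}
```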

Note: I’ve done a dry run of this locally and had issues resolving two errors 🥴, so we’ll see if we can get them fixed using Terraform Cloud!

Resources/PreReqs:

  • As always, Confidence to get it done!
  • AWS Account
  • GitHub Account integrated with VSCode
  • Terraform Installed
  • Terraform Cloud Account: Sign up Here
  • Source Code Editor (I use Visual Studio Code (VS Code)) w/ the Terraform Extension Installed
  • AWS IAM generated Access/Secret Key

Part 1

If you haven’t already done so, be sure to sign up for Terraform Cloud, preferably using your GitHub account, as that will be needed for this walkthrough. I actually used my email when I first signed up for my account, so no biggie if you choose not to sign up using GitHub. Because there are various steps that we’ll be walking through, let’s highlight a few of the files that we will be using today:

Main.tf (root)

Compute (main.tf)

Load Balancer (main.tf) — Be sure to add your VPC ID to line 17

Your file structure should resemble this once completed:
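As a rough sketch (exact file and module names may differ slightly in the repo), the layout looks something like:

```
.
├── main.tf          # root module
├── variables.tf
├── outputs.tf
└── modules/
    ├── compute/
    │   ├── main.tf
    │   └── variables.tf
    ├── database/
    │   └── main.tf
    └── loadbalancing/
        └── main.tf
```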

For a full list of files and modules that will be used, feel free to clone my repo for this walkthrough that can be found here:

Part 2

Now that we have signed up for Terraform Cloud, let’s start with some of the basics of getting things set up for our new deployment. Follow the steps below to kick things off:

  • As previously mentioned, if you’ve signed up using your Github account, select “Sign in with Github”, as shown below. Otherwise, continue with your email and password:
  • If you signed up for MFA, you may be prompted to enter the 6-digit code that’s provided before being redirected
  • Now that you’ve successfully logged on, you’ll see “Projects & Workspaces”, which for most of you will be empty if this is your first time using it:
  • Since this is a new deployment, let’s start by selecting “New Workspace”, and then selecting “Version control workflow”
  • Connect to GitHub (or if you’re using one of the other version control providers, select that option). Select “GitHub.com”, and then authorize Terraform Cloud. From there, you’ll want to select the repositories that will be used for this walkthrough:
  • Be sure to give the Workspace a name and pay close attention to the advanced options. If you’re like me and would prefer NOT to have changes automatically applied to your configuration (though auto apply is more convenient), keep your apply method as “Manual”, and then select “Create workspace”
  • We’ll have to configure our variables next (these will be the AWS access/secret keys from our pre-reqs), so let’s click on “Continue to workspace overview” before starting our new plan. Start by clicking on “Configure Variables”, and add a Terraform variable for access to the Bastion Host. Following this, add the Environment variables for AWS, and mark them as Sensitive to prevent the actual values from being displayed:
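For reference, the Bastion Host variable is an ordinary Terraform variable declared in your configuration, while the AWS credentials are environment variables that Terraform Cloud injects into each run. A hypothetical declaration (the variable name is illustrative) might look like:

```hcl
# Declared in variables.tf and set as a "Terraform variable" in the
# Terraform Cloud workspace (name is illustrative):
variable "key_name" {
  description = "EC2 key pair used to reach the Bastion Host"
  type        = string
  sensitive   = true # hides the value in plan/apply output
}

# The AWS credentials are set as *Environment* variables in the
# workspace, using the names the AWS provider expects:
#   AWS_ACCESS_KEY_ID     = <your access key>   (mark as Sensitive)
#   AWS_SECRET_ACCESS_KEY = <your secret key>   (mark as Sensitive)
```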

Part 3

Previous versions of Terraform Cloud required you to start a new plan immediately after adding your required variables, but that option has since changed, so we will proceed with our first step of starting a new run, as shown below:

  • Let’s give our run a name and leave the default run type as is for now, and click on “Start Run”:
  • If you’re seeing “GREEN” on your end, that means your configuration was valid and Terraform Cloud will be able to create the resources as listed. To continue, select “Confirm & Apply”, followed by “Confirm Plan”, and your AWS resources will be created:
  • Out of the 33 resources that we were hoping to create, it appears as though we have the SAME two errors from our dry-run that was completed earlier ☹️:
  • All other resources were successfully created, so let’s see if we can fix our Load Balancer Module to get rid of one of the two errors. We’ll start by removing “var.vpc_id”, and applying the VPC-ID to the target group directly:
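In other words, the change amounts to swapping the variable reference in the target group for the literal ID — the resource name and VPC ID below are placeholders, so use your own:

```hcl
resource "aws_lb_target_group" "web_tg" { # resource name illustrative
  name     = "web-target-group"
  port     = 80
  protocol = "HTTP"

  # Before: vpc_id = var.vpc_id
  # After: hard-code the VPC ID created for this deployment
  vpc_id = "vpc-0123456789abcdef0" # placeholder — your VPC ID
}
```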
  • Stage your changes to this file, and then push to Github. It should automatically trigger a new run in Terraform Cloud:
  • Now, let’s retry our run to see what happens:
  • It appears as though we cleared one error, but we are still having issues with that Auto Scaling Group…let’s try one more update to see if we can resolve the error. We will follow the exact same steps and make updates to our “Compute” variables.tf file. It’s showing that our instance type is invalid for the launch template, which is an easy fix:
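The fix is simply correcting the default in the Compute module’s variables.tf to an instance type the launch template accepts — the exact names and values in my repo may differ:

```hcl
variable "instance_type" {
  description = "Instance type for the web server launch template"
  type        = string
  # An invalid value here (e.g. a typo like "t2micro") fails the apply;
  # a valid Free Tier type resolves the error:
  default = "t2.micro"
}
```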
It worked! (Okay, I knew that it would, but I still wanted to create some errors.)
  • You’ll also notice that the output provided us with our alb_dns address, so let’s copy that and paste it into our browser, to see what happens:
  • After waiting for roughly 10 min, I gave it another shot to access the Application Load Balancer, and…New Error
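For context, the alb_dns value shown in the run output comes from an output block in the load balancer module along these lines (the output and resource names are illustrative):

```hcl
output "alb_dns" {
  description = "Public DNS name of the Application Load Balancer"
  value       = aws_lb.web_alb.dns_name # resource name illustrative
}
```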

Part 4

With our final error now being resolved, we can head into our AWS Console to verify some of the resources that were created:

VPC

Load Balancer across 3 AZs

Internet Gateway

Auto Scaling Groups

EC2 Connect

Terraform Cloud List of Resources Created

Part 5

I hope you’ve enjoyed using Terraform Cloud CI/CD for your AWS resources. We’ve succeeded, failed, and had several “eyebrow-raising” moments with this walkthrough, but we made it to the end! If you’re ready to destroy everything that you’ve deployed, let’s start with the steps below:

  • Head over to your workspace settings, and scroll down to “Destruction and Deletion”
  • Select “Queue Destroy Plan”, and confirm your Workspace Name
  • And one-by-one, you will start to see the resources being destroyed:
  • Remember, you must select “Confirm & Apply”, followed by “Confirm Plan” to ensure the resources are deleted:

As a best practice, double check that the resources no longer exist by visiting your AWS Console as well! And that wraps it up for this week 👏

Until next time everyone! Follow me on LinkedIn, and @theCaptN21 on GitHub!


Platform Engineering | DevOps | Chaos Engineering | Cloud Engineering | High Availability | Cloud Security | 👉🏽 https://www.linkedin.com/in/katoria-henry-2018