blog by OSO

Terraform 101: Master the Power of Terraform

Sion Smith 2 February 2017

This is Part 1 of 2 in the Terraform primer series.

Welcome to Terraform 101, where we dive into the world of Terraform and its powerful capabilities. In this guide, we will explore what Terraform is and how it can revolutionise your infrastructure provisioning process. Get ready to unlock the full potential of Terraform and embark on a journey of automation and efficiency.

Terraform 101

What is Terraform? Terraform is fairly easy to pick up and make sense of, mostly thanks to its comprehensive and informative Getting Started guide, which does a great job of introducing the core components of Terraform such as resources, inputs, and outputs.

Here you’ll focus on how to bring all of these components together to create real-world infrastructure. We’ll start by provisioning a single web server, then move on to provisioning the infrastructure for a highly available web application.

You can find complete sample code for this post at: https://github.com/osodevops/terraform-azure-confluent-platform. Note that all the code samples are written for Terraform 0.8.x.

Requirements

  • Terraform
  • An AWS account

Install Terraform

Simply follow the [install instructions here] to install Terraform. To confirm that the installation was successful, run the terraform command.

terraform_install

Configure Terraform

Now that you’ve installed Terraform, you need to give it access to your AWS account so it can provision resources. There are a couple of ways this can be done:

Environment Variables

You can export your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables and Terraform will automatically use them before performing any operation.

export AWS_ACCESS_KEY_ID=(your access key id)

export AWS_SECRET_ACCESS_KEY=(your secret access key)


Terraform Provider

Providers are generally IaaS platforms (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS platforms (e.g. Heroku), or SaaS services (e.g. Atlas, DNSimple, CloudFlare).

A Terraform provider is responsible for understanding API interactions and exposing resources. Since you’re using AWS, you need to configure it by providing your credentials.



# Configure the AWS Provider

provider "aws" {
    access_key = "(your access key id)"
    secret_key = "(your secret access key)"
    region = "us-east-1"
}

This tells Terraform that you are going to be using the AWS provider and that you wish to deploy your infrastructure in the “us-east-1” region.

The above block of code will typically be at the top of a file named main.tf which you will look at in more detail as you progress.

Provisioning Infrastructure

Terraform code is written in the HCL syntax in files with the “.tf” extension. HCL is a declarative language: you describe the infrastructure you want, and Terraform creates it for you.

Server

Let’s start off by deploying a single server using Terraform. This will be in the form of an Amazon EC2 instance.

resource "aws_instance" "example" {
    ami = "ami-2d39803a"
    instance_type = "t2.micro"
}

Each resource specifies a type (“aws_instance”), a name (“example”), and a set of parameters that configure it.

In a terminal, go into the folder where you created main.tf, and run the “terraform plan” command:

terraform_plan_single

The plan command lets you see what Terraform will do before actually doing it. Resources with a plus sign (+) are going to be created, resources with a minus sign (-) are going to be deleted, and resources with a tilde sign (~) are going to be modified.
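For the single-instance example above, the plan output will look roughly like the following (the exact formatting varies between Terraform versions):

```
+ aws_instance.example
    ami:           "ami-2d39803a"
    instance_type: "t2.micro"

Plan: 1 to add, 0 to change, 0 to destroy.
```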

To actually create the instance, run the “terraform apply” command:

terraform_apply_single

That’s it! You’ve now deployed a server on AWS using Terraform. A bare server isn’t very exciting though, so let’s make it more useful.

Web Server

Let’s start by giving the server you just created a name by adding a tag.

resource "aws_instance" "example" {
    ami = "ami-2d39803a"
    instance_type = "t2.micro"

    tags {
        Name = "webserver"
    }
}


Let’s run the plan command to see our changes:

terraform_plan_webserver_tag

Now you can apply the change:

terraform_apply_webserver_tag

Our server still doesn’t do anything useful. Let’s fix that by turning it into a web server that always returns the text “Hello, World”.

#!/bin/bash

echo "Hello, World" > index.html

nohup busybox httpd -f -p 8080 &

To keep this example simple, you’re going to run the script above as part of the EC2 Instance’s User Data, which AWS will execute when the instance is booting:

resource "aws_instance" "example" {
    ami = "ami-2d39803a"
    instance_type = "t2.micro"

    user_data = <<-EOF
                #!/bin/bash
                echo "Hello, World" > index.html
                nohup busybox httpd -f -p 8080 &
                EOF

    tags {
        Name = "webserver"
    }
}


The “<<-EOF” and “EOF” markers are Terraform’s heredoc syntax, which lets you create multiline strings without embedding “\n” escape sequences.
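For illustration, the same script written as an ordinary single-line string would need escaped newlines and quotes, which quickly becomes unreadable (a sketch, not something you’d want in real code):

```hcl
user_data = "#!/bin/bash\necho \"Hello, World\" > index.html\nnohup busybox httpd -f -p 8080 &"
```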

By default, AWS does not allow any incoming or outgoing traffic to an EC2 Instance. To allow the EC2 Instance to receive traffic on port 8080, you’ll create a security group that allows incoming requests on port 8080 from any IP:

resource "aws_security_group" "example" {
   name = "example-securitygroup"
   ingress {
     from_port = 8080
     to_port = 8080
     protocol = "tcp"
     cidr_blocks = ["0.0.0.0/0"]
   }
}

Note that in the security group above, we copied and pasted port 8080. To keep your code DRY and make it easy to configure, Terraform allows you to define input variables:

variable "server_port" {
  description = "The port the server will use for HTTP requests"
  default = 8080
}

You should put the above code in the same directory as main.tf in a file called variables.tf.
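The default can also be overridden without editing the file. Terraform supports passing variables on the command line with the -var flag, or via environment variables prefixed with TF_VAR_:

```
# Pass the variable on the command line:
terraform plan -var 'server_port=8080'

# Or set it as an environment variable:
export TF_VAR_server_port=8080
terraform plan
```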

We can now use this via the interpolation syntax:

resource "aws_security_group" "example" {
   name = "example-securitygroup"
   ingress {
     from_port = "${var.server_port}"
     to_port = "${var.server_port}"
     protocol = "tcp"
     cidr_blocks = ["0.0.0.0/0"]
   }
}

In Terraform, every resource exposes attributes that you can reference using the same interpolation syntax. You can find the list of attributes in the documentation for each resource; for example, the aws_security_group attributes include the ID of the security group. You can now attach the security group to the EC2 instance by referencing that ID in the server’s resource block.

The syntax is “${TYPE.NAME.ATTRIBUTE}”. When one resource references another resource, you create an implicit dependency.

resource "aws_instance" "example" {
    ....

    vpc_security_group_ids = [
        "${aws_security_group.example.id}"
    ]
}

If you run the plan command, you’ll see that Terraform wants to replace the original EC2 Instance with a new one that has the new user data (the “-/+” means “replace”) and to add a security group:

terraform_plan_webserver_security_group

Everything looks good so you will apply the plan:

terraform_apply_webserver_security_group

You need the server’s public IP address to confirm that it is running and displaying the “Hello, World” message you expect. You could apply the change, wait for the server to launch, then log into the AWS console and look up the public IP assigned to the instance. Fortunately, you can do better by specifying an output variable:

output "public_ip" {
  value = "${aws_instance.example.public_ip}"
}

Put the above code in the same directory as main.tf in a file called outputs.tf. Terraform will by default parse all “.tf” files in the working directory.
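Once an apply has run, you can also read any output on demand with the terraform output command:

```
terraform output public_ip
```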

Let’s run apply again:

terraform_apply_output_group

We now have our public IP and can test our webserver:

> curl http://54.173.220.62:8080
Hello, World

Success!

 


Highly Available Web Servers

Running a single web server is great, but in the real world you’d want to architect for high availability to avoid a single point of failure. The solution is to run a cluster of servers and adjust the size of the cluster up or down based on traffic.

In the AWS world, there are three components to this:

  • Launch configuration
  • AutoScaling Group (ASG)
  • Elastic Load Balancer (ELB)

The first step in creating an ASG is to create a launch configuration, which specifies how to configure each EC2 Instance in the ASG.

resource "aws_launch_configuration" "example" {
  image_id = "ami-2d39803a"
  instance_type = "t2.micro"
  security_groups = ["${aws_security_group.example.id}"]
  user_data = <<-EOF
              #!/bin/bash
              echo "Hello, World" > index.html
              nohup busybox httpd -f -p "${var.server_port}" &
              EOF
  lifecycle {
    create_before_destroy = true
  }
}

The only new addition is the lifecycle block, which is required when using a launch configuration with an ASG. You can add a lifecycle block to any Terraform resource to customize its lifecycle behavior. One of the available lifecycle settings is create_before_destroy, which tells Terraform to always create a replacement resource before destroying the original (e.g. when replacing an EC2 Instance, always create the new Instance before deleting the old one). One catch: when create_before_destroy is set on a resource, every resource it depends on, such as the security group referenced by the launch configuration, generally needs the setting too:

resource "aws_security_group" "example" {
   ....

   lifecycle {
       create_before_destroy = true
   }
}

Now you can create the ASG itself using the aws_autoscaling_group resource:

resource "aws_autoscaling_group" "example" {
  launch_configuration = "${aws_launch_configuration.example.id}"
  min_size = 2
  max_size = 5
  tag {
    key = "Name"
    value = "terraform-asg-example"
    propagate_at_launch = true
  }
}

This ASG will run between 2 and 5 EC2 Instances (defaulting to 2 for the initial launch), each tagged with the name “terraform-asg-example”. The configuration of each EC2 Instance is determined by the launch configuration you created earlier, which you reference using Terraform’s interpolation syntax.

You also need to specify the availability zones (AZs) the ASG should be created in. Each AZ represents an isolated AWS data center, so by deploying your Instances across multiple AZs, you ensure that your service can keep running even if some of the AZs fail. You could hard-code the list of AZs (e.g. set it to [“us-east-1a”, “us-east-1b”]), but each AWS account has access to a slightly different set of AZs, so you can use the aws_availability_zones data source to fetch the exact list for your account:

data "aws_availability_zones" "all" {}

A data source represents a piece of read-only information that is fetched from the provider (in this case, AWS) every time you run Terraform.

You can now reference the data source using the now familiar interpolation syntax:

resource "aws_autoscaling_group" "example" {
  launch_configuration = "${aws_launch_configuration.example.id}"
  availability_zones = ["${data.aws_availability_zones.all.names}"]

  ....
}

Now that you have many instances, a load balancer is required to distribute load amongst them. Let’s create one:

resource "aws_elb" "example" {
  name = "terraform-asg-example"
  availability_zones = ["${data.aws_availability_zones.all.names}"]
}

Now you need to tell the load balancer how to route requests. To do that, you add one or more “listeners” which specify what port the ELB should listen on and what port it should route the request to:

resource "aws_elb" "example" {
  name = "terraform-asg-example"
  availability_zones = ["${data.aws_availability_zones.all.names}"]

  listener {
      lb_port = 80
      lb_protocol = "http"
      instance_port = "${var.server_port}"
      instance_protocol = "http"
  }

}

The ELB can periodically check the health of your EC2 Instances and, if an instance is unhealthy, it will automatically stop routing traffic to it. Let’s add an HTTP health check where the ELB will send an HTTP request every 30 seconds to the “/” URL of each of the EC2 Instances and only mark an Instance as healthy if it responds with a 200 OK:

resource "aws_elb" "example" {
  ....

  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    interval = 30
    target = "HTTP:${var.server_port}/"
  }

  ....
}

By default, ELBs don’t allow any outgoing or incoming traffic so you need to create a security group that’ll be attached to the ELB that’ll permit incoming traffic on port 80 and outgoing traffic on any port:

resource "aws_security_group" "elb" {
  name = "terraform-example-elb"

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

You now need to attach it to the ELB:

resource "aws_elb" "example" {
  ....

  security_groups = ["${aws_security_group.elb.id}"]

  ....

}

Right now the ELB has no knowledge of the EC2 Instances that’ll be created by the ASG from the launch configuration. You can use the load_balancers parameter of the aws_autoscaling_group resource to tell the ASG to register each Instance with the ELB as it boots:

resource "aws_autoscaling_group" "example" {
  ....

  load_balancers = ["${aws_elb.example.name}"]
  health_check_type = "ELB"

  ....
}

Notice that we’ve also set the ASG’s health_check_type to “ELB”. This tells the ASG to use the ELB’s health check to determine whether an Instance is healthy, and to automatically replace Instances that the ELB reports as unhealthy.

Let’s add the ELB DNS name as an output so it’s easier to test if things are working:

output "elb_dns_name" {
  value = "${aws_elb.example.dns_name}"
}

Run the plan command and apply command to provision the resources:

terraform_apply_asg

Once it completes, we can test the elb_dns_name output:

terraform_test_asg

You now have a fully working cluster of web servers!

Cleanup

Now that you’ve provisioned your web application using Terraform, you’re ready to clean up your resources to avoid AWS charges. Terraform makes this very easy, as it keeps track of the resources you’ve deployed. All you need to do is issue the destroy command.

terraform_destroy

Once you confirm that you’re happy for the resources to be destroyed, Terraform will use the state file to delete all of your resources in the correct order. The destroy command also runs in parallel, so resources that have no dependencies between them are deleted simultaneously.
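If you want to see what Terraform is tracking before destroying anything, you can inspect the state and preview the deletions first:

```
# Show the resources recorded in the state file
terraform show

# Preview exactly what "terraform destroy" will delete
terraform plan -destroy
```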

Conclusion

You should now have a basic grasp of how to use Terraform to provision infrastructure. Its declarative language makes it easy to describe the infrastructure you want to create.

We’ve only just scratched the surface of Terraform. In the next part of this series, you’ll look at how to create reusable infrastructure using Terraform modules, making your code more efficient and reducing duplication.

If you need help with Terraform, DevOps practices, or AWS at your company, feel free to reach out to us at OSO DevOps.

At OSO DevOps, our experts can maintain your DevOps platform and be responsible for day-to-day operational issues, allowing you to develop and ship your product without the need for internal DevOps hires.

For more content:

How to take your Kafka projects to the next level with a Confluent preferred partner

Event driven Architecture: A Simple Guide

Watch Our Kafka Summit Talk: Offering Kafka as a Service in Your Organisation

Successfully Reduce AWS Costs: 4 Powerful Ways
