Automation using AWS and Terraform

Hello!

Have you ever tried doing everything on AWS by writing code instead of doing anything manually (launching an instance, creating a volume, attaching it, formatting it, mounting it on /var/www/html, creating an S3 bucket, creating a CloudFront distribution, and then launching your page), all with the help of Terraform? Maybe yes, maybe no!

I got exactly this task in my training, and I am going to share how I automated it using AWS and Terraform. So first of all, go through the task that I was given.

1. Create a key and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it on /var/www/html.

5. The developer has uploaded the code to a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

The very first thing I will do is create a directory for my code.

mytask.tf is my main file.
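Something like this works (the directory name is just my choice):

mkdir mytask
cd mytask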

Now let’s start the task.

Before creating the key, I have to declare AWS as the provider, along with my region and profile name.

provider "aws" {
  region  = "ap-south-1"
  profile = "arifiya"
}

To configure your profile, run:

aws configure --profile <profilename>
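The prompts look roughly like this (the values shown are placeholders for your own credentials):

$ aws configure --profile arifiya
AWS Access Key ID [None]: <your access key>
AWS Secret Access Key [None]: <your secret key>
Default region name [None]: ap-south-1
Default output format [None]: json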

Now I will create the key using the code below.

When you create a key this way, your private key is not downloaded automatically, so I use an output to print it.

variable "keyname" {
  default = "arifiyaKey1"
}

resource "tls_private_key" "privateKey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

module "key_pair" {
  source     = "terraform-aws-modules/key-pair/aws"
  key_name   = var.keyname
  public_key = tls_private_key.privateKey.public_key_openssh
}

output "output1" {
  value = tls_private_key.privateKey.private_key_pem
}

Then,

terraform init

terraform apply

Let's check whether my key has been created in AWS.

Yes, my key has been created.

KeyName = arifiyaKey1
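You can double-check from the terminal too; a describe call like this (assuming the same profile and region as above) should return the key:

aws ec2 describe-key-pairs --key-names arifiyaKey1 --profile arifiya --region ap-south-1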

So now, according to the task, I have to create a security group that allows port 80. For convenience I am also opening port 22 for SSH and port 443 for HTTPS.

resource "aws_security_group" "allow_http_protocol_task1" {
  name = "allow_http_protocol_task1"

  ingress {
    description = "HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS from anywhere"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_http_protocol_task1"
  }
}

I keep checking that everything is working the way I want, so now I will check whether my security group has been created.
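Besides the console, a quick CLI check works as well (assuming the same profile and region as before):

aws ec2 describe-security-groups --filters Name=group-name,Values=allow_http_protocol_task1 --profile arifiya --region ap-south-1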

My key and security group are created, so now I will launch my EC2 instance with that same key and security group.

resource "aws_instance" "task_ec2_instance1" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = var.keyname
  security_groups = [aws_security_group.allow_http_protocol_task1.name]

  tags = {
    Name = "ec2_firstTaskOs"
  }
}

My instance is launched and it is using the same key and security group.
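A small addition of my own: an output for the public IP saves a trip to the console after every apply, and the SSH provisioners later use the same address.

output "instance_ip" {
  value = aws_instance.task_ec2_instance1.public_ip
}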

Now I have to launch an EBS volume, which is like a hard disk; I am giving it a size of 1 GiB.

Run terraform apply after every change you make to your code.

resource "aws_ebs_volume" "Taskebs_volume1" {
  availability_zone = aws_instance.task_ec2_instance1.availability_zone
  size              = 1

  tags = {
    Name = "Taskebs_volume1"
  }
}

output "outp2" {
  value = aws_instance.task_ec2_instance1.availability_zone
}

And then I will attach this volume to my instance:

resource "aws_volume_attachment" "attach_volume1_task" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.Taskebs_volume1.id
  instance_id = aws_instance.task_ec2_instance1.id
}

In AWS, our EBS volume will look something like this; the state shows "in-use" because I have attached the volume, otherwise it would show "available".
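You can also confirm from inside the instance; on this Amazon Linux AMI the disk attached as /dev/sdh shows up as xvdh:

lsblk                  # the new 1 GiB disk appears as xvdh
sudo file -s /dev/xvdh # prints "data" while the disk has no filesystem yet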

I have uploaded my code and image to GitHub. You may upload yours manually or through Git Bash.

How will I download the code from GitHub onto my EC2 instance?

For this we have to install git, and httpd to serve the page that I want to deploy.

These commands have to run inside the EC2 instance, so I put a connection block and a remote-exec provisioner inside a null_resource. I supply my private key so that Terraform can SSH into the instance.

Also, sudo goes before every command, because only root has the power to install packages or change system files.

resource "null_resource" "install_software" {
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.privateKey.private_key_pem
    host        = aws_instance.task_ec2_instance1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
      "sudo git clone https://github.com/Arifiya-khan/terraform_task.git /var/www/html/"
    ]
  }
}
While the packages are downloading, the screen will look something like this:

I can connect to the instance and check whether httpd and git were installed, and whether the files from GitHub were cloned.

Now, to format the volume and mount it on /var/www/html, I use the code below.

resource "null_resource" "nullremote" {
  depends_on = [
    aws_volume_attachment.attach_volume1_task,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.privateKey.private_key_pem
    host        = aws_instance.task_ec2_instance1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Arifiya-khan/terraform_task.git /var/www/html/"
    ]
  }
}
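One caveat with this block: mkfs.ext4 reformats (and wipes) the volume every time the provisioner runs. A slightly safer variant, my own tweak rather than part of the task, formats only when blkid finds no existing filesystem on the device:

provisioner "remote-exec" {
  inline = [
    "sudo blkid /dev/xvdh || sudo mkfs.ext4 /dev/xvdh",
    "sudo mount /dev/xvdh /var/www/html",
  ]
}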

After all these steps are done, I create the S3 bucket. The bucket name should use hyphens and lowercase letters, otherwise it will not be accepted.

resource "aws_s3_bucket" "task_bucket" {
  bucket = "task-bucket-s3"
  acl    = "private"

  tags = {
    Name = "task_bucket"
  }
}

resource "aws_s3_bucket_public_access_block" "s3_BlockPublicAccess" {
  bucket                  = aws_s3_bucket.task_bucket.id
  block_public_acls       = true
  block_public_policy     = true
  restrict_public_buckets = true
}

locals {
  s3_origin_id = "myS3Origin"
}

My S3 bucket is now created, and I can upload my files to it using the code below. Don't forget to use forward slashes instead of backslashes in the path.

resource "aws_s3_bucket_object" "object" {
bucket = "your_bucket_name"
key = "new_object_key"
source = "path/to/file"
etag = "${filemd5("path/to/file")}"
}
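If the repo has several images, uploading each one by hand gets tedious. A for_each over fileset() handles all of them in one resource; this is a sketch that assumes the images sit in a local images/ folder:

resource "aws_s3_bucket_object" "images" {
  for_each = fileset("images", "*")

  bucket = aws_s3_bucket.task_bucket.id
  key    = each.value
  source = "images/${each.value}"
  etag   = filemd5("images/${each.value}")
}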

Then I will create the CloudFront distribution:

// Creating Origin Access Identity for CloudFront
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "Terra Access Identity"
}

resource "aws_cloudfront_distribution" "s3_distribution_task" {
  origin {
    domain_name = aws_s3_bucket.task_bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }

  enabled         = true
  is_ipv6_enabled = true
  comment         = "Terra Access Identity"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "blacklist"
      locations        = ["CA"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  retain_on_delete = true
}
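The distribution takes a few minutes to deploy. To grab its domain name for the final step, I add one more output:

output "cloudfront_url" {
  value = aws_cloudfront_distribution.s3_distribution_task.domain_name
}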

After that, I set up the bucket policy for CloudFront.

// AWS bucket policy for CloudFront
data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.task_bucket.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }

  statement {
    actions   = ["s3:ListBucket"]
    resources = [aws_s3_bucket.task_bucket.arn]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "s3BucketPolicy" {
  bucket = aws_s3_bucket.task_bucket.id
  policy = data.aws_iam_policy_document.s3_policy.json
}
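The last task item asks to update the code in /var/www/html with the CloudFront URL. Here is a minimal sketch of how that can be done; it assumes the page is index.html, and myimage.jpg is a hypothetical object key, so substitute whatever you actually uploaded to the bucket:

resource "null_resource" "update_page" {
  depends_on = [aws_cloudfront_distribution.s3_distribution_task]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.privateKey.private_key_pem
    host        = aws_instance.task_ec2_instance1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # appends an img tag that loads the (hypothetical) image via CloudFront
      "echo '<img src=\"https://${aws_cloudfront_distribution.s3_distribution_task.domain_name}/myimage.jpg\">' | sudo tee -a /var/www/html/index.html",
    ]
  }
}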

To launch my webpage, I used the following:

resource "null_resource" "null_remote1" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution_task,
  ]

  provisioner "local-exec" {
    # "start chrome" is Windows syntax; this opens the page served from the instance
    command = "start chrome ${aws_instance.task_ec2_instance1.public_ip}"
  }
}

Now, after terraform apply, my code automatically opens the HTML page in Chrome.

Hence, my page has been launched.

So finally, my whole task is complete, just the way I wanted. You may try this method for performing the task yourself; if you get stuck, feel free to ask any queries.

Happy learning!!!😊