Launching Webpage on AWS using EFS Service

Arifiya Khan
8 min read · Aug 7, 2020

Task :-

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use an existing or provided key and the security group created in step 1.

4. Launch one volume using the EFS service, attach it to your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

To do this task, we must first know about EFS.

What is EFS?

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

To start the task, I first configure the provider so that our code can make changes in my AWS account.

# provider
provider "aws" {
  profile = "username"
  region  = "ap-south-1"
}

For EFS, I need subnets, and for subnets I need a VPC. So to accomplish the task I have to create a VPC, subnets, an internet gateway, and a route table.

VPC:-

Before creating the VPC, we first need to initialize the working directory using

terraform init

so that Terraform can download all the required plugins. On downloading the plugins, the screen will look like this.

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

For creating the VPC, I am using the Terraform code below.

# vpc
resource "aws_vpc" "vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "task2_vpc"
  }
}

To run the code, the command used is

terraform apply

It is good practice to check through the GUI every time whether the resource has actually been created.

So yes, my VPC is created.
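As an aside, instead of opening the console every time, a Terraform output can print the ID right after apply. A minimal sketch (the output name is my own choice):

```hcl
# optional: print the VPC ID at the end of "terraform apply",
# so creation can be confirmed without opening the AWS console
output "vpc_id" {
  value = aws_vpc.vpc.id
}
```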

Now I will create a subnet, which is required while creating the EFS.

# subnet
resource "aws_subnet" "subnet" {
  depends_on = [
    aws_vpc.vpc
  ]
  vpc_id                  = aws_vpc.vpc.id
  availability_zone       = "ap-south-1a"
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  tags = {
    Name = "task2_subnet"
  }
}

A subnet, or subnetwork, is a network inside a network. Subnets make networks more efficient: through subnetting, network traffic can travel a shorter distance without passing through unnecessary routers to reach its destination.
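As a side note, the subnet CIDR above could also be derived from the VPC's block with Terraform's built-in cidrsubnet() function instead of hard-coding it; a small sketch:

```hcl
# cidrsubnet(prefix, newbits, netnum): adding 8 bits to the /16
# gives a /24, and netnum 1 selects the second /24 in the range
locals {
  subnet_cidr = cidrsubnet("10.0.0.0/16", 8, 1) # "10.0.1.0/24"
}
```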

Checking through GUI.

Now, for creating the internet gateway, I am using the code below.

# internet gateway
resource "aws_internet_gateway" "ig" {
  depends_on = [
    aws_vpc.vpc
  ]
  vpc_id = aws_vpc.vpc.id
  tags = {
    Name = "task2_ig"
  }
}

An internet gateway is a network "node" that connects two different networks. In AWS terms, it is the VPC component that allows communication between instances in your VPC and the internet, in both directions.

Now I will create a route table and associate it with the subnet.

For this I am using the Terraform code below.

# route table
resource "aws_route_table" "route" {
  depends_on = [
    aws_vpc.vpc
  ]
  vpc_id = aws_vpc.vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.ig.id
  }
  tags = {
    Name = "task2_route"
  }
}
# route association
resource "aws_route_table_association" "association" {
  depends_on = [
    aws_subnet.subnet
  ]
  subnet_id      = aws_subnet.subnet.id
  route_table_id = aws_route_table.route.id
}

A routing table is a set of rules, often viewed in table format, that is used to determine where data packets traveling over an Internet Protocol (IP) network will be directed. All IP-enabled devices, including routers and switches, use routing tables.

For creating the EC2 instance, I need a security group. Although the task asks only for port 80, I am allowing all traffic in this security group for simplicity (SSH on port 22 and NFS on port 2049 are also needed here).

For creating the security group, I am using the code written below.

# security group
resource "aws_security_group" "sg1" {
  name        = "task2_sg"
  description = "Communication-efs"
  vpc_id      = aws_vpc.vpc.id
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "task2_sg"
  }
}
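For reference, a tighter version matching the task statement would open only the ports actually needed: 80 for HTTP, 22 for SSH, and 2049 for NFS (used by EFS). A sketch of such a rule set (not the one applied here; the resource name is my own):

```hcl
# least-privilege alternative: only HTTP, SSH, and NFS ingress
resource "aws_security_group" "sg_strict" {
  name   = "task2_sg_strict"
  vpc_id = aws_vpc.vpc.id

  dynamic "ingress" {
    for_each = [80, 22, 2049] # HTTP, SSH, NFS
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```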

Checking through GUI

The next step is to create the EFS file system and then mount it.

# create efs
resource "aws_efs_file_system" "efs" {
  creation_token = "tf-EFS-task2"
  tags = {
    Name = "Task2_EFS"
  }
}
# mount efs
resource "aws_efs_mount_target" "mount" {
  depends_on = [
    aws_efs_file_system.efs,
    aws_subnet.subnet,
    aws_security_group.sg1
  ]
  file_system_id  = aws_efs_file_system.efs.id
  subnet_id       = aws_subnet.subnet.id
  security_groups = [aws_security_group.sg1.id]
}
# access point efs
resource "aws_efs_access_point" "efs_access" {
  depends_on = [
    aws_efs_file_system.efs,
  ]
  file_system_id = aws_efs_file_system.efs.id
}
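The file system also exposes a DNS name, which is handy if you ever need to mount it manually with a plain NFS client instead of amazon-efs-utils; a small sketch:

```hcl
# the DNS name resolves inside the VPC; it can be used as
# <dns_name>:/ in a standard "mount -t nfs4" command
output "efs_dns_name" {
  value = aws_efs_file_system.efs.dns_name
}
```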


Now, for launching the EC2 instance, we also provide a connection block so that Terraform can get inside the instance and install httpd, git, and php for serving our HTML page.

We are installing git because our code is on GitHub, so we need git to access it.

# ec2 instance launch
resource "aws_instance" "task2_ec2_webserver" {
  depends_on = [
    aws_vpc.vpc,
    aws_subnet.subnet,
    aws_efs_file_system.efs,
  ]
  ami                    = "ami-08706cb5f68222d09"
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.subnet.id
  vpc_security_group_ids = [aws_security_group.sg1.id]
  key_name               = "mynewkey"

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Arifiya khan/Desktop/Cloud_Credentials/mynewkey.pem")
    host        = self.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo yum install git php httpd amazon-efs-utils -y",
      "sudo rm -rf /var/www/html/*",
      "sudo systemctl start httpd",
      "sudo mount -t efs ${aws_efs_file_system.efs.id}:/ /var/www/html",
      "sudo git clone https://github.com/Arifiya-khan/terraform_task.git /var/www/html/",
    ]
  }
  tags = {
    Name = "webserver"
  }
}

Checking through GUI

Some of the output that occurs while connecting and installing php, httpd, and git:

Now I will create an S3 bucket for uploading the image, i.e. the object we want on our webpage.

For this I will use the code below.

# s3 bucket
resource "aws_s3_bucket" "tf_s3bucket" {
  bucket = "task2-bucket-s3"
  acl    = "public-read"
  tags = {
    Name = "task2-bucket-s3"
  }
}
# adding object to s3
resource "aws_s3_bucket_object" "S3_image_upload" {
  depends_on = [
    aws_s3_bucket.tf_s3bucket,
  ]
  bucket = aws_s3_bucket.tf_s3bucket.bucket
  key    = "maxresdefault.jpg"
  source = "C:/Users/Arifiya khan/Desktop/maxresdefault.jpg"
  acl    = "public-read"
}

Checking through GUI.

The object should also be public so that anybody can access it.
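As an alternative to setting acl = "public-read" on each object, a bucket policy can make every object publicly readable in one place; a sketch (the resource name is my own):

```hcl
# grants anonymous read access to all objects in the bucket
resource "aws_s3_bucket_policy" "public_read" {
  bucket = aws_s3_bucket.tf_s3bucket.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "PublicReadGetObject"
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.tf_s3bucket.arn}/*"
    }]
  })
}
```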

Now, finally, I will create the CloudFront distribution.

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations.

# cloudfront variable
variable "oid" {
  type    = string
  default = "S3-"
}
locals {
  s3_origin_id = "${var.oid}${aws_s3_bucket.tf_s3bucket.id}"
}
# cloudfront distribution
resource "aws_cloudfront_distribution" "S3_distribution" {
  depends_on = [
    aws_s3_bucket_object.S3_image_upload,
  ]
  origin {
    domain_name = aws_s3_bucket.tf_s3bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }
  enabled = true
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Arifiya khan/Desktop/Cloud_Credentials/mynewkey.pem")
    host        = aws_instance.task2_ec2_webserver.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "echo \"<img src='http://${self.domain_name}/${aws_s3_bucket_object.S3_image_upload.key}' height='200' width='200'>\" | sudo tee -a /var/www/html/index.php",
    ]
  }
}
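To see the distribution's URL without the console, an output can print the domain name after apply; a minimal sketch:

```hcl
# the CloudFront URL that gets embedded in index.php
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.S3_distribution.domain_name
}
```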

Checking through GUI,

So now, everything is done.

Finally, I will write code that launches my webpage automatically using the public IP of my EC2 instance. For this, I am using the code below.

# opening via chrome
resource "null_resource" "website" {
  depends_on = [
    aws_cloudfront_distribution.S3_distribution,
  ]
  provisioner "local-exec" {
    command = "start chrome http://${aws_instance.task2_ec2_webserver.public_ip}/"
  }
}

As soon as I run this code, my webpage automatically gets launched.

WebPage

Finally, my page has been launched on top of AWS using the EFS service.

I hope this blog helps.

Thank you and Happy Learning!!😊
