Nitesh Thapliyal

AWS Automation using Terraform Micro Byte

In this blog, we are going to discuss a Micro Byte on the topic of AWS automation using Terraform. You might be wondering: what is a Micro Byte?

Micro Byte

Micro Bytes are short learning experiences where readers learn a topic by doing. A Micro Byte teaches an important concept clearly through learn-by-doing exercises, i.e. executing tasks in a hands-on way. It takes roughly 1 hour of learning time to complete, uses custom illustrations and imagery to explain concepts, and stays entertaining and engaging with a unique spin or angle on the topic.


In this Micro Byte, we will learn how to automate AWS by launching resources with the help of Terraform, triggered by face recognition built with OpenCV.

What is OpenCV?


OpenCV is a cross-platform library that works with almost all major programming languages and is mainly used for image processing in computer vision. It supports image processing and video analysis, with features like face detection and object detection.

What is Terraform?


Terraform is an Infrastructure as Code tool that lets us build, change, and version cloud infrastructure. It lets us define both cloud and on-prem resources in human-readable configuration files that we can reuse and share.

What will OpenCV and Terraform do?

OpenCV will detect our face, and as soon as our face is recognized, the Terraform configuration file will be executed, launching the AWS resources.

architecture
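Conceptually, the integration is simple: once the recognizer reports a confident match, we shell out to the Terraform CLI. Below is a minimal sketch of that idea; the helper name run_terraform and the use of subprocess are illustrative assumptions, not part of the Micro Byte itself, where the same commands are run from a Jupyter notebook with !terraform in Activity 5.

import subprocess

# Minimal sketch: run `terraform init` and `terraform apply` from Python once a
# face has been recognized. Assumes Terraform is installed and the configuration
# file lives in the given working directory.
def run_terraform(workdir="."):
    subprocess.run(["terraform", "init"], cwd=workdir, check=True)
    subprocess.run(["terraform", "apply", "--auto-approve"], cwd=workdir, check=True)

# Inside the recognition loop (Activity 5) you would call something like:
# if confidence > 70:
#     run_terraform()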

The Micro Byte contains various activities that will help you perform the task:

Activity 1 - Create a Dataset of Face Images

Why do we need to create a dataset?

  • We need to create a dataset of face images to train the face recognition machine-learning model

  • Here we are going to use the haarcascade_frontalface_default model for face detection

To create the dataset, we will use OpenCV: it will capture your face image 100 times and store the images in a directory, which the model will use for training.

Use the code below and fill in the required data as mentioned:

import cv2
import numpy as np

# Load HAAR face classifier
face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Load functions
def face_extractor(img):
    # Function detects faces and returns the cropped face
    # If no face detected, it returns the input image

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
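    # detectMultiScale(scaleFactor=1.3, minNeighbors=5) returns a list of (x, y, w, h) face boxes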
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)

    if len(faces) == 0:
        return None

    # Crop all faces found
    for (x,y,w,h) in faces:
        cropped_face = img[y:y+h, x:x+w]

    return cropped_face

# Initialize Webcam
cap = cv2.VideoCapture(0)
count = 0
# Collect 100 samples of your face from webcam input
while True:

    ret, frame = cap.read()
    face = face_extractor(frame)
    if face is not None:
        count += 1
        face = cv2.resize(face, (200, 200))
        face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)

        # Save file in specified directory with unique name
        #path
        file_name_path = 'Path_of_dir/' + str(count) + '.jpg'
        cv2.imwrite(file_name_path, face)

        # Put count on images and display live count
        cv2.putText(face, str(count), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (0,255,0), 2)
        cv2.imshow('Face Cropper', face)

    else:
        print("Face not found")
        pass

    if cv2.waitKey(1) == 13 or count == 100: #13 is the Enter Key
        break

cap.release()


cv2.destroyAllWindows()
  • Add your directory path
  • Execute the cell by pressing Ctrl + Enter or the Run button in Jupyter Notebook

Output:

face image

Activity 2 - Train the Model

In this activity, we are going to train the face recognition model so that it can recognize our face as soon as the web camera turns on.

To train the model, we will use the dataset that we created in Activity 1.

Now to train the model:

Copy the code below into your Jupyter Notebook and fill in the required data as mentioned.

import cv2
import numpy as np
from os import listdir
from os.path import isfile, join

# Get the training data we previously made
data_path_1 = 'dataset_path'
onlyfiles_1 = [f for f in listdir(data_path_1) if isfile(join(data_path_1, f))]


# Create arrays for training data and labels
Training_Data_1, Labels_1 = [], []

# Create a numpy array for training dataset 1
for i, files in enumerate(onlyfiles_1):
    image_path = join(data_path_1, onlyfiles_1[i])
    images = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    Training_Data_1.append(np.asarray(images, dtype=np.uint8))
    Labels_1.append(i)   
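# Note: each image gets its own numeric label; since the dataset contains only one
# person, Activity 5 only uses the distance score that predict() returns.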

# Create a numpy array for both training data and labels for person 1
Labels_1 = np.asarray(Labels_1, dtype=np.int32)

# Initialize facial recognizer
#model = cv2.face.createLBPHFaceRecognizer()
# NOTE: For OpenCV 3.0 use cv2.face.createLBPHFaceRecognizer()
# pip install opencv-contrib-python
#model = cv2.createLBPHFaceRecognizer()

Nitesh_model = cv2.face.LBPHFaceRecognizer_create()
# Let's train our model
Nitesh_model.train(np.asarray(Training_Data_1), np.asarray(Labels_1))
print("Model trained successfully")
  • Add your dataset path

Output:

model
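A small aside that is not part of the Micro Byte itself: the trained recognizer only lives in the notebook's memory, so Activity 5 has to run in the same session. If you want to reuse the model across sessions, the LBPH recognizer can be written to disk and read back; the file name face_model.yml below is just an example.

# Persist the trained model so it can be reloaded in a later session
Nitesh_model.write('face_model.yml')

# ...and in a new notebook/session:
import cv2
Nitesh_model = cv2.face.LBPHFaceRecognizer_create()
Nitesh_model.read('face_model.yml')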

Activity 3 - Get the details from AWS

Before we start creating the configuration file, we should know the following things:

  • Make sure Terraform is installed on your system; if not, install it by following the steps in the Pre-Requisites section
  • The region where you are going to launch the service (in this case, the EC2 service)
  • The AMI ID of the image that you will be using

If you are not able to find the region name in AWS, refer to Images/Activity-3/region.png

To get the AMI ID, follow these steps:

  • Login to your AWS account
  • Go to the EC2 service
  • Click on the Launch Instance button

    refer to Images/Activity-3/launch_instance.png

  • Select the AMI that you want to work with; you will find the AMI ID listed just below it

Now that we have the required information, we can add these details to our Terraform configuration file.
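If you would like to double-check these details programmatically, here is an optional boto3 sketch. boto3 is not used in the Micro Byte, so treat this as an aside; the profile name matches the one used in the Terraform file below, and the region and AMI ID are placeholders you fill in.

import boto3

# Assumes you have configured credentials with: aws configure --profile Nitesh
session = boto3.Session(profile_name="Nitesh", region_name="<your_region_name>")

# Confirm which AWS account the profile resolves to
print(session.client("sts").get_caller_identity()["Account"])

# Confirm that the AMI ID exists in the chosen region
ec2 = session.client("ec2")
images = ec2.describe_images(ImageIds=["<your_ami_id>"])
print(images["Images"][0]["Name"])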

Activity 4 - Create Terraform Configuration file

With the help of a Terraform configuration file, you can launch resources on a cloud platform and attach resources, such as an EBS volume, to an EC2 instance. You can even launch resources on multiple cloud platforms at once, e.g. AWS (Amazon Web Services), GCP (Google Cloud Platform), etc.

  • To create the Terraform file that launches the AWS resources, copy the code below and add the details as mentioned:
provider "aws" {
region = "<your_region_name>"
profile = "Nitesh"
}

# Launching instance
resource "aws_instance" "os1"{
ami="<your_ami_id>"
instance_type = "t2.micro"
security_groups = [ "<your_scurity_group_name>" ]
tags={
Name="<your_ami_instance_name>"
}
} 

# To retrieve the detail information about the instance
output "os1"{
value = aws_instance.os1
}

#To retrieve the public ip of instance
output "os2"{
value = aws_instance.os1.public_ip
}

#To retrieve the availability zone of an instance
output "os3"{
value = aws_instance.os1.availability_zone
}

# Launching EBS Volume
resource "aws_ebs_volume" "st1"{
availability_zone= aws_instance.os1.availability_zone
size = 5
tags = {
name = "<Name_your_ebs_volume>"
}
}

# To check about volume in detail
output "os4"{
value=aws_ebs_volume.st1
}

# Attach Volume to EC2 instance
resource "aws_volume_attachment" "ebs_att"{
device_name="/dev/sdh"
volume_id=aws_ebs_volume.st1.id
instance_id=aws_instance.os1.id
}

# To check in detail about volume attachment
output "os5"{
value=aws_volume_attachment.ebs_att
}

  • Add your region name
  • Add your ami-id
  • Add your instance name
  • Add the security group name that you created in the Activity-3 task
  • Add your EBS volume name

Note: We don't have to run the Terraform file now, because we will integrate it with the face recognition program in the next activity.

Activity 5 - Integrate Terraform file with the Face Recognition Program

In this activity, we are going to integrate the Terraform configuration file with the face recognition program so that the Terraform file is executed as soon as our face is recognized.

To perform the activity:

Copy the code below into a notebook saved in the same directory as your Terraform configuration file, so that the terraform commands run in that directory:

import cv2
import numpy as np
import os


face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def face_detector(img, size=0.5):

    # Convert image to grayscale
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return img, []


    for (x,y,w,h) in faces:
        cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,255),2)
        roi = img[y:y+h, x:x+w]
        roi = cv2.resize(roi, (200, 200))
    return img, roi


# Open Webcam
cap = cv2.VideoCapture(0)

while True:

    ret, frame = cap.read()

    image, face = face_detector(frame)

    try:
        face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)

        # Pass face to prediction model
        # "results" comprises of a tuple containing the label and the confidence value
        results = Nitesh_model.predict(face)

        if results[1] < 500:
            confidence = int( 100 * (1 - (results[1])/400) )
            display_string = str(confidence) + '% Confident it is User'

        cv2.putText(image, display_string, (100, 120), cv2.FONT_HERSHEY_COMPLEX, 1, (255,120,150), 2)

        if confidence > 70:
            cv2.putText(image, "Hello Nitesh", (250, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0,255,0), 2)
            cv2.imshow('Face Recognition', image)
            if cv2.waitKey(1)==13:
                cap.release()
                cv2.destroyAllWindows()
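                # The '!' syntax below works only in Jupyter/IPython; in a plain
                # Python script you could call terraform via subprocess instead
                # (see the sketch in the overview section above).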
                !terraform init
                !terraform apply --auto-approve


        else:
            cv2.putText(image, "Unrecognised Face", (250, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2)
            cv2.imshow('Face Recognition', image )

    except:
        cv2.putText(image, "Face Not Found", (220, 120) , cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2)
        cv2.putText(image, "Searching for Face....", (250, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2)
        cv2.imshow('Face Recognition', image )
        pass

    if cv2.waitKey(1) == 13: #13 is the Enter Key
        break

cap.release()
cv2.destroyAllWindows()

  • Execute the cell by pressing Ctrl + Enter or the Run button in the Jupyter Notebook

As soon as the file is executed, your webcam opens and your face is recognized, as shown below

face recognize

  • Now, as soon as our face is recognized, the Terraform configuration file is executed and you will see output like the following:
aws_instance.os1: Refreshing state... [id=i-07c3d200a11ce50e0]
aws_ebs_volume.st1: Refreshing state... [id=vol-0e4577c488a197d1f]
aws_volume_attachment.ebs_att: Refreshing state... [id=vai-991301012]
aws_ebs_volume.st1: Destroying... [id=vol-0e4577c488a197d1f]
aws_ebs_volume.st1: Destruction complete after 0s
aws_instance.os1: Creating...
aws_instance.os1: Still creating... [10s elapsed]
aws_instance.os1: Still creating... [20s elapsed]
aws_instance.os1: Still creating... [30s elapsed]
aws_instance.os1: Creation complete after 33s [id=i-0d1e951772b6e049d]
aws_ebs_volume.st1: Creating...
aws_ebs_volume.st1: Still creating... [10s elapsed]
aws_ebs_volume.st1: Creation complete after 11s [id=vol-06c89d52fc685bcea]
aws_volume_attachment.ebs_att: Creating...
aws_volume_attachment.ebs_att: Still creating... [10s elapsed]
aws_volume_attachment.ebs_att: Still creating... [20s elapsed]
aws_volume_attachment.ebs_att: Creation complete after 21s [id=vai-977186061]

Apply complete! Resources: 3 added, 0 changed, 1 destroyed.

  • Now you can check in AWS that the resources defined in the Terraform configuration file have launched successfully (an optional boto3 check is sketched after the screenshots below)

Instance launched successfully

instance

EBS volume of 5 GB launched successfully, as specified in the configuration

ebs
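As mentioned above, here is the optional boto3 sketch (again, an aside rather than part of the Micro Byte) that lists the running instance and the 5 GiB volume from Python instead of the console:

import boto3

session = boto3.Session(profile_name="Nitesh", region_name="<your_region_name>")
ec2 = session.client("ec2")

# List running instances
for reservation in ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}])["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])

# List 5 GiB volumes (the size requested in the Terraform file)
for volume in ec2.describe_volumes(
        Filters=[{"Name": "size", "Values": ["5"]}])["Volumes"]:
    print(volume["VolumeId"], volume["Size"], volume["State"])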

To learn more, do check out the Micro Byte.

Happy Learning✨
