Building websites and then copying the code to the server manually over FTP is a thing of the past. An automated system that deploys the merged developer code from your repo not only saves you time but also ensures that no changes are missed in a production deployment. Deployments have come a long way in the last ten years, and the general goal today is to automate them as much as possible. Let's explore how we can deploy code from GitHub to EC2 using CodePipeline.
The most common question is whether automation is worth it, since the initial effort can be higher. Put simply, the more repeatable your deployments are, the less downtime you will have. Automated deployments are easier to test and troubleshoot, lend themselves to scalability, can roll back on failure, and support blue/green and canary deployments.
In an ideal deployment you treat your servers as cattle, not pets: if one gets sick, you replace it rather than nurse it back to health. Automation makes that process painless.
To set up an automated deployment, we first need a source repository. For ease of demonstration, I'm going to use a fresh GitHub repo for the process.
git clone git@github.com:nhadiq97/AWSDeployDemo.git
cd AWSDeployDemo
echo "This is prod" > demo.html
git add .
git commit -m "Init commit"
git push origin master
In order to have a separate test environment, let's create a test branch in addition to master.
git branch test
git checkout test
echo "This is test" > demo.html
git add .
git commit -m "Create test branch"
git push origin test
For obvious reasons, in a real project you would be working on a well-structured app or web document, not a blank HTML page with a single line of text.
Now that we have a repository hosting the source code, the next step is to create the deployment target. I'm going to use the AWS console for this demo, but you could use the CLI, Terraform, or CloudFormation instead.
It is always a good practice to create a new subnet inside your VPC for your deployment.
Okay, next up are the EC2 instances themselves. I’m going to make two: one for test and one for production. Obviously this is just a dummy deployment but CodePipeline is versatile enough to allow deployments across hundreds of resources if needed.
Add a couple of tags to identify the deployment targets. You could also add a Name tag with a value that makes each instance identifiable in the dashboard.
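If you'd rather script this than click through the console, the same tagging can be done with the AWS CLI. This is only a sketch: the instance IDs below are placeholders, not real instances, and the calls are guarded so they run only where the CLI is available.

```shell
# Placeholder instance IDs -- substitute the IDs from your own launch.
TEST_INSTANCE="i-0123456789abcdef0"
PROD_INSTANCE="i-0fedcba9876543210"

# Tagging requires configured AWS credentials; guarded so the sketch is safe to paste.
if command -v aws >/dev/null 2>&1; then
  # env:test marks the test deployment target; Name makes it readable in the dashboard.
  aws ec2 create-tags --resources "$TEST_INSTANCE" \
    --tags Key=env,Value=test Key=Name,Value=codedeploy-demo-test
  # env:prod marks the production target.
  aws ec2 create-tags --resources "$PROD_INSTANCE" \
    --tags Key=env,Value=prod Key=Name,Value=codedeploy-demo-prod
fi
echo "tag plan: $TEST_INSTANCE -> env:test, $PROD_INSTANCE -> env:prod"
```

The env tag is the important one here: it is what the CodeDeploy deployment groups will match on later.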
Now that both instances are up and running, we have what we need for the setup.
Before we can build a pipeline, we need to configure CodeDeploy and its agent. CodeDeploy requires a small agent daemon installed on every machine that is a deployment target.
You can check whether the CodeDeploy agent is installed by connecting to your instance and running (on Amazon Linux or RHEL):
sudo service codedeploy-agent status
It will either return an error or report that the service is stopped. If it's stopped, just run:
sudo service codedeploy-agent start
If it returns an error, the agent isn't installed and things get a bit more involved. Here's the docs page on how to install the CodeDeploy agent. Essentially it boils down to the following commands:
sudo yum update -y
sudo yum install ruby wget -y
cd /home/ec2-user
wget https://bucket-name.s3.region-identifier.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
In the wget command above, you'll need to replace both bucket-name and region-identifier with the values that correspond to your region. For example, in us-east-1 the bucket is aws-codedeploy-us-east-1 and the region identifier is us-east-1.
Once installed, check one last time to make sure the agent is running.
sudo service codedeploy-agent status
In order for CodeDeploy to work, we need to assign it a service role with the correct permissions. Let’s create a role with the AWSCodeDeployRole policy attached to it.
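As a rough CLI sketch of the same role setup: the role name below is a placeholder of my choosing, while the policy ARN is AWS's managed AWSCodeDeployRole policy mentioned above. The AWS calls are guarded so the sketch is safe to run anywhere.

```shell
# Trust policy letting the CodeDeploy service assume the role.
cat > codedeploy-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codedeploy.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# These calls need configured AWS credentials; "CodeDeployServiceRole" is a placeholder name.
if command -v aws >/dev/null 2>&1; then
  aws iam create-role --role-name CodeDeployServiceRole \
    --assume-role-policy-document file://codedeploy-trust.json
  aws iam attach-role-policy --role-name CodeDeployServiceRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
fi
echo "trust policy written to codedeploy-trust.json"
```

The trust policy is what distinguishes a service role from an ordinary role: it lets codedeploy.amazonaws.com, rather than a user, assume it.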
Now that the instance roles are setup, it’s time to turn to CodeDeploy.
Creating an application in CodeDeploy is straightforward. Just give it a name and tell it that you plan on deploying code to EC2 instances.
Creating a deployment group is a bit more complicated but not difficult at all. Give it a name and assign the CodeDeploy service role you created previously. This will allow CodeDeploy to access the resources specified in this deployment group.
Next, choose EC2 instances for the deployment. You can also deploy to Auto Scaling groups, but since this is a demo I'll deploy to a single instance. Don't forget to add the env:test tag, as it marks the test EC2 instance. You should see one matched instance corresponding to the test instance created earlier.
We don’t need a special deployment setting so just leave the default AllAtOnce setting in place. Uncheck Enable load balancing since we haven’t set that up.
Repeat the last steps with the env:prod tag, since we need to set up the production environment too.
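For reference, both deployment groups can also be created from the CLI. The application name and role ARN below are placeholders standing in for whatever you created in the console; the tag filters and AllAtOnce config mirror the settings described above.

```shell
# Placeholders -- substitute your CodeDeploy application name and service role ARN.
APP_NAME="DemoApp"
ROLE_ARN="arn:aws:iam::123456789012:role/CodeDeployServiceRole"

# Requires configured AWS credentials; guarded so the sketch is safe to paste.
if command -v aws >/dev/null 2>&1; then
  # Test group: targets instances tagged env:test, deploys all at once.
  aws deploy create-deployment-group \
    --application-name "$APP_NAME" \
    --deployment-group-name demo-test \
    --service-role-arn "$ROLE_ARN" \
    --ec2-tag-filters Key=env,Value=test,Type=KEY_AND_VALUE \
    --deployment-config-name CodeDeployDefault.AllAtOnce

  # Prod group: identical shape, different tag filter.
  aws deploy create-deployment-group \
    --application-name "$APP_NAME" \
    --deployment-group-name demo-prod \
    --service-role-arn "$ROLE_ARN" \
    --ec2-tag-filters Key=env,Value=prod,Type=KEY_AND_VALUE \
    --deployment-config-name CodeDeployDefault.AllAtOnce
fi
echo "deployment groups sketched for $APP_NAME"
```

Scripting the groups this way makes the env:test / env:prod split explicit and repeatable across accounts.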
CodeDeploy needs to know what to do with the files in your Git repository when you deploy. A file called appspec.yml is CodeDeploy's way of defining the tasks you want to run when deploying code. There's too much to cover here, but AWS has examples of how to build out an appspec.yml file.
For this project, I’m just going to copy in some text files from the Git repo to test that the pipeline is working. You’ll need to do your own research on how to build an app spec file to suit your deployment. If you want to follow along with my dummy deployment, put this in a file at the root of your git repo. Don’t forget to name it appspec.yml!
version: 0.0
os: linux
files:
  - source: ./demo.html
    destination: /
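If you'd like a quick local sanity check before committing, you can recreate the file and grep for its required top-level keys. This is a cheap smoke test of my own devising, not a full CodeDeploy validation:

```shell
# Recreate the demo appspec.yml used in this walkthrough.
cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: ./demo.html
    destination: /
EOF

# Check that the two mandatory top-level keys are present and spelled correctly.
grep -q '^version: 0.0' appspec.yml \
  && grep -q '^os: linux' appspec.yml \
  && echo "appspec.yml looks sane"
```

Catching a typo in version or os locally is much faster than waiting for a failed deployment to tell you about it.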
We need to build two pipelines for this to work. One pipeline that deploys the test branch to test instances and one pipeline that deploys the master branch to production instances.
Creating a pipeline is a straightforward process. I suggest letting CodePipeline create your service role for you; it's just easier that way. Alternatively, you can create pipelines through the CLI or the infrastructure-as-code tools mentioned earlier.
Create a stage that uses your GitHub repository and the test branch as the source.
Skip the build stage for now. If you want, you can come back later and set up CodeBuild to run integration tests or build binaries if you need that sort of thing.
Select the CodeDeploy provider and choose the application and deployment group for your test instances.
Click next and deploy the pipeline. You’ll see it try to download your source and deploy it to your test instances.
Great, we deployed to our test instances. But what about production? Click “Edit” at the top of the pipeline page. And then click “Edit” on each stage of the pipeline. From here, you can add another source and deploy action for the master branch and production deployment group.
Now, let’s just check and see if the deployments worked. First, let’s check the test instance.
ssh ec2-user@test-instance
Last login: Sun Apr 26 16:00:02 2020 from <IP>

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@test-instance ~]$ cat /demo.html
This is test
Next, check the production instance.
ssh ec2-user@prod-instance
Last login: Sun Apr 26 16:00:02 2020 from <IP>

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@prod-instance ~]$ cat /demo.html
This is prod
Now that the pipeline works end to end, we're ready to go.
There are several ways you can customize your pipeline.
For one thing, you’ll want to start with building out an appspec.yml file that reflects the structure of your application. Copy over all the files you need, run scripts to set up dependencies, etc.
Make a custom AMI for the instances in each of your groups. Include any dependencies your code needs to run (and the codedeploy-agent) in the AMI rather than installing them on every deployment. This will increase the reliability and speed of your deployments.
Instead of creating deployment groups with specific EC2 instances identified, consider deploying to autoscaling groups instead so you can apply scale-in and scale-out rules.