Discussion on: Automate Deployments with AWS CodePipeline

NicoBuchhalter • Edited

Hey Raphael, Thanks for this!
I have two questions:
1) I'm not able to find what to use instead of "web" in the buildspec. Where do you find your container name? Is it related to the task definitions? If so, how would we handle the different build commands for sidekiq and web?
2) How would you configure it to have both a production and a staging environment?

Raphael Jambalos

Hi Nico, addressing your concerns below:

" Im not being able to find what to use instead. of "web" in. the buildspec."

  • Yup, it is inside the task definition. When you open the most recent revision of your task definition, scroll down to the "Container Definitions" section. There's a table there with a "Container Name" column; that value is what goes in place of "web" in the buildspec.
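For reference, here's a minimal sketch of the buildspec piece this refers to, assuming the standard ECS deploy action and hypothetical REPOSITORY_URI / IMAGE_TAG variables; "web" is the Container Name taken from the task definition:

```yaml
# buildspec.yml (sketch): "web" must match the Container Name in your task definition
phases:
  post_build:
    commands:
      # Write the image definitions file that CodePipeline's ECS deploy action reads
      - printf '[{"name":"web","imageUri":"%s"}]' "$REPOSITORY_URI:$IMAGE_TAG" > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
```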

"how would we handle the sidekiq and web different build commands?"

  • The build commands should be the same, at least for this tutorial series. What's different is the command used to start each container. For Rails, it's a variant of "rails server -p 3000". For sidekiq, it's something like "bundle exec sidekiq -C sidekiq.yml". You don't need to worry about this in the CI/CD pipeline, since it should already be differentiated in the task definition.
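To illustrate, here's a rough, hypothetical fragment of what the container definitions could look like if web and sidekiq shared one task definition (the image URI is a placeholder); only the command differs:

```json
{
  "containerDefinitions": [
    {
      "name": "web",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/my-app:latest",
      "command": ["rails", "server", "-b", "0.0.0.0", "-p", "3000"]
    },
    {
      "name": "sidekiq",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/my-app:latest",
      "command": ["bundle", "exec", "sidekiq", "-C", "sidekiq.yml"]
    }
  ]
}
```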

"How would you configure two have a production and a staging environment?"

  • Two separate CI/CD pipelines will be best.
NicoBuchhalter

Thank you!! I got the deployments running!!

One more thing, maybe you know how to do this. Before, I did it manually and it was the same issue: when I update an ECS service with a new task definition, it takes forever for the service to switch to the new task definition. Now with CodePipeline it's the same; the Deploy stage never finishes because the deployment doesn't complete.
In the past I solved this by stopping the task manually, but of course that's not good for a production environment.
Is there some configuration I may be missing, or is that just how it is?
Thanks again!

Raphael Jambalos

Hi Nico,

Ahh yes, it does take a while. This is because you are using the ECS deployment controller. Essentially, during a deployment it starts new tasks running the new version while the old version is still running. The way I understand it, traffic is only redirected once the container reaches a healthy status and passes the load balancer target group health check. To quote the AWS documentation:

"If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total."

Now, the default for "load balancer target group health check to return a healthy status" is 5 consecutive checks at 30-second intervals. So it's at least 2.5 minutes after your containers are marked as healthy before traffic starts reaching your instances.
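If that delay hurts, the knobs live on the target group's health check. A minimal CloudFormation-style sketch of just those settings, assuming a hypothetical target group and a /health endpoint (the same values can be set in the console under Target Groups > Health checks):

```yaml
# Hypothetical target group; only the health check settings are shown
WebTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    HealthCheckPath: /health          # assumed health endpoint of the Rails app
    HealthCheckIntervalSeconds: 15    # default is 30
    HealthyThresholdCount: 2          # default is 5; 2 checks x 15s is roughly 30s to go healthy
    UnhealthyThresholdCount: 2
```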

Raphael Jambalos

I'd totally recommend going for AWS CodeDeploy's blue/green deployment so that your traffic (or at least part of it) is shifted to the new version right away, or gradually over a period of time. Rollbacks are also much easier with this design.
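For context, the ECS blue/green deploy action takes an appspec file instead of imagedefinitions.json. A minimal sketch, assuming the container is still named "web" and listens on port 3000; the <TASK_DEFINITION> placeholder is filled in by the pipeline at deploy time:

```yaml
# appspec.yaml (sketch) for an ECS blue/green deployment via CodeDeploy
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>   # substituted by CodePipeline
        LoadBalancerInfo:
          ContainerName: "web"              # container that receives traffic
          ContainerPort: 3000
```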

NicoBuchhalter

Perfect! Yes, that's the type of deployment I'm looking for. I will try to configure it.
I figured out that my problem is that the instance has 1024 CPU units available, and with both my tasks (each with 512) it already hits that limit. So when trying to do a new deployment, I guess it tries to create the new task first and then stop the old one, but there's no CPU left to have the "third" task running simultaneously. Do you know where I can configure how much CPU the container gets?

Or should I use different containers for web and sidekiq? Maybe have the buildspec create two artifacts and configure each deployment to look at a different JSON file? What do you think?

Raphael Jambalos • Edited

Hi Nico,

" I guess it tries to create the new task first and then stop the old one, but there's no CPU to have the "third" task running simultaneosly. "

  • This is usually the case, so you probably need to scale up to 2 EC2 instances so you'd have 2 instances with 1024 CPU units each. There's also the concept of ECS Capacity Providers, which lets you auto scale the EC2 instances based on the capacity required by the tasks being deployed (docs.aws.amazon.com/AmazonECS/late...). There's a sketch of this right after this list.
  • The old approach was to put an Auto Scaling Group behind those EC2 instances, but the problem with that is you often need more instances to serve the containers while the EC2 instances themselves don't show a spike in CPU utilization.
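Not something you need on day one, but if you go the capacity provider route, it's roughly a matter of wrapping your existing Auto Scaling group in a capacity provider with managed scaling enabled. A hypothetical CloudFormation-style sketch (the ASG ARN is a placeholder), which you'd then attach to the cluster's capacity provider strategy:

```yaml
# Hypothetical capacity provider wrapping an existing Auto Scaling group
EcsCapacityProvider:
  Type: AWS::ECS::CapacityProvider
  Properties:
    AutoScalingGroupProvider:
      AutoScalingGroupArn: <your-auto-scaling-group-arn>
      ManagedScaling:
        Status: ENABLED
        TargetCapacity: 100   # keep the ASG sized to ~100% of what the tasks require
```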

Do you know where I can configure how much CPU the container gets?

  • By the looks of this, you are trying to have web and sidekiq in one task definition, so you probably have 2 containers in your task definition's container definitions. I recommend just having separate task definitions for web and for sidekiq.
  • But if you want to keep this setup, you can find the CPU and Memory options inside each container definition (see the fragment just below).
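Continuing the hypothetical fragment from earlier, the per-container knobs sit right next to the command (the numbers are just example values; cpu is in CPU units, memoryReservation in MiB):

```json
{
  "name": "web",
  "cpu": 512,
  "memoryReservation": 512,
  "command": ["rails", "server", "-b", "0.0.0.0", "-p", "3000"]
}
```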

Or should I use different containers for web and sidekiq?

  • Different Task definitions, I believe. Can you elaborate on this?

Maybe have the buildspec create two artifacts and configure each deployment to look at a different JSON file? What do you think?

  • Yes, if it's 2 containers inside one task definition, this can be the case.
  • I recommend having separate task definitions for them so you can deploy them as separate ECS services and decouple them. That way, if you have a spike in web, you don't need those extra containers deployed for sidekiq. There's a buildspec sketch after this list.
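If you do split them into two ECS services, one possible shape is a buildspec that emits one image definitions file per service from the same image, with each ECS deploy action in CodePipeline pointed at its own file name (file names and variables below are placeholders):

```yaml
# buildspec.yml (sketch): one image, one image definitions file per service
phases:
  post_build:
    commands:
      - printf '[{"name":"web","imageUri":"%s"}]' "$REPOSITORY_URI:$IMAGE_TAG" > imagedefinitions-web.json
      - printf '[{"name":"sidekiq","imageUri":"%s"}]' "$REPOSITORY_URI:$IMAGE_TAG" > imagedefinitions-sidekiq.json
artifacts:
  files:
    - imagedefinitions-web.json
    - imagedefinitions-sidekiq.json
```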