Joe Terlecki

Building a Standardized VPC

As someone who is constantly tinkering and wanting to learn new technology, I seem to have fallen into a profession with a never-ending buffet of tools and concepts: the DevOps/cloud infrastructure space. I often find myself scouring Reddit and LinkedIn posts for "best practices!", "security!", and "cloud-native!"; however, I always leave those sessions wanting more. To fill that void, I decided it's time to build out my "enterprise-worthy" sandbox with my good old pal… AWS.

Working with AWS already takes up 8 hours of my day as it is, so I have grown quite comfortable with its features, nuances, and pricing models. It's about time I started building more stuff in my free time and blogging about it.

There is no better place to start than laying down the foundation with a solid, cost-effective VPC.

VPC Decision Making

When planning out the network for my AWS account, I had to consider a few things.

  1. It had to adhere to industry best practices as much as possible

  2. Not burn a hole in my wallet

  3. Infrastructure as Code

  4. Support multi-account architecture

  5. No pets (with 2 exceptions)

Anybody who has worked with AWS for at least 30 days knows how fast the charges can rack up under its consumption-based pricing model if you are not careful. To keep costs as low as possible, I made three design decisions: one, use Terraform to spin my entire environment(s) up and down as needed; two, keep pet servers to a minimum by using AWS ECS Fargate as much as possible; and three, support a basic multi-account architecture.
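
To make the "spin up and down on demand" part concrete, here is a minimal sketch of the Terraform scaffolding I have in mind. The bucket name, region, key, and tags are placeholders rather than my actual configuration; the state bucket itself shows up again in the checklist below.

```hcl
terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Remote state lives in S3 (see checklist item 4) so the environment can be
  # destroyed and rebuilt without losing track of anything.
  backend "s3" {
    bucket  = "example-terraform-state" # placeholder bucket name
    key     = "sandbox/network/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}

provider "aws" {
  region = "us-east-1"

  # Tag everything so orphaned (and billable) resources are easy to spot.
  default_tags {
    tags = {
      Project   = "sandbox"
      ManagedBy = "terraform"
    }
  }
}
```

With that in place, `terraform apply` stands the environment up and `terraform destroy` tears it back down when I'm done tinkering, which is what keeps the bill close to zero.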

VPC Best Practices


Building something and throwing it together with duct tape is always an option, but you are guaranteed to have a bad time when dealing with infrastructure and cloud services.
To make sure I don't trip over my own two feet while learning the ins and outs of new services, I stuck to the best practices below, with one exception: using a NAT instance instead of a managed NAT Gateway.

  1. The VPC must be highly available, spanning 3 availability zones for both the private internal network and the public-facing DMZ. That gives me a total of 6 subnets: 3 private and 3 public (sketched in Terraform after this list).

  2. Use a NAT appliance to forward traffic from the private subnets out to public resources, which is necessary for downloading packages and updates. Because a managed NAT Gateway costs roughly $35 a month before data-processing fees, I opted for the unmanaged alternative: a NAT instance. With an EC2 instance as my NAT appliance, I can turn it on/off only when needed and pay for just the partial compute hours I actually use (also covered in the sketch below).

  3. Configure a bastion host for when the need arises to communicate with private services such as RDS instances, or the rare non-containerized application.

  4. S3 buckets for the Terraform state files, an S3 server-side logging bucket, and an S3 gateway endpoint. Even though I won't fully utilize the S3 endpoint until I deploy some log analytics and visualization tools, a gateway endpoint incurs no extra cost and can only save me money (see the second sketch below).

  5. A GitLab CI runner on ECS Fargate with Spot pricing for my CI/CD pipeline, which will be the cornerstone of my environment for infrastructure, configuration management, and custom AWS-native services. I managed to deploy a small Ubuntu container to host the CI service, keeping my costs to an extreme minimum. Additionally, since the GitLab runner runs as an ECS task, I can easily scale the environment down to 0 when not in use and pay pennies for a fully featured CI service (the Fargate Spot setup is also in the second sketch below).
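
Here is a rough Terraform sketch of items 1 and 2: the 3-AZ VPC with paired public/private subnets, plus a NAT instance routing the private subnets out to the internet. The CIDRs, names, instance type, and AMI filter are assumptions, and the NAT instance's security group and hardening are left out to keep it short.

```hcl
locals {
  azs = ["us-east-1a", "us-east-1b", "us-east-1c"] # assumed region/AZs
}

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags                 = { Name = "sandbox-vpc" }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# One public and one private subnet per AZ: 6 subnets total.
resource "aws_subnet" "public" {
  count                   = length(local.azs)
  vpc_id                  = aws_vpc.main.id
  availability_zone       = local.azs[count.index]
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  map_public_ip_on_launch = true
  tags                    = { Name = "public-${local.azs[count.index]}" }
}

resource "aws_subnet" "private" {
  count             = length(local.azs)
  vpc_id            = aws_vpc.main.id
  availability_zone = local.azs[count.index]
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 10)
  tags              = { Name = "private-${local.azs[count.index]}" }
}

# Public subnets route straight out through the internet gateway.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  count          = length(local.azs)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

# Amazon's legacy NAT instance AMI; any image configured to forward and
# masquerade traffic would do.
data "aws_ami" "nat" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn-ami-vpc-nat-*"]
  }
}

# The NAT instance: source/dest check must be off so it can forward traffic.
resource "aws_instance" "nat" {
  ami               = data.aws_ami.nat.id
  instance_type     = "t3.nano"
  subnet_id         = aws_subnet.public[0].id
  source_dest_check = false
  tags              = { Name = "nat-instance" }
}

# Private subnets send outbound traffic through the NAT instance's ENI.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block           = "0.0.0.0/0"
    network_interface_id = aws_instance.nat.primary_network_interface_id
  }
}

resource "aws_route_table_association" "private" {
  count          = length(local.azs)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}
```

A single shared private route table keeps things simple; the trade-off is that the NAT instance is a single point of failure, which is a compromise I can live with in a sandbox that gets switched off most of the time.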

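And a sketch of the smaller pieces from items 4 and 5, building on the resources above: the free S3 gateway endpoint attached to the private route table, and an ECS cluster that defaults to Fargate Spot for the GitLab runner. Names are again placeholders, and the runner's task definition and service (scaled to zero when idle) are left out.

```hcl
# Gateway endpoints cost nothing, and S3 traffic from the private subnets
# stays on the AWS network instead of going through the NAT instance.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}

# ECS cluster for the GitLab runner, defaulting to Fargate Spot to keep costs down.
resource "aws_ecs_cluster" "ci" {
  name = "ci"
}

resource "aws_ecs_cluster_capacity_providers" "ci" {
  cluster_name       = aws_ecs_cluster.ci.name
  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  default_capacity_provider_strategy {
    capacity_provider = "FARGATE_SPOT"
    weight            = 1
  }
}
```
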
What's Next?


Now that the core network and services are deployed at the infrastructure level (defined in Terraform, version controlled, and secure), it is time to ditch my local IAM user in favor of a SAML solution using G Suite, and to add at least one more AWS account.

This "root" account will serve as my managed services, identity, logging and security, and tooling environment. In the real world, you would ideally have a "separation of concern" by utilizing separate accounts for each service. However, since I am not in the printing money business, paying google $6 per user/email and AWS the additional infrastructure costs is out of the question.

The other account(s) will exist as mock application environment(s): likely at least one account for DEV/STG and another for PRD. To fill these accounts, I have found a few open-source full-stack projects on GitHub that will need to be migrated into Docker containers. That will be a fun task to take on soon; however, not before an image/container bakery and an account vending machine :)

I hope you enjoyed my semi-coherent ramblings about AWS network design. Stay tuned for future updates as I build out an enterprise-worthy cloud architecture for a fictional company.
