It's great seeing people's designs for modern solutions, especially serverless ones. Even more impressive is that, where VPC services are in use, they are being split out into separate tiers and subnets.
😕 But why do so many people put things in public subnets that don't need to be there?
In this article I'll look at what I think should be in public subnets, and why you should try not to put anything in a public subnet unless it has to be there.
What is a public subnet?
So first, let's look at what a public subnet is.
In the AWS VPC docs, a public subnet is defined as "[having] a direct route to an internet gateway. Resources in a public subnet can access the public internet."
Conversely, this also means that a resource in a public subnet (with a public IP) is accessible from the public internet. And, for me, that is a huge risk.
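As a quick illustration, here is a minimal boto3 sketch (the VPC ID is hypothetical, and credentials/region are assumed to be configured) that flags which subnets are public by looking for a route to an internet gateway in their route tables:

```python
import boto3

ec2 = boto3.client("ec2")

def public_subnet_ids(vpc_id: str) -> set[str]:
    """Return subnets whose route table has a route to an internet gateway."""
    tables = ec2.describe_route_tables(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
    )["RouteTables"]
    public = set()
    for table in tables:
        # A route whose target is an IGW ("igw-...") makes the subnet public.
        has_igw = any(
            route.get("GatewayId", "").startswith("igw-")
            for route in table["Routes"]
        )
        if has_igw:
            # Subnets implicitly on the main route table are skipped for brevity.
            for assoc in table["Associations"]:
                if "SubnetId" in assoc:
                    public.add(assoc["SubnetId"])
    return public

print(public_subnet_ids("vpc-0123456789abcdef0"))  # hypothetical VPC ID
```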
So why isn't this the best thing to do?
There are two reasons I think this is not the best thing to do.
The first is security.
If you put something in a public subnet, it poses a significantly increased security risk. New resources are typically detected and scanned within five minutes of becoming reachable on the internet. That is a big risk to your services and data, so if you don't have to put something in a public subnet, I wouldn't.
The second is performance.
While you could secure your external-facing devices and push all traffic through them, doing so risks increased latency for internal services, especially under heavy load. I believe layering services gives the optimum solution, without trying to make one layer do all the hard work. It also means layers such as the external edge can be bypassed for internal services, or for traffic over trusted connections such as Direct Connect.
So what can I put in a public subnet?
There are only two kinds of service I would deploy into a public subnet.
First are gateways.
Whether these are NAT Gateways, egress-only internet gateways (for IPv6), or Internet Gateways, they have to sit at the public edge if you want outbound internet access. Ideally, though, I would aim to build solutions that don't need internet access from application/solution VPCs at all, reducing the need for gateways in the first place.
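For example, here is a minimal boto3 sketch (subnet and route table IDs are hypothetical) of placing a NAT Gateway in the public subnet and routing private-subnet egress through it:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT Gateway in the *public* subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public123",          # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)["NatGateway"]

# Wait until the NAT Gateway is available before routing through it.
ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[nat["NatGatewayId"]]
)

# Point the *private* subnet's default route at the NAT Gateway, so
# instances there get outbound access without being reachable inbound.
ec2.create_route(
    RouteTableId="rtb-private123",        # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)
```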
Second are layer 3/4 load balancers.
Whether that is a Gateway Load Balancer (GWLB) or a Network Load Balancer (NLB), these act as the inbound gateway for traffic to your VPC. Use them as the broker that directs traffic to internal services. Placing them in front of Application Load Balancers (ALBs) lets you route traffic to different load balancers and enforce an outer security perimeter, such as enforcing TLS 1.3. While many will argue an ALB can do this too, segregating the roles of NLB and ALB improves ALB performance and shields internal traffic from the effects of external load.
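As a sketch of one way to wire this up with boto3 (all subnet IDs and the ALB ARN are hypothetical placeholders): the NLB uses an ALB-type target group to forward straight to an internal ALB. Note that ALB-type target groups require TCP listeners, so in this particular pattern TLS terminates at the ALB; terminating TLS at the NLB itself needs IP or instance targets instead.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Public-facing NLB in the public subnets.
nlb = elbv2.create_load_balancer(
    Name="edge-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-public-a", "subnet-public-b"],  # hypothetical subnets
)["LoadBalancers"][0]

# A target group of type "alb" lets the NLB forward to an internal ALB.
tg = elbv2.create_target_group(
    Name="internal-alb-tg",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",                   # hypothetical VPC
    TargetType="alb",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    # Hypothetical ARN of the existing internal ALB.
    Targets=[{"Id": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/internal-alb/abc123",
              "Port": 443}],
)

# ALB-type target groups require TCP listeners; TLS is terminated at the ALB.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```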
But what about... ?
You will now probably have a "what about...?" component in mind that you think belongs in a public subnet. I'll look at the common ones I see, and address both why I think it's not a good idea and what you should do instead.
Bastion / Jump boxes
Why this is not a good idea:
These systems are often open to the whole internet, or have poor controls for revoking IP permissions. The issue is that if someone breaches one of these boxes, they generally have a direct route to lots of other systems. The boxes are also often configured with tools and credentials to access data stores such as storage and databases, creating a significant risk of data exposure.
What to do instead:
If you need to access an EC2 instance, I'd recommend migrating from bastions to SSM Session Manager. This gives you user-level control over which systems each person can connect to, and removes the need for extra infrastructure. Connections can be logged and, if needed, so can the commands run. If you must have a box to run tools within the environment, or to bridge connectivity to AWS services, move it to a private subnet and use Session Manager to reach it via SSH, or tunnel through it to resources such as RDS (see the sketch below).
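Interactive sessions normally go through the AWS CLI with the Session Manager plugin, but the same API is callable from boto3. A minimal sketch of brokering a port-forwarding session through a private-subnet instance to RDS (the instance ID, hostname, and ports are all hypothetical):

```python
import boto3

ssm = boto3.client("ssm")

# Open a port-forwarding session through a private-subnet instance to RDS.
# (The session-manager-plugin actually streams the traffic; this call just
# authorises and opens the session.)
session = ssm.start_session(
    Target="i-0123456789abcdef0",  # hypothetical private-subnet instance
    DocumentName="AWS-StartPortForwardingSessionToRemoteHost",
    Parameters={
        "host": ["mydb.cluster-xyz.eu-west-1.rds.amazonaws.com"],  # hypothetical
        "portNumber": ["5432"],
        "localPortNumber": ["5432"],
    },
)
print(session["SessionId"])
```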
Application Load Balancers (ALB)
Why this is not a good idea:
Many designs and documents say to just put the ALB in a public subnet. While this works, and can be secured, in my view it's not the best way to do things. This is especially true where services are accessed both directly over the internet and internally, from your AWS estate or on-premises systems. By separating the two paths and applying different controls to each, you can better protect your service.
What to do instead:
Put your ALB in the same subnets as your application and let internal services talk to it directly. Then have an NLB in the public subnet for external traffic, ideally sitting behind CloudFront. Both CloudFront and the ALB can be protected by AWS WAF (WAF does not attach to NLBs). This means layer 4 protection happens quickly on devices exposed only to the internet, reducing the load and impact on the layer 7 devices that also serve internal traffic.
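A boto3 sketch of the internal side, assuming hypothetical subnet and security group IDs and an existing (hypothetical) WAF web ACL:

```python
import boto3

elbv2 = boto3.client("elbv2")
wafv2 = boto3.client("wafv2")

# Internal ALB living in the application subnets; only reachable in-VPC.
alb = elbv2.create_load_balancer(
    Name="app-internal-alb",
    Type="application",
    Scheme="internal",
    Subnets=["subnet-app-a", "subnet-app-b"],  # hypothetical app subnets
    SecurityGroups=["sg-0123456789abcdef0"],   # hypothetical security group
)["LoadBalancers"][0]

# Attach an existing (hypothetical) WAF web ACL to the ALB for L7 protection.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:eu-west-1:111122223333:regional/webacl/app-acl/abc123",
    ResourceArn=alb["LoadBalancerArn"],
)
```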
EC2 instances acting as a Firewall/IDS/IPS device
Why this is not a good idea:
While you might want this functionality on the perimeter, putting EC2 instances directly in the traffic path opens them up to attack. As with bastions, an instance with a public IP will be constantly scanned and probed. The easiest way to reduce that risk is simply not to expose the server.
What to do instead:
Put a Gateway Load Balancer (GWLB) in the public subnet and use it to divert traffic to the security appliance. Not only does this reduce the load on the appliance, it also means the security devices can be deployed and managed centrally while servicing multiple VPCs. Take a look at this blog from AWS on how to implement a GWLB.
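A minimal boto3 sketch of creating the GWLB and its GENEVE target group (subnet, VPC, and appliance instance IDs are hypothetical; the GWLB endpoints and routing that put it in the traffic path are omitted):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Gateway Load Balancer sitting in front of the security appliances.
gwlb = elbv2.create_load_balancer(
    Name="inspection-gwlb",
    Type="gateway",
    Subnets=["subnet-public-a"],        # hypothetical subnet
)["LoadBalancers"][0]

# GWLB target groups use the GENEVE protocol on port 6081.
tg = elbv2.create_target_group(
    Name="appliance-tg",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0123456789abcdef0",      # hypothetical VPC
    TargetType="instance",
)["TargetGroups"][0]

# Register the (hypothetical) firewall/IDS appliance instance.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef1"}],
)

# GWLB listeners take all traffic, so no protocol or port is specified.
elbv2.create_listener(
    LoadBalancerArn=gwlb["LoadBalancerArn"],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```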
As always, these are just my views, based on using AWS services for over a decade, dealing with the fallout of systems that were exposed and breached, and troubleshooting operational performance and reliability.
I'd love your feedback on this article, or your suggestions for what you would put in a public subnet.