
The Serverless Edge

Originally published at theserverlessedge.com

Our guide to the AWS Well Architected Tool – Performance Pillar

We talk through the ins and outs of the AWS Performance Pillar that forms part of the AWS Well-Architected Tool Set. This is the fourth part of a series of talks. Performance efficiency is a work-based development conversation. If your business isn’t bringing in loads of money, you don’t need all this horsepower under the hood. It doesn’t need to be that efficient or effective from a performance point of view. The team starts with the kicker question ‘how do you select the best-performing architecture?’.


AWS Performance Pillar - the fourth in our series of talks on the Well-Architected Framework

Dave Anderson

So we’re continuing our series talking about our favourite well-architected pillar! Which will be our favourite? Who knows? How exciting!

Today, we’re going to talk about the Performance Pillar, which I think is strangely interesting. It’s called performance efficiency. This one’s got a couple of different bits. Each of the pillars of well-architected usually has around 10 questions. This one has eight. And it’s got four sections: Selection, Review, Monitoring, and Trade-Offs.

It is really about the performance efficiency of your whole system. But the meaty part here is selection. There are five questions about selection. The kicker question is the first one: ‘how do you select the best-performing architecture?’.

AWS Performance Pillar: Selection

Mark McCann

That is a good one. Because you don’t throw loads of technology at solving a problem if you don’t actually understand who your users are and what they need. You should go really hard and deep to make sure you understand the problem you’re trying to solve for the users who are going to use the system. What are their needs? Once you have that to hand, do something like domain-driven design to break it up a little bit and make sure you have good boundaries and domains established. When you have all that, you’re well informed. And now you can think about what the best architecture is to actually meet the needs of those users.

Dave Anderson

I have been in this position three times in my career. It’s when your job is to pick an architecture for a big problem. And it’s a moment of responsibility because that architecture might need to last for 10, 15 or 20 years. And you have to be really careful about what it’s for, what it’s going to do now and what it will need to do in the future. I think the few times I have done this, it has worked okay! At the start of a project, there’s always pressure to get something working. But you need to pause at the start and figure that out.

Mental model

The idea of the mental model of the system is really important. Can you explain to everyone in your company what it is? Is it X, Y, or Z? It’s like the mental model of a car. When you draw a car there’s an engine, wheels, steering wheel, brakes, and a cabin. People get the mental model. With architecture, your system needs to be that simple. This is what it is. There are lots of different ways to structure it. But you need to decide on a mental model that will work and that people will get. And that is going to evolve over time.

Future needs

Mark McCann

Evolution is critical. It might be the best architecture to meet the needs right now, but is there scope, capacity or room for it to evolve to meet unexpected future needs as well? Or have you painted yourself into a corner?

Mike O’Reilly

My experience over the last number of years of adopting a serverless-first mentality is that AWS, GCP, and Azure have opinionated managed services that you can integrate and assemble. Dave, you touched on evolutionary architecture and the responsibility of building it. You want to build fast, but also focus on getting the domain right, with logic in the right place, and think in terms of a socio-technical view of the organisation. I also don’t want to overthink or overdesign something. I want to move fast. But I reserve the right, at some point, as we scale up or the system evolves, to pivot and change reasonably quickly. This is another factor with serverless. Because it’s event-driven, you’re pushed towards event-driven architectures, so it lends itself to that sort of evolution. You can swap things out later on. If you need a container, a SaaS product, or an external vendor, it’s pluggable.

Performance efficiency

Dave Anderson

There’s something important about performance efficiency when you break your system down into domains and components. I would say to an engineer: ‘do you see that component? It’s got to do X, Y, Z and it’s got to work and be well architected. So figure out how to make that happen.’ And if that means calling a managed service from AWS, then that is fine. That’s still building something. But there are a bunch of non-functional requirements about that box that need to be right. This is where you get into the idea of whether this is a commodity component, or something that’s mission-critical to your business or a piece of IP. Do you need to build it? Or can you just rent it? Wardley mapping helps you think about whether or not to build. For example, ‘we need global storage, so let’s try and build that’. The answer is no! Just use S3.

Mark McCann

Having a serverless-first mindset and approach can help you with performance. Is there a managed service you can leverage? Does it have a serverless capability? Is it on the serverless spectrum? If it doesn’t, can you fall back to something that has serverless characteristics? So an example would be: does DynamoDB fit your needs for your data? If it doesn’t, can you fall back to something that’s still on the serverless spectrum, like Aurora Serverless, if it’s a relational database? In summary: what’s the managed service I can leverage? What’s the serverless capability I can leverage? If it doesn’t meet the needs of your use case, you can fall back to something that’s further back on the serverless spectrum. That applies to compute, storage, databases, and networking.
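As a rough illustration of that decision, here is a minimal CDK sketch in TypeScript: a DynamoDB table as the serverless-first default, with Aurora Serverless v2 as the fallback further along the spectrum. The constructs come from recent versions of aws-cdk-lib; the resource names and capacity settings are placeholder assumptions, not recommendations.

```typescript
// Illustrative sketch only: serverless-first data store choice in AWS CDK (TypeScript).
import { Stack, StackProps, aws_dynamodb as dynamodb, aws_ec2 as ec2, aws_rds as rds } from "aws-cdk-lib";
import { Construct } from "constructs";

export class DataStoreStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Serverless-first default: fully managed, pay-per-request DynamoDB.
    new dynamodb.Table(this, "OrdersTable", {
      partitionKey: { name: "pk", type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    // Fallback further along the serverless spectrum: Aurora Serverless v2,
    // for workloads that genuinely need a relational model.
    const vpc = new ec2.Vpc(this, "DbVpc", { maxAzs: 2 });
    new rds.DatabaseCluster(this, "RelationalFallback", {
      engine: rds.DatabaseClusterEngine.auroraPostgres({
        version: rds.AuroraPostgresEngineVersion.VER_15_3,
      }),
      writer: rds.ClusterInstance.serverlessV2("writer"),
      serverlessV2MinCapacity: 0.5, // scales down when idle
      serverlessV2MaxCapacity: 4,   // caps spend under load
      vpc,
    });
  }
}
```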

Facilitating conversations

Mike O’Reilly

Serverless is not standing still; it keeps improving year on year. We’ve seen cold starts reduce and we are seeing more direct connectivity between managed services, as opposed to routing everything through Lambdas. You get those benefits without having to do anything. That’s another benefit to consider when deciding to take a serverless approach with your architecture. It allows you to focus on sensitive requirements like performance or the need to get it working. You can design those requirements into the workload: do we need a cache? Are there things we can optimise or streamline? You can facilitate those conversations as part of this review.

Mark McCann

I think one of the best things about a serverless approach to performance is that the cloud provider is constantly working at improving performance efficiency, reducing costs, speeding up, and adding more horsepower to your compute. By choosing smartly, with your architecture, you get a free underlying platform team that is constantly working on improving your performance. And you can just take advantage of it without having to worry. You can leverage that performance improvement.

Photo by Nicolas Hoizey on Unsplash

AWS Performance Pillar: Review

Dave Anderson

Moving on to the next section, and relating to what you said: how do you constantly review your architecture to take advantage of new releases? Cloud providers are constantly innovating and releasing things every week, so you want to be in a position where you can adopt new capabilities quickly without breaking the whole architecture. You want to operate through ‘two-way doors’, as Amazon calls them, where you go in, do something, and can get back out again. You don’t want a one-way door where you get trapped.

Mark McCann

This is where mapping can be really advantageous to your teams. If you’ve mapped out your tech stack, you understand the components and you can see where they are on the evolutionary axis. When a new capability or service comes out, you can immediately start to assess that against your current components. If you’re custom building a database you will spot the new managed service that meets your needs and evolve to use it.

Consider new managed services

Dave Anderson

We’ve done this in the past! We’ve been building something, but you think that a service is going to come out that will do that job for you. So you build it in a way that you can replace it easily when that service arrives.

Mike O’Reilly

EventBridge is a good example. For a long time, we’d been using an SNS-SQS fan-out approach to events. Then EventBridge was released. The team was trying to get latency reduced and was constantly looking at that, and decided that when it got to a certain level, we’d make that cutover. It’s a good example of how to evolve and plan for that.
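For context, publishing a domain event to EventBridge is a small amount of wiring. Here is a hedged sketch using the AWS SDK for JavaScript v3; the bus name, event source and payload are hypothetical examples, not details from the team’s system.

```typescript
// Minimal sketch: publishing a domain event to EventBridge (AWS SDK for JavaScript v3).
import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const client = new EventBridgeClient({});

export async function publishOrderPlaced(orderId: string): Promise<void> {
  await client.send(
    new PutEventsCommand({
      Entries: [
        {
          EventBusName: "orders-bus",          // custom event bus (hypothetical)
          Source: "com.example.orders",        // who emitted the event
          DetailType: "OrderPlaced",           // used by rules to route to consumers
          Detail: JSON.stringify({ orderId }), // event payload
        },
      ],
    }),
  );
}
```

The routing then lives in EventBridge rules rather than in SNS topic and SQS subscription wiring, which is part of what makes it easier to add or swap consumers later without touching the publisher.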

Mark McCann

We need another Serverless Craic episode on how to keep up with the pace of change and the firehose of informational updates. I think we need to explore it. There is a good return on the time invested in having your radar up, being aware of how the environment around you is changing, and being open to adopting new capabilities that can save money and effort.

Monitoring and Trade-Offs

Dave Anderson

The next section is Monitoring: how do you monitor your resources for performance? That one is fairly straightforward. And the last one is Trade-Offs: how do you use trade-offs to improve performance? A great example is the Lambda Power Tuning tool, where you can tune your function’s memory allocation (which also scales its CPU) to get that nice balance between cost and performance.
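For anyone who wants to try it, AWS Lambda Power Tuning is deployed as a Step Functions state machine that invokes your function at different memory sizes and compares cost and speed. A sketch of starting a run from TypeScript might look like this; the ARNs and payload are placeholders, and the input fields follow the tool’s documented options.

```typescript
// Sketch: starting an AWS Lambda Power Tuning run via its Step Functions state machine.
import { SFNClient, StartExecutionCommand } from "@aws-sdk/client-sfn";

const sfn = new SFNClient({});

export async function startPowerTuning(): Promise<void> {
  await sfn.send(
    new StartExecutionCommand({
      // Placeholder ARN of the deployed power-tuning state machine.
      stateMachineArn: "arn:aws:states:eu-west-1:123456789012:stateMachine:powerTuningStateMachine",
      input: JSON.stringify({
        lambdaARN: "arn:aws:lambda:eu-west-1:123456789012:function:checkout-handler", // placeholder
        powerValues: [128, 256, 512, 1024, 2048], // memory sizes to test; CPU scales with memory
        num: 50,                                  // invocations per memory configuration
        payload: { test: true },                  // sample event used for every invocation
        strategy: "balanced",                     // optimise for a cost/speed balance
      }),
    }),
  );
}
```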

Mark McCann

That’s a great one. What are you willing to pay for? Is it worth it? Performance efficiency quickly becomes a work-based development conversation. If the business isn’t bringing in loads of money, you don’t need all this horsepower under the hood. It doesn’t need to be that efficient or effective from a performance point of view.

Dave Anderson

And there’s a sustainability thing there as well. Do you need a sub-second response time for something? Maybe a one-second response time will be okay. Don’t burn through everything, just for the sake of it.

Don’t over-optimise

Mike O’Reilly

This is a good habit to get into. Is our Lambda too big? What can we do to thin it down and shorten it? Any time I run this exercise, I normally see quite a lift. I’ve seen teams go from three-second response times to half-second response times because they’ve trimmed something down.
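One hedged example of ‘thinning it down’ for a Node.js function is letting CDK’s NodejsFunction bundle and minify the handler with esbuild and keep the AWS SDK out of the package; the entry path, runtime and settings below are assumptions for illustration, not the teams’ actual configuration.

```typescript
// Sketch: trimming a Node.js Lambda with esbuild bundling via CDK's NodejsFunction.
import { aws_lambda as lambda, aws_lambda_nodejs as nodejs, Duration } from "aws-cdk-lib";
import { Construct } from "constructs";

export function slimHandler(scope: Construct): nodejs.NodejsFunction {
  return new nodejs.NodejsFunction(scope, "CheckoutHandler", {
    entry: "src/handlers/checkout.ts", // hypothetical handler path
    runtime: lambda.Runtime.NODEJS_20_X,
    memorySize: 512,
    timeout: Duration.seconds(10),
    bundling: {
      minify: true,                    // smaller bundle, faster cold starts
      externalModules: ["@aws-sdk/*"], // use the SDK already present in the runtime
    },
  });
}
```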

Mark McCann

There is a fear, when you get all this in place, that teams over-optimise. The engineering time isn’t worth the performance improvement. So you need to be mindful of that. But that’s for when you’re pretty far down the maturity curve.

Dave Anderson

Not a bad problem to have. Alright, so that’s the craic. That’s the performance efficiency pillar of Well-Architected. Thanks for listening. There are more thoughts on the blog at TheServerlessEdge.com and on Twitter @ServerlessEdge. We are also on Medium, Dev.to and LinkedIn. Thanks very much!
