Personal projects
I used this concept before for a personal backend project (a Bitcoin trader bot). In the project's budget, we only wanted to spend $5 on the DigitalOcean droplet.
We defined it at the beginning, but only enforced it just before deploying to production. I prefer it this way to prevent any premature optimization in the project. What ended up needing to be optimized was:
Memory usage: I think $5 droplets at that point in time limited you to 1 GB of memory
Egress usage: because there was an element of scraping, we wanted to minimize the amount of egress network calls
Disk usage: because we were also running a DB on the same machine as the server, data retention became important
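For the memory side of a budget like that, a minimal sketch in Go of a runtime check against the droplet's 1 GB cap might look like this (the function name and the log-only response are illustrative; a real bot would alert or shed load):

```go
package main

import (
	"fmt"
	"runtime"
)

// memoryWithinBudget reports the current heap allocation and whether
// it is under the given budget in bytes. The 1 GB figure mirrors the
// droplet's memory limit mentioned above.
func memoryWithinBudget(budgetBytes uint64) (uint64, bool) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc, m.HeapAlloc <= budgetBytes
}

func main() {
	const budget = 1 << 30 // 1 GB, the $5 droplet's cap
	used, ok := memoryWithinBudget(budget)
	fmt.Printf("heap in use: %d bytes, within budget: %v\n", used, ok)
}
```

In practice you would sample this periodically (or export it as a metric) rather than check it once, but the point is that the budget becomes a number the code can compare against.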
The main ripple effect was that we ended up focusing on "good enough" performance. You don't really care about "best performance" unless it results in positive financial impact or is a market differentiator.
Work
At work, performance budgets are merged with SLA metrics. I think the reason we can afford to do this on our team is that we're purely backend-focused. We design for monitoring, deploy, and then, based on data, define the performance budget AND hold ourselves accountable to it. Then, more importantly, we revise and tweak this budget over time.
It's a different approach from the linked article and from the way I did it for personal projects. Because we're working with cryptocurrencies, each coin you work with is typically considered "new": you don't really know what a realistic performance budget for a coin is until you see live data. So we start with a reasonable guess based on another coin's history, then tune it from there.
The main learning from that is to be realistic: don't go in with a strict budget. Instead, have phased budgets and satisfy them in stages.
These are really interesting concepts. Thank you for explaining your process in depth.
At work, performance budgets are merged with SLA metrics. I think the reason we can afford to do this on our team is that we are only backend focused. So we design for monitoring and deploy. Then based on data we define the performance budget AND make ourselves accountable for it. Then, more importantly, we revise and tweak this budget over time.
I'm curious to know: do you use any tools to incorporate the budget into your deploy pipeline?
For deploying on a backend, there aren't any pre-deploy checks that we do, as opposed to what you'll see on the frontend (like bundle sizes or app sizes). In Go, you can run benchmarks as part of your CI using Go's built-in benchmarks (go test -bench). This is one way of ensuring a budget is enforced pre-deploy.
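As a sketch of that CI idea, Go's testing package can run a benchmark programmatically and compare the result against a latency budget; the handler, its workload, and the 1 ms budget below are all invented for illustration:

```go
package main

import (
	"fmt"
	"testing"
	"time"
)

// processOrder stands in for a real handler whose per-call latency
// we want to keep inside a budget.
func processOrder() int {
	sum := 0
	for i := 0; i < 1000; i++ {
		sum += i
	}
	return sum
}

func main() {
	// testing.Benchmark lets us run a benchmark outside `go test`,
	// so a CI step can fail the build on a budget breach.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			processOrder()
		}
	})

	budget := 1 * time.Millisecond // assumed per-call budget
	perOp := time.Duration(res.NsPerOp())
	fmt.Printf("per-op: %v (budget %v)\n", perOp, budget)
	if perOp > budget {
		fmt.Println("FAIL: budget breached") // exit non-zero in real CI
	} else {
		fmt.Println("OK: within budget")
	}
}
```

In a real pipeline you would more likely run `go test -bench` and compare results across commits (e.g. with benchstat) rather than hard-code a threshold, but a fixed budget is the simplest enforceable form.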
For monitoring, we just use Grafana and Prometheus alerts, integrated into Slack. One of their uses is to enforce performance budgets: things like end-to-end, individual-service, and third-party call latencies all have budgets. We also have a concept of "logging credits" to avoid overly verbose logging.
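For illustration, a latency budget enforced this way might look like the following Prometheus alerting rule; the metric name and the 500 ms / p99 figures are assumptions, not our actual config:

```yaml
groups:
  - name: performance-budgets
    rules:
      - alert: EndToEndLatencyBudgetBreached
        # Hypothetical histogram metric; the 0.5s budget is illustrative.
        expr: histogram_quantile(0.99, sum(rate(request_duration_seconds_bucket[5m])) by (le)) > 0.5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "p99 end-to-end latency has exceeded its 500ms budget for 10 minutes"
```

The `for: 10m` clause is what keeps the alert quiet unless the budget is genuinely breached, which matches the "only hear about breaches" approach below.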
So we don't really have reports for these. The reason is that you only want to know about budgets that have been breached, not ones that are behaving. I feel this is different from a payload-based performance budget, where the aim is to constantly reduce the payload size. I guess the way you interact with a performance budget changes with the context you're working in.