Dealing with performance concerns can be an intricate task that easily turns into a long story of refactoring, testing, and code improvements, sprint after sprint. It's not the best experience you may have as a developer, especially if your team is in a rush and patience is a scarce luxury.
Throughout this article, I will describe my journey with one such performance improvement task and how I turned it into an amazing learning experience!
1 - Setting up a testing environment
Simulating your production environment for testing your application is a crucial part of this task: it saves time, gives you better visibility into the improvements you are making, and lets you rule out solutions that introduce more code complexity than they gain in milliseconds of latency! If you need more details about this step, you may refer to the following article.
2 - Identifying bottlenecks in your code
This is a strategic step toward fixing performance issues, because it helps you identify improvement plans that will affect most of your endpoints at once. To do that, start by identifying the methods and code logic referenced by multiple endpoints, and focus on them to check whether there is any refactoring or code cleaning to be done.
Identifying these "start with" points is critical to building your performance enhancement strategy. Once you have identified them, proceed with the following steps:
- Check if there are any obvious improvements to implement
- Benchmark those methods to measure their latency (using the BenchmarkDotNet library or the Time attribute of the MethodTimer.Fody library)
- Search for and assess improvement options, and run tests to measure the enhancements!
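The benchmarking step above can be sketched with BenchmarkDotNet. This is a minimal, hypothetical harness; `SharedLogicBenchmarks` and the method body are placeholders you would replace with the shared logic you actually want to measure:

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Hypothetical benchmark class: swap the body of CheckAccess for a call
// to the real method that multiple endpoints depend on.
public class SharedLogicBenchmarks
{
    [Benchmark]
    public bool CheckAccess()
    {
        // Placeholder work standing in for your shared method.
        return DateTime.UtcNow.Ticks % 2 == 0;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<SharedLogicBenchmarks>();
}
```

Note that BenchmarkDotNet expects a Release build to produce trustworthy numbers; it will warn you if you run it in Debug.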
Example : While working on an API improvement, I started by identifying a method that implements authorization logic and was referenced by almost every endpoint of the API. Refactoring and optimizing this method's logic helped a lot with improving the overall performance of the API.
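The article doesn't show the actual refactoring, but one common optimization for hot authorization paths is to cache the result for a short time. This is a sketch under that assumption; `IAuthorizationCheck` is a hypothetical interface standing in for the original (slow) logic:

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical interface representing the original authorization logic.
public interface IAuthorizationCheck
{
    bool IsGranted(string userId, string permission);
}

// Decorator that caches authorization results so the expensive check
// runs at most once per user/permission pair within the TTL window.
public class CachedAuthorizationCheck : IAuthorizationCheck
{
    private readonly IMemoryCache _cache;
    private readonly IAuthorizationCheck _inner;

    public CachedAuthorizationCheck(IMemoryCache cache, IAuthorizationCheck inner)
    {
        _cache = cache;
        _inner = inner;
    }

    public bool IsGranted(string userId, string permission) =>
        _cache.GetOrCreate($"auth:{userId}:{permission}", entry =>
        {
            // Keep the TTL short so permission changes propagate quickly.
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _inner.IsGranted(userId, permission);
        });
}
```

Whether caching is safe here depends on how quickly permission changes must take effect in your system.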
3 - Look at the bigger picture
By this I mean the real big picture, which includes your dev environment, your prod environment, your database management system, your ORMs, and the frameworks you are integrating into your application (if any!)
If a query seems slow for no specific reason, even when its logic is straightforward, it's a good idea to try running it in a different environment.
Example : We had a local dev environment composed of Visual Studio 2022, SQL Server, Redis, and the Microsoft Azure Storage Emulator. While testing the performance of an endpoint, the SQL profiler showed that a simple query was taking a couple of seconds to execute, even though it was a plain SELECT with only one filtering condition! Running the same query against the Azure database took less than 5 ms, which made us question the environment and the area of code we were testing. It turns out that memory pressure can cause this kind of problem, especially if you are debugging a heavy solution in Visual Studio while an instance of SQL Server runs on the same machine (a better description of this situation is in this Stack Overflow post). Having this kind of outlook on the situation can save you lots of time investigating issues locally inside your code when the main concerns are in the environment as a whole!
The second path to explore in the process of performance checks is the way you connect your application to the database. There is certainly plenty of choice here: you may be using an ORM (EF Core, Dapper, NHibernate...), or you may be relying on plain queries and stored procedures. When working on performance, you should take those choices into consideration: how advanced is your application, and how deeply does it implement any of these options? Compare the options while taking into account the type and quantity of data you will be handling. Is it worth adopting one instead of the other? Try testing some slow methods with a different database connection approach to see if that improves performance, and in case of success, should you adopt the new approach in place of the previously slow one? These are all questions you should answer and discuss with your team and client, based on the size and maturity of the application!
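To make the comparison above concrete, here is a hedged sketch of the same read query written both ways. The `Order` entity, `AppDbContext`, and connection string are illustrative assumptions, not from the article:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; } = "";
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
}

public static class OrderQueries
{
    // EF Core: convenient and strongly typed, but every query goes through
    // LINQ-to-SQL translation and (by default) change tracking.
    public static Task<List<Order>> PendingViaEfCore(AppDbContext db) =>
        db.Orders
          .AsNoTracking()
          .Where(o => o.Status == "Pending")
          .ToListAsync();

    // Dapper: you write the SQL yourself; less overhead, more responsibility.
    public static async Task<IEnumerable<Order>> PendingViaDapper(string connectionString)
    {
        await using var conn = new SqlConnection(connectionString);
        return await conn.QueryAsync<Order>(
            "SELECT Id, Status FROM Orders WHERE Status = @Status",
            new { Status = "Pending" });
    }
}
```

Benchmarking both versions against your real data volumes, rather than assuming one is faster, is what actually settles the question.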
Once you are comfortable with the choices you have made for your application (frameworks and libraries), or even if you are not, you may be at the point where you cannot go back and start over with new options. In both cases, you still have alternatives. For example, if you are using EF Core, there are several practices and methods you can adopt to handle performance issues; you may also want to check the SQL generated by this ORM and find ways to optimize it further!
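Inspecting EF Core's generated SQL, as suggested above, can be done with the built-in `LogTo` hook. A minimal sketch, assuming a SQL Server provider and a placeholder connection string:

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

public class AppDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options
            .UseSqlServer("<your-connection-string>")
            // Print every SQL statement EF Core generates, so you can spot
            // N+1 patterns, missing indexes, or unexpectedly complex queries.
            .LogTo(Console.WriteLine, LogLevel.Information);
}
```

In development you can also add `.EnableSensitiveDataLogging()` to see parameter values, but keep that out of production configuration.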
Finally, if you are using frameworks that generate code or set up the project structure for you (the ABP framework is an example), you should double-check that framework's documentation, looking for recommendations, best practices, and performance considerations. Besides, if those frameworks are used to generate boilerplate code for entities or CRUD operations, it's highly advisable to thoroughly review this code and put in place a mechanism or code rules to improve and refactor any of the current or future generated code.
Example : Using the ABP framework can be super helpful, as it saves you the time of writing all the boilerplate code for entities, DTOs, and CRUD operations. Not only that, it generates code respecting the architecture you have already set up for your project (microservices, DDD...). All seems awesome until you start the load testing process, and issues start revealing themselves. One issue we had with the generated code was that it made two database round trips every time we needed to query a list of entities: the first to get the entities and the second to count them, which is not a performance-friendly approach. We had to implement some code refactoring, and set up code rules to make sure this kind of logic is fixed every time we generate new entities using the ABP framework.
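The article doesn't show the exact fix, but one way to collapse the count query and the page query into a single round trip is to project the total count into each row, which EF Core can translate into one SQL statement with a COUNT subquery. A hypothetical sketch; `Order` and `AppDbContext` are illustrative stand-ins for your generated entities:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Order { public int Id { get; set; } }

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
}

public static class OrderPaging
{
    // Generated code typically does:
    //   var total = await query.CountAsync();               // round trip 1
    //   var items = await query.Skip(..).Take(..).ToListAsync(); // round trip 2
    // This variant fetches both in one statement instead.
    public static async Task<(List<Order> Items, int TotalCount)> GetPageAsync(
        AppDbContext db, int skip, int take)
    {
        var query = db.Orders.AsNoTracking();

        var page = await query
            .OrderBy(o => o.Id)
            .Skip(skip)
            .Take(take)
            .Select(o => new { Item = o, TotalCount = query.Count() })
            .ToListAsync();

        // Caveat: if the requested page is empty, the count falls back to 0;
        // a real generated-code rule should handle that case explicitly.
        var total = page.FirstOrDefault()?.TotalCount ?? 0;
        return (page.Select(p => p.Item).ToList(), total);
    }
}
```

Whether this wins in practice depends on your provider and data: the COUNT subquery adds cost per statement, so measure it against the two-round-trip version before making it a code rule.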
With the progress AI tools such as GitHub Copilot and OpenAI's models are making, developers may start relying more on their generated code to save time and speed up the process. However, we ought to spend some of the time gained reviewing the generated code instead of taking it as it is: as advanced as those tools are, they are still far from perfect!