Roy

How to Optimize Your Code for Performance

The Benefits of Optimized Code

As technology advances, so does the demand for faster and more efficient software, which has put a greater focus on code optimization. Optimized code brings several related benefits: improved performance, reduced memory usage, and reduced processing time.
Optimized code is often more compact and uses less memory than non-optimized code, which frees up memory for other tasks and reduces a program's overall footprint.
It is also typically more efficient to execute, so it finishes sooner. That matters most for software that is time-sensitive or resource-intensive.
In short, optimized code runs faster, uses less memory, and takes less time to execute, which improves overall efficiency and leaves more resources available for other work.

How to Optimize Your Code

If you're a developer, you know that optimizing your code is important for a variety of reasons. Not only does it make your code run faster, but it can also make it more readable and easier to maintain.

There are a number of different ways to optimize your code, and the approach you take will depend on the language you're using, the platform you're targeting, and the specific goals you're hoping to achieve. In general, though, there are a few basic principles you can follow to optimize your code:

Use the right data types

Choosing the right data types helps minimize conversions and memory use. In general, use the smallest data type that can accurately represent the data you are working with; the payoff is largest for bulk data such as large arrays or on-disk records, since a single field is often padded out to the CPU's word alignment anyway. For example, if you are working with whole numbers that will never be larger than 255, you can store them in a byte instead of an int.
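As a rough illustration (a minimal Java sketch, not from the original post), the difference shows up clearly when you store values in bulk:

```java
// Minimal sketch: one million values that never exceed 255.
// A byte[] holds them in ~1 MB of payload, while an int[] needs ~4 MB.
public class DataTypeSizes {
    public static void main(String[] args) {
        int count = 1_000_000;

        byte[] compact = new byte[count]; // 1 byte per element
        int[] wide = new int[count];      // 4 bytes per element

        System.out.println("byte[] payload: " + count + " bytes");
        System.out.println("int[]  payload: " + (count * 4L) + " bytes");
        System.out.println("elements stored: " + compact.length + " / " + wide.length);
    }
}
```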

Keep your code as simple as possible

One of the most important principles of optimizing code is to keep it as simple as possible. This means minimizing the amount of code required to achieve a desired outcome. In many cases, simpler code is more efficient and easier to debug. It can also be easier to read and understand, which is important when working on large projects with multiple developers.
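For instance (a hypothetical Java sketch), the same result can often be expressed with far less code by leaning on the standard library:

```java
import java.util.List;

public class KeepItSimple {
    // Verbose version: manual index loop and bookkeeping.
    static int sumVerbose(List<Integer> values) {
        int total = 0;
        for (int i = 0; i < values.size(); i++) {
            total = total + values.get(i);
        }
        return total;
    }

    // Simpler version: let the standard library do the work.
    static int sumSimple(List<Integer> values) {
        return values.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        List<Integer> values = List.of(1, 2, 3, 4, 5);
        System.out.println(sumVerbose(values)); // 15
        System.out.println(sumSimple(values));  // 15
    }
}
```

Both versions do the same thing, but the shorter one is easier to read, review, and debug.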

Avoid unnecessary computations

This principle states that you should only perform computations that are actually needed to produce the result. That can be harder than it sounds, since it is not always obvious what is truly necessary, but by carefully examining your code and its dependencies you can usually find calculations to eliminate. Doing so reduces the total amount of work your code has to do, which can lead to significant performance improvements.
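One common case is loop-invariant work: recomputing a value on every iteration even though it never changes. A minimal Java sketch (names are hypothetical; a good JIT may hoist some of this for you, but it is safer not to rely on it):

```java
public class AvoidRecomputation {
    // Wasteful: Math.sqrt(limit) is recomputed on every iteration,
    // even though its result never changes inside the loop.
    static int countBelowThresholdWasteful(int[] values, int limit) {
        int count = 0;
        for (int value : values) {
            if (value < Math.sqrt(limit)) {
                count++;
            }
        }
        return count;
    }

    // Better: compute the invariant once, outside the loop.
    static int countBelowThresholdHoisted(int[] values, int limit) {
        double threshold = Math.sqrt(limit);
        int count = 0;
        for (int value : values) {
            if (value < threshold) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] values = {1, 3, 7, 10, 15};
        System.out.println(countBelowThresholdWasteful(values, 100)); // 3
        System.out.println(countBelowThresholdHoisted(values, 100));  // 3
    }
}
```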

Cache frequently used data

By caching data, you can avoid having to recalculate it every time it is needed, which can save a significant amount of time. This principle is particularly important in code that is executed frequently, such as in a loop.
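A minimal Java sketch of this idea, memoizing a hypothetical expensive calculation in a HashMap:

```java
import java.util.HashMap;
import java.util.Map;

public class SimpleCache {
    private final Map<Integer, Long> cache = new HashMap<>();

    // Stands in for any expensive, deterministic calculation.
    private long expensiveComputation(int n) {
        long result = 0;
        for (int i = 0; i < 10_000_000; i++) {
            result += (long) n * i % 97;
        }
        return result;
    }

    // Returns the cached result if we already have one; otherwise computes and stores it.
    public long compute(int n) {
        return cache.computeIfAbsent(n, this::expensiveComputation);
    }

    public static void main(String[] args) {
        SimpleCache cache = new SimpleCache();
        System.out.println(cache.compute(42)); // computed the slow way
        System.out.println(cache.compute(42)); // served from the cache
    }
}
```

The usual caveats apply: only cache data that does not change underneath you, and bound the cache's size if the set of keys can grow without limit.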

Avoid unnecessary I/O

The principle of "avoid unnecessary I/O" is to read from and write to files only when your code actually needs to, and to read or write only the data it needs. I/O is far slower than working in memory, so batching and buffering these operations can make your code noticeably faster.
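A minimal Java sketch contrasting a separate write per call with a single buffered writer (file names are placeholders):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BatchedWrites {
    // Wasteful: re-opens the file and issues a separate write for every line.
    static void writeOnePerCall(Path file, String[] lines) throws IOException {
        for (String line : lines) {
            Files.writeString(file, line + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

    // Better: open once, buffer in memory, and flush to disk in larger chunks.
    static void writeBuffered(Path file, String[] lines) throws IOException {
        try (BufferedWriter writer = Files.newBufferedWriter(file)) {
            for (String line : lines) {
                writer.write(line);
                writer.newLine();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        String[] lines = {"alpha", "beta", "gamma"};
        writeOnePerCall(Path.of("slow.txt"), lines);
        writeBuffered(Path.of("fast.txt"), lines);
    }
}
```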

Use the most efficient algorithms

This means choosing algorithms that require the least time and resources for the task at hand. For example, if you need to sort a large array of data, an O(n log n) algorithm such as quicksort or merge sort will typically finish far sooner than an O(n²) algorithm such as bubble sort. (Quicksort's worst case is still O(n²), which is why production library sorts usually combine several algorithms, e.g. introsort or Timsort.) Picking the right algorithm usually matters far more than micro-tuning the code around it.
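As a rough illustration in Java (timings are indicative only and depend on your machine), here is a hand-rolled O(n²) bubble sort compared against the standard library sort:

```java
import java.util.Arrays;
import java.util.Random;

public class SortComparison {
    // O(n^2): fine for tiny inputs, painfully slow for large ones.
    static void bubbleSort(int[] data) {
        for (int i = 0; i < data.length - 1; i++) {
            for (int j = 0; j < data.length - 1 - i; j++) {
                if (data[j] > data[j + 1]) {
                    int tmp = data[j];
                    data[j] = data[j + 1];
                    data[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(50_000).toArray();
        int[] copy = data.clone();

        long start = System.nanoTime();
        bubbleSort(data);
        System.out.println("bubble sort:  " + (System.nanoTime() - start) / 1_000_000 + " ms");

        start = System.nanoTime();
        Arrays.sort(copy); // O(n log n) on average (dual-pivot quicksort for primitives)
        System.out.println("library sort: " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```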

If you're looking to optimize your code for performance, there are a few things you can do. First, write clean, well-structured code. Second, use a profiler to identify the parts of your code that are actually slow, and measure again after each change to confirm it helped. And finally, don't forget to revisit your algorithms and data structures. By following these simple steps, you can improve the performance of your code significantly!

Star our Github repo and join the discussion in our Discord channel!
Test your API for free now at BLST!

Latest comments (9)

Gabriel Linassi

Avoid libraries that add too much runtime overhead. I work on the FE, and I'm rebuilding an interface made with MUI + Emotion; the performance score went from ~70 to 100.

Edwin

How can you talk about how to optimize code without mentioning how to measure performance? Premature optimization can make code unnecessarily complex. How do you know if your optimizations had any impact?

cubiclesocial

"In general, you should use the smallest data type that can accurately represent the data you are working with. For example, if you are working with whole numbers that will never be larger than 255, you can use a byte data type instead of an int data type."

This isn't true. CPUs tend to be built to favor code/data alignment that matches the CPU bit size. So 64-bit CPUs tend to have instructions and pipelines that favor 64-bit integers and operations. And if a one byte int is put into a struct/class, the compiler will almost certainly pad to the alignment for the target CPU because compilers generally know how to optimize code better than most humans for performance (and some CPUs refuse to function with misaligned code/data - Intel-compatible is the main exception). The result is that, generally speaking, the same amount of space is used regardless of int size. At any rate, most people have no control over the size of their integers since they write code in high level, interpreted languages (Javascript, PHP, Python, etc).

Unless you are referring to files on disk. There I mostly agree that selecting proper sizes helps keep disk usage down and can improve overall read/write performance. But disk is pretty cheap per TB these days.

"Keep your code as simple as possible"

The simplest code is to just not write anything at all. Disconnect the Internet, defenestrate the modem, sell the computer, and call it a day. Problem solved.

Real world applications are naturally complex. The best approach is to code almost everything internally and not rely on more than a couple of third-party libraries for any given application. That way it is possible to quickly find where every performance bottleneck likely is and can directly fix any relevant issues.

"Quick sort is a more efficient algorithm because it has a lower time complexity, meaning it will take less time to sort the data."

This is simply not true. Quicksort has a worst case complexity of O(n^2). The correct solution is to use multiple sorting algorithms together to leverage their strengths and switch to another algorithm during the sort when encountering the current algorithm's weaknesses. No standalone sorting algorithm is without issues/tradeoffs. A high performance sorting algorithm will combine and balance the sorting operation between 3-4 separate sorting algorithms. Been there, done this.

I'm a little disappointed that this article didn't really address the topic it set out to address: "How to optimize your code for performance."

As someone who takes pride in at least attempting to write highly optimized code, I'll add my two cents: Memory allocations are one of the slowest operations in an OS. It is always many times slower to do a bunch of little memory allocations than to precalculate up front how much memory is needed to perform an operation, allocate one large chunk of memory in a single system call, and then carefully use the space to perform the same operation. The amount of code involved doubles, since you perform the precalculation step without allocating anything and then perform basically the same logic a second time after allocating the memory to do the real operation. The speedup is usually in the range of 100 to 10,000 times faster (e.g. 30 seconds down to 250ms) - the only difference being the number of memory allocations. To summarize, effectively executing double the code can be many, many times faster than just executing one copy of the code. Alternatively, sometimes a piece of code can get away with using the stack and avoid allocating anything from the heap at all. While the stack is ideal, its limited size reduces its usefulness.
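To make that concrete, here is a minimal Java sketch of the pre-sizing idea (string building stands in for the general "measure first, allocate once" pattern; in a managed language the speedup is smaller than with raw allocations, but the shape is the same):

```java
public class Preallocation {
    // Grows the buffer repeatedly: each overflow triggers a new, larger
    // allocation plus a copy of everything written so far.
    static String buildGrowing(String[] parts) {
        StringBuilder sb = new StringBuilder(); // default, small capacity
        for (String part : parts) {
            sb.append(part);
        }
        return sb.toString();
    }

    // Pass one: measure. Pass two: allocate exactly once, then fill.
    static String buildPresized(String[] parts) {
        int total = 0;
        for (String part : parts) {
            total += part.length();
        }
        StringBuilder sb = new StringBuilder(total); // single allocation
        for (String part : parts) {
            sb.append(part);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = new String[100_000];
        java.util.Arrays.fill(parts, "0123456789");
        System.out.println(buildGrowing(parts).length());  // 1000000
        System.out.println(buildPresized(parts).length()); // 1000000
    }
}
```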

Testing the extremes (really large arrays, really large files, etc.) usually reveals performance bottlenecks. The problem is then figuring out what is causing each bottleneck in the first place and then coming up with a solution. Profiling tools can help but can also be their own problem as they tend to slow things down even further with their instrumentation logic. And not every language has profiling tools available. Most major performance bottlenecks occur around just a few lines of code and can generally be solved with some clever workarounds.

My general approach to optimizing the performance of code is to keep things "flat." The more layers there are, the more expensive the function call will likely be. Layers (e.g. classes that depend on classes that depend on other classes) allow for powerful functionality in less code, but generally come at the cost of performance. Using the "flat" approach usually keeps me from getting into trouble in the first place. Sometimes layers are inevitable, and then it is a matter of being aware that they will probably have performance issues somewhere.

Aaron Reese

Although I generally agree with your article, code should be written for
1) readability
2) testability
3) maintainability
4) performance
In that order. Using obscure syntax (e.g. regex) to squeeze out the last bit of compute is an optimisation not worth taking, IMHO.

Eljay-Adobe

After profiling, and finding the performance bottlenecks, it may be warranted to sacrifice readability/testability/maintainability for performance.

Then, after butchering the code, profile again to ensure that the desired performance gains were actually achieved. (I worked with someone who worked the code over for the sake of performance, and then was surprised to find out he had pessimized the code by an order of magnitude. He only became aware of it because I profiled the before-and-after of his "optimization", which turned into a lesson about how powerful optimizing compilers are, and why his heavily tweaked code crippled the compiler's optimizer.)

Finally, extensively comment the portion of the code that was hand-optimized to explain what and why it is doing what it is doing. Also, consider keeping the original legible code around as a reference implementation.

Kacper Turowski

This really depends on the situation. In general, with today's computing power, performance isn't that much of a problem. But on high-demand systems, performance should be moved way higher, maybe to the top. Code should be written for

1) a set of priorities tailored to specific needs and conditions

In that order. Arguably, you could always say "just scale up", but why do that if you can just optimize the code at the cost of making it a bit less readable and having to document it properly? You might cut costs by a lot that way.

Video games are a great example of performance being put at the very end of the priorities list, Star Citizen being the prime example (yeah, it's in alpha, but it's a good example). That game constantly drops below 30 FPS on my Ryzen 3700X and RTX 3060 Ti.

I wouldn't say sacrificing performance for readability this way is a good way to go about programming.

Aaron Reese

Some cogent arguments. There are circumstances where performance is critical, such as gaming (though only in specific parts, such as rendering) or embedded software on low-power, limited-memory devices (such as collision detection systems). In general, though, the cost of development and support will dwarf the cost of compute (including operations wait time), so readability and supportability are more important than squeezing every last drop of performance from the system.
As for your last comment, I take exactly the opposite stance: I wouldn't sacrifice readability for performance, but much of the time readability and performance are co-dependent, and better-structured code will be more performant anyway.

Kacper Turowski

Yeah, I totally agree with you; it's only sometimes that you should dial down the readability. I'm not saying you have to write unreadable spaghetti 🙂 but sometimes it's better to plug in a well-documented block of gibberish (like you said, e.g. regex) if it will improve performance.

Another decent example is SQL code. I would call a 200-line query anything but readable, but executing that abomination db-side might save you a lot of bandwidth and processing time on the server side.

Aaron Reese

SQL is my bread and butter, and yes, processing close to the metal will always be faster, especially with the canonical N+1 problem, where you make 1 call for the customer and the list of orders and then N calls for the order details, all of which could have been done in a single pass at the database layer.
Badly formatted SQL is almost impossible to read and will likely run slowly. Refactoring the code so that sub-queries are dealt with at the top as CTEs makes it much easier to read, test, and maintain at no cost to performance, and will likely open up avenues for optimisation that you didn't see before.