So I read a Wikipedia page about program optimization.
> In computer science, resource consumption often follows a form of power law distribution, and the Pareto principle can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations. In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context).
What exactly does this mean?
Ignore the exact figures; they come under different names and ratios, such as 90/10, 80/20, and the Pareto principle. They are all essentially the same idea, and to me they are more like philosophical rules of thumb.
What these general rules say is that 10 (or 20) percent of what we're doing accounts for 90 (or 80) percent of the outcome. In my experience, they hold up reasonably well. The figures are just there to illustrate how often we overlook or underestimate the impact of seemingly small yet important things.
How can 90% of the execution time be spent only executing 10% of the code?
When we apply the rule to the context of software optimization, it just means that:
- Most of a program's execution time is spent in a small portion of the code
- Most of the performance issues usually live in a small portion of the code
(See how I omitted the figures there? The sketch below shows roughly what that concentration looks like.)
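To make that concrete, here is a minimal Python sketch (the function names are made up for illustration, not taken from any real codebase): almost all of the wall-clock time goes into one small function, while the rest of the code runs only once.

```python
import time

def hot_path(data):
    # The "10%": a tiny function, but it runs for every element.
    return sum(x * x for x in data)

def load_settings():
    # Part of the "90%": runs once at startup.
    return {"theme": "dark"}

def export_report(result):
    # Also part of the "90%": runs once at the end.
    return f"total = {result}"

def main():
    load_settings()                       # called once
    start = time.perf_counter()
    result = hot_path(range(10_000_000))  # dominates the total runtime
    hot_seconds = time.perf_counter() - start
    print(export_report(result))          # called once
    print(f"time spent in hot_path: {hot_seconds:.2f}s")

if __name__ == "__main__":
    main()
```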
I'm using Firefox right now. Most of the work it's doing at this moment is probably rendering text and images for the websites in my open tabs. I bet the rendering code is complex, but to put things in perspective, it is probably only a small portion of the whole Firefox codebase.
The general takeaway from these rules is that, when you're optimizing your code, you should identify the program's top 10 bottlenecks, e.g. by profiling it. In my experience, solving the first one or two bottlenecks already has a huge impact on overall performance. Many software engineers overlook this; instead, they try to optimize small things in the code that bring insignificant performance gains.
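For example, Python ships a profiler in its standard library (cProfile); here is a sketch of how you might use it to find the hot spots. The `render_page` and `save_bookmark` functions are invented stand-ins for this example, not real code from any browser.

```python
import cProfile
import pstats

def render_page():
    # Deliberately heavy: a naive loop standing in for the real bottleneck.
    return sum(i * i for i in range(5_000_000))

def save_bookmark():
    # Cheap and rarely called; it will barely show up in the profile.
    return "bookmark saved"

def main():
    save_bookmark()
    for _ in range(5):
        render_page()

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    main()
    profiler.disable()
    # Show the functions with the most cumulative time first; the top
    # entries are the bottlenecks worth looking at.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

The same workflow applies in any language: profile first, then spend your effort on whatever sits at the top of the report.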
To quote another Wikipedia page, about the Pareto principle:
> In computer science the Pareto principle can be applied to optimization efforts. For example, Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the related errors and crashes in a given system would be eliminated. Lowell Arthur expressed that "20 percent of the code has 80 percent of the errors. Find them, fix them!" It was also discovered that in general 80% of a certain piece of software can be written in 20% of the total allocated time. Conversely, the hardest 20% of the code takes 80% of the time. This factor is usually a part of COCOMO estimating for software coding.
What about the other 90% of the code then? How can it be executed in just 10% of the time?
The biggest portion of the code doesn't necessarily run quickly; much of it may simply be executed rarely or only occasionally.
Think about my Firefox again. The rendering code is probably doing the most work while I'm browsing, whereas I only occasionally use other features, such as bookmarks, history, synchronization, etc. That doesn't necessarily mean the bookmarking code runs quickly compared to the rendering code. It just means it accounts for a small amount of time (relative to my whole Firefox session) because I only use those features occasionally.
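A tiny sketch of that distinction, with made-up numbers and function names: each "bookmark" call below is far slower than each "frame", yet the bookmarking code still accounts for a small share of the total because it runs so rarely.

```python
import time

def save_bookmark():
    # Slow per call (pretend disk I/O), but called once in the whole session.
    time.sleep(0.1)

def render_frame():
    # Fast per call, but called constantly while browsing.
    sum(i * i for i in range(5_000))

start = time.perf_counter()
save_bookmark()                      # one bookmark in the whole "session"
bookmark_total = time.perf_counter() - start

start = time.perf_counter()
for _ in range(2_000):               # thousands of frames in the same session
    render_frame()
render_total = time.perf_counter() - start

print(f"save_bookmark total: {bookmark_total:.2f}s (slow per call, but rare)")
print(f"render_frame  total: {render_total:.2f}s (fast per call, but constant)")
```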