DEV Community

Adam Nathaniel Davis


Should Frontend Devs Care About Performance??

I was recently talking to an architect at Amazon and he made a very interesting comment to me. We were talking about the complexity of a given algorithm (discussed in Big-O notation), and before we even got too far into the explanation, he said:

I mean, it's not like we need to worry too much about this. After all, we're frontend devs!


I found this admission to be extremely refreshing, and it was entirely unexpected coming from someone in the Ivory Tower that is Amazon. It's something that I've always known. But it was still really nice to hear it coming from someone working for the likes of a FAANG company.

You see, performance is one of those subjects that programmers love to obsess about. They use it as a Badge of Honor. They see that you've used JavaScript's native .sort() method, and they turn up their noses and say something like, "Well, you know... That uses O(n log(n)) complexity." Then they walk away with a smug smirk on their face, as though they've banished your code to the dustbin of Failed Algorithms.



Smart Clients vs. Dumb Terminals

The terms "smart client" and "dumb terminal" have fallen somewhat by the wayside in recent decades. But they're still valid definitions, even in our modern computing environments.

Mainframe Computing

Way back in the Dark Ages, nearly all computing was done on massive computers (e.g., mainframes). And you interacted with those computers by using a "terminal". Those terminals were often called "dumb terminals" because the terminal itself had almost no computing power of its own. It only served as a way for you to send commands to the mainframe and then view whatever results were returned from... the mainframe. That's why it was called "dumb". Because the terminal itself couldn't really do much of anything on its own. It only served as a portal that gave you access to the mainframe.

Those who wrote mainframe code had to worry greatly about the efficiency of their algorithms, because even the mainframe had comparatively little computing power (by today's standards). More importantly, the mainframe's resources were shared by anyone with access to one of the dumb terminals. So if 100 people, sitting at 100 dumb terminals, all sent resource-intensive commands at the same time, it was pretty easy to crash the mainframe. (This is also why the allocation of terminals was very strict, and even those who had access to mainframe terminals often had to reserve time on them.)

PC Computing

With the PC explosion in the 80s, suddenly you had a lot of people with a lot of computing power (relatively speaking) sitting on their desktop. And most of the time, that computing power was underutilized. Thus spawned the age of "smart clients".

In a smart client model, every effort is made to allow the client to do its own computing. It only communicates back to the server when existing data must be retrieved from the source, or when new/updated data must be sent back to that source. This offloaded a great deal of work from the mainframe to the clients, and allowed for the creation of much more robust applications.

A Return To Mainframe Computing (Sorta...)

But when the web came around, it knocked many applications back into a server/terminal kinda relationship. That's because those apps appeared to be running in the browser, but the simple fact is that early browser technology was incapable of really doing much on its own. Early browsers were quite analogous to dumb terminals. They could see data that was sent from the server (in the form of HTML/CSS). But if they wanted to interact with that data in any meaningful way, they needed to constantly send their commands back to the server.

This also meant that early web developers needed to be hyper-vigilant about efficiency. Because even a seemingly-innocuous snippet of code could drag your server to its knees if your site suddenly went viral and that code was being run by hundreds (or thousands) of web surfers concurrently.

This could be somewhat alleviated by deploying more robust backend technologies. For example, you could deploy a web farm that shared the load of requests for a single site. Or you could write your code in a compiled language (like Java or C#), which helped (somewhat) because compiled code typically runs faster than interpreted code. But you were still bound by the limits that came from having all of your public users hitting a finite set of server/computing resources.



The Browser AS Smart Client

I'm not going to delve into the many arguments for-or-against Chrome. But one of its greatest contributions to web development is that it was one of the first browsers that was continually optimized specifically for JavaScript performance. When this optimization was combined with powerful new libraries and frameworks like jQuery (then Angular, then React, then...), it fostered the rise of the frontend developer.

This didn't just give us new capabilities for frontend functionality, it also meant that we could start thinking, again, in terms of the desktop (browser) being a smart client. In other words, we didn't necessarily have to stay up at night wondering if that one aberrant line of code was going to crash the server. At worst, it might crash someone's browser. (And don't get me wrong, writing code that crashes browsers is still a very bad thing to do. But it's farrrrr less likely to occur when the desktop/browser typically has all those unused CPU cycles just waiting to be harnessed.)

So when you're writing, say, The Next Great React App, how much, exactly, do you even need to care about performance?? After all, the bulk of your app will be running in someone's browser. And even if that browser is running on a mobile device, it probably has loads of unleveraged processing power available for you to use. So how much do you need to be concerned about the nitty-gritty details of your code's performance? IMHO, the answer is simple - yet nuanced.

Care... But Not That Much

Years ago, I was listening to a keynote address from the CEO of a public company. Public companies must always (understandably) have one eye trained on the stock market. During his talk, he posed the question: How much do I care about our company's stock price? And his answer was that he cared... but not that much. In other words, he was always aware of the stock price. And of course, he was cognizant of the things his company could do (or avoid doing) that would potentially influence their stock price. But he was adamant that he could not make every internal corporate decision based upon one simple factor - whether or not it would juice the stock price. He had to care about the stock price, because a tanking stock price can cause all sorts of problems for a public company. But if he allowed himself to focus, with tunnel vision, on that stock price, he could end up making decisions that bump the price by a few pennies - but end up hurting the company in the long run.

Frontend app development is very similar in my eyes. You should always be aware of your code's performance. You certainly don't want to write code that will make your app run noticeably badly. But you also don't want to spend half of every sprint trying to micro-optimize every minute detail of your code.

If this all sounds terribly abstract, I'll try to give you some guidance on when you need to care about application performance - and when you shouldn't allow it to bog down your development.



Developer Trials

The first thing you need to keep in mind is that your code will (hopefully) be reviewed by other devs. This happens when you submit new code, or even when someone comes by months later and looks at what you've written. And many devs LOVE to nitpick your code for performance.

You can't avoid these "trials". They happen all the time. The key is not to get sucked into theoretical debates about the benchmark performance of a for loop versus the Array.prototype method .forEach(). Instead, you should try, whenever possible, to steer the conversation back into the realm of reality.

Benchmarking Based Upon Reality

What do I mean by "reality"? Well, first of all, we now have many tools that allow us to benchmark our apps in the browser. So if someone can point out that I can shave a few seconds of load time off my app by making one-or-two minor changes, I'm all ears. But if their proposed optimization only "saves" me a few microseconds, I'm probably gonna ignore their suggestions.
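To illustrate what "benchmarking based upon reality" can look like in practice, here's a rough timing sketch using the standard performance.now() API (available in modern browsers and Node). The function name and the data are my own hypothetical examples; real browser profiling tools do far more (warm-up, JIT effects, statistical sampling), but this shows the order of magnitude being discussed:

```javascript
// Rough timing helper -- illustrative only, not a rigorous benchmark.
function timeIt(label, fn, iterations = 1000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    fn();
  }
  const elapsedMs = performance.now() - start;
  console.log(`${label}: ${elapsedMs.toFixed(2)}ms for ${iterations} runs`);
  return elapsedMs;
}

// Example: compare two ways of summing a (hypothetical) array.
const data = Array.from({ length: 10_000 }, (_, i) => i);

timeIt('for loop', () => {
  let sum = 0;
  for (let i = 0; i < data.length; i++) sum += data[i];
});

timeIt('reduce', () => data.reduce((sum, n) => sum + n, 0));
```

If the difference between the two comes out in microseconds rather than in anything a user could perceive, that's exactly the kind of "optimization" the paragraph above suggests ignoring.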

You should also be cognizant of the fact that a language's built-in functions will almost always outperform any custom code. So if someone claims they have a bit of custom code that does the same job as, say, Array.prototype.find() but faster, I'm immediately skeptical. But if they can show me how I can achieve the desired result without even using Array.prototype.find() at all, I'm happy to hear the suggestion.
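As a hypothetical sketch of that distinction (the users data is invented for illustration): rather than hand-rolling a "faster .find()", you can often avoid .find() entirely by restructuring repeated lookups around a Map:

```javascript
// Hypothetical data set for illustration.
const users = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Grace' },
  { id: 3, name: 'Alan' },
];

// Using the built-in: O(n) per lookup.
const findUser = (id) => users.find((u) => u.id === id);

// Avoiding .find() entirely: one O(n) build, then O(1) per lookup.
const usersById = new Map(users.map((u) => [u.id, u]));
const getUser = (id) => usersById.get(id);
```

For a handful of lookups the difference is irrelevant; for thousands of lookups per render, the Map version is the kind of restructuring worth hearing about.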

Your Code's Runtime Environment

"Reality" is also driven by one simple question: Where does the code RUN??? If the code-in-question runs in, say, Node (meaning that it runs on the server), performance tweaks take on a heightened sense of urgency, because that code is shared and is being hit by everyone who uses the app. But if the code runs in the browser, you're not a crappy dev just because the tweak is not forefront in your mind.

Sometimes, the code we're examining isn't even running in an app at all. This happens whenever we decide to do purely academic exercises that are meant to gauge our overall awareness of performance metrics. Code like this may be running in a JSPerf panel, or in a demo app written on StackBlitz. In those scenarios, people are much more likely to be focused on the fine details of performance, simply because that's the whole point of the exercise. As you might imagine, these types of discussions tend to crop up most frequently during... job interviews. So it's dangerous to be downright flippant about performance when the audience really cares about almost nothing but the performance.

The "Weight" Of Data Types

"Reality" should also encompass a thorough understanding of what types of data that you're manipulating. For example, if you need to do a wholesale transformation on an array, it's perfectly acceptable to ask yourself: How BIG can this array reasonably become? Or... What TYPES of data can the array typically hold?

If you have an array that only holds integers, and we know that the array will never hold more than, say, a dozen values, then I really don't care much about the exact method(s) you've chosen to transform that data. You can use .reduce() nested inside a .find(), nested inside a .sort(), which is ultimately returned from a .map(). And you know what?? That code will run just fine, in any environment where you choose to run it. But if your array could hold any type of data (e.g., objects that contain nested arrays, that contain more objects, that contain functions), and if that data could conceivably be of nearly any size, then you need to think much more carefully about the deeply-nested logic you're using to transform it.
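For example, a deliberately "wasteful" chain over a tiny array (the values below are invented) is harmless in practice, no matter what a Big-O purist might say about the extra passes:

```javascript
// A small, bounded array of integers -- hypothetical scores.
const scores = [42, 7, 19, 88, 3, 55];

const topEvenDoubled = scores
  .filter((n) => n % 2 === 0) // one full pass
  .sort((a, b) => b - a)      // O(n log n) -- on a handful of items, trivial
  .map((n) => n * 2);         // another full pass

// topEvenDoubled -> [176, 84]
```

Three passes over six values will never show up in any profiler. The same chain over an unbounded array of deeply nested objects is a different conversation.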



Big-O Notation

One particular sore point (for me) about performance is with Big-O Notation. If you earned a computer science degree, you probably had to become very familiar with Big-O. If you're self-taught (like me), you probably find it to be... onerous. Because it's abstract and it typically provides no value in your day-to-day coding tasks. But if you're trying to get through coding interviews with Big Tech companies, it'll probably come up at some point. So what do you do?

Well, if you're intent upon impressing those interviewers who are obsessed with Big-O Notation, then you may have little choice but to hunker down and force yourself to learn it. But there are some shortcuts you can take to simply make yourself familiar with the concepts.

First, understand the dead-simple basics:

  1. O(1) is constant time - the most immediate time complexity you can have. If you simply set a variable, and then at some later point access the value in that same variable, this is O(1). It basically means that you have immediate access to the value stored in memory, regardless of how much other data exists.

  2. O(n) is a loop, where n represents the number of items the loop must traverse. So if you're just creating a single loop over your data, you are writing something of O(n) complexity. And if you have a loop nested inside another loop, and both loops depend on the same input size, your algorithm will typically be O(n²).

  3. Most of the "built-in" sorting mechanisms we use are of O(n log(n)) complexity. There are many different ways to do sorts. But typically, when you're using a language's "native" sort functions, you're employing O(n log(n)) complexity.

You can go deeeeeep down a rabbit hole trying to master all of the "edge cases" in Big-O Notation. But if you understand these dead-simple concepts, you're already on your way to at least being able to hold your own in a Big-O conversation.
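As a minimal sketch of the first two points (with made-up data), here is O(1) access next to an O(n) pass:

```javascript
// O(1): direct property access -- same cost no matter how big the object is.
const cache = { userCount: 1042 };
const count = cache.userCount;

// O(n): one pass over the data -- cost grows linearly with the array.
const flags = [true, false, true, true];
let trueCount = 0;
for (const f of flags) {
  if (f) trueCount += 1; // runs once per element
}
// trueCount -> 3
```

That's really all the vocabulary you need for most conversations: "this is a lookup" versus "this is a pass over the data" versus "this is a pass inside a pass".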

Second, you don't necessarily need to "know" Big-O Notation in order to understand the concepts. That's because Big-O is basically a shorthand way of explaining "how many hoops will my code need to jump through before it can finish its calculation."

For example:

```javascript
const myBigHairyArray = [...thousandsUponThousandsOfValues];
const newArray = myBigHairyArray.map(item => {
  // transformation logic here
});
```

This kinda logic is rarely problematic. Because even if myBigHairyArray is incredibly large, you're only looping through the values once. And modern browsers can loop through an array - even a large array - very fast.

But you should immediately start thinking about your approach if you're tempted to write something like this:

```javascript
const myBigHairyArray = [...thousandsUponThousandsOfValues];
const newArray = myBigHairyArray.map(outerItem => {
  return myBigHairyArray.map(innerItem => {
    // do inner transformation logic
    // comparing outerItem to innerItem
  });
});
```

This is a nested loop. And to be clear, sometimes nested loops are absolutely necessary, but your time complexity grows quadratically when you choose this approach. In the example above, if myBigHairyArray contains "only" 1,000 values, the logic will need to iterate through them one million times (1,000 x 1,000).

Generally speaking, even if you haven't the faintest clue about even the simplest aspects of Big-O Notation, you should always strive to avoid nesting anything. Sure, sometimes it can't be avoided. But you should always be thinking very carefully about whether there's any way to avoid it.
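One common way to avoid a nested loop - offered here as an illustrative sketch, not a universal fix - is to trade the inner loop for a lookup structure like a Set. For example, finding the values common to two arrays:

```javascript
// Nested-loop version: O(n * m).
// (.includes() is itself a loop, so this is a pass inside a pass.)
function intersectNested(a, b) {
  return a.filter((x) => b.includes(x));
}

// Set-based version: O(n + m).
// One pass builds the Set; each .has() lookup is then O(1).
function intersectWithSet(a, b) {
  const bSet = new Set(b);
  return a.filter((x) => bSet.has(x));
}

// Hypothetical data for illustration.
const left = [1, 2, 3, 4, 5];
const right = [4, 5, 6, 7];
// both functions return [4, 5]
```

On ten values either version is fine. On a hundred thousand values, the Set version is the difference between milliseconds and a frozen tab.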

Hidden Loops

You should also be aware of the "gotchas" that can arise when using native functions. Yes, native functions are generally a "good" thing. But when you use a native function, it can be easy to forget that many of those functions are doing their magic with loops under the covers.

For example: imagine in the examples above that you are then utilizing .reduce(). There's nothing inherently "wrong" with using .reduce(). But .reduce() is also a loop. So if your code only appears to use one top-level loop, but you have a .reduce() happening inside every iteration of that loop, you are, in fact, writing logic with a nested loop.
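Here's a hypothetical sketch of that trap: the code below appears to contain a single .map(), but the .reduce() inside it re-runs on every iteration, making the whole thing O(n²):

```javascript
// Hypothetical data for illustration.
const prices = [10, 20, 30, 40];

// Hidden nested loop: the .reduce() recomputes the same total
// on every iteration of .map().
const shares = prices.map(
  (p) => p / prices.reduce((sum, n) => sum + n, 0)
);

// Fix: hoist the .reduce() out so it runs once -- O(n) total.
const total = prices.reduce((sum, n) => sum + n, 0);
const sharesFast = prices.map((p) => p / total);

// both -> [0.1, 0.2, 0.3, 0.4]
```

The fix costs nothing in readability, which is why hoisting invariant work out of a loop is one of the few micro-optimizations that's almost always worth doing.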



Readability / Maintainability

The problem with performance discussions is that they often focus on micro-optimization at the expense of readability / maintainability. And I'm a firm believer that maintainability almost always trumps performance.

I was working for a large health insurance provider in town and I wrote a function that had to do some complex transformations of large data sets. When I finished the first pass of the code, it worked. But it was rather... obtuse. So before committing the code, I refactored it so that, during the interim steps, I was saving the data set into different temp variables. The purpose of this approach was to illustrate, to anyone reading the code, what had happened to the data at that point. In other words, I was writing self-documenting code. By assigning self-explanatory names to each of the temp variables, I was making it painfully clear to all future coders exactly what was happening after each step.
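A tiny sketch of that approach (the claims data and variable names here are invented, not the actual code from that job): each interim variable names what the data represents at that step.

```javascript
// Hypothetical input data.
const rawClaims = [
  { id: 1, status: 'OPEN', amount: 100 },
  { id: 2, status: 'CLOSED', amount: 250 },
  { id: 3, status: 'OPEN', amount: 75 },
];

// Each temp variable documents the state of the data at that point.
const openClaims = rawClaims.filter((c) => c.status === 'OPEN');
const openClaimAmounts = openClaims.map((c) => c.amount);
const totalOpenLiability = openClaimAmounts.reduce((sum, amt) => sum + amt, 0);
// totalOpenLiability -> 175
```

The "optimized" alternative collapses all three steps into one anonymous chain. It allocates two fewer references, and it costs every future maintainer a re-read to figure out what the intermediate shapes were.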

When I submitted the pull request, the dev manager (who, BTW, was a complete idiot) told me to yank out all the temp variables. His "logic" was that those temp variables each represented an unnecessary allocation of memory. And you know what?? He wasn't "wrong". But his approach was ignorant. Because the temp variables were going to make absolutely no discernible difference to the user, but they were going to make future maintenance on that code sooooo much easier. You may have already guessed that I didn't stick around that gig for too long.

If your micro-optimization actually makes the code more difficult for other coders to understand, it's almost always a poor choice.



What To Do?

I can confidently tell you that performance is something that you should be thinking about. Almost constantly. Even on frontend apps. But you also need to be realistic about the fact that your code is almost always running in an environment where there are tons of unused resources. You should also remember that the most "efficient" algorithm isn't always the "best" algorithm, especially if it looks like gobbledygook to all future coders.

Thinking about code performance is a valuable exercise. One that any serious programmer should probably have, almost always, in the back of their mind. It's incredibly healthy to continually challenge yourself (and others) about the relative performance of code. In doing so, you can vastly improve your own skills. But performance alone should never be the end-all/be-all of your work. And this is especially true if you're a "frontend developer".

Discussion (50)

peerreynders • Edited on

TL;DR: Often it's less about being performance conscious and more about being explicit about what tradeoffs are being made: for whose benefit and to whose detriment.

The article largely focuses on code produced by the frontend developer but the third party code selected for use on the client side (and thus affecting the client side architecture) imposes overhead even before a single line of code is written (The Cost of Javascript Frameworks, Benchmarking JavaScript Memory Usage).

So perhaps "caring about performance" should be practised by honestly understanding the impact our tools have on end user performance.

These days React is pretty much a bandwagon choice; reportedly popular DX, large ecosystem, ready supply of developers - but is the (performance) cost of adoption fully understood? If React Native isn't needed perhaps Preact is "good enough" (Etsy). And if it's mostly about JSX maybe Solid is an option?

Similarly Next.js is popular right now but are the end user performance tradeoffs well understood by those who develop with it? There is room for improvement which is why Remix exists. Astro right now supports multiple frameworks making it possible to gradually migrate towards more lightweight solutions once Astro becomes SSR capable (currently just in the SSG phase). Meanwhile Qwik aims to accomplish things that are impossible with the mainstream frameworks.

it was entirely unexpected coming from someone in the Ivory Tower that is Amazon.

Amazon is a large company with numerous teams.

In A/B tests, we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.

Marissa Mayer at Web 2.0 (2006)

So given their business volume a 1% difference can establish a tolerance for a lot of effort, expense, and "a certain lack of maintainability" in the right place.

And even if that browser is running on a mobile device, it probably has loads of unleveraged processing power available for you to use.

That's largely a desktop web perspective that doesn't transfer well to the (mass) mobile web.

It seems everybody is adopting a stance that serves their particular needs best - example: "on a mobile device this can take seconds".

So the truth is likely somewhere in between and "good enough" is highly context sensitive.

But if the code runs in the browser, you're not a crappy dev just because the tweak is not forefront in your mind.

That comes across as "if it doesn't happen in my backyard, I don't care".


Frontend Devs should care about web performance; JavaScript micro-optimizations play only a minor role in that (unless we're dealing with the implementation of frameworks/libraries).

Adam Nathaniel Davis Author

I pretty much agree with everything you've written. But I may not have made it clear when I said you should "Care... but not too much" that what you should care about are discernible differences in performance. I totally agree that even a 100 millisecond "delay" may be enough to negatively affect conversions. What I'm railing against are those who are fretting over a nested loop, when the array being looped over can only ever hold, say, 10 values. In scenarios like those, fretting about "performance" is rather silly.

peerreynders

But I may not have made it clear when I said you should "Care... but not too much" that what you should care about are discernible differences in performance.

"Care... but not too much" would resonate strongly with the crowd that likes to invoke the "premature optimization" clause to shut down any discussion relating to any kind of performance - typically to justify or even promote "performance ignorance" because "that's the responsibility of the framework/libraries that we're using - so we don't have to care". So it's kind of "in vogue" to downplay performance.

My sense was that you were singling out "pointless JavaScript micro-optimizations" but there was never a counterpoint "what aspects of performance should a front end developer care about?"

when the array being looped over can only ever hold, say, 10 values.

Understood but there has to be the conscious decision "it's OK for 10 values, for 100_000_000 I'd have to do better", i.e. there should be knowledge of potential performance consequences should the code find itself on the hot path.

"… but the takeaway I want you to get is that more so than in other systems, you need to measure measure measure measure, and make sure your measurements are as near as possible to the real thing you're trying to build."

That said most code isn't on the hot path but it's easy for people to fixate on JavaScript micro-optimizations because those are relatively easy to spot in code - whether or not they are actually relevant. By extension the real performance issues are: knowing how to measure whether code is performant enough, knowing how to find the code that needs improvement, identifying early decisions that limit performance, and exploiting opportunities that aren't directly related to JavaScript.

The Three Unattractive Pillars of Web Dev: accessibility, security and performance;

  • "They’re only a problem when they’re missing."
  • "Try and retrofit any of them to your project and you’re going to have a bad time."

Even in React there is a fair amount of judgement involved when deciding to use features like React.memo, useMemo or to "just let things go".

A front end development performance mindset isn't about micro-optimizing every piece of JavaScript but caring about end user performance from the beginning of the first request up to the point when the browser page tab finishes closing.


Henry Petroski:

The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.

cubiclesocial

If you run Javascript anywhere, then you already don't care about system performance. Neither your own nor anyone else's.

You probably care more about whether or not the code runs the same in all major web browsers on all OSes. And if you use NodeJS, then you probably care that there is one language that you can use everywhere: You've got a hammer and everything looks like a nail.

If you want to measure performance, then you need to measure clock cycles. A clock cycle is the amount of time it takes to execute a common instruction on the CPU. Most modern CPUs are clocked at around 3-4GHz, or roughly 3-4 billion cycles per second. Clock cycle information is not available to Javascript nor any current web browser tools.

Measuring how much wall clock time an instruction takes to execute in a loop in Javascript is not actually all that helpful because many CPUs have pipelining and predictive branching, allowing them to intelligently determine what the next instruction is likely to be and precalculate the result. If the next instruction is actually what was predicted, then it has already obtained the answer and can skip ahead (if not, the pipeline will probably stall). So doing something in a loop is measuring how long a loop is going to take. It might give you a rough idea of any given instruction, but clock cycles are a more definitive and accurate measurement. Without line-level clock cycle counts, you'll have a very difficult time measuring performance in Javascript.

You should write some C or C++ code sometime. You'll suddenly see Javascript as the very sluggish, bloated, extremely abstracted away from the metal language that it actually is. Of course, C++ devs also tend to abstract away from the metal. Javascript and DOM are useful for abstracting and normalizing the GUI but it's not fast by any stretch of the imagination. Nor will it ever be.

peerreynders • Edited on

If you run Javascript anywhere, then you already don't care about system performance. Neither your own nor anyone else's.

That attitude simply ignores the realities on the web. The browser already has a runtime for JavaScript so you don't have to ship one.

WebAssembly for Web Developers (Google I/O ’19):

Both JavaScript and WebAssembly have the same peak performance. They are equally fast. But it is much easier to stay on the fast path with WebAssembly than it is with JavaScript. Or the other way around. It is way too easy sometimes to unknowingly and unintentionally end up in a slow path in your JavaScript engine than it is in the WebAssembly engine.

Also Replacing a hot path in your app's JavaScript with WebAssembly.

Using the language du jour on the browser will typically require the download of a massive runtime unless something like C/C++/Rust is used and those tend to inflate development time. So using WebAssembly has to be seen as an optimization once things stabilize.

In this case performance is about using the available resources to the best effect - JavaScript on the browser is (for some time to come only) part of the whole picture.

Adam Nathaniel Davis Author

This is a great point. And I wouldn't disagree with you on any level. I will only point out what may not have been clear in my original post: When you're writing JavaScript for the browser, the preeminent measure of "performance" is time. Now of course, that can vary wildly on a machine-by-machine (or browser-by-browser) basis. But the generic end-user's perception of time is what typically dictates whether my code is seen as "performant".

Of course JavaScript is "sluggish". In fact, all interpreted languages are. Because they are, as you've pointed out, "farther from the metal". But when I'm writing web-based apps, in JavaScript, the "metric" by which my code is typically judged is: Does the end-user actually perceive any type of delay? If the page/app seems to load/function in a near-instant fashion, I'm not going to waste time arguing with someone over the CPU benchmark performance of one function versus another.

But again, I totally agree with your points here.

Trent Haynes

I find it interesting that you did not mention what is probably the biggest predictor of performance in the browser - the size of the download. In general, the less code you send to the browser, the better.

Adam Nathaniel Davis Author

With regard to initial page load time, yes. After the initial page load, the size of the download has almost nothing to do with performance.

Trent Haynes

That's mostly true when your audience has relatively recent hardware and a good connection to the internet. Something about 4 billion people don't have.

Adam Nathaniel Davis Author • Edited on

No. I'm sorry. But it doesn't matter whether you have gigabit fiber or a 56k dial-up modem. Once the code has been downloaded, the amount of code makes no difference to performance. I'm not saying - in any way - that you shouldn't care at all about bundle size. But if you're implying that more code leads to lower performance once the package has been downloaded, then that's simply not accurate.

Trent Haynes • Edited on

I'm referring to the fact that a lot of people run on old hardware and/or out of date browsers and more code does affect performance for them.

Adam Nathaniel Davis Author

I guess you're referring to the performance of the code in memory. Because more code can take up more space in RAM. But even on a relatively-ancient system, the "performance" hit needed to process 10,000 lines of JS code versus 1,000 lines of JS code is extremely minimal. If you think that you can improve the runtime performance of your code, on anyone's system, merely by writing fewer lines of code, then your target audience probably can't effectively run ANY React / Angular / jQuery / whatever app.

Trent Haynes

It sounds like you've never encountered an app that will not run well on your old system, but runs fine on your new system.

Adam Nathaniel Davis Author

When an app runs poorly on your old system, but it runs fine on your new system, it's not based on the number of lines of code.

Trent Haynes

I didn't mention lines of code. There is a correlation between the size of the app and the complexity of its function and the demand it places on its running environment.

The size isn't the actual cause (usually). It's just indicative of the likelihood that the app will be more demanding of its execution environment.

Adam Nathaniel Davis Author

I'm sorry, but this is a bit disingenuous. You say that you didn't mention lines of code. But your initial comment was about the size of the download. What do you think makes the download large???

Trent Haynes • Edited on

I'm sorry, but your original use of the phrase was disingenuous. It comes across as an attempt to belittle the point. Lines of code is purely a function of formatting (unless you can point me to an accepted standard of how to measure it).

I get that you don't think the number of bytes you send to the browser matters. You've made that perfectly clear. I understand that point of view. The company I work for takes the exact same stance. It's still the wrong stance. Size is not usually the actual cause, but it is certainly a reasonable proxy for judging potential performance requirements. And that is exactly what I pointed out.

The more code you send to the client, the more potential for execution errors, logic errors, or errors indirectly related to the code itself. More code usually means more complexity, which is another vector for more demand placed on the client system.

The code you didn't have to write will never cause a problem. I'm a firm believer that the best code is no code. If you've never heard the phrase before, you might look it up. The idea has been around for quite a while.

Comment deleted
Trent Haynes

Wow. Your sarcasm skills are epic. I hope you can teach me as well.

Adam Nathaniel Davis Author

I could. But you'd have to download the instructions. And I'm sure that your bandwidth/device couldn't handle the bundle size.

Trent Haynes

Now that you've given up refuting my point, you're going to stick to ad hominem attacks instead. I'll keep that in mind.

Adam Nathaniel Davis Author

It sounds really impressive to use Latin words like "ad hominem" - until you use them in a way that doesn't make any sense in the current context.

Trent Haynes

Definition of ad hominem (Entry 1 of 2)
1: appealing to feelings or prejudices rather than intellect

It's appropriate.

Jay Jeckel

Interesting article and a lot of good points, but I disagree greatly with one aspect:

"But you also need to be realistic about the fact that your code is almost always running in an environment where there are tons of unused resources."

My "unused resources" aren't an excuse for web devs to write less performant and efficient code. You should be no less concerned about using my client resources that cost me money than you are concerned about using your server resources that cost you money.

Nitzan Hen

Great article!

I feel that generally, many developers approach web development with the mindset of developing algorithms, and it's critical to understand that coding in different environments and/or for different purposes means that your top priorities as a developer should also be different.
It's similar, in a sense, to different types of writing - when writing a technical document, for example, you put your focus on completely different qualities than when writing a novel or a poem, even though they're both essentially writing!

As you've said, and it can't be stressed enough - in the case of web development the big-O efficiency of your code is usually a secondary priority. It's important to keep an eye out for it, but unless we're talking about really bad code, it typically makes no noticeable difference. Code brevity, maintainability and other similar qualities have a far greater impact on your product.

However, there is a nuance I'd like to shed light on - big-O time (and memory) efficiency are the two most popular aspects of efficiency, but they're by no means the only ones. We web developers can afford to pay less attention to those, but other types of inefficiency can make a huge difference: concurrency & async operations, for example, are cardinal to virtually any modern app, and bad performance in that aspect could lead to terrible results. A similar point goes for network operations, bundle sizes, and more.
Once again, in most cases writing clear and maintainable code is a top priority, and can be achieved without sacrificing any of those, but it's important to keep in mind that inefficiency in those aspects of your logic could significantly harm the overall result.

And again - great article, well done!

Adam Nathaniel Davis Author

TOTALLY agree. One of my biggest pet peeves is when someone stresses over tiny details of algorithmic "performance", but when you open the inspector, you can see that their app is making three identical GET calls to the exact same endpoint to retrieve the exact same data.
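To make that pet peeve concrete, here's a minimal sketch of collapsing identical concurrent GET requests into a single promise. The `dedupedFetch` helper and its injectable `fetchFn` parameter are my own invention for illustration, not from any particular library:

```javascript
// Collapse identical concurrent GET requests into one network call.
// (`dedupedFetch` and the injectable `fetchFn` are invented for illustration.)
const inFlight = new Map();

function dedupedFetch(url, fetchFn = fetch) {
  if (inFlight.has(url)) return inFlight.get(url); // reuse the pending promise
  const promise = Promise.resolve(fetchFn(url))
    .finally(() => inFlight.delete(url)); // allow a fresh request later
  inFlight.set(url, promise);
  return promise;
}
```

With this in place, three components asking for the same endpoint at the same time produce one request instead of three.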

Nitzan Hen

Exactly πŸ˜‚

ecyrbe • Edited on

Hi Adam

Nice article again. I'll summarize for the lazy:

  • Do not optimize early, or at all, if there's no issue
  • Focus on maintainability over optimisation

I'll add that if you start having front-end time issues, you should measure, or add tooling to measure easily (automate Lighthouse reporting, activate flamegraphs), and optimize only the problematic parts of your reports.

Nowadays, the biggest perf issues I face are not related to algorithms, but to a front-end monolith so big that webpack can take something like 20 minutes to package all the bundles (I'm working on a really big app). Vite is not an option, as we have too much legacy code that Vite can't even compile. Optimizing this kind of front-end issue is much harder.
So nowadays I'm doing micro front-ends to slim the monster down. Module Federation is a really nice piece of technology.
I wrote a small article about it yesterday if you are interested.
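For readers who haven't seen Module Federation, a webpack 5 setup can be sketched roughly like this. The app name, remote URL, and shared packages below are hypothetical placeholders, not from the article:

```javascript
// webpack.config.js - a minimal Module Federation sketch for webpack 5.
// The "shell" name, remote URL, and shared packages are hypothetical.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'shell', // this build's name in the federation
      remotes: {
        // consume a separately-deployed micro front-end at runtime
        header: 'header@https://cdn.example.com/header/remoteEntry.js',
      },
      shared: {
        react: { singleton: true },
        'react-dom': { singleton: true },
      },
    }),
  ],
};
```

Each remote builds and deploys independently, which is exactly what shrinks the monolithic bundling step described above.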

Adam Nathaniel Davis Author

Agreed. And module federation is indeed a wonderful feature.

Alex Lohr

Premature optimization is the root of all evil, they say. I think not caring about performance means we're more interested in pushing MVPs onto the customer than in actually solving problems.

One of those problems is that a lack of performance will needlessly burn CPU cycles and waste energy, while also ensuring that whatever system it runs on needs to be replaced sooner.

So keep in the back of your mind that you don't want to kill the planet with bad front end performance. Thanks for coding considerately.

Andrea Giammarchi • Edited on

Imho, in every part of the stack you need to care about performance when performance is your bottleneck. Yet knowing better algorithms, or better libraries or solutions (assuming similar DX) to obtain the same result, is a plus that removes the idea that "performance is not great" from the equation and reduces the long-term need for refactoring and/or maintenance.

In a few words: if it takes the same time to implement the same solution, but because you care about performance it's faster by design, you'll be a better developer in the long term than one who "didn't care about performance because FE, yolo!"

There are also a lot of people who mistake FE for business logic, PWA or SPA or MTA needs in terms of architecture, and so on and so forth... Saying FE shouldn't care about performance is as short-sighted as one can be in this industry, if by FE you consider how much responsibility JS has these days to make literally anything work on the Web.

Mike Talbot

A few thoughts on performance: one critical performance indicator these days is the amount of battery that functionality uses, and while it's often hard to determine this for a website, hybrid apps and heavily used web apps that burn through a user's battery have a directly negative impact on that user's day. Not that this is an argument for micro-optimisation, but I suggest it should be a consideration around critical functionality.

Imagine a web app that has some type-ahead functionality: too-frequent use of a device's radio to contact the server for suggestions will have a negative impact if this is a commonly used function. Poorly written search functionality in the browser could make the search experience poor and burn battery. Over-eager caching of entire data sets to allow client-side searching could negatively impact both energy usage and startup performance. This simple example shows that we should give proper consideration to the user's objectives and to the architecture of solutions where there is some chance the solution will be a core part of the user's journey.
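One common answer to the type-ahead problem is debouncing: the request only fires after the user pauses typing, so the radio isn't hit on every keystroke. This is a hand-rolled sketch; the helper name, the endpoint in the comment, and the 250ms delay are arbitrary choices of mine:

```javascript
// A hand-rolled debounce: the wrapped function only fires after the caller
// has been quiet for `delayMs`, sparing the device radio and the server.
// (Helper name, endpoint, and delay are illustrative assumptions.)
function debounce(fn, delayMs) {
  let timer;
  return (...args) => {
    clearTimeout(timer);                            // cancel the pending call
    timer = setTimeout(() => fn(...args), delayMs); // reschedule the latest one
  };
}

// Hypothetical usage for a type-ahead field:
// const suggest = debounce(q => fetch(`/suggest?q=${q}`), 250);
```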

The data structures we use frequently dictate performance too: choosing when to trade memory for computation (e.g. building O(1) lookup tables), or utilising our own or 3rd-party APIs to request data in the right shape to reduce data transfer, round trips, or client-side processing, is also worth considering at the solution-architecture stage.
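As a sketch of that memory-for-computation trade (the sample data and names are invented): build an O(1) index once rather than paying `Array.prototype.find`'s O(n) scan on every lookup:

```javascript
// Trading memory for computation: one O(n) pass builds a Map, then every
// lookup is O(1) instead of a linear scan. (Sample data is hypothetical.)
const users = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Grace' },
];

const byId = new Map(users.map(u => [u.id, u])); // O(n), done once

function getUser(id) {
  return byId.get(id); // O(1) per lookup thereafter
}
```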

I am totally with you on the pragmatism side, I'd use find over a fancy lookup table for arrays expected to be small too, because there is another cost here, the cost to our business or employer in terms of the amount of time it takes to build and deliver solutions to our customers. This is another practical optimisation, because if we run out of money before the solution is released (perhaps due to one of those 3 month long Linting wars?) we have also failed at our task!

A great article, so good to be back reading your thoughts and the debate that they produce after the hiatus.

David G. Durand

Just a note. You probably know this, but it's still worth being a little precise about terms. In particular you used the term "exponentially" to mean "grows too fast," but exponential time is unbelievably worse than what you showed -- nesting can be bad, but that's actually just Quadratic time: O(n^2).

I hate the thought of people making others feel bad about this kind of knowledge. It's useful, and not that hard to understand if you don't go into every detail of the mathematics. And if someone twits you for using an O(n log n) algorithm, be happy -- they don't even understand it themselves.

In my quick cheat sheet below, O(n^2) is the place where all developers should be cautious, because it's easy to end up doing something quadratic by accident.

I find it easiest to think about it in terms of what happens when n changes:

  • O(1) doesn't change as the problem grows. Never worry about this, whatever your problem size.
  • O(log n) is not quite that good, but still pretty great: doubling the size of the problem adds one more unit of time to the work. Never worry about this.
  • For linear time, O(n), doubling the size of the problem doubles the size of the work. You only have to worry about this for very large problems.
  • For O(n log n), there's no neat mnemonic special case, but you don't need to worry about O(n log n) algorithms either. If the problem doubles in size, you will take a bit more than twice the time you took before.
  • For quadratic time, O(n^2), if you double the size of a problem, you have to do 4 times the work. For cubic, O(n^3), you do 8 times the work, etc. This is the most common way for complexity errors to cause intolerable performance. Nested loops may be fine as long as each limit is independent and some of them have small or fixed bounds (like looping over 3 coordinates in graphics). Even worse, the nested loop may be invisible. It's worth being careful any time you loop over a function call that builds a data structure: if the "add an item" function takes time linear in the size of the stored data, the whole thing is a quadratic loop, and will go bad fast as the problem grows.
  • For exponential time, adding 1 to the size of the problem doubles the size of the work. (Of course, exponentials like 3^n or 10^n are all the same to theorists, but practically it makes a big difference whether the n + 1 case is double or 10 times the effort.) In any case, you're now in the realm of problems where answers for problem sizes in the double or triple digits may be effectively unknowable.
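To make the "hidden loop" trap concrete, here's a sketch (function names are mine, not from the comment) of an accidental O(n^2) dedupe next to its O(n) fix:

```javascript
// An accidental O(n^2): Array.prototype.includes re-scans `out` on every
// iteration, which is the invisible inner loop. (Function names invented.)
function dedupeQuadratic(items) {
  const out = [];
  for (const item of items) {
    if (!out.includes(item)) out.push(item); // includes() is O(n) each time
  }
  return out;
}

// Same result in O(n): a Set gives constant-time membership checks.
function dedupeLinear(items) {
  return [...new Set(items)];
}
```

Both return the same result; only the growth rate differs as the input gets large.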
Adam Nathaniel Davis Author

First, thank you for the wonderful explanation.

Second, you're being gracious in implying that I already know this. I make no secret of the fact that Big-O is primarily taught in universities - and I'm a self-taught programmer. In fact, I've only recently begun really paying any attention to it at all.

Finally, I have to say that you may have taught me something. (And thank you for that!) I understand that if you have a nested loop, where the inner loop is dependent upon a variable separate from the outer loop, this is quadratic time. But I honestly thought that if both loops were dependent upon the same variable, this could reliably be referred to as exponential time. After all, if you have a nested loop, and both loops are looping through the same array, you are looping through it a number of times equal to the square of the length of the array. If you were to nest a third loop, where you're once again going through the same array, you would loop through everything a number of times equal to the cube of the length of the array. Of course, "squares" and "cubes" are... exponentials.

But this isn't meant to argue with you in any way. I'll need to read up on it more myself!

David G. Durand

As to the language, you are right that in n^2 and n^3, the numbers 2 and 3 are exponents. Your interpretation makes perfect sense, but doesn't match the way the words are used. Those functions are called polynomial functions because in a polynomial, all the exponents are constants (however big they may happen to be). The term exponential is reserved for when the exponent is the variable, which is a much faster-growing kettle of fish.

The factorial function n!, which counts how many ways you can arrange n objects in a row, grows even faster than any exponential:
There's one way to order 1 object: pick it, and you're done. For two, you get two ways to pick the first one, and then there's only one way to pick the remaining one. For 3 objects, there are 3 ways to pick the first, multiplied by two ways to pick the remaining two, and so on.

Here's the first 10 values:

1 2 6 24 120 720 5040 40320 362880 3628800

This terminology issue has been bugging me for two years, because of the pandemic. It makes me crazy to know that most people, and many of our leaders, have no idea of what an epidemiologist means when he talks about exponential growth. The ones who think they know are probably thinking about polynomial growth -- already scary, but still nothing on exponential. It's just really hard to understand how fast it is.

And you're right that I wasn't so clear about what has to be different and what has to be the same; your understanding is good. The point I wanted to make is that if there are hidden loops then you have two variables, but one is easy to forget about (often hidden inside some library routine). If the number of times through the loop is the same, you're surely in n^2-land... But you don't have to have n*n in such an obvious way to get O(n^2) growth:

for (i = 0; i < n; i++) {
    for (j = 0; j < i; j++) {
        i_do_a_lot_of_this(i, j);
    }
}

As long as the limits on the nested loops grow together, you can still get a quadratic time. Loops that build data up can do this really easily if the inner loop depends on the size of the data being built.

On the original Macs, the system's function to add a menu item to a menu was pretty obvious: it scanned down the menu to the end, then added a new item. When this met Microsoft Word, the result was that graphic designers and font-freaks would have to wait over a minute for Word to start up, because it was adding all the fonts in the system to the font menu one at a time. For 10 fonts that was 45 loops; for 100 fonts that's 4950 loops; for 120 it's over 7000. Of course, if you passed the whole list at one time, it just copied all the items, and even on those old machines you could have hundreds or thousands of items (if you wanted to).
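That font-menu arithmetic can be sketched in a few lines. The function is hypothetical; it counts scan steps rather than measuring time:

```javascript
// Counting the work in the font-menu story: appending one item at a time to a
// menu whose "add" must first scan to the end is O(n^2) overall.
function addOneAtATime(menu, items) {
  let scans = 0;
  for (const item of items) {
    scans += menu.length; // walk past every existing entry before appending
    menu.push(item);
  }
  return scans; // 0 + 1 + ... + (n - 1) = n(n - 1) / 2
}
```

Running it reproduces the figures above: 45 scans for 10 fonts, 4950 for 100.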

So for a working programmer, it really helps to be aware of O(n^2) as the start of bad growth -- sometimes it is the best that can be done, but then you'll only want to use it if you know the sizes are really limited. Otherwise, it may well involve long runtimes and "big computing" to get the answer.

There are some quadratic implementations that do make it into interfaces. Breaking lines for a paragraph in an editor is often implemented in a quadratic way (you can avoid the slowdown, but it's real data-structure work), so you often see text editors become painfully slow if a very large file has only one line in it. Megabyte-long lines aren't part of the design point of code editors, and the unreasonable slowness with lines of thousands of characters is not worth fixing.

Adam Nathaniel Davis Author

This is great info and I sincerely appreciate you taking the time to spell this out so clearly. Always learning... Thank you!

Ashley Sheridan

There's an area of front-end performance that hasn't even been touched on here, and it's probably the aspect that we developers are least likely to ever encounter ourselves.

The main problem with the front end is that we don't know what devices or browsers are being used, but it's almost a guarantee that those devices are considerably underpowered compared to what we're using.

For example, consider your daily work machine. It's perfectly capable of running IDEs, virtual machines, servers, etc., all without complaint. The average user's PC probably has a tenth of that power. What might be only a 10ms difference on your machine will be much more noticeable on theirs.

Also, mobile phones are often going to be very old, and probably out of date. Not every phone is old; some were just not an amazing spec to begin with. Considering that most users browse via phones rather than laptops or desktops, it's important to consider performance there too.

I'm not saying that we should look to optimise to the same degree as code running on the old mainframes or embedded chips, but that we should at least be mindful of performance. Whilst our code should always be as readable as possible, we should avoid the obvious issues (like the nested loops you've highlighted).

peerreynders • Edited on

Just some side points.

Once the code has been downloaded, the amount of code makes no difference to performance.

That rationale makes sense from an SPA perspective.

But Marko, Astro and Qwik are pursuing partial or progressive hydration to enable next-generation MPAs*; if you can load SSR HTML and go interactive much quicker than an SPA, there is no need for client-side routing. So keeping the "critical JavaScript" to an absolute minimum and lazy loading the rest (which may never be needed) is what next-gen MPAs are based on.

* At this point Remix is focusing on progressive enhancement. For the time being they are not convinced of the benefits of islands/partial hydration. But ultimately that may simply be a limitation of the React mental model, as each island would have to be a separate component tree (and any inter-island communication would have to happen outside of React).
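A tiny, framework-agnostic sketch of that "lazy load the rest" idea - the `lazy` helper and the module path in the comments are invented for illustration:

```javascript
// Cache a lazy loader so repeated interactions trigger the download only once.
// (The helper name and the module path below are hypothetical.)
function lazy(loader) {
  let cached;
  return () => (cached ??= loader()); // first call loads; later calls reuse it
}

// Hypothetical usage: nothing downloads until the user actually interacts.
// const loadChart = lazy(() => import('./heavy-chart.js'));
// button.addEventListener('click', async () => (await loadChart()).render());
```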

then your target audience probably can't effectively run ANY React / Angular / jQuery / whatever app.

Actually, in 2019 Rich Harris mentioned that Svelte was used for the Brazilian Stone point-of-sale device because React, Vue, etc. simply imposed too much overhead on the hardware. This is just one example of what is often identified as a "resource-constrained device".

Similarly, in the mobile space there are cases where the CPU cores are getting smaller ("more power efficient", though more numerous), which means single-thread performance is decreasing (and today's predominant web technologies rely on single-thread performance). This leads to a situation where a $400 iPhone SE has better "web performance" than a $1400 Samsung Galaxy S20 Ultra, because the iPhone has two (of six) cores that are more performant than any one of Samsung's 8 cores.

Finally, performance improvements over time for mobile devices are strongly coupled to the device class.

[Chart: single-core benchmark scores over time, by device class] (Source)

The graphic shows that budget devices aren't really improving that much - their selling point is that they are inexpensive, not performant. Some projections stipulate that most of the future growth in the mobile market will be at the lower end, creating a situation where the performance of the "median device" could be going down.

Dylan Lacey

I love the nuance here and agree with everything you've said with one exception: There being tonnes of unused resources.

Your code might be lightweight on its own, but most people have multiple tabs. I have enough tabs that I'm not going to count them because it'll give me The Anxiety. Also, 3rd party JS code from ad servers and management tools and whatever-else can be extremely weighty. If I visit any single one of the Gawker sites with an ad-blocker off, for example, my fans spin up like my Mac wishes it was a hoverboat.

This is especially the case if you load un-versioned 3rd party resources directly from the source; Who knows what performance snafus their team might have in any given version?

I can't agree more that it's not worth obsessing over... But the more performance leeway you leave yourself, the less impact external forces have on your product.

Supportic

Due to JavaScript's JIT compiler, it's hard to optimize code with a predictable increase in performance. You can't know when the compiler flags a function as hot. But you can help the compiler, e.g. by not mixing datatypes too often, or by declaring variables outside of a loop instead of letting allocation happen inside it. Still a very marginal performance outcome.
I would put more effort into DOM manipulation if you use pure JS without React and co.
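As a small illustration of the "don't mix datatypes" advice (a toy example of mine, not from the comment): a call site that only ever sees numbers stays monomorphic, which modern engines can keep on a fast path:

```javascript
// Keeping a call site monomorphic: if `sum` only ever sees arrays of numbers,
// the JIT can specialize the addition for numbers. (Toy example.)
function sum(values) {
  let total = 0;
  for (let i = 0; i < values.length; i++) {
    total += values[i];
  }
  return total;
}

sum([1, 2, 3]);      // numbers only: stays monomorphic
// sum([1, '2', 3]); // mixing in strings would force a slower, generic path
```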

darkwiiplayer

One of the most essential skills related to performance is knowing when it matters.

50ms of expensive loop in a button press that will lead the user to a different view in your application? Probably not a big deal.

50ms of expensive loop in a paint worklet that will be used on several elements in your website? That is probably a huge problem.

Once you've figured where performance matters, there's many other things to worry about related to identifying performance problems and fixing them. Bot all of that is wasted time when we've failed to realise that the code we're working on doesn't need to be performant in the first place.


With that being said though: Front-end developers should probably care more about performance than back-end developers. These days adding a bit of processing power to your distributed back-end isn't all that expensive anymore, whereas losing paying users to a bad UX due to slow code quickly adds up.

What's more, processing power isn't distributed evenly throughout society, so there is serious risk of unintentionally preventing users who can't afford good-enough hardware from using a service.

And last but not least, wasted processing power does not care whether it happens in the browser or in the server. If anything, it might be more likely that big hosting companies are using green energy to improve their image, making the front end by far the worse place of the two to waste processing power.

John Harding • Edited on

This is a great post - and I should read the replies more closely.

I generally agree with all that you say.

A couple of initial thoughts:
1) There's not much point focusing on algorithmic performance while totally ignoring network performance. 90%* of the time it's the network slowing you down (and if you apply that to a hierarchy of "slow things", the algorithms are normally near the bottom of the list).
2) When you do look at algorithmic performance you really have to have a good idea what the size of the average and peak data sets will be. 87.3%** of the time your boss's idea of the numbers (especially at startups) is completely overstated.

* - a completely made up number
** - also completely made up, but stated with far more impressive accuracy
Shrihari Mohan

Yes, I too don't care about micro-optimizations unless the data set is large and Big-O goes through the roof.

These are the things I make sure I've done on the frontend:

  • Image optimization - those fewer KBs really matter. (This is a must if the website contains many images. I know this is not JS-related, but since we're talking about frontend, it may help someone who is new to frontend development.)

  • Lazy-load pages - Angular / React. (If you are using Next, you don't have to care about any optimizations.)

optimisedu

Hi,

I think that you have presented this really well; it is important to talk about performance. It has so many meanings.

You have neatly raised awareness of Big-O as a rule to learn - then at least you are aware when you are breaking it.

Performance does translate directly to bandwidth and indirectly to search best practice and client retention as a result.

Bandwidth has a tangible price. Unreadable code has a performance price. I did a lot of research in SEO back when Google was less coy with their algorithm. I often say, in a sentence, that SEO is the payoff from correctly balancing performance with accessibility.

Great article, I am curious what you think.

Adam Nathaniel Davis Author

I def agree with you about bandwidth. However, I'll add one thought to that (which may eventually be its own article): I've seen so many frontend devs wring their hands over stripping a few KB out of their bundle size - only to deploy it to a content site or e-commerce site that automatically bloats the page with MEGABYTES of additional ad/tracking software. When this happens, I find the debate over bandwidth to be a little silly.

andrew1007 • Edited on

I didn't see any discussion of DOM performance. The DOM is, in an overwhelming majority of cases, the biggest source of performance issues. The DOM work that our frameworks trigger happens on a scale that absolutely dwarfs 99.99% of any real-world algorithm you write in a codebase.

Algorithms have been run for the past 50 years. But the DOM? Its computational requirements would have been unheard of during the infancy of the modern processor.

If your app is slow, the first thing you should be looking at is how your framework is manipulating the DOM and whether or not it is making the most optimized computations/manipulations in doing so. If you're transforming (Array#map or whatever) an array of 500 objects and rendering the computed data, your problem is not going to be the fact that you're running an algorithm on 500 objects. It is the fact that you're rendering 500 HTML elements. Virtualization or pagination is the answer. The algorithm is the last thing to look at.
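A sketch of the pagination option (names and page size invented): the data transform is cheap, and what you avoid is mounting 500 DOM nodes at once:

```javascript
// Pagination sketch: slicing the data is the cheap part; the expensive part
// you skip is rendering every row as a DOM node. (Names are hypothetical.)
function paginate(rows, page, pageSize = 20) {
  const start = page * pageSize;
  return rows.slice(start, start + pageSize); // render only this window
}
```

The render layer then maps over a 20-element slice instead of the full data set.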

Adam Nathaniel Davis Author

The biggest "fault" I had in my article is that I apparently didn't make it clear that I was talking primarily about algorithmic performance. But yes, I totally agree with everything you've written.

Mike Reynolds

What a world it would be if our sort algorithms ran in O(log n). I believe you meant O(n log n), as JS's built-in sort is a quicksort.

Adam Nathaniel Davis Author

Thank you for pointing this out. I've fixed it now.

ChrisMuga

The part about performance is actually hard to read.
It's mad to assume that everybody is using the same powerful device you're using.

The browser itself is a mess. Yet here we are, talking about how performance is not much of a priority. Sad.

Some comments have been hidden by the post's author.