
Discussion

 

Because, just like security, it takes a lot of work. You can't just add another JavaScript library or set one HTTP header and magically make your website super fast forever. You have to learn best practices, follow them, and fine-tune your app for the best experience. A lot of people don't take the time (or spend the money) to get it right, which is why it feels awesome when you find a site that does.

 

To quote Michael A. Jackson:

We follow two rules in the matter of optimization:
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet, that is, not until you have a perfectly clear and unoptimized solution.

There is only one big problem, as Sean Denny also points out: it takes a lot of work. Often, by the time you can start applying "rule 2", there is no budget left.

Whenever you create something, you should think about the performance aspects. Design your system in such a way that once it comes to optimization, you do not have to rewrite half of it.

There is no silver bullet or golden rule for designing a system that performs well, for the simple reason that "performance" is ill-defined to begin with. For example, you can design a system for highly concurrent execution of jobs, but this can negatively affect the performance of a single job.

The best approach is to write clean code; future optimization will then be less difficult.

As Kent Beck said:

Make it work. Make it right. Make it fast.

(in that order)

 

When planning how a site or feature should behave there's often no discussion of performance. Perhaps it's just assumed that it should be fast, but that's problematic because there is no clear definition of "fast." If no one knows how fast it's supposed to be then no one will know if it's too slow.

Without that awareness, there's nothing to stop developers from gradually adding bits and pieces that slow it down even more. Unless something takes ten seconds, they might not pay attention to anything besides whether it functions as expected.

If performance is included up front then it becomes a constraint that's factored into every decision.

 

Selling performance improvements as adding significant business value is difficult in some organizations.

Demo a fancy new widget to a non/semi-tech product owner at the end of a sprint. They are excited and hold you in awe.

Demo a faster, more scalable backend at the end of a sprint, and the product owner goes "meh, what else have you got?" because it doesn't have any sizzle for them.

Basically, you have to find some sizzle to sell in your beefy performance improvements. This can be rather difficult, depending on your situation.

 

Wild guess: to me, there are two causes of bad performance.

  1. Upfront bad ideas. In my experience that includes:

    • Choosing bloated tools (example: Inbox is built with GWT and sucks up 1 GiB of RAM... let's keep in mind it's an email client, not a rocket simulator)
    • Not doing pagination (example: Gmail is fast because it only deals with 25 emails at once, while Thunderbird tries to put everything in one list)
    • Tightly coupled code that has too many responsibilities (example: any WordPress theme that also does coffee)
  2. Things that appear at scale
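
The pagination point above can be sketched in a few lines. This is a hypothetical server-side helper, not any real API; the 25-item page size just mirrors the Gmail example:

```javascript
// Hypothetical pagination helper: return one page of results instead
// of the full list, so the client never holds everything at once.
// The 25-item page size mirrors the Gmail example above.
function getPage(items, page, pageSize = 25) {
  const start = page * pageSize;
  return {
    results: items.slice(start, start + pageSize),
    page,
    hasMore: start + pageSize < items.length, // tells the client to fetch more
  };
}

// Usage: the client renders one page and asks for the next on demand.
const emails = Array.from({ length: 100 }, (_, i) => `email ${i}`);
const first = getPage(emails, 0);
console.log(first.results.length, first.hasMore); // 25 true
```
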

Point number 1 is fairly easy to avoid, at least with experience. Of course, it involves taking a step back from the hype to make smart choices and applying discipline to the way you code.

Now regarding point number 2, well... if your application is scaling up to that point, you'll probably be able to pull some budget to deal with whatever problem you're facing.

As a general rule, when I develop I try to work on low-performance devices at the edge of my target, and I mean as my primary testing devices. That includes throttling the CPU, testing on an old phone, adding 1s of latency to every API call, etc. You're probably thinking that testing on this kind of device is annoying, but in truth it's not, if your performance is good :)
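
The "add 1s of latency to every API call" trick can be done with a small development-only wrapper, assuming your API calls are plain async functions. `fetchUser` below is made up purely for illustration:

```javascript
// Development-only sketch: wrap any async API call so it takes at
// least `minDelayMs`, simulating a slow network on every request.
function withLatency(fn, minDelayMs = 1000) {
  return async (...args) => {
    // Run the real call and the delay in parallel; return the result
    // only once both have finished.
    const [result] = await Promise.all([
      fn(...args),
      new Promise((resolve) => setTimeout(resolve, minDelayMs)),
    ]);
    return result;
  };
}

// Usage: `fetchUser` is an illustrative stand-in for a real API call.
const fetchUser = async (id) => ({ id, name: "demo" });
const slowFetchUser = withLatency(fetchUser, 1000); // every call now takes ≥ 1s
```

Because the wrapper takes any async function, you can swap it in during development and drop it in production without touching call sites.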

 

It takes a lot of patience, time, care and understanding in the organisation - all of which are ideologically opposite to moving fast as a company.

For example, if your marketing department doesn't get it, they'll bombard you with requests to just add this script, and you'll not be able to rebuff all of them.

Also, lots of projects are published before they're truly ready, and modern prototyping, while fantastic, is anathema to performance.

 

That's a really good question!

I'd honestly say:

  • Not a lot of the developers/engineers who work on the site use it on a regular enough basis to notice poor performance. Even then, they often work close to the host, so it's nowhere near as noticeable. You'll notice internal tools often run super fast, though.
  • There's no single one-size-fits-all way of speeding up sites, from what I've found. It takes a lot of hours and a huge amount of research to really optimize sites for people who don't get good load times.
  • It can be expensive. Edge CDNs can be hella expensive, and hard to maintain if they're in-house.

I'd love to see performance as something that developers/engineers are more dedicated to, though. It gets nowhere near as much exposure as it needs. Kudos to the team at Dev.To, though, for making the site so fast for everyone!

 

I struggle with the same dilemma, and I have another one on top of it. First I'll get to the observations:

  1. Very often developers don't have the faintest clue about how to improve performance.

    1.1 Sometimes they ignore the computational costs of their implementations.
    1.2 They don't know some key performance characteristics of their language (e.g. not knowing how the GIL works in Python).
    1.3 They don't know how to make use of a profiler.

  2. The costs of a badly performing application are indirect and hard to see.

  3. Projects often run out of funds before optimizing makes sense (as others pointed out).

  4. Sometimes you can just throw more hardware at it to "fix" your problem.

  5. ...and that's what is usually done anyway!

So my added dilemma is the following: what's the logic in scaling an inefficient application?

 

In my experience people that talk about "throwing hardware at the problem" don't consider the future. Having a developer spend a couple of weeks making your code twice as fast might let you get away with five servers instead of ten today, which might save you a few hundred dollars a month. Maybe not really all that great. But when your site has grown and your faster code continues to pay dividends you might be talking about the difference between 500 servers and 1000. Now your investment is paying several developers' salaries.
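
The arithmetic behind that argument can be made explicit. A back-of-the-envelope sketch where the $100/server/month figure is an assumption, not a quote:

```javascript
// Back-of-the-envelope: halving server count saves little at small
// scale and a lot at large scale. $100/server/month is an assumed
// figure; plug in your own hosting costs.
const costPerServerPerMonth = 100;

function monthlySavings(serversBefore) {
  const serversAfter = serversBefore / 2; // code made twice as fast
  return (serversBefore - serversAfter) * costPerServerPerMonth;
}

console.log(monthlySavings(10));   // 500   -> a few hundred dollars a month
console.log(monthlySavings(1000)); // 50000 -> several developers' salaries
```
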

Of course, you never know exactly which parts of your system are going to be critical down the line, what code might be thrown away in a couple of years, when (and if) you'll see that big uptick in traffic, and exactly which optimizations will pay off in the long run. Making those choices is more an art than a science.

 

The biggest offender today is all the cargo-culting into using big JS frameworks when they're not necessary.

People tend to ask "which JS framework?" but the better question is "do I need a framework at all?" The answer in most cases is: probably not. Most web applications can get by just fine with a handful of small JS libraries (e.g. a router if you're building a SPA).

People need to rediscover the lost art of progressive enhancement. Then the web will get a lot faster.
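
Progressive enhancement can be sketched roughly like this: the baseline (a plain HTML form posting to the server) always works, and the script only takes over when it actually can. All names here are illustrative:

```javascript
// Progressive-enhancement sketch: upgrade a plain HTML form to an
// in-page submission only when the enhancement is possible.
// `submitViaFetch` is a made-up callback that would POST via fetch.
function enhanceForm(form, submitViaFetch) {
  if (!form || typeof submitViaFetch !== "function") {
    return false; // no enhancement; the baseline form still works
  }
  form.addEventListener("submit", (event) => {
    event.preventDefault(); // take over only because we can do better
    submitViaFetch(form);   // e.g. serialize and POST without a reload
  });
  return true;
}
```

If the script fails to load or throws, the user still gets a working page, just a slower one; that's the whole point of enhancing rather than requiring.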

 

I think folks struggle with performance because it usually means saying "no" when it's a lot easier to say "yes".

 

Because it's hard. It can be made easier, but that requires a little bit of design ahead of time.

Performance (good performance) is a feature, but not everybody gets that.