I'm Addy Osmani, Ask Me Anything!

My name is Addy and I'm an engineering manager on the Chrome team at Google, leading Web Performance. Our team is responsible for getting pages on the web to load fast. We do this through web standards, by working with sites and frameworks in the ecosystem, and through tools like Lighthouse and Workbox. I give talks about speed and have written books like Learning JavaScript Design Patterns and Essential Image Optimization. I'm married to the wonderful Elle Osmani, who co-runs our side project TeeJungle on the weekends.

To learn more, you can find me on Twitter at @addyosmani

Top comments (117)

Andrew Davis

Do you think the recent rise in popularity of single-page applications using React/Angular/Vue has been good for web performance? To me, it seems too easy to create bundles that are very large and difficult to parse on the client (plus, SPAs can be really complicated, but that is a whole other discussion). Do you think the SPA is the future of web development, or is there still a place for server-generated HTML?

Addy Osmani

Great question :) A lot of the sites I profile these days don't perform well on average mobile phones, where a slower CPU can take multiple seconds to parse and compile large JS bundles. To the extent that we care about giving users the best experience possible, I wish our community had better guardrails for keeping developers off the "slow" path.

React/Preact/Vue/Angular (with the work they're doing on Ivy) are not all that costly to fetch over a network on their own. The challenge is that it's far too easy these days to "npm install" a number of additional utility libraries, UI components, routers... everything you need to build a modern app, without keeping performance in check. Each of these pieces has a cost, and it all adds up to larger bundles. I wish our tools could yell at you when you're probably shipping too much script.

I'm hopeful we can embrace performance budgets more strongly in the near future so that teams learn to live within constraints that guarantee their users can load and use their sites in a reasonable amount of time.

SPAs vs SSR sites: Often we're shipping down a ton of JavaScript to just render a list of images. If this can be done more efficiently on the server-side by just sending some HTML to your users, go for it! If however the site needs to have a level of interaction powered by JavaScript, I heavily encourage using diligent code-splitting and looking to patterns like PRPL for ensuring you're not sending down so much code the main thread is going to stay blocked for seconds.
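
To make the code-splitting point concrete, here is a minimal sketch of route-based splitting with dynamic import(). The route paths, module names and the per-view render() API are hypothetical; bundlers like webpack and Rollup will emit each dynamically imported view as its own chunk.

```js
// Hypothetical routes: each view is only fetched when the user navigates to it.
const routes = {
  '/':        () => import('./views/home.js'),
  '/gallery': () => import('./views/gallery.js'),
};

async function navigate(path) {
  const load = routes[path] || routes['/'];
  const view = await load();                    // separate chunk, loaded on demand
  view.render(document.querySelector('#app'));  // assumed per-view render() API
}

window.addEventListener('popstate', () => navigate(location.pathname));
navigate(location.pathname);
```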

Andrew Davis

Thanks for responding! PRPL is a new pattern to me, hopefully with more awareness we will be able to use it and techniques like it to get better performance.

Jess Lee

What was the book writing process like? How did you balance something so long form with technologies that are constantly changing/improving? i.e. did you have to go back to the 'first chapter' at the end, and update anything?

Addy Osmani

The approach I take to writing books and articles is embracing "The Ugly First Draft". It forces you to get the key ideas for a draft out of your head, and once you've got something on paper you can circle back and start to build on that base. I love this process because you get the short-term satisfaction of having "something" done but still have enough of an outline that you can iterate on it.

With my first book, "Learning JavaScript Design Patterns", the first draft was written in about a week. It was pretty awful :) However, it really helped frame the key concepts I wanted the book to capture and gave me something I could share with friends and colleagues for their input. It took a year to shape my first ugly draft of that book into something that could be published.

On writing about technologies that are constantly changing - I think every writer struggles with this. My opinion is books are great for fundamental topics that are likely to still be valuable to readers many years into the future. Sometimes topics like patterns you would use with a JavaScript framework or how to use a particular third-party API might be better served as short-lived blog posts (with less of the editorial process blocking you). You're still spreading your knowledge out there but some mediums are better than others for technologies that change regularly.

This is especially true of the front-end :)

Randall Koutnik

With my first book, "Learning JavaScript Design Patterns", the first draft was written in about a week.

Yikes, it took me nearly 9 months to put together the first draft of Build Reactive Websites with RxJS. What's your secret?

Addy Osmani

My ugly drafts are really, really ugly :)

It'll sound awful, but I have never intentionally written a book or long article. Often, there will be a topic I'm deeply invested in learning more about or writing about and I'll just try to consistently take time out every day to build on the draft.

With the first draft of the patterns book, I wanted to write an article about the topic so I started there and it just grew. I would stay up late and keep writing into the early hours of the morning each day during that week. The first draft wasn't very long - it may have been 60 pages of content.

However, the very early versions are not something I would have felt confident sharing with anyone. There were many parts with half-complete thoughts. It lacked a lot of structure. Many of these are things you have a better chance at getting right when spending 9-12 months on your first draft. I ended up spending that long on rewrites.

rhymes

Apropos of books and long articles, thank you so much for Images.guide. It was illuminating and also very useful for making clients understand that re-inventing image resizing each time is usually not the best move :D

Randall Koutnik

I'm sure you've encountered some real humdingers when trying to optimize slow pages. Got a favorite story about some ridiculous performance bug you've encountered?

Addy Osmani

Hmmmm. The worst optimized site I've encountered in my career was probably just a few weeks back :) This was a site with a number of verticals where the front-end teams for each vertical were given the autonomy to decide how they were going to ship their part of the site.

As it turns out, this didn't end well.

Rather than each "vertical" working collaboratively on the stack the site would use, they ended up with vaguely similar, yet different, stacks. From the time you entered the site to the time you checked out, you could easily load 6 different versions of React and Redux. Their JavaScript bundles were multiple MBs in size (a combination of utility library choices and not employing enough code-splitting or vendor chunking). It was a disaster.

One thing we hope can change this is encouraging more teams to adopt performance budgets and stick closely to them. There's no way the web can compete on mobile if we're going to send down so much code that accomplishes so little.
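
One low-friction way to adopt a budget is to make the build fail when bundles exceed it. A rough sketch using webpack's built-in performance hints (the byte limits are illustrative, not a recommendation):

```js
// webpack.config.js
module.exports = {
  mode: 'production',
  performance: {
    hints: 'error',            // fail the build instead of just warning
    maxEntrypointSize: 250000, // max bytes of assets loaded for any entry point
    maxAssetSize: 170000,      // max bytes for any single emitted asset
  },
};
```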

Oh, other stories.

  • Ran into multiple sites shipping 80MB+ of images to their users for one page...on mobile
  • Ran into a site that was Service Worker caching 8GB of video... accidentally

There are so many ridiculous perf stories out there :)

Ben Halpern

Oh my goodness this is making me squirm.

rhymes • Edited

OMG, six different versions of the same library is definitely the result of poor communication. I can't wait for an AI-powered browser that opens alerts saying "please tell the developers who built this website to talk to each other" :D

The image problem is all too common.

I've seen galleries/grids of images rendered using the original images uploaded by the operator, which obviously were neither checked for size nor resized automatically.

Galdin Raphael

Those stories sound like they're really easy to repeat though.

Addy Osmani

Hey Lauro!

  1. Start with a short bullet-list of the main points to convey.

  2. Check if anyone has already written an article on this topic. Is it recent? Comprehensive?

  3. If there's still value in writing, use the bullets as headings and try to write out a few paragraphs around each of them. Link to related articles so folks can dive in deeper to the topic if they like.

  4. Share an early draft with a friend or colleague. It's easy to spend weeks on a write-up only to realize too late that it's not easy for others to digest. Getting feedback early on can help validate where you're at or give you a chance to quickly course correct.

  5. Iterate. If you find you're writing something long, make it very easy for casual readers to still get the main points from a quick read. This might mean bolding certain lines or using a tl;dr at the top of the article.

  6. Publish.

  7. Relax. Hopefully :)

Andy Zhao (he/him)

What are the first performance improvements that you look for when going to a web page?

Addy Osmani

The first performance improvement I check for is whether the site could be shipping less JavaScript while still providing most of its value to the end user. If you're sending down multiple megabytes of JS, that might be completely fine if your target audience is primarily on desktop, but if they're on mobile this can often dwarf the costs of other resources because it takes longer to process.

In general, I try to go through the following list and check off if the site could be doing better on one or more of them:

✂️ Send less JavaScript (code-splitting)
😴 Lazy-load non-critical resources
🗜 Compress diligently! (GZip, Brotli)
📦 Cache effectively (HTTP, Service Workers)
⚡️ Minify & optimize everything
🗼 Preresolve DNS for critical origins
💨 Preload critical resources
📲 Respect data plans
🌊 Stream HTML responses
📡 Make fewer HTTP requests
📰 Have a Web Font loading strategy
🛣 Route-based chunking
📒 Library sharding
📱 PRPL pattern
🌴 Tree-shaking (Webpack, RollUp)
🍽 Serve ES2015 to modern browsers (babel-preset-env)
🏋️‍♀️ Scope hoisting (Webpack)
🔧 Don’t ship DEV code to PROD
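
To make one of those items concrete, here is a hedged sketch of lazy-loading offscreen images with IntersectionObserver. It assumes the markup uses <img data-src="..."> placeholders, which is just one way to wire this up:

```js
const io = new IntersectionObserver((entries, observer) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src;   // swap in the real source on first visibility
    observer.unobserve(img);     // each image only needs to be handled once
  }
}, { rootMargin: '200px' });     // start loading a little before it scrolls into view

document.querySelectorAll('img[data-src]').forEach(img => io.observe(img));
```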

Andy Zhao (he/him)

Phew, extensive list! Love the emojis :)

Karolis Ramanauskas • Edited

Could you clarify what you mean by library sharding? Awesome list by the way, thank you!

rhymes

Great checklist! Thanks!

Sunny Sharma • Edited

Thank you Addy for sharing the checklist, enough points for my next talk :-)

Ben Halpern

Hey Addy, thanks for this!

What's your favorite programming language besides JavaScript?

Addy Osmani • Edited

I recently enjoyed digging back into Rust and loved it. It has a pretty expressive type system that lets you convey a lot about the problem you're working on. Support for macros and generics is super nice. I also enjoyed using Cargo.

My first interaction with Rust was in this excellent tutorial by Matt Brubeck back in 2014 called "Let's build a browser engine!" (I hope someone tries to update it!). Perhaps a good future post for someone on dev.to? ;)

Nick Taylor • Edited

This is from way back, but I find your origin story quite interesting.

I guess you've always had perf on the brain? 😜

Nick Taylor

And a follow up question. Is the work you did on your "Xwebs megabrowser" what paved the way for all browsers to start serving multiple HTTP connections per domain to load a web page?

Addy Osmani

Haha. Perf always matters :)

For some back-story, when I was growing up in rural Ireland, dial-up internet was pervasive. We spent years on 28.8kbps modems before switching to ISDN, but it was an even longer time before fast cable internet became the norm. There were many times when it could take 2-3 days just to download a music video. Crazy, right?

When it was so easy for a family member to pick up a phone and drop your internet connection, you learned to rely on download managers quite heavily for resuming your network connections.

One idea download managers had was this notion of "chunking" - rather than creating one HTTP connection to a server, what if you created 5 and requested different byte ranges from the server in parallel? If you were lucky (which seldom happened), a single connection would already give you a constant speed, but more often "chunking" got your file downloaded just that little bit faster.

I wanted to experiment with applying this notion of "chunking" to web browsers. So if you're fetching an HTML document or an image that was particularly large, would chunking make a difference? As it turns out, there were cases where it could help, but with a high level of variance. Not all servers want you to create a large number of connections for each resource, but the idea made for a fun science project when I was young and learning :)
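
For the curious, here is a rough modern sketch of that "chunking" idea using HTTP Range requests. It assumes the server supports Range requests and exposes Content-Length, which is not a given:

```js
async function fetchInChunks(url, chunks = 5) {
  const head = await fetch(url, { method: 'HEAD' });
  const size = Number(head.headers.get('Content-Length'));
  const step = Math.ceil(size / chunks);

  // Request each byte range in parallel, then stitch the pieces back together.
  const parts = await Promise.all(
    Array.from({ length: chunks }, (_, i) => {
      const start = i * step;
      const end = Math.min(start + step - 1, size - 1);
      return fetch(url, { headers: { Range: `bytes=${start}-${end}` } })
        .then(res => res.arrayBuffer());
    })
  );
  return new Blob(parts); // the reassembled file
}
```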

Back to your question about this paving the way for browsers serving multiple HTTP connections per domain: I think if anything, it was happenstance that I was looking at related ideas. Network engineers working on browsers are far more intelligent than I ever could have been at that age and their research into the benefits of making multiple connections to domains is something I credit to them alone :)

Nick Taylor

Thanks for taking the time to reply Addy. Keep up the great work and keep that Canadian @_developit in check. I'm not sure how much he knows about perf 🐢. Better check his bundle sizes. 😉

Ahmad Awais ⚡️ • Edited

Hey, Addy! 👋

Nice to have you here. Big fan of your work. I'll get right to the questions:

  1. 🤔 What's the difference between a Dev Programs Engineer and a Dev Advocate, and why did you choose to be the former?
  2. 🕖 What does your day-to-day look like at Google? Also, what percentage of travel do you do as a DPE?
  3. 🆚 Code is growing like crazy (I'm also building VSCode.pro). What are your thoughts on VS Code, and what are your default IDE and terminal?
  4. 🤣 Would you rather fight 50 small ducks or one giant 50-foot duck, and why?

Looking forward! ✌️

Addy Osmani • Edited

Hey Ahmad! I'll try to answer as best I can here.

  1. Google's developer relations teams have historically had two types of roles: the Developer Programs Engineer (DPE) and the Developer Advocate (DA).

In earlier years, this allowed us to draw a distinction between developers who leaned more heavily on the engineering side (DPEs) and worked on tools, libraries and samples. You could think of this as aligned with a software engineering position. We then had DA roles where there was a little more emphasis on creating scalable outreach (writing articles, giving talks, roadshows).

Over time, these roles have blended quite a lot. It's not uncommon to see DAs who are extremely heavy on the engineering side and even write specs as part of TC39 and similarly, not uncommon to find DPEs who enjoy speaking and writing. I think ultimately the distinction matters less these days than it used to.

  2. The answer to this question has changed a lot in the last six years :) There was a year when, my wife tells me, I was traveling 30% of the time, which is crazy. That's thankfully gone down significantly over time and I travel pretty irregularly these days. At this point I've given approximately 110 talks around the world, but I'm certainly happy to be taking a break from this for a little while.

Day to day: When I switched over to being an engineering manager, a lot more of my time was spent in coordination meetings with other Google teams, leads and in 1:1s with my reports.

A typical day at the office starts at 7:45 AM. I'll usually try to catch up really quickly on what's new in the JS community before diving into meetings for 70% of the day. I reserve 30% for working on design docs, coding or writing articles (if it's not too busy). I wrap up sometime between 6 and 7 and head back home to hang out with the wife and kids.

  3. I love VS Code. I use it every day. My favorite theme is probably Seti (which I use across my editor, iTerm and Hyper.app).

  4. 50 small ducks. I like my problems bite-sized ;)

Doruk Kutlu

meetings for 70% of the day.

Sad (for me). Is that where this road leads?

Addy Osmani

It's not all that bad :)

Moving to management can feel like a large change. You give up some of the things you enjoy, like coding as much. Instead of building libraries, you're building teams and helping others build up their careers.

There are only so many hours in the day, so you're going to hit a limit on how much you can "scale" yourself. When you're trying to help others develop their skills, it gives you a chance to do more of this scaling. You get to see how they would tackle problems (often in new and better ways than you would).

That said, it's definitely not for everyone. I've been lucky that the rest of my time still gives me an opportunity to write or code... sometimes :)

Ahmad Awais ⚡️

Nice answers. Thanks for the response 👋

Daniel James

Hi Addy!

I'm curious about your thoughts on Brave Browser with regards to speed / performance improvements.

From my understanding, they're aiming to completely change the advertising model of the web, and their strategy involves blocking tracking scripts and ads. It seems like that's a big part of the speed advantage they tout, and blocking those two things is something Google could never do, as they're essential to its core revenue stream.

I still use Chrome because all of my stuff (bookmarks, passwords, etc.) is tied to it, but the few times I've tried Brave, the thing really does seem to fly.

I'm curious about your thoughts on Brave's approach, or about tracking scripts and ads in general.

Thanks!

Addy Osmani

I often switch between Chrome and Brave on Android and have respect for their work.

Users can't be blamed for trying solutions that block third-party scripts, as those scripts can have a non-zero impact on page load performance. We also see many cases where folks are shipping far too much first-party script, and sites need to take responsibility for auditing all of the code (1P, 3P) they're sending down.

Stepping back to the topic of tracking scripts and ads - my personal opinion is that we need to explore models for keeping the sites we love monetarily sustainable while ensuring user experience and user choice regarding data sharing are respected as much as possible. While I can't comment as much on Google's own strategy here, as a user I'd be happy if we shifted to sites that loaded fast and were more respectful of our privacy and data-sharing preferences.

Daniel James

Great answer; thanks so much for your reply!

Fabian Holzer

I'd be interested to hear your thoughts on WebAssembly.

Addy Osmani

I'm hiring for a WebAssembly Developer Advocate at the moment so I definitely believe it has a future :)

I'm excited about the potential WASM will unlock for the types of applications that were heavily bound by the compute cost of JavaScript. I think it's going to be huge for certain classes of games, for accelerating how quickly well-known desktop applications and libraries can be ported to the web (I was playing around with a Vim port in WASM just last night!), and potentially for data science. At the same time, I don't think it's going to displace the use cases for JavaScript directly. JS continues to see strong adoption for UI development and I don't see this changing anytime soon.

Peter Kim Frank

Hey Addy, what are your feelings about AMP?

Addy Osmani

I think what we all want is really fast first-party content delivering great experiences to users.

With my user-hat on, an unfortunate reality is that most sites on the web still provide users a slow, bloated experience that can frustrate them on mobile. If a team has the business buy-in to work on web performance and optimize the experience themselves, I'm more than happy for them to do so. We need as many folks waving the #perfmatters flag as we can get :)

That said, staying on top of performance best practices is not something every engineering team has the time to do. Especially in the publishing space, this is where I see AMP providing the most value. I'm pretty excited about their commitment to the Web Packaging specification for addressing some of the valid critiques AMP has had with respect to URLs: amphtml.wordpress.com/2018/01/09/i....

I'm also very keen for us to keep exploring what is possible with the Search announcement that page speed will be used as a ranking signal irrespective of the path you take to get there.

Ben Halpern

This evolution for AMP definitely has me more interested in the project. I've been standing on the sideline hoping some of these URL issues could be resolved.

Daniel Golant

Thank you for doing this!
RAM utilization is a constant topic of conversation in the dev community. You have better insight into the needs of Chrome instances and Electron applications than most people. Is reducing RAM utilization currently a high priority for your team, and do you think that changes in Chrome could noticeably improve RAM concerns in Electron applications?

Addy Osmani

Memory (as you know) is a shared resource. Any site can use more of it to give their users a better experience, but often little is done to monitor just how much memory individual sites or apps consume or leak. When everyone is flying a little blind here (self included), it's easy for sites or individual Chromium instances/Electron apps to cause memory strain, and for this to negatively impact users and the experience they have with apps on any system.
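
As a rough illustration of what in-page monitoring can look like: Chrome exposes a non-standard performance.memory object that you can sample over a long session to spot upward drift. The numbers are coarse and Chrome-only, so treat this as a sketch rather than something to depend on:

```js
function sampleMemory() {
  if (!performance.memory) return;   // unsupported outside Chrome
  const { usedJSHeapSize, jsHeapSizeLimit } = performance.memory;
  console.log(
    `JS heap: ${(usedJSHeapSize / 1048576).toFixed(1)} MB ` +
    `of ${(jsHeapSizeLimit / 1048576).toFixed(1)} MB limit`
  );
}
setInterval(sampleMemory, 60000);    // sample once a minute
```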

We're in a period of researching the impact of memory usage in Chrome at the moment. If the Electron community has more insight (or traces) they can share with us about heavy memory consumption concerns, we would love to take a deeper look. Are there examples/data you're aware of?

I definitely acknowledge that high memory consumption matters where it can negatively impact users. Today that can manifest in a few ways when using Chrome directly on lower-end devices:

  • Foreground out of memory exceptions
  • Need to reload discarded tabs
  • Need to reload other applications that might be kicked out of memory
  • Battery drain

We're looking a little more heavily at the memory consumption story on mobile right now, but as I said, any data regarding Electron memory usage would help. I know this has anecdotally been reported as a problem before.

Daniel Golant

Thanks Addy! Glad to hear it's an area of focus for mobile; I feel like improvements there could reasonably lead to more insight into utilization on desktop, at least. I don't have hard data, I was mainly referring to what I see as the zeitgeist (warranted or not) and my own personal experience. Appreciate your thoughts!

FWIW, it looks like a user here ascertained that Chrome actually uses less memory than most other available browsers, so kudos!

Liana Felt (she/her)

Hey Addy, thanks so much for doing this! How much do different teams at Google coordinate?

Addy Osmani

Over on Chrome, we try our best to stay in touch with Google teams that are working on shipping experiences for the web as well as folks building for other platforms like Android or iOS. Sometimes this happens in the form of monthly check-ins to share learnings (there's often a lot we can learn from one another) and other times it's just over mailing lists.

That said, Google is a very large company, and with this comes the challenge of always staying on top of who is working on what. We still have a long way to go in improving our communication across all teams. We do want to keep making progress here :)

Massimo Artizzu

Hello Addy, so nice to have you here!

We often talk about JavaScript that has to be parsed and executed, that should be split and lazy loaded, and similarly for CSS. But what about HTML?

Would you say that DOM nodes are heavy in terms of memory occupation, even if they're not rendered? Talking about 6k-14k nodes here, including text and comment nodes.
Are element nodes heavier in terms of memory and CPU?

How about HTML parsing? Is a big HTML document (like, 400k unzipped) a problem for mobile devices?

Asking for, uh, a friend who has to struggle with a client that doesn't think reducing the HTML to the bare minimum is meaningful for performance.

Addy Osmani

Everything has a cost and too many DOM nodes = unhappiness.

Back in 2016, the Chrome team observed that most sites we were profiling had 5,000+ DOM nodes. Ideally, your page should stick closer to 1,500 for mobile. At that time, Chrome was optimized for documents roughly 32 elements deep at most. We definitely handle things a lot better than this now, but you're in the sweet spot if you're able to stay within these constraints.
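
A quick way to see where a page stands against those numbers is to run a small audit from the DevTools console (just a sketch):

```js
const elements = [...document.getElementsByTagName('*')];

// Depth of an element = number of ancestors up to the document root.
const depth = (el) => {
  let d = 0;
  for (let n = el; n; n = n.parentElement) d++;
  return d;
};

console.log(`${elements.length} element nodes, max depth ${Math.max(...elements.map(depth))}`);
```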

With respect to HTML parsing costs, I always lean towards shipping the greatest value to users in the fewest bytes possible. That said, I would do some auditing on the cost of sending down 400K of unzipped HTML (see if there's a real issue with CPU and memory usage on your target devices) and, if there is, visualize for your client the difference that shipping less makes.

Ben Halpern

What's it like being a coding celebrity?

Did you think you'd have a massive following like this before it happened?

How did you get to where you are in this regard? I'm super curious about your mindset along the way.

Addy Osmani

I often don't feel like I deserve any of the attention. Some of the most exceptional coders in the world don't get as much acknowledgement of their work as they should. I wish that we could change this and to the extent platforms like dev.to and social mentions enable a path to this, I'm hopeful more of them will be considered coding celebrities in the future :)

What's it like being a coding celebrity?

What's it like... you learn the importance of being humble. You learn to be careful with what you say and how it can be interpreted when you make a strong statement. When people look up to you (in any situation), you have a responsibility to try to give them a measured response where you've considered the best data and facts available to you. It's far too easy to spend 15 seconds thinking about something and just post it out into the world (think before you speak).

There are tweets and articles about topics that I would love to post, but don't because I'd prefer to take my time to check on the data and consult with others in the community so I can be confident if I suggest something is a best practice, that I truly believe it is. It's very possible I overthink and over-analyze so take this with a grain of salt :)

Did you think you'd have a massive following like this before it happened?

I didn't think I would be fortunate enough to get the following I have. I just constantly hope I'm giving folks some value vs. throwing out nonsense :)

How did you get to where you are in this regard? I'm super curious about your mindset along the way.

I get asked this question a lot and the answer is: by trying to continue delivering value to the community as often as I can. I definitely don't do this every day or every week, but I think we all struggle to stay on top of things on the web. It's challenging knowing what the latest best practices, tools and techniques are. To the extent that we can distill some of this down into a bite-sized form for folks (tweets etc) that they feel comfortable digesting, maybe that's useful enough.

I will say the journey itself to this point, although hard, has been fun and educationally rewarding.