I'm Addy Osmani, Ask Me Anything!

Addy Osmani on July 11, 2018

My name is Addy and I'm an engineering manager on the Chrome team at Google, leading up Web Performance. Our team is responsible for trying to ge...
 

Do you think the recent rise in popularity of single-page applications using React/Angular/Vue has been good for web performance? To me, it seems too easy to create bundles that are very large and difficult to parse on the client (plus, SPAs can be really complicated, but that is a whole other discussion). Do you think the SPA is the future of web development, or is there still a place for server-generated HTML?

 

Great question :) A lot of the sites I profile these days don't perform well on average mobile phones, where a slower CPU can take multiple seconds to parse and compile large JS bundles. To the extent that we care about giving users the best experience possible, I wish our community had better guardrails for keeping developers off the "slow" path.

React/Preact/Vue/Angular (with the work they're doing on Ivy) are not all that costly to fetch over a network on their own. The challenge is that it's far too easy these days to "npm install" a number of additional utility libraries, UI components, routers... everything you need to build a modern app, without keeping performance in check. Each of these pieces has a cost and it all adds up to larger bundles. I wish our tools could yell at you when you're probably shipping too much script.

I'm hopeful we can embrace performance budgets more strongly in the near future, so that teams learn to live within constraints that guarantee their users can load and use their sites in a reasonable amount of time.
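One lightweight way to encode a budget today is webpack's built-in `performance` option, which can fail the build when output grows past a limit. A minimal sketch (the byte thresholds below are illustrative, not a recommendation):

```javascript
// webpack.config.js
// Fail the build when emitted assets exceed the budget.
module.exports = {
  performance: {
    hints: 'error',            // use 'warning' to nag instead of failing
    maxEntrypointSize: 170000, // bytes of JS/CSS pulled in on initial load
    maxAssetSize: 100000,      // bytes for any single emitted asset
  },
};
```

The nice part of wiring this into CI is that the budget becomes a guardrail the whole team sees, rather than something one performance-minded engineer polices by hand.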

SPAs vs SSR sites: often we're shipping down a ton of JavaScript just to render a list of images. If this can be done more efficiently on the server side by just sending some HTML to your users, go for it! If, however, the site needs a level of interaction powered by JavaScript, I heavily encourage diligent code-splitting and looking to patterns like PRPL to ensure you're not sending down so much code that the main thread stays blocked for seconds.
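As a rough sketch of what route-based code-splitting looks like (the view module paths and their `render()` export here are hypothetical), dynamic `import()` lets bundlers emit a separate chunk per route, so users only pay for the code a view actually needs:

```javascript
// Each dynamic import() becomes its own lazily fetched chunk in
// bundlers like webpack or Rollup. The view paths are hypothetical.
const routes = {
  '/': () => import('./views/home.js'),
  '/gallery': () => import('./views/gallery.js'),
};

async function navigate(path) {
  const view = await routes[path]();           // fetch the chunk on demand
  view.render(document.getElementById('app')); // assumes views export render()
}
```
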

 

Thanks for responding! PRPL is a new pattern to me, hopefully with more awareness we will be able to use it and techniques like it to get better performance.

 

What was the book writing process like? How did you balance something so long form with technologies that are constantly changing/improving? i.e. did you have to go back to the 'first chapter' at the end, and update anything?

 

The approach I take to writing books and articles is embracing "The Ugly First Draft". It forces you to get the key ideas for a draft out of your head and once you've got something on paper you can circle back and start to build on that base. I love this process because you get the short-term satisfaction of having "something" done but still have enough of an outline you can iterate on it.

With my first book, "Learning JavaScript Design Patterns", the first draft was written in about a week. It was pretty awful :) However, it really helped frame the key concepts I wanted the book to capture and gave me something I could share with friends and colleagues for their input. It took a year to shape my first ugly draft of that book into something that could be published.

On writing about technologies that are constantly changing - I think every writer struggles with this. My opinion is books are great for fundamental topics that are likely to still be valuable to readers many years into the future. Sometimes topics like patterns you would use with a JavaScript framework or how to use a particular third-party API might be better served as short-lived blog posts (with less of the editorial process blocking you). You're still spreading your knowledge out there but some mediums are better than others for technologies that change regularly.

This is especially true of the front-end :)

 

With my first book, "Learning JavaScript Design Patterns", the first draft was written in about a week.

Yikes, it took me nearly 9 months to put together the first draft of Build Reactive Websites with RxJS. What's your secret?

My ugly drafts are really, really ugly :)

It'll sound awful, but I have never intentionally written a book or long article. Often, there will be a topic I'm deeply invested in learning more about or writing about and I'll just try to consistently take time out every day to build on the draft.

With the first draft of the patterns book, I wanted to write an article about the topic so I started there and it just grew. I would stay up late and keep writing into the early hours of the morning each day during that week. The first draft wasn't very long - it may have been 60 pages of content.

However, the very early versions are not something I would have felt confident sharing with anyone. There were many parts with half-complete thoughts. It lacked a lot of structure. Many of these are things you have a better chance at getting right when spending 9-12 months on your first draft. I ended up spending that long on rewrites.

Apropos of books and long articles, thank you so much for Images.guide. It was illuminating and also very useful for making clients understand that re-inventing image resizing each time is usually not the best move :D

 

I'm sure you've encountered some real humdingers when trying to optimize slow pages. Got a favorite story about some ridiculous performance bug you've encountered?

 
 

Hmmmm. The worst optimized site I've encountered in my career was probably just a few weeks back :) This was a site with a number of verticals where the front-end teams for each vertical were given the autonomy to decide how they were going to ship their part of the site.

As it turns out, this didn't end well.

Rather than each "vertical" working collaboratively on the stack the site would use, they ended up with vaguely similar, yet different, stacks. From the time you entered the site to the time you checked out, you could easily load 6 different versions of React and Redux. Their JavaScript bundles were multiple MBs in size (a combination of utility library choices and not employing much code-splitting or vendor chunking). It was a disaster.

One thing we hope can change this is encouraging more teams to adopt performance budgets and stick closely to them. There's no way the web can compete on mobile if we're going to send down so much code that accomplishes so little.

Oh, other stories.

  • Ran into multiple sites shipping 80MB+ of images to their users for one page...on mobile
  • Ran into a site that was Service Worker caching 8GB of video... accidentally

There are so many ridiculous perf stories out there :)

OMG, six different versions of the same library is definitely the result of poor communication. I can't wait for an AI-powered browser opening alerts saying "please tell those developer fools that did this website to talk to each other" :D

The imaging thing is all very common.

I've seen galleries/grids of images rendered using the original images uploaded by the operator, which obviously were neither checked for size nor resized automatically.

Those stories sound like they're really easy to repeat though.

 

What are the first performance improvements that you look for when going to a web page?

 

The first performance improvement I check for is whether the site could ship less JavaScript while still providing most of its value to the end user. If you're sending down multiple megabytes of JS, that might be completely fine if your target audience is primarily on desktop, but if they're on mobile this can often dwarf the costs of other resources because it can take longer to process.

In general, I try to go through the following list and check off if the site could be doing better on one or more of them:

✂️ Send less JavaScript (code-splitting)
😴 Lazy-load non-critical resources
🗜 Compress diligently! (GZip, Brotli)
📦 Cache effectively (HTTP, Service Workers)
⚡️ Minify & optimize everything
🗼 Preresolve DNS for critical origins
💨 Preload critical resources
📲 Respect data plans
🌊 Stream HTML responses
📡 Make fewer HTTP requests
📰 Have a Web Font loading strategy
🛣 Route-based chunking
📒 Library sharding
📱 PRPL pattern
🌴 Tree-shaking (Webpack, RollUp)
🍽 Serve modern browsers ES2015 (babel-preset-env)
🏋️‍♀️ Scope hoisting (Webpack)
🔧 Don’t ship DEV code to PROD
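To make one of these items concrete, the lazy-loading point can be done in a few lines with `IntersectionObserver`. A sketch that assumes images are marked up with a `data-src` attribute:

```javascript
// Defer offscreen images: copy data-src into src only once an image
// nears the viewport. Assumes markup like <img data-src="photo.jpg">.
function lazyLoadImages(selector = 'img[data-src]') {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      entry.target.src = entry.target.dataset.src;
      observer.unobserve(entry.target); // each image only needs this once
    }
  }, { rootMargin: '200px' }); // begin fetching slightly before visible
  document.querySelectorAll(selector).forEach((img) => observer.observe(img));
}
```

Call `lazyLoadImages()` after the DOM is ready; everything above the fold can keep a normal `src` so first paint isn't delayed.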

 
 
 

Thank you Addy for sharing the checklist, enough points for my next talk :-)

 

Could you clarify what you mean by library sharding? Awesome list by the way, thank you!

 

Hey Lauro!

  1. Start with a short bullet-list of the main points to convey.

  2. Check if anyone has already written an article on this topic. Is it recent? Comprehensive?

  3. If there's still value in writing, use the bullets as headings and try to write out a few paragraphs around each of them. Link to related articles so folks can dive in deeper to the topic if they like.

  4. Share an early draft with a friend or colleague. It's easy to spend weeks on a write-up only to realize too late that it's not easy for others to digest. Getting feedback early on can help validate where you're at or give you a chance to quickly course correct.

  5. Iterate. If you find you're writing something long, make it very easy for casual readers to still get the main points from a quick read. This might mean bolding certain lines or using a tl;dr at the top of the article.

  6. Publish.

  7. Relax. Hopefully :)

 

This is from way back, but I find your origin story quite interesting.

I guess you've always had perf on the brain? 😜

 

And a follow-up question: is the work you did on your "Xwebs megabrowser" what paved the way for all browsers to start opening multiple HTTP connections per domain to load a web page?

 

Haha. Perf always matters :)

For some back-story, when I was growing up in rural Ireland, dial-up internet was pervasive. We spent years on 28.8kbps modems before switching to ISDN, but it was an even longer time before fast cable internet became the norm. There were many times when it could take 2-3 days just to download a music video. Crazy, right?

When it was so easy for a family member to pick up a phone and drop your internet connection, you learned to rely on download managers quite heavily for resuming your network connections.

One idea download managers had was this notion of "chunking" - rather than creating one HTTP connection to a server, what if you created 5 and requested different ranges from the server in parallel? If you were lucky (which seldom happened), you would have a constant speed for just that one connection, but it was often the case that "chunking" led to your file being downloaded just that little bit faster.

I wanted to experiment with applying this notion of "chunking" to web browsers. If you're fetching an HTML document or an image that is particularly large, would chunking make a difference? As it turns out, there were cases where it could help, but with a high level of variance. Not all servers want you to create a large number of connections for each resource, but the idea made for a fun science project when I was young and learning :)
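That "chunking" idea maps directly onto HTTP Range requests. A minimal sketch of the approach (error handling omitted; it assumes the server honors `Range` and reports `Content-Length`):

```javascript
// Split one download into N parallel HTTP Range requests.
function byteRanges(totalBytes, connections) {
  const chunk = Math.ceil(totalBytes / connections);
  const ranges = [];
  for (let start = 0; start < totalBytes; start += chunk) {
    ranges.push(`bytes=${start}-${Math.min(start + chunk - 1, totalBytes - 1)}`);
  }
  return ranges;
}

async function chunkedDownload(url, connections = 5) {
  const head = await fetch(url, { method: 'HEAD' });
  const total = Number(head.headers.get('Content-Length'));
  const parts = await Promise.all(
    byteRanges(total, connections).map((range) =>
      fetch(url, { headers: { Range: range } }).then((res) => res.arrayBuffer())
    )
  );
  return new Blob(parts); // ranges are requested in order, so reassembly is trivial
}
```

As the answer notes, whether this actually wins depends heavily on the server and the network; many servers cap per-client connections or ignore `Range` entirely.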

Back to your question about this paving the way for browsers opening multiple HTTP connections per domain: I think if anything, it was happenstance that I was looking at related ideas. Network engineers working on browsers are far more intelligent than I ever could have been at that age, and their research into the benefits of making multiple connections to domains is something I credit to them alone :)

Thanks for taking the time to reply Addy. Keep up the great work and keep that Canadian @_developit in check. I'm not sure how much he knows about perf 🐢. Better check his bundle sizes. 😉

 

Hey, Addy! 👋

Nice to have you here. Big fan of your work. I'll get right to the questions:

  1. 🤔 What's the difference between a Dev Programs Engineer and a Dev Advocate, and why did you choose to be the former?
  2. 🕖 What does your day-to-day look like at Google? Also, what percentage of travel do you do as a DPE?
  3. 🆚 VS Code is growing like crazy (I'm also building VSCode.pro). What are your thoughts on VS Code, and what are your default IDE and terminal?
  4. 🤣 Would you rather fight 50 small ducks or one giant 50 feet duck and why?

Looking forward! ✌️

 

Hey Ahmad! I'll try to answer as best I can here.

  1. Google's developer relation teams have historically had two types of roles: the Developer Programs Engineer (DPE) and the Developer Advocate (DA).

In earlier years, this allowed us to draw a distinction between developers who leaned more heavily on the engineering side (DPEs) and worked on tools, libraries and samples. You could think of this as aligned with a software engineering position. We then had DA roles, where there was a little more emphasis on creating scalable outreach (writing articles, giving talks, roadshows).

Over time, these roles have blended quite a lot. It's not uncommon to see DAs who are extremely heavy on the engineering side and even write specs as part of TC39 and similarly, not uncommon to find DPEs who enjoy speaking and writing. I think ultimately the distinction matters less these days than it used to.

  2. The answer to this question has changed a lot in the last six years :) There was one year when, my wife tells me, I was traveling 30% of the time, which is crazy. That's thankfully gone down significantly over time and I travel pretty irregularly these days.

Day to day: When I switched over to being an engineering manager, a lot more of my time was spent in coordination meetings with other Google teams, leads and in 1:1s with my reports.

A typical day at the office starts at 7:45AM. I'll usually try to catch up quickly on what's new in the JS community before diving into meetings for 70% of the day. I reserve 30% for working on design docs, coding or writing articles (if it's not too busy). I wrap up sometime between 6 and 7 and head back home to hang out with the wife and kids.

  3. I love VS Code. I use it every day. My favorite theme is probably Seti (which I use across my editor, iTerm and Hyper.app).

  4. 50 small ducks. I like my problems bite-sized ;)

 
 

meetings for 70% of the day.

Sad (for me). Is that where this road leads?

It's not all that bad :)

Moving to management can feel like a large change. You give up some of the things you enjoy, like coding as much. Instead of building libraries, you're building teams and helping others build up their careers.

There's only so many hours in the day so you're going to hit a limit on how much you can "scale" yourself. When you're trying to help others develop their skills, it gives you a chance to do more of this scaling. You get to see how they would tackle problems (often in new and better ways than you would).

That said, it's definitely not for everyone. I've been lucky that the rest of my time still gives me an opportunity to write or code... sometimes :)

 

Hey Addy, thanks for this!

What's your favorite programming language besides JavaScript?

 

I recently enjoyed digging back into Rust and loved it. It has a pretty expressive type system that lets you convey a lot about the problem you're working on. Support for macros and generics are super nice. I also enjoyed using Cargo.

My first interaction with Rust was in this excellent tutorial by Matt Brubeck back in 2014 called "Let's build a browser engine!" (I hope someone tries to update it!). Perhaps a good future post for someone on dev.to? ;)

 
 

I think what we all want is really fast first-party content delivering great experiences to users.

With my user-hat on, an unfortunate reality is that most sites on the web still provide users a slow, bloated experience that can frustrate them on mobile. If a team has the business buy-in to work on web performance and optimize the experience themselves, I'm more than happy for them to do so. We need as many folks waving the #perfmatters flag as we can get :)

That said, staying on top of performance best practices is not something every engineering team has the time to do. Especially in the publishing space, this is where I see AMP providing the most value. I'm pretty excited about their commitments to the Web Packaging specification for addressing some of the valid critique AMP's had with respect to URLs: amphtml.wordpress.com/2018/01/09/i....

I'm also very keen for us to keep exploring what is possible with the Search announcement that page speed will be used as a ranking signal irrespective of the path you take to get there.

 

This evolution for AMP definitely has me more interested in the project. I've been standing on the sideline hoping some of these URL issues could be resolved.

 

Hello Addy, so nice to have you here!

We often talk about JavaScript that has to be parsed and executed, that should be split and lazy loaded, and similarly for CSS. But what about HTML?

Would you say that DOM nodes are heavy in terms of memory occupation? Even if they're not rendered? Talking about 6k-14k nodes here, including text and comment nodes.
Are element nodes heavier in terms of memory and CPU hogging?

How about HTML parsing? Is a big HTML document (like, 400k unzipped) a problem for mobile devices?

Asking for, huh, a friend that has to struggle with a client that doesn't think reducing the HTML to the bare minimum is meaningful for performance.

 

Everything has a cost and too many DOM nodes = unhappiness.

Back in 2016, the Chrome team observed that most sites we were profiling had 5000+ DOM nodes. Ideally, your page should stick closer to 1500 nodes for mobile. At that time, Chrome was optimized for a rough maximum of 32-element-deep documents. Chrome definitely handles deeper documents a lot better these days, but you're in the sweet spot if you're able to stay within those constraints.

With respect to HTML parsing costs, I always lean towards shipping the greatest value to users in the fewest bytes possible. That said, I would probably do some auditing on the costs of sending down 400K of unzipped HTML (see if there's a real issue with CPU and memory usage on your target devices) and, if there is, show your client the difference shipping less makes.
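A quick way to see where a page stands against those numbers is a console snippet like this rough sketch (it counts element nodes only, not text or comment nodes):

```javascript
// Count element nodes and the deepest nesting level under a root.
// Paste into the DevTools console; compare against the rough targets
// of ~1500 nodes and ~32 levels of depth mentioned above.
function auditDom(root = document.documentElement) {
  let nodes = 0;
  let maxDepth = 0;
  (function walk(el, depth) {
    nodes += 1;
    if (depth > maxDepth) maxDepth = depth;
    for (const child of el.children) walk(child, depth + 1);
  })(root, 0);
  return { nodes, maxDepth };
}
```
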

 

How do you make SPAs discoverable to Search Engines? Like is that even possible?

Cuz if it's a static site the crawler can index the .html file URL, but in SPAs built with frameworks like React and Angular, the HTML is generated on the client side, so... how does SEO work for SPAs?

I'm still learning about SPAs so if something written above doesn't make sense, I apologize 😅

 

Googlebot has supported crawling JavaScript for some time with a few caveats. When evaluating if search engines can render and index content correctly, I recommend:

  1. Checking the "Fetch as Google" Tool (support.google.com/webmasters/answ...). This will let you test how we crawl and render URLs on your site. For SPAs in particular, it's useful to sanity check everything is rendering as expected.

  2. Check if links are working as expected. Are there any problems reported in the JavaScript error console? The new Search Console also supports logging JS errors there, which is useful for understanding whether there's a reason we failed to correctly render your SPA. In Lighthouse, we have a set of SEO audits that can help flag common issues SPAs and sites run into.

  3. Double check what polyfills you are using. Googlebot is using Chrome 41, which is far from the latest version. There are certain Web Platform APIs that it doesn't fully support and JavaScript features which may need to be feature-detected and polyfilled. This is a very common problem SPAs run into. If you're comfortable with Node tooling, I recommend looking at github.com/GoogleChromeLabs/puppet..., a Puppeteer script that helps identify if your page is using any features not supported by Chrome 41.
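To illustrate the polyfill point, here's a small sketch that checks a sample of APIs Chrome 41 genuinely lacks (`fetch` shipped in Chrome 42, `Object.assign` in 45, `Array.prototype.includes` in 47), so you can decide whether to load a polyfill bundle before booting your app:

```javascript
// Report which of a few sample APIs are missing from a given global
// object, so a polyfill bundle can be loaded conditionally.
function missingFeatures(g) {
  const checks = {
    fetch: typeof g.fetch === 'function',                    // Chrome 42+
    'Object.assign': typeof Object.assign === 'function',    // Chrome 45+
    'Array.prototype.includes':
      typeof Array.prototype.includes === 'function',        // Chrome 47+
  };
  return Object.keys(checks).filter((name) => !checks[name]);
}

// In the page: if missingFeatures(window).length > 0, inject a <script>
// tag pointing at your polyfill bundle before starting the app.
```

This only samples a few features; the Puppeteer script linked above does a far more thorough job of flagging what Chrome 41 can't handle.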

 

Hi Addy!

I'm curious about your thoughts on Brave Browser with regards to speed / performance improvements.

From my understanding, they're aiming to completely change the advertising model of the web, and their strategy involves blocking tracking scripts and ads. It seems like that's a big part of their speed advantage that they tout, and that blocking those two things are something Google could never do, as it's essential to their core revenue stream.

I still use Chrome because all of my stuff (bookmarks, passwords, etc.) is tied to it, but the few times I've tried Brave, the thing really does seem to fly.

I'm curious about your thoughts on Brave's approach, or about tracking scripts and ads in general.

Thanks!

 

I often switch between Chrome and Brave on Android and have respect for their work.

Users can't be blamed for trying any solutions that block third-party scripts as they can have a non-zero impact on page load performance. We also see many cases where folks are shipping far too much first-party script and sites need to take responsibility for auditing all of the code (1P, 3P) they're sending down.

Stepping back to the topic of tracking scripts and ads - my personal opinion is we need to explore models for the keeping sites we love monetarily sustainable while ensuring user-experience and user-choice regarding data sharing is respected as much as possible. While I can't comment as much on Google's own strategy here, as a user, I'd be happy if we shifted to sites that loaded fast and were more respectful of our privacy and data-sharing preferences.

 
 

I'd be interested to hear your thoughts on WebAssembly.

 

I'm hiring for a WebAssembly Developer Advocate at the moment so I definitely believe it has a future :)

I'm excited about the potential WASM will unlock for types of applications that were heavily bound to the compute of JavaScript. I think it's going to be huge for certain classes of games, accelerating how quickly well known desktop applications and libraries can be ported to the web (I was playing around with a Vim port in WASM just last night!) and potentially for data-science. At the same time, I don't think it's going to displace the use-cases for JavaScript directly. JS continues to see strong adoption for UI development and I don't see this changing anytime soon.

 

Hey Addy, Thanks so much for doing this! How much do different teams at Google coordinate?

 

Over on Chrome, we try our best to stay in touch with Google teams that are working on shipping experiences for the web as well as folks building for other platforms like Android or iOS. Sometimes this happens in the form of monthly check-ins to share learnings (there's often a lot we can learn from one another) and other times it's just over mailing lists.

That said, Google is a very large company, and with this comes the challenge of always staying on top of who is working on what. We still have a long way to go in improving our communication across all teams. We do want to keep making progress here :)

 

Thank you for doing this!
RAM utilization is a constant topic of conversation in the dev community. You have better insight into the needs of Chrome instances and Electron applications than most people. Is reducing RAM utilization currently a high priority for your team, and do you think that changes in Chrome could noticeably improve RAM concerns in Electron applications?

 

Memory (as you know) is a shared resource. Any site can use more of it to give their users a better experience but often there's little done to monitor just how much memory consumption/leaks individual sites or apps might have. When everyone is flying a little blindly here (self included) it's easy for sites or individual Chromium instances/Electron apps to cause memory strain and for this to negatively impact users and the experience they have with apps on any system.

We're in a period of researching the impact of memory usage in Chrome at the moment. If the Electron community has more insight (or traces) they can share with us about heavy memory consumption concerns, we would love to take a deeper look. Are there examples/data you're aware of?

I definitely acknowledge that high memory consumption matters where it can negatively impact users. Today that can manifest in a few ways when using Chrome directly on lower-end devices:

  • Foreground out of memory exceptions
  • Need to reload discarded tabs
  • Need to reload other applications that might be kicked out of memory
  • Battery drain

We're looking a little more heavily at the memory consumption story on mobile right now, but as I said any data regarding Electron mem usage would help. I know this has anecdotally been a problem I've heard reported before.

 

Thanks Addy! Glad to hear it's an area of focus for mobile; I feel like improvements there could reasonably lead to more insight into utilization on desktop. I don't have hard data, I was mainly referring to what I see as the zeitgeist (warranted or not), and my own personal experience. Appreciate your thoughts!

FWIW, it looks like a user here ascertained that Chrome actually uses less memory than most other available browsers, so kudos!

 

Hi Addy!

One of the best static site generators I've been using is Gatsby. Do you recommend using these kinds of tools that make performance configuration easy? Would you recommend another tool like Gatsby?

 

I'm a huge fan of Gatsby. It's easily in my top things to have come out of the JS community in the last few years.

Kyle Mathews and the team care deeply about performance and I love that they're attempting to give folks the DX of React while still trying to get pages to load and be interactive as quickly as possible. Gatsby also has support for PWAs out of the box. I enjoy their approach to eagerly prefetching data for pages a user may need based on different heuristics. This means there's a good chance a page is already in the browser cache by the time a user visits it.

We recently worked with Gatsby on Guess.js - an effort to use your site's analytics and some machine learning to give sites support for intelligent prefetching. These shifts towards using data to drive our performance optimizations are an area I think has a lot of promise.

I also like the direction projects like Next.js have been taking. In general, anything we can do to try prescribing better defaults for modern best practices around libraries like React at a framework/static-site generation level are a win in my books.
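The prefetch mechanism itself is simple; the hard part is choosing the URLs (Guess.js derives them from analytics, whereas the usage in this sketch would be hand-picked and is purely hypothetical):

```javascript
// Ask the browser to fetch a likely next page at idle priority by
// injecting a <link rel="prefetch"> tag into the document head.
function prefetchPage(url) {
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  document.head.appendChild(link);
}

// e.g. prefetchPage('/pricing/') once your data suggests it's the most
// probable next navigation from the current page.
```

Because prefetches happen at idle priority, a wrong guess mostly costs a little bandwidth, while a right one means the next page is already in the browser cache.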

 

Do you think that to really use the full power of JavaScript and understand it well, it is better to understand the browser? And if so, what books would you recommend? What would be your roadmap for a programmer to understand a language such as JavaScript at a higher level, i.e. in order to adapt the technology to one's needs?

A good understanding of browser fundamentals and an innate curiosity to learn more never hurt :) That said, I would focus on learning by doing. Write code, build apps, make mistakes and learn from them. Building up your knowledge of JavaScript, CSS, the DOM and eventually how browsers work is valuable.

html5rocks.com/en/tutorials/intern... is a read that is out of date but still covers a lot of the core ideas of how browsers work. The front-end handbook over at github.com/FrontendMasters/front-e... attempts to link out to a few learning resources for diving deeper into each area you may eventually require some expertise in.

On the books front, I tend to be partial to Eloquent JavaScript (a third edition just came out), Effective JavaScript by Dave Herman and any of the ES2015 books by Axel Rauschmayer or Nicholas Zakas.

Thanks Addy! My next question would be: if you were to build an enterprise app like Facebook today with Angular, what would your roadmap be (any pointers, i.e. a book)? And do you think using Angular with, say, Firebase is recommended?

Can you please point out some books or links on modern application architecture designs?

 

What's it like being a coding celebrity?

Did you think you'd have a massive following like this before it happened?

How did you get to where you are in this regard? I'm super curious about your mindset along the way.

 

I often don't feel like I deserve any of the attention. Some of the most exceptional coders in the world don't get as much acknowledgement of their work as they should. I wish that we could change this and to the extent platforms like dev.to and social mentions enable a path to this, I'm hopeful more of them will be considered coding celebrities in the future :)

What's it like being a coding celebrity?

What's it like.. you learn the importance of being humble. You learn to be careful with what you say and how it can be interpreted when you make a strong statement. When people look up to you (in any situation), you have a responsibility to try giving them a measured response where you've considered the best data and facts available to you. It's far too easy to spend 15s thinking about something and just posting it out into the world (think before you speak).

There are tweets and articles about topics that I would love to post, but don't because I'd prefer to take my time to check on the data and consult with others in the community so I can be confident if I suggest something is a best practice, that I truly believe it is. It's very possible I overthink and over-analyze so take this with a grain of salt :)

Did you think you'd have a massive following like this before it happened?

I didn't think I would be fortunate enough to get the following I have. I just constantly hope I'm giving folks some value vs. throwing out nonsense :)

How did you get to where you are in this regard? I'm super curious about your mindset along the way.

I get asked this question a lot and the answer is: by trying to continue delivering value to the community as often as I can. I definitely don't do this every day or every week, but I think we all struggle to stay on top of things on the web. It's challenging knowing what the latest best practices, tools and techniques are. To the extent that we can distill some of this down into a bite-sized form for folks (tweets etc) that they feel comfortable digesting, maybe that's useful enough.

I will say the journey itself to this point, although hard, has been fun and educationally rewarding.

 

I help medium-sized businesses with mobile UX (webperf / UI) at a big search ads company.

I feel that a lot of businesses are ~5 years behind in implementing new tech (REST, JavaScript frameworks, PWAs).

YouTube is full of training material, but in the real world devs don't have time to learn "exciting tech".

E.g.: companies are stuck with old Magento 1.x, afraid to touch anything (because there are no tests)...

What do you think could change or improve this?

 

I've worked at companies where change aversion made it difficult to migrate legacy codebases onto anything more modern or efficient. It's not uncommon to hear stakeholders use the "if it's not broken, why fix it?" rationale. They often don't have your insight into the maintainability or performance issues some of these decisions can cause over time.

One approach I've increasingly found teams use is pitching for an A/B test - e.g "Let us try to migrate over a smaller part of the site. If we can show you it will improve business/conversion metrics by at least X, let us try it out for other parts of the site". This reduces the cost of the exploration in the eyes of the business ("they're just asking to do this for one page or section...") and gives you a target to justify continued investment.

Where it's not quite as straightforward as demonstrating that a change will lead to improvements in business metrics, showing data on how many engineering hours would be saved by switching to a more modern Magento versus the maintenance cost of the old one, what other opportunities doing so unlocks, etc. might also convince the business that it's worthwhile letting you explore it.

 

Are there any specific thresholds in terms of sending a certain number of KBs or MBs where the user experience starts to suffer most? And would that number be considerably different for areas well-served by fast Internet vs underserved areas?

I operate under "less is better", paying some attention to possible packet-specific thresholds, but I can't say I'm all that certain about any of my approaches, and any insight in this regard would be awesome.

 

I usually try to walk back from my goals when it comes to performance budgets. For example:

"Users on average phones can load and interact with this site in 5s on a 3G or better network".

If we look at the sequence of communication between a browser and a server, a few hundred milliseconds (400-600ms) will already be used up by network overhead and round-trips: a DNS lookup to resolve the hostname (e.g. google.com) to an IP, a round-trip for the TCP handshake and then a full round-trip to send the HTTP request.

This leaves us with ~4 seconds to transfer data while still keeping the page interactive. On a 3G network, you can probably at best get away with sending 130-170KB of resources while meeting your targets. If your users are on 4G/LTE, you may be able to send more. The variability of mobile networks means that even if you're on a high-end phone over LTE (e.g. iPhone X), your network speeds can effectively be slower than you'd like. This is why developing for a "poorer" baseline is great: it means that even under worse conditions you're still able to deliver a good user experience.
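The back-of-the-envelope math can be sketched as follows. Every number here is an illustrative assumption (including a rough ~1s reserve for parse/compile on an average phone), not a measurement:

```javascript
// Working back from a "load and interact in 5s on 3G" goal to a resource budget.
// All figures below are assumptions for illustration only.
const targetMs = 5000;           // goal: interactive in 5 seconds
const networkOverheadMs = 600;   // DNS lookup + TCP handshake + request round-trip
const cpuBudgetMs = 1000;        // assumed reserve for parse/compile/execute on an average phone
const threeGThroughputKBps = 50; // assumed effective 3G throughput (~400kbps)

const transferMs = targetMs - networkOverheadMs - cpuBudgetMs;
const resourceBudgetKB = Math.round((transferMs / 1000) * threeGThroughputKBps);
console.log(resourceBudgetKB); // → 170
```

Nudge the assumed overheads up or the throughput down and you land nearer the low end of that 130-170KB range, which is why "less is better" is the safer default.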

On desktop, it's a different ball-game. Your users are likely connected to more reliable WiFi/cable on a CPU-beefy machine. You can probably get away with shipping MBs of code to your users there. That said, many of us have gone through the pain of trying to use WiFi from a coffee shop, at a conference or on a plane. When your effective network connection speed is poor, you can start to feel the pain of those MBs even on a desktop machine.

"Less is better" is always a good mantra to hold regardless of device and network type :)

 

Do you think you can fix the autofill bug that's going on in Chrome?

github.com/vuetifyjs/vuetify/issue...

The Vuetify folks say it's a bug in Chrome. They even mentioned:

To be fair, even google's own login form has the same problem. I believe that was the justification for closing the other issues as wontfix.

 

The Chromium bug listed in the issue appears to link back to a private internal bug I'm reading up on now. I'll try to chase up the sub-team working on autofill/input and see if there's anything we can do here.

 
 

The future platform of computing (5-10 years) is augmented reality glasses. How do we prevent tech giants from building more walled gardens (app stores, proprietary SDKs, etc.)? Do you think the WebXR API can compete with native implementations?

 

Hello Addy! Do you have all the mainstream browsers installed, and how often do you use them to stay ahead of the game?

 

I usually try to have the latest + bleeding-edge versions of most browsers installed for testing. On my Mac at the moment there's:

  1. Chrome: stable, beta, Canary and Chromium (tip-of-tree)
  2. Firefox: Quantum, Firefox Developer Edition/Nightly
  3. Safari: stable, a version of Safari Tech Preview that's often a few weeks old
  4. On my Android phone: Chrome, Firefox, Brave, Opera

For Edge, I'll usually have a VM setup for testing or take out a Surface from my drawer to test the current stable version.

I otherwise heavily rely on services like BrowserStack or WebPageTest to validate that my projects perform and render correctly on real devices.

 

Are there any important considerations or gotchas when shipping an ES2015 bundle to modern browsers and a transpiled bundle to legacy browsers? Is this something you recommend?

 

One caveat off the top of my head: check that the bundle being served to search crawlers can be interpreted correctly. For Google Search, our web rendering service is based on Chrome 41.

I would just check to ensure the legacy (non-ES2015) bundle doesn't also contain anything that requires additional polyfills. If it does, look at ways you can feature detect and serve that additional JS as needed.
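A minimal sketch of that feature-detect-and-serve idea. The features checked and the polyfill paths below are hypothetical, not any specific library's API:

```javascript
// Sketch: decide which polyfill scripts a browser still needs, so the extra
// JS is only served when a feature is actually missing.
// The feature checks and paths are illustrative assumptions.
function polyfillsNeeded(globals) {
  const needed = [];
  if (typeof globals.Promise === 'undefined') needed.push('/polyfills/promise.js');
  if (typeof globals.fetch === 'undefined') needed.push('/polyfills/fetch.js');
  return needed;
}

// In a browser you would call polyfillsNeeded(window) and inject a <script>
// tag per returned path before booting the legacy bundle.
console.log(polyfillsNeeded({ Promise: Promise, fetch: function () {} })); // → []
```

A modern browser with both features present gets no extra requests; only the legacy path pays the polyfill cost.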

 

Hadn’t thought of crawlers! Great shout, thank you.

 

Hello Addy,
Thank you for taking the time to respond to all these questions. I am a big fan of your work and have always wanted to know more about your workflow: how do you manage your time, and how do you split it between your day job, family and open source?

 

Hey Fady! Thanks! These are good questions.

Here's what I try to do to stay productive:

  1. Try not to overestimate the things you can get done in a day. I usually have a list of goals I really want to complete. If I do any more than that, I consider it a bonus. Evernote has been very useful for planning out the things I need to get done.

  2. Breaks are essential. I rely on short breaks heavily, especially when context-switching between projects. This isn’t always possible if you have a day of back-to-back meetings, but where you can take a break, it does wonders to clear the mind.

  3. Never put too many large tasks on your plate. Break these down as much as you can so it’s clear where you’re making progress.

  4. Try to get tasks requiring deep-thought or creativity out of the way early in the day when your mind is still fresh. I find it much harder to make progress late in the afternoons as my attention is already starting to wind down. I try to save the afternoons for email.

  5. Sometimes it helps to get the things you don’t want to do out of the way first (referring more to non-coding/creative tasks here). This can be a nice forcing function because it encourages you to do it faster so you can get onto things you enjoy doing more next.

  6. Wear headphones when you need some quiet/focused time and aren’t able to get a dedicated space to work in. It lets folks around you know you’re concentrated on something and they’re more likely to drop you an email if they need something instead.

Work/life balance is incredibly important and is something I'm still working on.

I used to have a really poor work/life balance and it was my own fault. There were many years where I would work from 8AM-7 or 8PM and rarely got to spend time with the wife and kids by the time I got back. From the outside, it looked like I was very productive, but it had a cost. It wasn’t healthy. I’d do open-source at the weekends or right before work and I wish I’d balanced that time better looking back.

These days, I aim to leave work a few hours earlier than that and try to detach as much as I can when I'm not at the office. Be present and be there when you're spending time with the family is my advice. I try to fit in open-source before my meetings kick off, while the wife is working on her own projects, or try to set aside time on Fridays when work is less crazy to do what I can.

It's manageable - you might not get everything done quickly, but you'll get it done in a more balanced way. I'm content with that now :)

 

Do you see any paradigms, attitudes, or practices in other dev communities (systems, embedded, data) that you would like to see more widely adopted in the web/JS communities?

 

I come from a C++ background and think the web has an immense amount that we can learn from native platforms like Android and iOS. In particular, patterns for working under the constraints of slower mobile devices and networks that could translate back to the web.

One of my favorite recent reads was a book called "High Performance Android Apps" by Doug Sillars. It's amazing how much is in there (not stressing the CPU, being mindful of how much code you're sending over the wire, conditionally switching what you send based on network conditions) that can be directly applied to the web with minimal effort. Exciting stuff.

We've started to do these learning exercises a little more in Chrome. Recently, we started looking at a virtual-scroller element for enabling folks to build efficient infinite lists. This led us to look at UITableView (on iOS), which recycles views and has an elegant API for row templates. Where possible, rather than re-inventing the wheel, I would love for us to absorb as many learnings from other platforms as we can to better inform our direction on some projects.

 

Hi Addy,

thanks a lot for this AMA, you're definitely a developer hero; it's unreal how many times I've ended up on articles written or co-written by you :-D

My question is not technical, more philosophical.

What do you envision for the future of PWAs?

Also, Chrome seems at the forefront of web dev and PWAs, but for obvious reasons it could be dangerous to rely too much on Chrome as the target browser. Are you hopeful the other major browsers (mainly Safari, because its user base on mobile is massive) will catch up?

After years of using Chrome on desktop and Android phone now my dev and "normal" browser is Firefox on the beta channel and it seems fantastic, so there's hope there :-)

 

You're very kind!

PWAs enjoy a few advantages today. They're zero-install (users can choose to keep them, but installing them is something you opt to do after you're using them). They start pretty instantly and don't require a large binary commitment to start using them. You can use PWAs without needing to engage with an App Store; you just go directly to the experience. You can update them on the go: you just use the PWA and get the latest version.

That said, there are still large areas for improvement. PWAs are web apps and if the web doesn't have an API for doing what you need, you may need to look at other platforms for options. I'm hopeful in the future we'll see continued evolution of the device capabilities available to PWAs. I would also like to keep exploring what we can do to make it easier to build smooth animations and transitions on the web.

We've seen progress made in mobile Safari over the last year. Safari now supports Service Workers and has some Web App Manifest support. There remain some platform limitations there, such as the lack of Web Push notifications. Thomas Steiner on our team has been tracking some outstanding bugs in their PWA implementation that could use some love: twitter.com/tomayac/status/1015689.... I'm hopeful they'll catch up.

 

PWAs enjoy a few advantages today. They're zero-install (users can choose to keep them, but installing them is something you opt to do after you're using them). They start pretty instantly and don't require a large binary commitment to start using them. You can use PWAs without needing to engage with an App Store; you just go directly to the experience. You can update them on the go: you just use the PWA and get the latest version.

Yeah, that's what I love about them. I used Twitter's PWA a lot and I did not miss the official app at all.

We've seen progress made in mobile Safari over the last year. Safari now supports Service Workers and has some Web App manifest support.

I remember that; it was really good news.

I'm hopeful they'll catch up.

Crossing fingers! Full support would be a game-changer for PWAs. I also hope things like WebUSB get more love from browsers.

Browser developers have a lot on their plates :)

 

What do you think about the growing popularity of "Micro Frontends"? The idea seems to be bringing microservices architecture to the frontend. Do you think it can be worth it? What about the implications for performance? Can it be done without destroying the user experience?

 

Hey Addy!

Are you aware of performance issues related to security updates to Chrome, such as the recently launched Site Isolation?
And on a similar note, are there performance-enhancing features which are not achievable due to security concerns?

Thanks for the good work, I've learned a ton about website performance thanks to Lighthouse!

 

Hi Addy!
Thank you for your time. I have a very simple/difficult question.

Devs I know are very competitive, and always try to know more and be better day after day. The final goal for a lot of us would be to work at a major company like Google. In your opinion, what skills are required to achieve that? Hard work, luck, intelligence?

 

I would say..

  • A passion for learning and a curiosity for problem-solving
  • An understanding of computer science fundamentals. How do systems work?
  • For front-end roles, an understanding of JavaScript, CSS and the DOM. Having some knowledge of how browsers and networks work is always useful.
  • The ability to write good code, absorb feedback and help others improve. Collaboration and communication skills are important.
 

What's your opinion on the future of web components, with React and other SPA frameworks like Angular, Vue, etc. occupying the scene and the persistent vendor wars over deciding the spec for web components (e.g. Firefox refusing to incorporate HTML Imports)?

 

In a working environment where you may have multiple disparate teams working on the same product, how do you trade off team autonomy with collective outcomes? While sharing and reusing code seems to be a good goal, is it sometimes better to accept redundancy in the way things are built?

 

Hi Addy!
Thanks for this AMA.

We are big fans here of a lot of your work ;)

1) How do you see the future of developing multi-device responsive apps?
2) How would you imagine approaches like the one in the following video being used in your workflow when building a SPA from scratch?

yotako.io/freehand-canvas

Thank you so much!

 

Hi Addy,

I am curious about your answers to the following questions.

-- Why can't the Chrome browser include the most popular versioned libraries/frameworks (React, Angular, etc.) within the browser itself? Then developers could write code that accesses those natively for better performance. Can you imagine how much bandwidth millions of websites would save?

-- Is there any way to compile/bundle third-party libraries with only the functions that are used, instead of as a whole? Most web/mobile apps don't use those libraries fully.

Thanks
Nizam

 

Why can't the Chrome browser include the most popular versioned libraries/frameworks (React, Angular, etc.) within the browser itself? Then developers could write code that accesses those natively for better performance. Can you imagine how much bandwidth millions of websites would save?

We've debated this a few times :)

The main challenge here is versioning. At the end of the day, the reason bundling popular JavaScript libraries in the browser wouldn't work in practice is that individual sites would want to pin to specific versions or serve from their own infrastructure. They might be convinced they have to use a "trusted" version or that a newer/bug-fix version could break their site.

Attempting to bundle multiple versions of each library with Chrome in each release could also be problematic for other reasons (e.g. constantly keeping up to date with vulnerability fixes). In theory, HTTP caching already delivers much of the benefit of shipping libraries in the browser, albeit only on repeat loads once the library is in the user's cache.

We're exploring a more slimmed down variation of this idea in a project called Layered APIs (github.com/drufball/layered-apis). The basic idea is if we can define and ship primitives implemented in JavaScript within browsers (and formal polyfills where needed), third-party libraries would be able to use these as dependencies without needing to ship quite as much code down. It's still in the very early phases though.

Is there any way to compile/bundle third-party libraries with only the functions that are used, instead of as a whole? Most web/mobile apps don't use those libraries fully.

An approach I recently saw a PWA in Japan (Nikkei) take is self-hosting third-party JavaScript and running it through webpack to strip out as much unnecessary code as possible. This isn't always going to be feasible without the complete source though.

 

Hey Addy! Can you tell me a book, podcast, blog or vlog that you use to get focused, or that the Google team recommends? Thanks!

Hugs & Husky love 🐶

 

Hey, Brenda! I highly recommend the book "Deep Work" by Cal Newport. It was first introduced to me by Rob Dodson and fundamentally changed the way I approach staying focused over the last few years.

The core idea is that if you want to produce work of high quality, you need to find a way to maintain focus on a single task for a long period of time without any distraction. It's a narrow focus that allows you to produce some of your best work.

When I want to work on a coding problem, most of my time is spent thinking rather than writing. To stay focused, I’ll usually do this away from my desk as it’s very easy to get distracted and involved when there are interesting conversations happening around you. I’ll try to find a quiet space or a whiteboard where I can try to look at the bigger picture and design the solution away from just the code itself. This isn't always possible, so putting on headphones also works :)

This has been a large change to my workflow. We often rush (out of passion) to solve problems in code, firing up our IDEs right away, but there are times when this can slow us down too. Working on a problem/doc/whatever you need to in a quiet space also helps a lot with staying focused and getting things done.

While I'm not at the office, my wife has otherwise been a big help keeping me focused when I really need to get things done :)

 

Thanks for doing this Addy,

I do have a few questions :

1)
Regarding the Lighthouse audits and next-gen image optimization: the audit says we should use JPEG 2000, JPEG XR or WebP, but none of these is known as a cross-browser solution. Apart from WebP, most browsers don't support these formats at all.

I am thinking about using the User-Agent header to serve the right format to each browser, but that isn't known as a sustainable solution. Do you have any other suggestions?

2)
Funny thing: on PageSpeed Insights, the audit asks me to cache google-analytics.com/analytics.js. Should I host this locally as a workaround?

3)
Also, I found several differences between PageSpeed Insights and Lighthouse.

As an example:

I'm working on a preliminary study phase for a client's website project. I'm trying to analyse scores and blocking factors to set my technical recommendations in order to improve their SEO.

As part of this work, I used both of these tools, but PageSpeed Insights shows 88/100 on mobile while Lighthouse shows a 27/100 score. I know the audits are different, but it's hard to explain this gap technically and commercially.

This website is not mobile-first; its weight is around 3.2MB on mobile, so no mobile optimizations exist at the moment.
Between PageSpeed Insights and Lighthouse, which one do you feel best reflects real performance?

Thanks
Julien

 

Hello, this is a technical question. I've tried to use Chromeless to scrape some data from Google search results, specifically from the "Special Box" that appears above the results, usually when searching for something like "define dictionary". But that box doesn't appear when I search using Chromeless. Why is this happening? Does Chrome distinguish automated searches from normal ones?

 

Hey Addy, what's your advice to junior developers who constantly switch from one framework to another without achieving their main goal?

 

In a parallel universe where the web is already fast, what would you be working on?

 

How can I get to this parallel universe..? :)

The other thing my wife and I are trying out is designing t-shirts for web devs over at teejungle.net (which has been a fun, very different change of pace).

I would otherwise probably be looking at how we can improve the developer experience for "embeddable" content. Over the next decade or two, the surfaces where we interact with the web could evolve. We could see more users looking for a quick way to consume a service via an Assistant-like interface (think schema.org style mark-up + constrained UIs), the screen in their car, an element in a WebVR scene etc. Will a browser always be the quickest path to completing an action? How can we keep that fast, and if not, make it easy enough for the web to have a seat at the table?

How do we go from building single-page apps backed by APIs to a world where our experiences can be embedded and consumed in contexts that might not always be a browser? I find this interesting. It may also be totally dumb.

 

I'm confused about how to handle CSS for an app with many pages (not using React). Should you serve two style sheets, one with global CSS and one with module/page-specific CSS? In the past it was better to cache one large stylesheet; is this no longer the case? Should the CSS required for the above-the-fold view be inlined? Thanks!

 

What do you think of next.js?
How would you compare to other solutions and which would you choose and why?

 
 

Totally Tooling Tips will probably not be coming back with its current line-up (sorry!). Matt and I had an amazing time doing the show, but as we've switched to different roles at Google over time, we've had less free time to shoot.

We're evaluating whether we want to keep the show going with a different set of speakers (or do something completely new) so keep watching this space!

That said... Matt and I might come back for one final episode, if folks want it :)

 

Hi Addy, does the Chrome team have any plans/ideas/thoughts on userscripts? Will they get better support or be abandoned?

 

How does the Chrome team check the security of its sandboxing, given that JS is a Turing-complete language and any code change could possibly lead to escaping the sandbox?

 

What are your thoughts on the current state of Backbone.js? Do you think it will rise in popularity again?

 

Many of the apps I see using Backbone these days are slightly older codebases (although, Backbone and Marionette were lightweight enough that they likely still work fine).

I like to think that libraries like React and Preact are the spiritual successors to Backbone otherwise. These days the thinking around state management has evolved; Backbone's models were simpler, nested views were lacking and it was easier to leak memory if you weren't careful.

 

Hi Addy! What do you believe is true that very few people agree with?

 

Hi Addy,

Is it okay to have two different versions of jQuery on a page? Any ideas on the pros and cons?

Thanks,
J

 

Hi Addy!
1) How should memory snapshots (the Memory tab) be used to improve performance in SPAs?
2) What's your favourite Chrome DevTools feature (except Audits 😁)?

 
 

I am! I'm currently hiring for a WASM Developer Advocate. I believe our team is also hiring for a DA with a UI/UX focus.

 
 

Hi, Addy. Thanks for your time on this.

Is there any plan to make Chrome natively compatible with Dart?
