There's a lot going on with the dev.to home page, especially when you are logged in. We load the top community posts, the top posts related to the tags and users you follow, the number of reactions and comments on each post, info about top tags, the tags you follow, your profile pic, and of course, all the application code related to actions you might take.
And we do it all really fast.
We believe in the possibility that the web can be virtually instant for all those who use it. Early on, I approached this from the perspective of "we're just serving blog posts, of course it can be instant". But as we've evolved the platform, it has taken a lot more creativity to keep things pretty damn instant.
In order to serve pages as quickly as possible to all corners of the globe, the initial HTML response comes directly from a Fastly CDN node. So, if you are located in New York City, your page will be fetched from New York City; if you are in Tokyo, the request will hit the Tokyo server. If you are in Egypt, your request will come from Dubai. Operating and optimizing a global CDN is outside the scope of my expertise, so in general I think in terms of the abstractions the CDN provides.
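Edge caching like this mostly comes down to the headers the origin sends along with the page. As a rough sketch (this is an illustration of the pattern, not dev.to's actual code, and the TTL and header values are assumptions), a handler might let the CDN cache anonymous HTML while keeping personalized responses out of the edge cache entirely:

```typescript
// Sketch: build response headers for HTML served through a CDN.
// Surrogate-Control is the header Fastly reads for edge TTLs;
// the one-hour TTL here is an illustrative value.
function buildCacheHeaders(loggedIn: boolean): Record<string, string> {
  if (loggedIn) {
    // Personalized markup must never be cached at the edge.
    return { "Cache-Control": "private, no-store" };
  }
  return {
    // Browsers revalidate on every visit...
    "Cache-Control": "public, no-cache",
    // ...but the CDN may keep the anonymous page for an hour.
    "Surrogate-Control": "max-age=3600",
  };
}
```

The key idea is the split: the edge cache absorbs the vast majority of requests for the anonymous page, and anything user-specific is layered on afterward.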
An issue with this approach is that we need to maintain a high cache hit rate, but we don't know who you are on the initial request. We use headers to understand whether you are logged in or not, which helps us organize the initial page for you and simplify some of our logic; but, of course, it would not be reasonable to cache a unique page for every logged-in user. So we make secondary Ajax requests to serve custom data. When you see your profile picture, or information on the page related to you, it was loaded after the initial page load. Sometimes this is laggy, and we're always working to smooth it out, but the typical experience is solid.
In order to optimize your experience when we can, we store some of your information locally when available. So your profile picture in the nav bar and on the home page, the tags you follow, and some other info are stored in your browser, so they render virtually instantly regardless of network conditions. We still make a request to the origin server every time to double-check that the info is fresh. The result is a much faster typical page load than we'd get otherwise.
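This is essentially a render-from-cache-then-revalidate pattern. Here's a minimal sketch of what that could look like; the storage key and the shape of the cached record are my own assumptions, not dev.to's actual schema, and the storage interface is kept abstract so the logic is easy to test outside a browser:

```typescript
// Minimal interface matching the parts of window.localStorage we use.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Hypothetical shape of the locally cached user info.
interface CachedUser {
  username: string;
  profileImageUrl: string;
  followedTags: string[];
}

const KEY = "cached_user"; // hypothetical storage key

// Read whatever we have locally so the UI can paint immediately,
// before any network request completes.
function readCachedUser(store: KVStore): CachedUser | null {
  const raw = store.getItem(KEY);
  if (raw === null) return null;
  try {
    return JSON.parse(raw) as CachedUser;
  } catch {
    return null; // corrupt entry: fall back to the network
  }
}

// Once the fresh copy arrives from the origin, overwrite the cache
// so the next page load paints with up-to-date info.
function writeCachedUser(store: KVStore, user: CachedUser): void {
  store.setItem(KEY, JSON.stringify(user));
}
```

In the browser, `store` would be `window.localStorage`, and `writeCachedUser` would run in the callback of the follow-up request to the origin. The point is that the page never blocks on that request: stale-but-plausible info paints instantly, and fresh info replaces it when it lands.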
We used to inline our critical styles on every page, but we've since evolved that approach.
While it's been a great choice to inline styles for ensuring incoming traffic gets a great experience, it's not optimal to send this data over with every instance of internal navigation. We use a form of XHR-based internal navigation similar to Turbolinks, but smaller and more customized. This gives the experience of an extremely performant single-page application, but most of the logic gets to stay on the server, which I think is a really big win in terms of keeping the code clean and maintainable.
So we only inline styles on requests that are not triggered by internal navigation. To take it a step further, we also stopped sending the nav bar and the footer. You only get the meat of the page. And yet the backend developer doesn't have to stress about this choice; they only have to know that we can't have different versions of the nav bar on different pages (without invoking special code on the client to do this). This means internal page requests typically weigh in at about 4-8 KB. There are outliers, like pages with a lot of comments, but we'll iron those out as well.
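The core of a Turbolinks-style scheme is small: intercept same-origin link clicks, fetch just the body of the next page, and swap it into the content container. Here's a sketch under my own assumptions (dev.to's actual implementation is custom and not shown here), with the browser pieces injected so the routing decision stays testable:

```typescript
// Decide whether a clicked link can be handled without a full reload.
// External links are always left to the browser.
function shouldIntercept(href: string, currentOrigin: string): boolean {
  try {
    // Resolve relative links (e.g. "/t/webdev") against the current origin.
    const url = new URL(href, currentOrigin);
    return url.origin === currentOrigin;
  } catch {
    return false; // unparseable href: let the browser handle it
  }
}

// Fetch just the "meat" of the next page and hand it to the caller.
// In a real app, fetchPartial would be a fetch() that signals "partial
// please" to the server (e.g. via a request header), and swapContent
// would set the content container's innerHTML and push a history entry.
async function navigateInternally(
  href: string,
  fetchPartial: (url: string) => Promise<string>,
  swapContent: (html: string) => void
): Promise<void> {
  swapContent(await fetchPartial(href));
}
```

Keeping the rendered HTML on the server while only the swap happens on the client is what makes this feel like a single-page app without the client-side complexity of one.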
Integrating some newer APIs that will improve these aspects even further is next on the docket. Our caching-first architecture is primed to take advantage of these features and I am very excited. Our initial efforts have been to improve performance for everyone, regardless of browser features, and it has set us up well to make good use of the bleeding edge APIs which are growing in browser coverage and becoming more mature overall.
I tried running some rendering tests comparing dev.to and twitter.com and Twitter just never stopped rendering. I waited for seven minutes and the test wouldn't stop. If this post is any indication, I do not have a lot of patience for waiting around. So I'm just going to go ahead and hit publish now. I'll be forced to extrapolate the evidence I do have and assume our site is infinitely faster.
Of course I know that Twitter's product is massively more complex and they serve orders of magnitude more traffic than we ever will. So I'm not trying to boast on any engineering level, but for any individual user, it doesn't really matter how much overall traffic a site serves. The individual will still measure user experience apples to apples. User experience is everything, and performance is the biggest factor in user experience in the browser.
Happy coding ✌️