I'll cut the crap and jump straight to the two lines of CSS you need to add to improve your rendering performance by roughly 7x:
/* ".section" is a placeholder selector; target the large off-screen chunks of your page */
.section {
  content-visibility: auto;
  contain-intrinsic-size: 1px 5000px;
}
Why do you need this?
Websites nowadays need to be fast and performant; users on the web have very short attention spans. According to the Doherty threshold, response times should stay under 400 milliseconds.
Now imagine a website like Facebook or Instagram taking longer than that threshold. No one would come back to those sites.
When would you use this?
The most common use case is a huge list or grid of data that has to render when the application mounts.
Examples: static websites such as specs, documentation, or travel blogs.
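For instance, a sketch for a long documentation page (the markup and class name are made up for illustration):

<style>
  /* illustrative class; apply it to each big off-screen chunk */
  .doc-section {
    content-visibility: auto;
    contain-intrinsic-size: 1px 5000px; /* placeholder box while unrendered */
  }
</style>

<article>
  <section class="doc-section">…chapter 1…</section>
  <section class="doc-section">…chapter 2…</section>
  <!-- hundreds more sections; only those near the viewport get rendered -->
</article>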
Would love to hear in the comments if you have any other use cases for it.
How does it work?
The browser gets smart and skips the rendering work for any element you apply content-visibility: auto to.
Normally the browser needs to compute the layout of the whole DOM in order to render. With this property, elements that are not in the viewport are not rendered; instead they get an empty box with the contain-intrinsic-size you provided.
To summarize, all of the rendering is deferred until the element reaches the viewport, at which point the browser renders the actual layout with the width, height, and styles you provided.
P.S.: without a placeholder, elements outside the viewport would have a height of 0, so deferred elements would stack on top of each other as they scrolled into view; that's why contain-intrinsic-size is needed. No worries, though: these are just fallback values, and the browser renders the actual ones once the element is in the viewport.
Hence one drawback: the scrollbar can get wacky and jump around if contain-intrinsic-size is not set properly. :)
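If your page mixes sections of very different heights, one way to keep the scrollbar closer to reality is to give each type its own placeholder estimate. A rough sketch (class names and numbers are illustrative):

/* estimates should roughly match the real rendered heights */
.post-card {
  content-visibility: auto;
  contain-intrinsic-size: 1px 320px; /* typical card is ~320px tall */
}
.comment-item {
  content-visibility: auto;
  contain-intrinsic-size: 1px 80px;  /* comments are much shorter */
}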
Browser Support
content-visibility relies on the CSS Containment spec. At the time of writing, content-visibility is supported mostly in Chromium-based browsers.
Still, that's not bad support for a nice-to-have feature, and as web development progresses it will hopefully be supported in all browsers too. :)
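Since browsers that don't support the property simply ignore the unknown declarations, it degrades gracefully on its own. If you still want to gate it explicitly, a feature query works; a sketch (class name illustrative):

/* only applied where the browser understands content-visibility */
@supports (content-visibility: auto) {
  .doc-section {
    content-visibility: auto;
    contain-intrinsic-size: 1px 5000px;
  }
}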
Alternatives
There are JavaScript alternatives for improving performance here, such as list virtualization, but who wants to write and maintain hundreds of lines of JS when you could do it in two lines of CSS?
Further reading on the JS approach:
react-window
react-virtualized
Excellent demo and explanation: [embedded YouTube video]
Further reading:
https://web.dev/content-visibility/#support
https://developer.mozilla.org/en-US/docs/Web/CSS/content-visibility
Regards,
Top comments (58)
There is no support in Firefox,¹² and it is only marked as "worth prototyping", with some remarks calling early versions of the spec actively harmful, especially to accessibility technology.³
Both points together instantly squelched my enthusiasm.
¹: developer.mozilla.org/en-US/docs/W...
²: developer.mozilla.org/en-US/docs/W...
³: github.com/mozilla/standards-posit...
As long as that does not change, this is not viable to use and other methods to speed up the site are preferable.
Yeah, that's very unfortunate... but I am optimistic.
What I like, though, is that it's supported on Chrome for mobile too, which is where most of the low-end devices are...
Shouldn't that also be the case for every one of the well-supported optimization techniques?
It is tempting to have a two-lines-solve-my-problems tool, but as long as there is no native FF support, it’s just not there. A polyfill won’t do here, because it won’t provide the performance of native, and performance is the very reason why this looks interesting.
It really depends on your app and your users. If 80% of your app's traffic is Chrome users, it's not a bad solution, especially when you don't have the resources to do proper optimization in JS.
Moreover, the web is a huge ecosystem, and I think it's hard for browsers to keep up with all these optimizations...
In my view it's a nice-to-have feature; maybe FF and Safari have so many other required things to do that this couldn't be their priority.
If you don't have the resources to do proper optimization (not with JS, but by doing what's fast, also in CSS, and by keeping your DOM sane), then that's a problem by itself. If 80% of your app is used by Chrome users and you enable this feature, then it won't be long until 100% of your users are Chrome users, because you will break performance for FF and Safari (after all, you still won't have the resources to do proper optimization if you didn't have them before using this), so those users will leave.
EDIT: You will break performance because you will keep cramming in features without proper optimization until, even with the optimization, you hit the 400ms limit again. Those without the optimization will then have an unusable app.
FF devs plan to prototype this feature, but their concerns were not “something else is more important”, but “this is actively harmful”, so depending on the outcome of the prototype, this might never get cross-browser support, locking you firmly into Chrome if you depend on it to get good load performance.
I kinda agree on the optimization point of view...
There's always more than one way to do it; it might require more effort, but why not, that's where optimization comes in...
However, I highly doubt it's actively harmful; that's a very harsh term to use.
I will read the Mozilla standards-position link again to understand.
Thanks for contributing.
If the outcome of the prototype in FF is “yes, implementing this version is possible without causing deep other problems”, your solution might allow realizing the huge speedup you saw across different browsers, so please don’t let the wording keep you from experimenting with what might come.
After all, the fact that Mozillians decided to prototype at all means that the specification changes since it was declared harmful might have solved enough of the problems.
This solution requires that you first load all the invisible data, create components for it, form the DOM, and only then will the browser decide that for 99% of that DOM, you don't need to calculate the layout.
The best solution is not to form this huge DOM at all, and not even load the invisible data. I explained how to achieve this here: https://github-com.translate.goog/nin-jin/slides/tree/master/virt?_x_tr_sl=ru&_x_tr_tl=en&_x_tr_hl=ru&_x_tr_pto=wapp
Thanks for sharing your work @ninjin, it looks great...
It depends on the use case; if you don't want to load all the data, can't we do pagination or infinite scrolling? Those are the built-in optimizations.
Yes, of course, and there I was just talking about how to get the effect of lazy loading and rendering without making any effort to optimize at the application level.
See the huge example: nin-jin.github.io/habrcomment/#art...
And its source code: github.com/nin-jin/habrcomment
Pagination and endless scrolling can't work well here.
Wow, it looks smooth, great effort! Scrolling the doc seems to render only the viewport 👍
That's very interesting, hopefully support will get better. As a side note, as for virtualization libraries I would rather recommend tanstack/virtual or react-virtuoso.
Wow, nice @zhouzi...
Thanks for sharing; these are good JS virtualization alternatives 👍
I had trouble getting this to work for my use case, which also involves lazy loading images inside the sections where I attempted to apply this CSS trick. The negative side-effect of lazy loading images inside a section with this CSS applied was that when I rapidly scrolled down the page to a given section, the image didn't always render correctly (weirdly pixelated or obscured) until I scrolled away and then back. In the end, what seems to work best for my particular scenario is relying entirely on the JS Intersection Observer API:
developer.mozilla.org/en-US/docs/W...
Ummm, interesting...
Just a thought: do we need to lazy-load images if we're already rendering only what's in the viewport? Since nothing outside the viewport gets rendered, it looks like a tradeoff to me.
Also, maybe try playing with different contain-intrinsic-size values; that might help with the obscured images. Moreover, the Intersection Observer is also a good option. There's always more than one way to do it; use whichever suits best. :)
Hi. I tried using your feature in a real project. Unfortunately, I ran into a problem in the Chrome browser. jsfiddle.net/huy2o5rf/ Can you please explain why the style
.btn_menu__checkbox:checked+.block_navigation {
transform: translateX(0);
}
would not work with your feature?
Hii @blackstar1991, I saw your fiddle, and the CSS you applied is global, so it would have no effect on the optimization: in my understanding, the whole page will render after the layout check, without any optimization.
In your case, if you apply the CSS to your item class bl_nav__item, it could work, as in the sketch below. P.S.: this optimization is useful when you have thousands of records loaded at the same time and you don't want to render them all at once.
I hope I understood your issue and that this is of some help.
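Something like this, as a sketch (the height estimate is a guess; tune it to your real item size):

/* scope the containment to the repeated items, not the whole page */
.bl_nav__item {
  content-visibility: auto;
  contain-intrinsic-size: 1px 48px; /* rough per-item height */
}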
In my example, I wanted to show you that we get a problem with a hidden element on the page. jsfiddle.net/rd2b37t4/ A pseudo menu doesn't work: .btn_menu__checkbox:checked+.block_navigation doesn't work when using your feature.
It depends on the use case this feature is best suited for...
Yes, one side effect of this is the scrollbar position, but in my understanding it's not an issue for infinite pages or huge static chunks that need to render.
Question re: contain-intrinsic-size: how did you come up with the values you used? Is 1px 5000px the max the screen will allow? Thx in advance🙏🏼
Hii,
It's just a random value; you can put whatever value you want according to your design.
In my understanding it has these effects: the first value acts as the placeholder width and the second as the placeholder height of the empty box until the real content renders.
Furthermore, I came up with this value while watching the YouTube video I attached; it's really a great explanation for understanding the concept.
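For instance (the numbers are arbitrary and .card is an illustrative class; pick whatever roughly matches your design):

/* first value = placeholder width, second = placeholder height */
.card {
  content-visibility: auto;
  contain-intrinsic-size: 1px 1000px; /* if each section is roughly 1000px tall */
}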
Thx! That covers it 👍
Hi, how did you measure 7x? Do you have any benchmarks?
Hey, I learned this stat from the link attached in the post, which shows how a travel blog's FCP changed from 232ms to 30ms; that's roughly 7x faster.
web.dev/content-visibility/#example
Moreover, you can check the benchmarks yourself using the Performance panel in Chrome DevTools; try running with and without it. It would be a cool learning exercise. :)
"Now imagine a website like Facebook,Instagram etc.. taking more time then the threshold ? No one would be coming back to these sites again."
Facebook is slow, buggy, and bogged down but I keep coming back to it...
Then you must watch the Netflix documentary "The Social Dilemma":
netflix.com/my-en/title/81254224?p...
Awesome article.
Thanks 🙏