tl;dr - Web components are SEO friendly!
I had read a number of articles discussing SEO and web component compatibility, and many of them stated that web components were not SEO friendly. Those articles were at least a few years old, so I assumed they were out of date, but I didn't see anything recent that contradicted them. So, I decided to test some things out and see what the story was for myself.
Creating a Test Site
I created a simple web page with a static "vanilla" web component to test if search providers would render and index content in the shadow root. The site can be viewed here and the source code can be viewed here.
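The sketch below is roughly the shape of that test component; the element name and markup are illustrative only, not copied from the repo. The key detail is that the headings live entirely inside the shadow root, so a crawler has to execute JavaScript to see them.

```html
<!-- Illustrative sketch only; the element name and markup are not copied from
     the actual test repo. A static "vanilla" web component that renders
     heading content into an open shadow root. -->
<!DOCTYPE html>
<html>
  <body>
    <seo-test-content></seo-test-content>

    <script>
      class SeoTestContent extends HTMLElement {
        constructor() {
          super();
          // Everything below lives in the shadow DOM, not the light DOM,
          // so a crawler has to execute JavaScript to see it.
          this.attachShadow({ mode: 'open' }).innerHTML = `
            <h1>Heading rendered in the shadow DOM</h1>
            <h2>Can search engines index this?</h2>
            <p>This text only exists inside the shadow root.</p>
          `;
        }
      }
      customElements.define('seo-test-content', SeoTestContent);
    </script>
  </body>
</html>
```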
Google Test
Google commands the lion's share of search traffic, so this seemed like a good place to start the test.
After creating my page and adding the property to my Google Search Console account, I went to URL inspection and clicked on VIEW CRAWLED PAGE.
In the preview window, I can see that the <h1> and <h2> tags in the shadow DOM were rendered and can be indexed by Google's web crawler! 👍
Bing Test
Although Bing is not the most popular search engine, its index is used to power many other search engines including Yahoo!, DuckDuckGo, Neeva, and You.com.
I added my page to the Bing Webmaster Tools, selected URL Inspection, clicked the Live URL tab at the top of the page, and clicked the View Tested Page button.
When I look at the tested page's HTML, I notice that none of my custom elements have rendered their shadow DOM contents, and I am seeing error messages about missing H1 tags... 😳
As a huge proponent of web components, I was extremely concerned about these results, so I reached out to some of my co-workers on the Bing team to find out what was going on and figure out how we could fix this.
The good news is that after looking into it, they told me the issue is a bug in the Bing Webmaster Tools and that the content does get rendered and indexed! 🙌
The great news was that they have prioritized the bug and are looking to have it resolved by the end of this month (August 2023)!!! 🎉🎉🎉
Conclusion
Due to a bug in some of the tools, there have been some misconceptions about how compatible web components are with SEO. Fortunately, that should be resolved soon, and teams can continue using web components knowing their content is searchable!
Comments
This is great news for web components!
Hey, thank you for sharing; this is very helpful!
Hey I wondered about this too, thanks for sharing.
Though I have a follow-up concern, maybe source material for another article?
Even if the web component's internals are readable, what significance are they given in the document structure? For example, would Google Search generate an enriched search result with main navigation links if the <nav> was embedded in a custom web component? If a page has two <h1>s, with one in the shadow DOM of a web component, which one will render as the page title?
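For concreteness, the two-<h1> situation being asked about might look something like this (the fancy-hero element and markup are hypothetical, not taken from any real page):

```html
<!-- Hypothetical markup for the scenario above: one <h1> in the light DOM and
     another inside a component's shadow root. The fancy-hero element is made up. -->
<body>
  <h1>Page title in the light DOM</h1>

  <fancy-hero></fancy-hero>

  <script>
    class FancyHero extends HTMLElement {
      constructor() {
        super();
        // This heading only becomes visible once the crawler runs JavaScript.
        this.attachShadow({ mode: 'open' }).innerHTML =
          '<h1>Hero title in the shadow DOM</h1>';
      }
    }
    customElements.define('fancy-hero', FancyHero);
  </script>
</body>
```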
It renders the content and reads it like standard markup on the page, so the behavior should be the same as if you added two <h1>s in the light DOM.
That's indeed good news coming from the Bing side.
I've heard that the Google crawler provides a limited "website crawling budget" when using the JavaScript-enabled web crawler. Do you know if the same applies to Bing?
That has been one of my main blockers to using the Shadow DOM on large websites, because the crawler ends up not indexing ALL pages.
I'm not sure to what extent it indexes JS content, but it should execute web components on the client to index the shadow root contents.
Hi Burton,
thanks for testing this and for sharing. I just cloned your repo to see it with my own eyes, maybe add some more tests, and check up on Bing.
Unfortunately, Bing still gives me the "SEO issue found - H1 tag missing" warning. This still concerns me a bit, and I'm still not sure if it's a good idea to build a <my-headline level="2" layoutLevel="3">An important H2 headline</my-headline> component that hides the semantic structure of my page from the initial HTML. On the other hand, passing the <h2> in as slotted content, <my-headline layoutLevel="3"><h2>An important H2 headline</h2></my-headline>, exposes my headline to styles coming from the global scope, which can only be safeguarded against using the ultimate weapon: !important.
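The two variants being weighed might be implemented roughly like this; the my-headline code below is a hypothetical sketch, not the commenter's actual component:

```html
<!-- Hypothetical sketch of the two variants discussed above; this is not the
     commenter's actual my-headline implementation. -->

<!-- Variant 1: the heading element is created inside the shadow root, so the
     semantic <h2> never appears in the initial HTML. -->
<my-headline level="2" layoutLevel="3">An important H2 headline</my-headline>

<!-- Variant 2: the <h2> is slotted in from the light DOM, keeping the
     semantics in the page source but exposing the heading to global styles. -->
<my-headline layoutLevel="3"><h2>An important H2 headline</h2></my-headline>

<script>
  class MyHeadline extends HTMLElement {
    connectedCallback() {
      if (this.shadowRoot) return; // render only once
      const shadow = this.attachShadow({ mode: 'open' });
      const level = this.getAttribute('level');
      shadow.innerHTML = level
        ? `<h${level}><slot></slot></h${level}>` // Variant 1: heading created in the shadow DOM
        : '<slot></slot>';                       // Variant 2: project the slotted <h2>
    }
  }
  customElements.define('my-headline', MyHeadline);
</script>
```

In the first variant the <h2> only exists inside the shadow root; in the second it sits in the light DOM, where global styles can reach it.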
Yeah, I agree. I would love to see that fixed ASAP. It looks like updating the Webmaster Tools to use the indexed content rather than the page source is quite the process. I don't have an ETA on when that will be updated, but they have assured me the indexer does execute the JavaScript and traverse the shadow DOM content.
Do the pages need to be server side rendered or are you testing client side pages?
Great question! These were all rendered client-side.
Some of our components are server-side hydrated and some are not. I wonder if Google will get confused and read the page too early, not waiting for client-side hydration to finish. Seems like that might be happening on some pages already...
All of my examples were client-side hydrated, and the crawlers didn't have any issues with them.
Thanks for checking this out.
It is now October. Do we know if Bing got around to fixing this bug?
I just did a test, and it looks like it hasn't been fixed yet. I will ping the team and see if the fix is publicly available yet.
No ETA yet. It looks like updating the Webmaster Tools to use the indexed content rather than the page source is quite the process.