Discussion on: I'm Addy Osmani, Ask Me Anything!

Jaffrey Joy • Edited

How do you make SPAs discoverable to search engines? Like, is that even possible?

Cuz if it's a static site, the crawler can index the .html file URL, but in SPAs built with frameworks like React and Angular, the HTML is generated on the client side... so how does SEO work for SPAs?

I'm still learning about SPAs so if something written above doesn't make sense, I apologize 😅

Addy Osmani

Googlebot has supported crawling JavaScript for some time, with a few caveats. When evaluating whether search engines can render and index your content correctly, I recommend:

  1. Check the "Fetch as Google" tool (support.google.com/webmasters/answ...). This will let you test how we crawl and render URLs on your site. For SPAs in particular, it's useful to sanity-check that everything is rendering as expected. (I've sketched a rough local equivalent with Puppeteer after this list.)

  2. Check that links are working as expected. Are there any problems reported in the JavaScript error console? The new Search Console also logs JS errors there, which is useful for understanding why we failed to render your SPA correctly. In Lighthouse, we have a set of SEO audits that can help flag common issues SPAs and sites run into (the second sketch below shows how to run them from Node).

  3. Double-check which polyfills you are using. Googlebot uses Chrome 41, which is far from the latest version. There are certain Web Platform APIs it doesn't fully support, and JavaScript features that may need to be feature-detected and polyfilled. This is a very common problem SPAs run into. If you're comfortable with Node tooling, I recommend looking at github.com/GoogleChromeLabs/puppet..., a Puppeteer script that helps identify whether your page uses any features not supported by Chrome 41. (The last sketch below shows the basic feature-detection pattern.)
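
On the first point: if you want a quick local approximation of what a headless crawler sees before reaching for "Fetch as Google", a small Puppeteer script works. This is just a sketch, not the tool itself; https://example.com and the #app root selector are placeholders for your own site:

```js
// Sketch: render an SPA in headless Chrome and confirm that
// client-side content actually shows up in the final DOM.
// npm install puppeteer
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Surface runtime JS errors — the same class of problems a crawler hits.
  page.on('pageerror', err => console.error('Page error:', err.message));

  // 'networkidle0' waits until the SPA has finished fetching and rendering.
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });

  // '#app' is a placeholder; use your framework's root element selector.
  const rendered = await page.$eval('#app', el => el.innerHTML.trim().length > 0);
  console.log(rendered ? 'Content rendered.' : 'Root element is empty!');

  await browser.close();
})();
```

Note that Puppeteer runs a recent Chromium, not Googlebot's Chrome 41, so this only confirms your rendering logic works in general, not Chrome 41 compatibility (that's what point 3 is about).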
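For the Lighthouse SEO audits in the second point, you can run just that category programmatically. A minimal sketch, assuming the lighthouse and chrome-launcher npm packages and a placeholder URL:

```js
// Run only Lighthouse's SEO category against a page and print the results.
// npm install lighthouse chrome-launcher
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = { onlyCategories: ['seo'], port: chrome.port };

  const { lhr } = await lighthouse('https://example.com', options);
  console.log(`SEO score: ${lhr.categories.seo.score * 100}`);

  // List any SEO audits that didn't pass, e.g. a missing meta description.
  for (const ref of lhr.categories.seo.auditRefs) {
    const audit = lhr.audits[ref.id];
    if (audit.score !== null && audit.score < 1) {
      console.log(`- ${audit.title}`);
    }
  }

  await chrome.kill();
})();
```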
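And on the third point, the basic shape of feature detection looks like the sketch below. fetch, for example, only shipped in Chrome 42, and IntersectionObserver in Chrome 51, so both are missing in Googlebot's Chrome 41. The polyfill.io URL is just illustrative; use whatever polyfill service or bundle you prefer:

```js
// Feature-detect APIs that Googlebot's Chrome 41 lacks and polyfill
// only when needed, so up-to-date browsers don't pay the cost.
function loadScript(src, onload) {
  var script = document.createElement('script');
  script.src = src;
  script.onload = onload;
  document.head.appendChild(script);
}

var missing = [];

// fetch shipped in Chrome 42 — one version too late for Googlebot.
if (!window.fetch) {
  missing.push('fetch');
}
// IntersectionObserver didn't land until Chrome 51.
if (!('IntersectionObserver' in window)) {
  missing.push('IntersectionObserver');
}

if (missing.length) {
  // Illustrative URL: polyfill.io serves a bundle for the listed features.
  loadScript(
    'https://polyfill.io/v3/polyfill.min.js?features=' + missing.join(','),
    bootApplication
  );
} else {
  bootApplication();
}

function bootApplication() {
  // Start the SPA once the required APIs are guaranteed to exist.
}
```

Note the ES5 syntax (var, no arrow functions): the detection script itself has to parse in Chrome 41, so it can't rely on the features it's testing for.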