The React SPA is set up to dynamically pull content from a server based on the URL. An index.html file in the directory routes page requests to the correct content. The URLs look like this: http://www.example.com/articles/#/title1. A sitemap, which lives at example.com/articles/sitemap.txt, has been submitted with all of the URLs in that format. A week in, only the page at /articles has been indexed.
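For context, routing on URLs like these is typically done client-side from the fragment. A minimal sketch of what the index.html router might do (the `loadArticle` function is hypothetical, not from the question):

```javascript
// Minimal hash-router sketch. parseHashRoute extracts the article
// slug ("title1") from a URL like
// "http://www.example.com/articles/#/title1".
function parseHashRoute(url) {
  const hashIndex = url.indexOf('#');
  if (hashIndex === -1) return null;            // no fragment at all
  const fragment = url.slice(hashIndex + 1);    // "/title1"
  return fragment.replace(/^\//, '') || null;   // "title1"
}

// In the browser, the router would react to fragment changes:
// window.addEventListener('hashchange', () => {
//   const slug = parseHashRoute(window.location.href);
//   if (slug) loadArticle(slug); // hypothetical content loader
// });
```

Because the path before the `#` never changes, the server only ever sees a request for /articles/, which is consistent with only that page being indexed.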

In the environment it is deployed to, the rewrite rules cannot be changed.

"Fetch and Render as Google" in Search Console appears not to work with URLs that contain a #.

1 Answer

Google should be able to index your AJAX-driven content. On 14 October 2015 Google deprecated its AJAX Crawling Proposal, which dated back to 2009, when Googlebot was incapable of understanding JavaScript-generated DOM objects on a page. Since then Googlebot has been enhanced and can interpret a page just as any modern web browser would. Google updated its Technical Webmaster Guidelines to reflect this change and now recommends against disallowing Googlebot from crawling your site's CSS or JS files.

Fetch and Render as Google may not work as expected; however, unless there is something unusual about how the JavaScript serves your HTML to the browser over an AJAX connection, it should work for Googlebot.
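To illustrate the ordinary case this answer assumes: the SPA fetches article content asynchronously and injects it into the DOM, which a rendering crawler can execute like a browser. The sketch below is hypothetical — the `/api/articles/` endpoint, the JSON shape, and the `content` element id are assumptions, not details from the question:

```javascript
// Hypothetical AJAX content loader for the SPA. The endpoint path
// and response shape are illustrative assumptions.
function articleUrl(slug) {
  return '/api/articles/' + encodeURIComponent(slug) + '.json';
}

async function loadArticle(slug) {
  const res = await fetch(articleUrl(slug));   // AJAX request
  if (!res.ok) {
    throw new Error('Article not found: ' + slug);
  }
  const article = await res.json();
  // Inject the fetched markup into the page, as the SPA would;
  // a JS-rendering crawler sees the result of this step.
  document.getElementById('content').innerHTML = article.html;
}
```

If something in this path misbehaves under a headless renderer (for example, content only loading after a user event), that would explain a gap between what browsers and the crawler see.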

When I do a site:example.com/articles/ Google search, it looks as though Google has only indexed the index.html that performs the SPA routing. The index.html sits at example.com/articles/index.html and is responsible for directing the page to the correct article, for example example.com/articles/#/title1.
– carwbrown Apr 25 '16 at 16:18