Google recently hosted a hangout to provide feedback on sites that webmasters submitted for review. Although the hosts, John Mueller and Pierre Far, didn't review sites individually, they did highlight example issues they found within the submitted sites and offered generalized advice on how to resolve some of the more common problems they discovered.
As the hosts described their process for reviewing sites, they first listed the areas they would examine that often cause problems for webmasters. They also made a point to mention the areas they would NOT examine, areas which they no longer consider problematic as a result of Google’s improved ability to assess and ignore certain issues. The list of elements Google said to ignore was rather surprising.
Although we’ve listed a few of the highlights below, it’s important to keep in mind the caveats that accompany each of these, typified by the word “usually.” For instance, lack of HTML validation “usually” isn’t a problem for the Google bots, and Google can “usually” compensate for mixed up 301 and 302 redirects. However, “usually” by definition isn’t “always,” so webmasters should continue their optimization efforts in these areas whenever possible, adhering to best practices and common sense.
With that caveat in mind, what does Google say you don't need to worry about?
- HTML validation: Google can usually parse any page that contains HTML, even if the markup doesn't validate.
- Missing robots.txt file: Forgetting to add a robots.txt file is not a problem; Google will simply continue crawling normally. Note: a robots.txt file should still be used for important crawler instructions, such as blocking directories from being crawled.
- Duplication from www/non-www or http/https sites: Google can usually reconcile duplicate content resulting from www/non-www or http/https URLs, unless the site is exceptionally large.
- 301 and 302 redirects: If you mix up 301 and 302 redirects, Google can usually figure out the difference, particularly in the context of device detection and redirection.
- IDs in URLs: If you already have IDs in your URLs, you don’t have to go out of your way to rewrite them, as long as they are crawlable. Google still recommends “clean” URLs.
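On the robots.txt point above: while a missing file is harmless, the file is still the right place for deliberate crawler instructions. A minimal sketch of the directory-blocking case the hosts mentioned, using a hypothetical `/private/` directory as the example, might look like:

```
User-agent: *
Disallow: /private/
```

This tells all compliant crawlers to skip the `/private/` directory while leaving the rest of the site crawlable. Omitting the file entirely just means everything is crawlable by default, which is exactly why Google says its absence isn't a problem.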
If anything, by indicating certain elements that webmasters usually don’t need to worry about, Google provides insight into some of the incremental improvements that the crawling system has made over the years. These changes make it easier for Google to identify common technical issues, potentially removing some of the burden from webmasters who are optimizing their sites.
Check out the archived recording of the hangout to get the whole story and to learn from some of the common mistakes made by other sites. The hosts also indicated that they planned to hold more site review hangouts in the future where they might delve into individual sites to provide a deeper level of insight.