
Robots.txt Noindex Update: Everything SEOs Need to Know
Update: On 1st September 2019, Google will retire all code that handles unsupported and unpublished rules in robots.txt…
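For context, the snippet below illustrates the kind of rule being retired; the noindex directive was never part of the documented robots.txt standard, and the path shown is a placeholder.

```
# A rule of the kind Google stops honouring from 1st September 2019:
Noindex: /private-page/

# Supported page-level alternatives (set on the page or HTTP response,
# not in robots.txt):
#   <meta name="robots" content="noindex">
#   X-Robots-Tag: noindex
```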

Common Robots.txt Mistakes and How to Avoid Them
Robots.txt is a critical tool in an SEO’s arsenal, used to establish rules that instruct crawlers and robots…
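As a quick illustration of testing rules before they go live, here is a minimal sketch using Python's built-in urllib.robotparser; the rules and URLs are placeholders.

```python
from urllib.robotparser import RobotFileParser

# A draft robots.txt to sanity-check before deploying (rules are illustrative).
# Note: Python's parser applies the first matching rule, while Google applies
# the most specific one, so list Allow lines above broader Disallow lines here.
draft = """\
User-agent: *
Allow: /admin/public/
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(draft.splitlines())

for url in ("https://www.example.com/blog/post",
            "https://www.example.com/admin/settings",
            "https://www.example.com/admin/public/page"):
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "blocked"
    print(f"{url} -> {verdict} for Googlebot")
```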

Using a Crawler to Collect Chrome Page Speed Metrics at Scale
It seems like the SEO community can’t stop talking about site speed and performance at the moment, and it’s no…
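For a sense of what collecting these metrics programmatically can look like (the approach in the post itself may differ), the sketch below queries the public PageSpeed Insights v5 API, which returns real-user Chrome metrics per URL; the crawl list is a placeholder.

```python
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def chrome_field_metrics(url: str, strategy: str = "mobile") -> dict:
    """Fetch real-user (CrUX) metrics for a URL from PageSpeed Insights."""
    resp = requests.get(PSI_ENDPOINT,
                        params={"url": url, "strategy": strategy},
                        timeout=60)
    resp.raise_for_status()
    # loadingExperience holds field data from Chrome users; it can be empty
    # for low-traffic URLs that lack enough real-user samples.
    return resp.json().get("loadingExperience", {}).get("metrics", {})

for url in ("https://www.example.com/",):  # placeholder crawl list
    for metric, data in chrome_field_metrics(url).items():
        print(url, metric, data.get("percentile"))
```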

Webinar Recap: Using Log Files to Super Charge Your SEO With Eric Enge & Jon Myers
The Skinny: On the 6th of September 2017, we were humbled to host a webinar with Search Superstar – and…

The Googlebot Timeout is 3 Minutes
Googlebot stops crawling pages that take 3 minutes or longer to load. We ran an experiment with a set of…
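A minimal sketch of the kind of deliberately slow test page such an experiment relies on is below; the delay, port, and markup are arbitrary choices rather than the post's actual setup.

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

DELAY_SECONDS = 185  # just over the reported 3-minute cutoff

class SlowHandler(BaseHTTPRequestHandler):
    """Serves a page only after stalling, so crawl logs reveal the timeout."""

    def do_GET(self):
        time.sleep(DELAY_SECONDS)  # hold the connection open before responding
        body = b"<html><body>Slow test page</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Request logging goes to stderr by default, which is enough to see
    # whether Googlebot waited out the delay or dropped the connection.
    HTTPServer(("", 8000), SlowHandler).serve_forever()
```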

An SEO’s Guide to Crawl Budget Optimization
Let me present an issue that, from my point of view, is crucial to SEO success. Of course, we can…

8 Ways of Getting URLs Crawled
Getting URLs Crawled: So you already have a website and some of its pages are ranking on Google – great! But…

How to Measure Indexed Pages More Accurately
If the need arises to check how many of a site’s pages are indexed (i.e. those URLs that are returned…
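Whatever indexation check you use needs a reliable denominator; the sketch below counts the URLs a site actually submits for indexing by parsing its XML sitemap (the sitemap location is a placeholder, and sitemap index files would need a second pass).

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

# Count the URLs the site submits for indexing, as the baseline figure
# to compare against whatever indexed-page count you collect.
with urlopen(SITEMAP_URL) as fh:
    tree = ET.parse(fh)

urls = [loc.text for loc in tree.findall(".//sm:loc", NS)]
print(f"{len(urls)} URLs submitted for indexing")
```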

Disallow and Google: An Intermediate SEO Guide
Following on from our beginner’s guide to implementing noindex, disallow and nofollow directives, we’re now taking a look at some…