
Google recently modified how they show results and, in doing so, virtually crippled at least one popular "rank checking software" package. As "JohnMu" pointed out, Google has always been clear about its stance on these kinds of tools. In fact, for as long as I can remember, the Google Webmaster Guidelines have clearly stated:

"Don't use unauthorized computer programs to submit pages, check rankings, etc. Such programs consume computing resources and violate our Terms of Service."

Bottom line: automated queries consume resources without any potential to generate revenue. It has been said that Google's "ultimate selection criterion is cost per query, expressed as the sum of capital expense (with depreciation) and operating costs (hosting, system administration, and repairs) divided by performance."

During Q4 2007, Google reported capital expenses of $678 million and operating costs of $1.43 billion. According to ComScore, 17.6 billion "core searches" were conducted on Google during the same period. Using Google's formula and financial data along with ComScore's estimates, it appears Google's average cost per "core search query" was nearly $.12 during Q4 2007. Again, this is a rough estimate built on rounded totals, but personally I was a little surprised by the number.
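Here's a quick back-of-envelope sketch of that estimate in Python, applying the cost-per-query formula quoted above to the rough, rounded figures cited in this post:

```python
# Estimate Google's Q4 2007 cost per "core search query":
# (capital expense + operating costs) / queries served.
capital_expense = 678_000_000      # reported Q4 2007 capital expenditures (USD)
operating_costs = 1_430_000_000    # reported Q4 2007 operating costs (USD)
core_searches = 17_600_000_000     # ComScore's Q4 2007 "core search" estimate

cost_per_query = (capital_expense + operating_costs) / core_searches
print("Estimated cost per core search query: $%.3f" % cost_per_query)
# -> roughly $0.12
```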

If 1 million sites run ranking reports on 100 keywords 12 times per year at $.12 per "core search query", it costs Google $144,000,000 annually. Over a ten-year period, that's more than a billion dollars. Given this data, it's easy to see why Google uses "algorithms and different techniques to block excessive automated queries and scraping, especially when someone is hitting Google quite hard." Matt Cutts suggests contacting Google's Business Development Team for permission to send automated queries to Google.
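And a similar sketch for the ranking report math, assuming the ~$.12 per query estimate above and the hypothetical usage figures in this post:

```python
# Hypothetical cost to Google of routine ranking reports.
sites = 1_000_000        # sites running ranking reports
keywords = 100           # keywords checked per report
reports_per_year = 12    # monthly reports
cost_per_query = 0.12    # estimated cost per "core search query" (USD)

annual_cost = sites * keywords * reports_per_year * cost_per_query
print("Annual cost:    $%s" % format(annual_cost, ",.0f"))       # $144,000,000
print("Ten-year cost:  $%s" % format(annual_cost * 10, ",.0f"))  # $1,440,000,000
```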

Now, I fully understand the importance of ranking reports when it comes to SEO clients. That said, there are folks out there abusing the system, running ranking reports on thousands of keywords daily.

Does Google's removal of the following paragraph mean that creating static copies of dynamic pages is no longer necessary?

"Consider creating static copies of dynamic pages. Although the Google index includes dynamic pages, they comprise a small portion of our index. If you suspect that your dynamically generated pages (such as URLs containing question marks) are causing problems for our crawler, you might create static copies of these pages. If you create static copies, don't forget to add your dynamic pages to your robots.txt file to prevent us from treating them as duplicates."

http://www.google.com/support.....py?answer=40349&ctx=sibling

Until today, Google suggested creating static versions of dynamic pages. The reason being, Googlebot had difficulty crawling dynamic URLs, especially those containing question marks and/or other symbols. To prevent duplicate content issues on sites with both static and dynamic versions, Google suggested "disallowing" the dynamic version via robots.txt. While this tactic helped the engines, some would say maintaining both versions thinned PageRank as well as the relevancy of anchor text in inbound links. Either way, it will be interesting to see how this move impacts Flash sites that offer a "static" version. Safe to say, Google is quickly advancing in its ability to crawl content!
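For reference, the now-removed advice boiled down to a robots.txt rule along these lines. This is only a hypothetical sketch: the "Disallow: /*?" pattern relies on Google's wildcard extension to robots.txt to keep Googlebot off the query-string URLs while the static copies remain crawlable.

```
# Hypothetical robots.txt following Google's old advice:
# block the dynamic, question-mark URLs so only the static copies get crawled.
User-agent: Googlebot
Disallow: /*?
```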