Pushing Bad Data: Google’s Latest Black Eye

Google stopped counting, or at least publicly displaying, the number of pages it indexed in September of 2005, after a schoolyard “measuring contest” with rival Yahoo. That count topped out around 8 billion pages before it was removed from the homepage. News broke recently through various SEO forums that Google had suddenly, over the past few weeks, added another few billion pages to the index. This might sound like cause for celebration, but the “accomplishment” would not reflect well on the search engine that achieved it.

What had the SEO community buzzing was the nature of those fresh few billion pages. They were blatant spam: pages stuffed with Pay-Per-Click (PPC) ads and scraped content, and in many cases they were showing up well in the search results, pushing out far older, more established sites in the process. A Google representative responded on the forums by calling it a “bad data push,” a characterization that met with groans throughout the SEO community.

How did someone manage to dupe Google into indexing so many pages of spam in such a short period of time? I’ll provide a high-level overview of the process, but don’t get too excited. Just as a diagram of a nuclear explosive isn’t going to teach you how to build the real thing, this article won’t let you run off and do it yourself after reading it. Still, it makes for an interesting tale, one that illustrates the ugly problems cropping up with ever-increasing frequency in the world’s most popular search engine.

A Dark and Stormy Night

Our story begins deep in the heart of Moldova, sandwiched scenically between Romania and Ukraine. In between fending off local vampire attacks, an enterprising local had a brilliant idea and ran with it, presumably away from the vampires. His idea was to exploit how Google handled subdomains, and not just a little bit, but in a big way.

The heart of the issue is that currently, Google treats subdomains much the same way it treats full domains: as unique entities. This means it will add the homepage of a subdomain to the index and return at some point later to do a “deep crawl.” A deep crawl is simply the spider following links from the domain’s homepage deeper into the site until it finds everything, or gives up and comes back later for more.
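To make the “deep crawl” idea concrete, here is a minimal sketch in Python of a spider that follows links breadth-first from a homepage, staying on the same host. It is purely illustrative: the function names and the page budget are my own, and it bears no relation to how GoogleBot is actually implemented.

```python
# Toy illustration of a "deep crawl": breadth-first link following from a
# homepage, staying on the same host. Not Google's crawler, just a sketch.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def deep_crawl(homepage, max_pages=50):
    """Follow links from the homepage until nothing new is found
    or the page budget runs out; returns the set of URLs seen."""
    host = urlparse(homepage).netloc
    seen, queue, fetched = {homepage}, deque([homepage]), 0
    while queue and fetched < max_pages:
        url = queue.popleft()
        fetched += 1
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip pages that fail to load
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # Stay on the same host; each subdomain is treated as its own entity.
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen
```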

Briefly, a subdomain is a “third-level domain.” You’ve probably seen them before; they look something like this: subdomain.domain.com. Wikipedia, for instance, uses them for languages: the English version is “en.wikipedia.org” and the Dutch version is “nl.wikipedia.org.” Subdomains are one way to organize large sites, as opposed to using multiple directories or even separate domain names altogether.

So, we have a kind of page Google will index virtually “no questions asked.” It’s a wonder no one exploited the situation sooner. Some commentators believe the reason may be that this “quirk” was introduced after the recent “Big Daddy” update. Our Eastern European friend got together some servers, content scrapers, spambots, PPC accounts, and some all-important, very inspired scripts, and mixed them all together thusly…

Five Billion Served, and Counting…

First, our hero crafted scripts for his servers that would, when GoogleBot dropped by, start generating an essentially endless number of subdomains, each with a single page containing keyword-rich scraped content, keyworded links, and PPC ads for those keywords. Spambots were then sent out to put GoogleBot on the scent via referral and comment spam across tens of thousands of blogs around the world. The spambots provided the broad setup, and it didn’t take much to get the dominoes to fall.
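To give a sense of how little machinery the subdomain side of this requires, here is a minimal sketch, assuming a wildcard DNS record so that any hostname under the domain resolves to one server. The hostname, port, and handler name are my own inventions; it simply generates a bare page from whatever subdomain was requested, and the scraped content, keyword links, and PPC ads described above are deliberately left out.

```python
# Minimal sketch of a wildcard-subdomain setup: one server, pointed at by a
# wildcard DNS record (*.example.com is a placeholder), answers for any
# subdomain by reading the Host header and generating a page on the fly.
from http.server import BaseHTTPRequestHandler, HTTPServer


class WildcardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "")
        # "anything.example.com" -> "anything"; every subdomain "exists".
        keyword = host.split(".")[0].replace("-", " ") or "home"
        body = (f"<html><head><title>{keyword}</title></head>"
                f"<body><h1>{keyword}</h1></body></html>").encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("", 8000), WildcardHandler).serve_forever()
</ ```

Because each page is generated from the hostname itself, nothing has to exist ahead of time: every subdomain the spider is lured to via those spammed links resolves to a “real” page.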

 
