The New World Of Search Engine Rankings

There are plenty of ways to make money online, and many of them revolve around getting good search engine rankings. If we can achieve top positioning in the SERPs (Search Engine Results Pages), we gain access to free search engine traffic. Naturally, there are innumerable products, strategies, and schemes for getting top rankings.

Because top listings can be so lucrative, plenty of people continually look for ways to ‘game’ the system – to buy or manipulate their way to the top of the results. On the other side of the coin, Google is constantly struggling to ‘clean up’ its database so as to show what it deems the legitimate ‘best’ and ‘most relevant’ results. Recently Google initiated another in its ongoing series of updates to its ranking algorithm, or formula. This “update” had the effect of knocking down many sites that were either ‘thin’ – meaning they had little or no unique content – or relying on manufactured or bought backlinks.

Pages manufactured at that kind of speed could hardly have had their content written by hand. So the software that produced them – and it was ingenious software – had to resort to other means. These largely fell into two groups: RSS feeds and what came to be called “scraped” content. The problem with RSS feeds was that lots of other people were using the same feed. The problem with scraped content was that it belonged to someone else. In both cases, the obligatory hyperlink back to the source (which could be turned off in the case of scraped content) bled PageRank away and in other ways compromised the integrity of your site. Both practices also had a habit of leaving footprints for the search engines to spot. Lawyers’ purses bulged a bit as well.

At about the same time, people searching the Internet complained of seeing bland web pages with content that was either non-existent, meaningless or repetitive (even, heaven forbid, duplicate). The search engines addressed this by punishing web sites that displayed those tendencies, and so raised the informational quality of their listings for a while. This punishment consisted of altering their algorithms so that sites or pages which demonstrated such blandness were either pushed so far down the listings that they effectively could not be seen, or delisted altogether (banned).

Along came a flurry of remedies. You could pay ghost-writers at Elance or Rentacoder to produce the content for you to a specified keyword density (but even at $3 an hour it was expensive if you wanted to replace all those thousands of pages that had just been banned by Google). Then a whole mini-industry of private label membership sites came along, charging a monthly fee to use their thousands of stock articles with no copyright questions asked. (But those articles seldom contained the specific keyword phrases you wanted, you could never control the keyword density, and you just knew that lots of other people were using the same articles from the same membership sites.)
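
Since “keyword density” keeps coming up, here is a minimal sketch of how it is commonly calculated – occurrences of the phrase, weighted by the phrase length, as a share of the page’s total word count. Conventions vary, and the function name, sample text, and exact formula here are illustrative assumptions, not anything specified in the article.

```python
import re


def keyword_density(text: str, phrase: str) -> float:
    """Keyword density under one common convention: the share of the page's
    words accounted for by occurrences of the phrase, as a percentage."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    if not words or n == 0:
        return 0.0
    # Count every position where the phrase appears as a run of whole words.
    hits = sum(
        1
        for i in range(len(words) - n + 1)
        if words[i:i + n] == phrase_words
    )
    # Each hit covers n of the page's words.
    return 100.0 * hits * n / len(words)


sample = "Dog training tips: dog training works best while the dog is young."
print(round(keyword_density(sample, "dog training"), 1))  # 2 hits in 12 words -> 33.3
```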

Other software came along and inserted random text at the top and bottom of each article, so that each page became unique in its own way. Still more software substituted common words in existing PLR articles with stock synonyms (word went round that if a page was at least 28 percent different from another page, you were okay). The problem was that if the page was read as a whole, it made no sense at all. But it could still fool the search engines. Just.
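
That “28 percent” figure was only ever a rumor, and nobody published how the difference was supposed to be measured. As a hedged illustration of how such a comparison might be approximated, the sketch below compares two pages by their overlapping word three-grams (“shingles”) and reports 100 minus the Jaccard similarity. The shingle size, the choice of metric, and all names here are assumptions of this example, not a documented search engine method.

```python
def shingles(text: str, k: int = 3) -> set:
    """Overlapping k-word sequences ("shingles") drawn from the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}


def percent_different(a: str, b: str, k: int = 3) -> float:
    """How different two pages look: 100 minus the Jaccard similarity
    (shared shingles over total distinct shingles), as a percentage."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa and not sb:
        return 0.0
    jaccard = len(sa & sb) / len(sa | sb)
    return 100.0 * (1.0 - jaccard)


original = "The quick brown fox jumps over the lazy dog near the river bank"
spun     = "The fast brown fox leaps over the idle dog near the river bank"
print(round(percent_different(original, spun), 1))  # 84.2 - few 3-grams survive the swaps
```

Notice that even light synonym swapping leaves very few shared three-word shingles, which is why such measures flatter the spinner while the resulting prose reads like nonsense.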

The search engines were reported to have recruited thousands of student “editors” to manually weed out such aberrations from their indices. More emphasis was placed on non-reciprocal inbound links with the appropriate keywords in the anchor text (or within ten words left or right of the anchor text), and other “off-page” considerations. And so it went on. And on.

There were all sorts of “solutions” offered to those webmasters who had known the heady days of the big-figure Google checks for doing very little, and were willing to pay almost any price to return to them. Accordingly, the software became more ambitious. In turn, the search engines became more demanding, and there were increasing signs that perfectly legitimate sites were being punished as well as the spam pages.

We seem to have reached a point where something has to give. The browsing public deserves better than the scraped content, RSS feeds and abundance of proto-plagiarism it still gets. What is needed is content that makes sense, that is readable by real people and of genuine value, and that still ticks all the boxes of the search engine bots’ latest algorithm. Webmasters need such content too, and they have an understandable need to produce it on demand for their increasingly information-hungry readers. To satisfy such demands, it is unlikely that one piece of software alone will suffice. Instead, it seems clear that a system of content delivery needs to exist that is sophisticated enough to produce content of value to all concerned.
