Google’s history of differentiating good content from bad content
Just as it is common to come across both good and bad people in life, it is common for internet users to come across both good and bad content online. From Google’s inception, its search algorithm has served as the basis for SEO and as a platform that helps internet users locate information conveniently. However, the spread of bad content gradually polluted Google’s platform, prompting successive modifications of the original algorithm, which now relies on more than 200 unique signals to differentiate “good content” (ranked higher) from “bad content” (ranked lower). Differentiating good content from bad content begins with crawling and indexing both types: Google developed robots (called Googlebot) to crawl the web so that content can be indexed and evaluated for quality.
Changes in Google’s algorithms that have been differentiating good content from bad content
After developing its first ranking algorithm, PageRank, in 1997, Google has continuously modified its algorithms in a quest to differentiate between good and bad content. Below, we look at the major changes (most of which occurred from 2011 onward) that have helped Google differentiate good content from bad:
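The core idea behind the original PageRank can be sketched as a simple iterative rank propagation over a link graph: a page earns rank from the pages that link to it. The toy implementation below is purely illustrative, assuming a standard damping factor of 0.85; Google’s production systems are far more complex and combine hundreds of other signals.

```python
# Toy PageRank: iterative rank propagation over a tiny link graph.
# Illustrative sketch only -- not Google's production implementation.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    ranks = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        new_ranks = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # dangling page: spread its rank evenly across all pages
                for p in pages:
                    new_ranks[p] += damping * ranks[page] / n
            else:
                # each outlink receives an equal share of this page's rank
                for target in outlinks:
                    new_ranks[target] += damping * ranks[page] / len(outlinks)
        ranks = new_ranks
    return ranks

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" ends up ranked highest: it is linked to by both "a" and "b".
```

Pages that attract links from other well-linked pages accumulate rank, which is why link-based manipulation later became a target of updates like Penguin.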
(i) Google Panda
Google Panda (launched in 2011) was created to detect “content farms” and keep them from appearing in Google search results. Content farms produce bad content: shallow, grammatically incorrect, improperly punctuated information that is overly stuffed with keywords. In a further attempt to differentiate good content from bad, Panda also acted against scraper websites (sites that create bad content by “scraping” original content from existing websites), preventing them from showing up in the upper echelons of Google’s results pages.
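Panda’s actual signals are proprietary, but a crude keyword-stuffing heuristic of the kind it supersedes can be sketched as follows. This is an assumption-laden illustration: the function, threshold, and sample texts are invented for demonstration.

```python
# Crude keyword-stuffing heuristic (illustrative assumption only;
# Panda's real quality signals are proprietary and far more sophisticated).
from collections import Counter

def keyword_density(text):
    """Return the frequency of the most repeated word as a fraction of all words."""
    words = [w.lower().strip(".,!?") for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    _, top_count = Counter(words).most_common(1)[0]
    return top_count / len(words)

def looks_stuffed(text, threshold=0.15):
    """Flag text whose single most repeated word exceeds the threshold."""
    return keyword_density(text) > threshold

stuffed = "cheap shoes cheap shoes buy cheap shoes cheap shoes online"
natural = "Our store sells a wide range of footwear at fair prices."
```

Here `looks_stuffed(stuffed)` is true while `looks_stuffed(natural)` is false: repeating one keyword in 40% of the words trips the (hypothetical) threshold, whereas natural prose does not.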
(ii) Google Penguin
Google Penguin (launched in 2012) targeted webspam, aiming to decrease the rank of bad content that violated Google’s quality guidelines on keyword stuffing and intentional duplication of original content from other websites. Google holds that cramming too many keywords into a page creates a negative experience for site users and makes the content incomprehensible. Incomprehensibility and a lack of unique, relevant content are treated as signals for Penguin to lower the rank of bad content.
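How duplicated (scraped) content can be detected is not published by Google, but a classic stand-in technique is word shingling with Jaccard similarity: two pages that share most of their short word sequences are likely near-duplicates. The sketch below is illustrative only, with invented sample strings and an arbitrary shingle size.

```python
# Near-duplicate detection via word shingles and Jaccard similarity --
# an illustrative stand-in; Penguin's actual methods are not public.

def shingles(text, k=3):
    """Return the set of k-word shingles (consecutive word tuples) in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

original = "google updates its algorithms to reward unique and relevant content"
scraped = "google updates its algorithms to reward unique and relevant content today"
fresh = "writing original articles with clear structure helps readers and rankings"

sim_scraped = jaccard(shingles(original), shingles(scraped))
sim_fresh = jaccard(shingles(original), shingles(fresh))
```

The scraped copy scores a high similarity to the original (it shares almost all shingles), while independently written text scores near zero.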
(iii) Google Hummingbird
Google launched Hummingbird in 2013 as a brand-new algorithm that reuses parts of older systems such as Panda and Penguin. Hummingbird does not affect SEO, nor does it differentiate between good and bad content. Likewise, although Google released the Pigeon update in 2014 and the Mobile update in early 2015, neither was designed to differentiate between good and bad content.
(iv) Other Google Algorithm Updates
Prior to the release of Google Fred in 2017, the Possum update (released in 2016) improved location-based searches, although it did not differentiate good content from bad. The Fred update was released to lower the rank of bad (low-quality) content created for the sole purpose of generating ad revenue.
It can be observed that, from Panda to Fred, Google has continuously updated its algorithms to make it harder for bad content to manipulate its rankings.