Google Search has been using popularity to determine reliability.
A clever strategy when the Internet was in its infancy, but now, just like in high school, popularity can be overrated. (If you doubt this statement, go on Facebook and track down where your high school homecoming queen and quarterback ended up.)
While Google continues to base its search algorithm on how many other pages link to a particular site, it has added and tweaked hundreds of other signals to improve the quality of search results.
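The link-counting idea underneath all of this can be sketched in a few lines. This is a minimal illustration of the classic PageRank iteration, not Google's actual implementation; the tiny link graph and the damping factor of 0.85 are standard textbook placeholders, invented here for the example:

```python
# Minimal PageRank sketch: a page's score depends on how many pages
# link to it, weighted by those pages' own scores. The link graph
# below is invented for illustration.
links = {
    "a": ["b", "c"],   # page "a" links to "b" and "c"
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new
    return rank

ranks = pagerank(links)
# "c" comes out on top: it is linked to by both "a" and "b".
```

Popularity, in other words, is just link structure: the pages with the most (and most important) inbound links win.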
Unfortunately, the search engine optimizers are often one step ahead. The upshot? When we look for reliable information, we end up at sites that fall short on quality but excel at search engine optimization. And research, not surprisingly, indicates that most people equate link order with reliability. [i] It’s hard to resist, right? Pop a topic in Google, grab one of the first links, and off you go. And probably 80% of the time the information is “good enough.”
Who created the English muffin?
It’s a piece of, excuse the pun, cake. But what about the other 20% of the time? And what about when the stakes are high? Not just a bet with a friend but a serious health concern or news about a life-threatening event. Companies are not the only ones manipulating search results. The New York Times just reported on Russian trolls who are hard at work causing all kinds of mayhem, such as spreading a completely fabricated report of a powerful chemical explosion in Louisiana around the web.
So what to do?
Google has decided that it can be the arbiter of truth.
OK, a Google team is now working on a system that counts the number of incorrect “facts” within a page and gives a higher ranking to sources with the fewest false facts. The team will compute a “Knowledge-Based Trust Score” for each page, based on software that compares it to “verified” facts that have been pulled off the Internet and put in Google’s “Knowledge Vault.” Assuming we now know everything about everything, this shouldn’t be a problem. Anyone discovering any new knowledge will be quickly bumped down in the rankings for daring to pony up an unverified fact.
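The basic scoring idea can be sketched as follows. To be clear, this is a toy, not Google's system: the real work extracts factual claims from a page's text and uses probabilistic models to account for extraction errors, while the fact store and the page's claims below are invented placeholders:

```python
# Toy sketch of a "knowledge-based trust" score: compare the factual
# claims a page makes against a store of verified facts, and score
# the page by the fraction of its checkable claims that are correct.
# The fact store and the sample claims are invented placeholders.
verified_facts = {
    ("water", "boils_at_celsius"): "100",
    ("earth", "orbits"): "sun",
}

def trust_score(page_claims):
    """Fraction of a page's checkable (subject, predicate, object)
    claims that match the fact store; None if nothing is checkable."""
    checkable = [c for c in page_claims if (c[0], c[1]) in verified_facts]
    if not checkable:
        return None  # nothing to verify -- the gray area discussed below
    correct = sum(1 for s, p, o in checkable if verified_facts[(s, p)] == o)
    return correct / len(checkable)

page = [("water", "boils_at_celsius", "100"),   # true claim
        ("earth", "orbits", "moon")]            # false claim
score = trust_score(page)  # 0.5 -- one of two checkable claims is correct
```

Notice what the `None` branch concedes: a page full of claims the vault has never seen gets no score at all, which is exactly where genuinely new knowledge lives.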
But kidding aside, what about all the gray areas? As in almost all the world’s knowledge…
When I was writing my recent book I spent years researching topics such as:
- Is red wine good for you?
- Do open-plan office designs result in greater productivity?
- Are crowd-sourced restaurant reviews reliable? Are the reviews real? Are there alternative sources?
- Where is the best place to get travel information when you are researching a big trip?
- Do dogs experience some rudimentary form of empathy?
- Where is the best place to find reliable science information?
The answers to these questions cannot just be crowd-sourced or simply verified automatically by tapping Google’s Knowledge Vault. They are too complex and nuanced.
A thoughtful paper on this topic from the Center for Information Retrieval and Microsoft Research discusses a more grounded approach to finding trustworthy information. The researchers ask how to deal with controversial issues, whether search engines should be in the business of serving us what is “good” for us versus what we want, and how one can even determine which topics are controversial to begin with. Karen Blakeman also provides a good discussion of this topic in And You Thought Google Couldn’t Get Any Worse, as does an article in New Scientist.
Call me old-fashioned, but I’m a big fan of curators, editors, gatekeepers, oh and – LIBRARIES – that help me start my search with collections of information that have been vetted. These point me straight to piles of trustworthy information, even information that may not be all that popular or part of Google’s Knowledge Vault.
[i] Hargittai, Eszter, Lindsay Fullerton, Ericka Menchen-Trevino, and Kristin Yates Thomas. “Trust Online: Young Adults’ Evaluation of Web Content.” International Journal of Communication 4: 27 (April 2010).