Who’s Your Information Daddy?

Google: “Latina Girls.”

Go ahead. I’ll wait.

What did you get? Concerning, right?

My top five hits:

  • Sexy Latina Girls (YouTube)
  • 7 Reasons Every Man Should Date a Latina (ReturnofKings.com)
  • Sexy Latina Girls (Facebook)
  • Sexy Latina Girl (Twitter)
  • Best Latina Girls in Boston: Centerfolds (Yelp)

Wait… Did I ask to have sex with a Latina girl?


The only thing Google got right in my filtered results was that I do live in the Boston area, but I’m an old hetero female. Why the sex?

And for the 14-year-old Latina looking for role models: This is what she gets from Google’s filtered results? Ugh.

Safiya Umoja Noble, an assistant professor at UCLA, investigates how search-engine bias harms women and girls. Her research leads her to ask:

 What does it cost us more broadly in terms of the human experience and degradation of our humanity in that we over-invest in private solutions to information and search? 

Advertisers heavily drive Google’s search rankings, and the harm isn’t limited to women and girls. While I agree with the adage “you are what you eat” – with my hips providing ample proof – I also strongly believe that you are what you read. I guess it’s my librarian roots showing.

Eons ago we selected the information we read by choosing newspapers and magazines and relying on the filters that librarians, publishers, and bookstores imposed on us. There were certainly downsides to this (picture those daily New York Times editorial meetings where a group of old, privileged white males decided what to put on the front page), but at the same time you could always choose to read other publications.

Today a 26-year-old Facebook engineer named Greg Marra and his team design the algorithms that exert a huge editorial influence on what we read. In fact, he and his small group shape the news that 30% of U.S. adults consume every day when they read their Facebook newsfeeds.

Marra claims he is not an editor. He says his team has been very careful not to editorialize, focusing instead on giving us what will be most engaging based on data: our own past likes and friend choices, and ratings from human raters who work for Facebook.

Not so fast!

While the average Facebook user is eligible to receive roughly 1,500 posts a day, most of us only look at about 300. To make sure those 300 are the most meaningful, Facebook uses thousands of data points to customize each feed, mixing close friends’ posts with paid ads. If you liked your friend Robin’s baby photo as an act of charity: more baby photos for you! (Not necessarily a good thing.) Facebook’s team of human raters helps by ranking news stories on a scale of 1 to 5. I’m just praying they are not all 20-something guys who like playing beer pong.
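The whittling of 1,500 candidate posts down to 300 can be pictured as a weighted scoring step. The sketch below is a toy illustration of that idea; the signal names and weights are hypothetical, not Facebook’s actual (proprietary) model.

```python
# Toy feed-ranking sketch: score each candidate post by a weighted sum of
# engagement signals, then keep only the top slice of the feed.

def score_post(post, weights):
    """Weighted sum of a post's signal values (missing signals count as 0)."""
    return sum(weights[name] * post.get(name, 0.0) for name in weights)

def rank_feed(posts, weights, limit):
    """Return the `limit` highest-scoring posts, best first."""
    return sorted(posts, key=lambda p: score_post(p, weights), reverse=True)[:limit]

# Hypothetical weights: close friends and video are weighted heavily,
# which is how an Ice Bucket Challenge clip crowds out a news story.
weights = {"close_friend": 3.0, "is_video": 2.0, "past_likes_similar": 1.5}

posts = [
    {"id": "ice_bucket", "close_friend": 1, "is_video": 1, "past_likes_similar": 1},
    {"id": "news_story", "close_friend": 0, "is_video": 0, "past_likes_similar": 0.2},
    {"id": "baby_photo", "close_friend": 1, "is_video": 0, "past_likes_similar": 1},
]

top = rank_feed(posts, weights, limit=2)
print([p["id"] for p in top])  # the news story never makes the cut
```

Under these made-up weights the news story scores 0.3 against 6.5 and 4.5 for the friend posts, so it silently drops out of the feed, which is the whole point of the Ferguson example below.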

But what’s the big deal? Facebook, Amazon, Google and other large platforms we use daily are only trying to give us what we will find most engaging in order to keep us on site so that we can see an occasional targeted ad. A small price to pay right?

Unfortunately there are large prices to pay for having algorithms dictate most of what we consume every day. Here’s why:

Algorithms discriminate.

Whatever prejudices and biases exist out there in the world, algorithms have a tendency to magnify them. (Remember the search for Latina Girls?) Also, algorithms are designed by humans who have an interest in engaging our eyeballs so they can show us more ads. If cat videos get a lot of likes (or fawns caught in fences, lions playing with Labrador retrievers, or moose falling through ice), then that’s what the algorithm selects.

In 2014, after the police shooting of Michael Brown in Ferguson, little of the ensuing protests showed up on Facebook in the following weeks. What was heavily weighted instead was the “Ice Bucket Challenge,” because our close friends were participating and videotaping themselves (both close friends and video are heavily weighted in the algorithm). The Facebook newsfeed suppressed news of the protests because it was not deemed “relevant” or engaging enough for most users. Interestingly, Twitter’s unfiltered feed led the pack in reporting on the latest happenings in Ferguson, which pushed the mainstream press to pick up on the importance of the story, with Facebook trailing behind. (See this for more.)

While Facebook admits to the Ferguson phenomenon, it claims the episode was “anomalous” and that all it is doing is analyzing data: i.e., serving up cat videos to people who “like” cat videos, when not many people at the time were “liking” protests.

Algorithms are not Transparent

The New York Times has human editors who decide “all the news that’s fit to print.” While we may not always agree with what they include or exclude, the process is fairly transparent: everyone views the same paper every day, and though the editors don’t provide a list of items they’ve “suppressed,” it is on record when they miss something important. With Facebook feeds and Google Search we don’t know what is being withheld or given priority. Everyone’s feed is different, and it changes by the minute. It’s not just that the filtering is opaque: a recent survey found that 62% of users don’t even realize their feed is filtered and tweaked at all.

Facebook’s now-infamous 2014 experiment, in which a pool of roughly 700,000 users was split in half, with one half shown more positive stories in their feeds and the other half more negative stories, was also not transparent. Facebook got in hot water for experimenting on human subjects without consent, but the experiment also revealed two important things:

  1. Mass emotional contagion can be produced by tweaking people’s feeds.
  2. No one can easily tell when their feed is being tweaked.

Wow. Can throwing elections be far behind? Does this perhaps explain the rise of Trump?

Algorithms Know More About You Than Your Parents Do

Not only do algorithms pick up on your political affiliations, based on your likes, buying habits, and friends, they can also infer your health, sexual preferences, and much more. A father recently complained, offended, that Target was pushing baby-product ads to his teenage daughter. He later apologized when he learned that, by mining its data, Target had figured out well before he had that his daughter was pregnant.

Facebook and Google are not the only ones controlling the information we see. Amazon and other sites make money through filters too. When you type “toaster” into Amazon, the filter weighs popularity and positive reviews, but it also pulls paid listings to the top of the results, and those are not necessarily the best toasters.
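The mechanics of that paid bump are easy to sketch: each listing gets an organic score (relevance, reviews), and sponsored listings receive an extra boost before the final sort. The names and numbers below are illustrative assumptions, not Amazon’s actual ranking formula.

```python
# Toy sketch of how a paid boost can reorder results: a sponsored listing
# with a worse organic score can still outrank a better-reviewed product.

def final_score(listing, paid_boost=2.0):
    """Organic score, plus a flat boost if the seller paid for placement."""
    score = listing["organic_score"]
    if listing["sponsored"]:
        score += paid_boost
    return score

listings = [
    {"name": "best_rated_toaster", "organic_score": 9.0, "sponsored": False},
    {"name": "sponsored_toaster",  "organic_score": 7.5, "sponsored": True},
]

ranked = sorted(listings, key=final_score, reverse=True)
# The sponsored toaster (7.5 + 2.0 = 9.5) now edges out the better one (9.0).
print([item["name"] for item in ranked])
```

The shopper sees only the final ordering, not the boost that produced it, which is exactly the transparency problem described above.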

Are there alternatives? Could search ever be in the control of the public sector? Like libraries, perhaps? (I guess this is my own bias showing, right?) As Dr. Noble says: What does it cost us as a society to have information search controlled by private companies?


Does Google Cheat?


A new study funded by Yelp suggests that Google has its “thumb on the scale” when it comes to providing searchers with the “best” links. Really? Even without a study, just try typing the following into Google:


  • Restaurant reviews New York – first result is Zagat, owned by Google!
  • Restaurant reviews Chicago – first result is Zagat, owned by Google!

The Zagat name was long associated with reliable, quality reviews, but since Google bought Zagat in 2011 the quality has gone downhill, with aggregated review data often based on reviews that are badly out of date.

On the other hand, having competitor Yelp fund the study makes the findings a little suspect, even if the researchers come from Harvard and Columbia Universities.

Danny Sullivan, of Search Engine Watch, probably said it best:

“…it comes across more to me as a public relations exercise rather than precise science.”

So yes, Google cheats; and yes, a study funded by its competitor is difficult to trust; but what is really difficult to trust are user-review sites like Yelp, Zagat, and others.

Aggregating “crowd wisdom” on review sites often includes fake reviews, sponsored content (Elite Yelpers are given free food and then rate it 5 stars), and reviews that are just plain off topic.

A study from Purdue University found that user-generated reviews devote a far greater percentage of their word count to service and price than do semi-professional and professional restaurant reviews.

  • The sushi here is amazing! It literally melts in your mouth. BUT little did I know, this place charges 18% gratuity no matter how big your party is. That really really really disappointed me and definitely hurt their rating. –2-star review of Sugarfish by David N.
  • wtf, try to go to Langers on the day after Thanksgiving, and it is CLOSED! For four days!!!–1-star review of Langer’s Delicatessen Restaurant by Carla B.

Call me crazy, but if I were Google, I would put at the top of my search results professional bloggers and journalists who review restaurants.

Try out these, for example:

Jonathan Kauffman

Pete Wells 

Robert Sietsema


Are Cat Videos Influencing Your Science News?


[Excerpt from: Finding Reliable Information Online: Adventures of an Information Sleuth]

…For science information in particular, there is concern that crowdsourcing and click rates are influencing what people find when they use a search engine, and this in turn shapes how we make sense of a topic. Brossard calls this a self-reinforcing informational spiral: how people search for a topic influences how a search engine like Google weighs and retrieves content.

Brossard questions whether we are really making science more accessible to lay people online, or whether we are moving toward a science-communication process in which knowledge is greatly shaped by the links search engines pull up, in effect narrowing our options.[i] Moving forward, as more people are fed news through Facebook, Twitter, and other social media sites, this phenomenon will likely accelerate.[ii] It may be that your friend who keeps “liking” cat videos is also choosing the science news you read.

[i] Brossard, Dominique and Dietram A. Scheufele. “Science, New Media, and the Public.” Science 339, 40 (2013).

[ii]Mitchell, Amy. State of the News Media 2014. Pew Research Journalism Project State of the Media (March 26, 2014); Miller, Claire Cain. “Why BuzzFeed is Trying to Shift Its Strategy.” New York Times (August 12, 2014); Goel, Vindu and Ravi Somaiya. “With New App, Facebook Aims to Make Its Users’ Feeds Newsier.” New York Times (February 4, 2014).



Wait Wait… Don’t Tell Me: Taylor Swift, Tenure, and Scholarly Research.


We laugh, but only to stop from sobbing.

On a recent NPR game show, a Taylor Swift joke poked fun at a common research mistake: treating anecdotal evidence as meaningful, generalizable information. This mistake shows up a lot on Q&A sites such as WikiHow or Answers.com, sites that commonly land near the top of a Google search. People often take their own experience, or information gathered from one or two people, and generalize it to an entire population. While it is less common for this type of faulty reasoning to come from a professor, we all make mistakes…

PESCA: Here is your next limerick.

KURTIS: Dads don’t want a Norm Mailer gift, nor oars from some old sailor’s shift. On this Dada’s Day-Day, give albums by Tay-Tay ’cause fathers sure love…

BROWN: Taylor Swift?


KURTIS: Taylor Swift. You are smart.


PESCA: Taylor Swift, indeed.

KURTIS: Very good.

PESCA: Thirteen-year-old girls aren’t the only ones who love Taylor Swift. According to John Covach at the University of Rochester’s Institute of Pop Music, Taylor is a big hit with dads too. Covach knows this because he interviewed a real dad – John Covach.


PESCA: When asked to assess the scholarship in question, John Covach’s employer, the University of Rochester, said (singing):

                  You are never, ever, ever getting back your tenure.

Now, despite a funny video of a dad and son lip-syncing a Taylor Swift song together, as far as I can determine no research study or poll has been conducted on whether fathers in particular are especially fond of Taylor Swift. Or can someone out there prove otherwise? It’s always challenging to prove a negative.




Does that Sound Like Research to You?

Three days after his wife died from cyanide poisoning, medical researcher Robert Ferrante typed a search into Yahoo! Answers:

How would a coroner detect when someone is killed by cyanide?

Several months before his wife was poisoned, Ferrante also searched for:

  • information on divorce laws,
  • how to tell if a woman was having an affair,
  • and the legal definition of “malice of forethought,” an apparent reference to the legal term “malice aforethought,” meaning premeditation.

During his trial, the prosecutor used Ferrante’s online searches against him. Ferrante claimed that he was just doing “research” related to his work. In closing arguments the prosecutor noted that:

…one article was titled, “Illinois man wins the lottery, poisoned by cyanide,” and she asked the jury, “Does that sound like research to you?”

Ferrante was later sentenced to life in prison without parole for first-degree murder.

Ferrante’s choice to use Yahoo! Answers is a little like choosing to play the lottery: it’s alluring, and it occasionally pays off with good, reliable information, but often it doesn’t. For a question about cyanide detection in the body, a number of reliable sources surface easily in a Google search. If you can get past the first few hits from companies selling cyanide-detection products, just a few results down are reputable sites from the National Institutes of Health, state poison centers, and scholarly medical articles about cyanide detection and poisoning. These are far more reliable than the random, anonymous person who might answer a Yahoo! Answers question.

Unlike searching for facts, throwing a question like “Is my wife having an affair?” out to Google or Yahoo! Answers can be dicey. Suddenly all the vultures pop up to “help.” Affaircare.com, Savemysexlessmarriage.com, and beyondaffairs.com rise to the top, but WikiHow snags first billing on my Google search with the enticing click-bait of:

How to Tell if Your Wife is Cheating (With Pictures)

The pictures are disappointing, but WikiHow also cites sources, probably to boost its ranking and grab the number-one slot in Google. The citations are not to research articles or even marriage-counselor advice columns, but to other sites trying to boost their own traffic with hastily thrown-together “articles.” One of the four citations is to matchmove.com, which provides a brief, useless, poorly written article on how to determine whether your wife is cheating, and then suggests you might want to blow off steam by gaming on, you guessed it, Matchmove.com, its social media gaming platform based in Singapore.

In the same way that Ferrante got sidetracked into viewing the site about the lottery winner who died of cyanide poisoning, I find myself quickly sucked into investigating Matchmove.com. Why would WikiHow cite such a shoddy website? I jump on Wikipedia, and the article on Matchmove.com screams:

This article has multiple issues … appears to be written by a contributor with a close connection to the subject … is an orphan—has no links to it or from it.

I hear the little voice in my head mimicking the prosecutor in the Ferrante case: Does that sound like research to you?

[Excerpt from: Finding Reliable Information Online: Adventures of an Information Sleuth]


Is Google the Best Place for Legal Information? Well… Yes and No


Ray Tomlinson, a 62-year-old from Warren, Michigan, just wanted to get home quickly: driving from Arizona to Michigan is a long trip. When his girlfriend in the passenger seat turned out to be not sleeping but dead from a drug overdose, he knew what to do. Whipping out his smartphone, he searched for Arizona laws on how soon you are required to contact the police when someone dies. As the police officer who later arrested Ray related:

“He then does an Internet search via his phone,” said Warren police Sergeant Stephen Mills. “He says he finds on the Internet that he has 48 hours to take her to a medical examiner or to a morgue.” The information was wrong, Mills said.[i]

What did Ray do wrong? Where do I begin…

[Excerpt from: Finding Reliable Information Online: Adventures of an Information Sleuth]

[i] Associated Press. “Man Drives Hundreds of Miles with Corpse Passenger.” Boston Globe (June 5, 2014).



Google’s Knowledge-Based Trust Score

Do you want to be popular, or do you want to be trustworthy?

Google Search has long used popularity as a proxy for reliability.

A clever strategy when the Internet was in its infancy, but now, just as in high school, popularity can be overrated. (If you doubt this, go on Facebook and track down where your high school homecoming queen and quarterback ended up.)

While Google continues to base its search algorithm on how many other pages link to a particular site, it has added and tweaked hundreds of other signals to improve the quality of search results.

Unfortunately, the search engine optimizers are often one step ahead. The upshot? When we look for reliable information we end up at sites that fall short on quality but excel at search engine optimization. And research, not surprisingly, indicates that most people equate link order with reliability.[i] It’s hard to resist, right? Pop a topic into Google, grab one of the first links, and off you go. And probably 80% of the time the information is “good enough.”

Who created the English muffin?

It’s a piece of, excuse the pun, cake. But what about the other 20% of the time? And what about when the stakes are high? Not just a bet with a friend, but a serious health concern or news about a life-threatening event. Companies are not the only ones manipulating search results. The New York Times just reported on Russian trolls hard at work causing all kinds of mayhem, such as spreading word around the web of a powerful chemical explosion in Louisiana that was completely fabricated.

So what to do?

Google has decided that it can be the arbiter of truth.

OK, a Google team is now working on a system that counts the number of incorrect “facts” within a page and gives a high ranking to sources with the fewest false facts. The team will compute a “Knowledge-Based Trust Score” for each page, using software that compares its claims against “verified” facts that have been pulled off the Internet and stored in Google’s “Knowledge Vault.” Assuming we now know everything about everything, this shouldn’t be a problem. Anyone discovering any new knowledge will be quickly bumped down in the rankings for daring to pony up an unverified fact.
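The core idea can be sketched in a few lines: treat a page as a bag of (subject, predicate, object) fact triples and score it by the fraction found in a reference vault. This is a toy illustration of the concept under my own assumptions, not Google’s actual system, and the sample “facts” are placeholders.

```python
# Toy knowledge-based trust score: the fraction of a page's extracted fact
# triples that match a reference "vault" of verified triples.

knowledge_vault = {
    ("english_muffin", "invented_by", "samuel_bath_thomas"),
    ("earth", "orbits", "sun"),
}

def trust_score(page_triples, vault):
    """Fraction of a page's factual claims found in the vault (0.0 to 1.0)."""
    if not page_triples:
        return 0.0
    verified = sum(1 for triple in page_triples if triple in vault)
    return verified / len(page_triples)

page = [
    ("english_muffin", "invented_by", "samuel_bath_thomas"),  # in the vault
    ("earth", "orbits", "sun"),                               # in the vault
    ("red_wine", "cures", "everything"),                      # not in the vault
]
print(trust_score(page, knowledge_vault))  # 2 of 3 claims verified
```

Note the built-in bias: any claim the vault hasn’t seen yet, including a genuinely new discovery, lowers the score exactly as a falsehood does, which is the problem with gray areas raised below.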

But kidding aside, what about all the gray areas? As in, almost all of the world’s knowledge?

When I was writing my recent book, I spent years researching topics such as:

  • Is red wine good for you?
  • Do open-plan office designs result in greater productivity?
  • Are crowd-sourced restaurant reviews reliable? Are the reviews real? Are there alternative sources?
  • Where is the best place to get travel information when you are researching a big trip?
  • Do dogs experience some rudimentary form of empathy?
  • Where is the best place to find reliable science information?

The answers to these questions cannot simply be crowd-sourced or verified automatically against Google’s Knowledge Vault. They are too complex and nuanced.

In a thoughtful paper on this topic from the Center for Information Retrieval and Microsoft Research, a more grounded approach to finding trustworthy information is discussed. The researchers ask how to deal with controversial issues, whether search engines should be in the business of serving us what is “good” for us versus what we want, and how one can even determine which topics are controversial to begin with. Karen Blakeman also provides a good discussion of this topic in “And You Thought Google Couldn’t Get Any Worse,” as does an article in the New Scientist.

Call me old-fashioned, but I’m a big fan of curators, editors, gatekeepers, and, oh yes, LIBRARIES, which help me start my search with collections of information that have been vetted. These immediately point me to piles of trustworthy information, even information that may not always be popular or part of Google’s Knowledge Vault.

[i] Hargittai, Eszter, Lindsay Fullerton, Ericka Menchen-Trevino, and Kristin Yates Thomas. “Trust Online: Young Adults’ Evaluation of Web Content.” International Journal of Communication 4: 27 (April 2010).



Five for Five?

Do you know what that means? It means that after you ride in an Uber, just before you exit, you agree to give the driver a five-star rating if they agree to give you a five-star rating.

How meaningful are rating systems if everyone gets five stars? After two years spent researching my new book, Finding Reliable Information Online: Adventures of an Information Sleuth, I’m convinced that many rating systems are seriously flawed, some work really well, and very few people can tell the difference between the two.

More importantly, as I learned about the “Psychology of Search,” I realized how important it is to understand what we bring to the table when we are looking for reliable information. It is nearly impossible to stare five stars in the face and not have warm fuzzy feelings about a restaurant, hotel, taxi, or anything else, even if those stars are completely contrived.

So how do you figure out which star-rating systems are reliable and which are rigged? Well, I could explain it to you, but then I’d have to errrr… maybe you might want to read my new book? Available soon at a library near you: look it up in WorldCat (after Sept. 2015).
