Who’s Your Information Daddy?

Google: “Latina Girls.”

Go ahead. I’ll wait.

What did you get? Concerning, right?

My top five hits:

  • Sexy Latina Girls (YouTube)
  • 7 Reasons Every Man Should Date a Latina (ReturnofKings.com)
  • Sexy Latina Girls (Facebook)
  • Sexy Latina Girl (Twitter)
  • Best Latina Girls in Boston: Centerfolds (Yelp)

Wait… Did I ask to have sex with a Latina girl?

No.

The only thing Google got right in my filtered results was that I do live in the Boston area, but I’m an old hetero female. Why the sex?

And for the 14-year-old Latina looking for role models: This is what she gets from Google’s filtered results? Ugh.

Safiya Umoja Noble, an assistant professor at UCLA, investigates how search-engine bias negatively affects women and girls. Her research leads her to ask:

 What does it cost us more broadly in terms of the human experience and degradation of our humanity in that we over-invest in private solutions to information and search? 

Advertisers heavily drive Google’s search rankings, and the adverse impact isn’t limited to women and girls. While I agree with the adage “you are what you eat” – with my hips providing ample proof – I also strongly believe that you are what you read. I guess it’s my librarian roots showing.

Eons ago we selected the information we read by choosing newspapers and magazines and by relying on the filters that librarians, publishers, and bookstores imposed on us. There were certainly downsides to this (picture those daily New York Times editors’ meetings where a group of old, privileged white males decided what to put on the front page), but you could always choose to read other publications.

Today a 26-year-old Facebook engineer named Greg Marra and his team design data-driven algorithms that exert a huge editorial influence on what we read. In fact, he and his small group shape the news consumed every day by the 30% of U.S. adults who read their Facebook newsfeed.

Marra claims he is not an editor. He says his team has been very careful not to editorialize and instead focuses on giving us whatever the data says will be most engaging: our own data on past likes and friend choices, plus the data from human raters who work for Facebook.

Not so fast!

While the average Facebook user is eligible to see roughly 1,500 posts a day, most of us are shown only about 300. To make sure those 300 are the most meaningful, Facebook uses thousands of data points to customize each feed, mixing close friends’ posts with paid ads. If you liked your friend Robin’s baby photo as an act of charity: more baby photos for you! (Not necessarily a good thing!) Facebook’s team of human raters helps by ranking news stories on a scale of 1 to 5. I’m just praying they are not all 20-something guys who like playing beer pong.
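
To make the mechanics concrete, here is a minimal sketch of engagement-based feed ranking. Every weight, feature name, and formula below is invented for illustration; Facebook’s actual model draws on thousands of signals and is not public.

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        kind: str             # "photo", "video", "status", "ad"
        likes_from_you: int   # times you've liked this author's posts before
        is_close_friend: bool
        rater_score: float    # 1-5 rating from Facebook's human raters

    # Hypothetical weights -- invented for illustration only.
    WEIGHTS = {"close_friend": 3.0, "video": 2.0, "past_likes": 1.5, "rater": 1.0}

    def score(post: Post) -> float:
        """Toy engagement score: higher means more likely to make your ~300."""
        s = WEIGHTS["rater"] * post.rater_score
        s += WEIGHTS["past_likes"] * post.likes_from_you
        if post.is_close_friend:
            s += WEIGHTS["close_friend"]        # close friends are boosted
        if post.kind == "video":
            s += WEIGHTS["video"]               # video is weighted heavily
        return s

    def build_feed(candidates: list, limit: int = 300) -> list:
        """Rank ~1,500 candidate posts and keep only the top ~300."""
        return sorted(candidates, key=score, reverse=True)[:limit]

Notice that nothing in this score asks whether a post is newsworthy or important, only whether it is likely to be engaging.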

But what’s the big deal? Facebook, Amazon, Google, and the other large platforms we use daily are only trying to give us what we will find most engaging, in order to keep us on site so we can see an occasional targeted ad. A small price to pay, right?

Unfortunately, there is a real price to pay for letting algorithms dictate most of what we consume every day. Here’s why:

Algorithms Discriminate

Whatever prejudices and biases exist out there in the world, algorithms have a tendency to magnify them. (Remember the search for Latina Girls?) Algorithms are also designed by humans who have an interest in keeping our eyeballs engaged so they can show us more ads. If cat videos get a lot of likes (or fawns caught in fences, lions playing with Labrador retrievers, or moose falling through ice), then cat videos are what the algorithm selects.

In 2014, after the police shooting of Michael Brown in Ferguson, little of the ensuing unrest showed up on Facebook in the weeks that followed. What the algorithm weighted heavily instead was the “Ice Bucket Challenge,” because our close friends were participating and videotaping themselves (both close friends and video are weighted heavily in the algorithm). The newsfeed suppressed news of the protests because it was not deemed “relevant” or engaging enough for most users. Interestingly, Twitter’s unfiltered feed led the pack in reporting on the latest happenings in Ferguson, which pushed the mainstream press to pick up on the importance of the story, with Facebook trailing behind.

While Facebook admits to the Ferguson phenomenon, it claims the effect was “anomalous” and that all it is doing is analyzing data: serving up cat videos to people who “like” cat videos, when not many people at the time were “liking” riots.

Algorithms Are Not Transparent

The New York Times has human editors who decide “all the news that’s fit to print.” While we may not always agree with what they include or leave out, the process is fairly transparent: everyone sees the same paper every day, and though the editors don’t publish a list of items they’ve “suppressed,” it is on record when they miss something important. With Facebook feeds and Google Search we don’t know what is being withheld or given priority. Everyone’s feed is different, and it changes by the minute. And it isn’t just that the filtering is opaque: a recent survey found that 62% of users don’t even realize their feed is filtered and tweaked at all.

Facebook’s now-infamous 2014 experiment was also not transparent: it took a pool of about 700,000 users and showed half of them more positive stories in their feeds and the other half more negative ones. Facebook got in hot water for experimenting on human subjects without consent, but the study also demonstrated two important things (a brief sketch in code follows the list):

  1. Mass emotional contagion can be produced by tweaking people’s feeds.
  2. No one can easily determine when they are being tweaked.
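
A minimal sketch of that setup, with an invented word list standing in for the LIWC word counts the published study reportedly used; none of this is Facebook’s actual code.

    # Hypothetical word lists -- invented stand-ins for LIWC categories.
    POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
    NEGATIVE_WORDS = {"sad", "angry", "awful", "terrible"}

    def sentiment(post: str) -> int:
        """Crude polarity: positive word hits minus negative word hits."""
        words = set(post.lower().split())
        return len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)

    def assign_condition(user_id: int) -> str:
        """Deterministic 50/50 split into the two experimental pools."""
        return "fewer_negative" if user_id % 2 == 0 else "fewer_positive"

    def filter_feed(posts: list, condition: str) -> list:
        """Silently drop posts of one emotional polarity -- the 'tweak'."""
        if condition == "fewer_negative":
            return [p for p in posts if sentiment(p) >= 0]
        return [p for p in posts if sentiment(p) <= 0]

A feed filtered this way looks exactly like an ordinary feed, which is point 2 above: no one can easily tell when they are being tweaked.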

Wow. Can throwing elections be far behind? Does this perhaps explain the rise of Trump?

Algorithms Know More About You Than Your Parents Do

Not only do algorithms pick up on your political affiliations (based on your likes, buying habits, and friends), they can also infer your health, your sexual preferences, and much more. One father was offended that Target was pushing baby-product ads to his teenage daughter, and he complained; he later apologized when he learned that Target’s data had figured out, well before he did, that his daughter was pregnant.
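
The mechanism behind such inferences is mundane: a handful of purchase signals, each nudging a score past a threshold. This is a minimal sketch loosely inspired by press accounts of the Target story; the products, weights, and threshold are all invented, since the real model was never published.

    # Invented signal weights -- Target's actual model is not public.
    PREGNANCY_SIGNALS = {
        "unscented lotion": 0.4,
        "prenatal vitamins": 0.9,
        "cotton balls, large bag": 0.3,
        "zinc supplements": 0.3,
    }
    THRESHOLD = 1.0

    def pregnancy_score(purchases: list) -> float:
        """Sum the weights of any tell-tale products in a purchase history."""
        return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchases)

    basket = ["unscented lotion", "prenatal vitamins", "toothpaste"]
    if pregnancy_score(basket) > THRESHOLD:
        print("Queue baby-product ads for this customer.")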

Facebook and Google are not the only ones controlling the information we see. Amazon and other sites make money from their filters too. When you type “toaster” into Amazon, the ranking does weigh popularity and positive reviews, but it also pushes paid listings to the top of the results, and those are not necessarily the best toasters.
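
A toy version of that ranking shows how little it takes for paid placement to beat quality; the weights and the sponsored boost are invented, since Amazon’s real algorithm is proprietary.

    from dataclasses import dataclass

    @dataclass
    class Listing:
        name: str
        avg_rating: float   # 1-5 stars
        sales_rank: int     # lower = more popular
        sponsored: bool     # seller paid for placement

    SPONSORED_BOOST = 10.0  # hypothetical flat bonus for paid listings

    def relevance(item: Listing) -> float:
        """Toy score mixing quality signals with paid placement."""
        quality = item.avg_rating * 2 + 10 / item.sales_rank
        return quality + (SPONSORED_BOOST if item.sponsored else 0.0)

    results = [
        Listing("SturdyToast 2000", avg_rating=4.8, sales_rank=3, sponsored=False),
        Listing("BrandX Toaster", avg_rating=3.9, sales_rank=40, sponsored=True),
    ]

    # The paid listing outranks the better-reviewed, better-selling toaster.
    for item in sorted(results, key=relevance, reverse=True):
        print(f"{item.name}: {relevance(item):.1f}")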

Are there alternatives? Could search ever be in the control of the public sector, like libraries perhaps? (I guess this is my own bias showing, right?) As Dr. Noble asks: what does it cost us as a society to have information search controlled by private companies?
