Using Google for Answers
Earlier in the week, I was working with somebody on installing Dropbox on her machine. For some reason, Dropbox would not install after she clicked the “Install” link; instead, we kept getting a blank screen. So I went to Google to see if it could point me to an answer, and lo and behold, it directed me to a Dropbox help page that laid out several options for downloading Dropbox if the main installation method does not work. Google, then, is a very useful tool for researching and learning, but I recently learned that there is a dark side to it.
In How Search Engines Are Making Us More Racist, reporter Sean Illing and author Safiya Noble discuss the perils of Google and how it can lead you to racist views or radical ideas if you are not careful. If you rely on the Internet for your news or information, as a lot of young folks do these days, you could find yourself driven toward alarming viewpoints. The sidebar below gives an example of the process; I’ve quoted part of the article word for word because it is really important to understand how we can be manipulated (unintentionally by Google but probably knowingly by bad actors), and putting it in my own words would not convey the danger nearly as well. The article gives a fuller explanation of how Google’s business model inadvertently drives the process.
This isn’t a trivial topic, especially in a world where people get more information from search engines than they do from teachers or libraries. For Noble, Google is not just telling people what they want to know but also determining what’s worth knowing in the first place.
These search algorithms aren’t merely selecting what information we’re exposed to; they’re cementing assumptions about what information is worth knowing in the first place. That might be the most insidious part of this.
In his manifesto, Dylann Roof has a diatribe against people of color, and he says that the first event that truly awakened him was the Trayvon Martin story. He says he went to Google and did a search on “black-on-white crime.” Now, most of us know that black-on-white crime is not an American epidemic — that, in fact, most crime happens within a community. But that’s a separate discussion.
So Roof goes to Google and puts in a white nationalist red herring (“black-on-white crime.”) And of course, it immediately takes him to white supremacist websites, which in turn take him down a racist rabbit hole of conspiracy and misinformation. Often, these racist websites are designed to appear credible and benign, in part because that helps them game the algorithms, but also because it convinces a lot of people that the information is truthful.
This is how Roof gets radicalized. He says he learns about the “true history of America,” and about the “race problem” and the “Jewish problem.” He learns that everything he’s ever been taught in school is a lie. And then he says, in his own words, that this makes him research more and more, which we can only imagine is online, and this leads to his “racial awareness.”
And now we know that shortly thereafter, he steps into the “Mother” Emanuel AME Church in Charleston, South Carolina, and murders nine African-American worshippers in cold blood, in order to start a race war.
So the ideas that people are encountering online really matter. It matters that Dylann Roof didn’t see the FBI statistics that tell the truth about how crime works in America. It matters that he didn’t get any counterpoints. It matters that people like him are pushed in these directions without resistance or context.
Vox, Sean Illing, “How Search Engines Are Making Us More Racist”, April 6, 2018
But it’s not just Google that has these problems; Facebook and Twitter have the same kinds of issues. Where Facebook’s Fake News Has Deadly Consequences and Does Facebook Just Harbor Extremists provide real-world examples of how hate and racism grow through these platforms, sometimes leading to death. While the examples come from the developing world, the same thing can happen in the developed world; just look at the rise of nativism in Europe and the U.S.
These platforms are unknowingly training us to be extremists, according to the second article on Facebook. The basis of Facebook’s business model is to increase our social engagement so that Facebook can gather data and learn what kinds of ads to put in front of us; it is those ads that earn Facebook’s revenues. Unfortunately, the best way to pull us in and increase our engagement is through fear or anger. Another natural driver of engagement is tribalism, which is itself connected to fear and anger. And finally, the reward system of likes trains us to post more of this material in order to keep earning those likes, and perhaps attain viral status.
Add to this incentive system the lack of any filter to weed out extremism, such as a respected news organization, family, or school, and extremism eventually becomes mainstream, especially among the young and the impressionable.
"But as a business, Facebook obviously has an agenda that is about more than just connecting people. Facebook’s “core functions are to deploy its algorithms to amplify content that generates strong emotional responses among its users and then convert what it learns about our interests and desires into targeted ads,” writes Siva Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia."
Daily Beast, Cathrin Schaer, “Where Facebook’s Fake News Has Deadly Consequences”, April 17, 2018.
So how do we protect ourselves from being led astray? I really don’t know. I think being aware is the first step. I will still use Google, but the type of question will play a role. How-to questions, such as how to do something or how to fix something, should be safe because those kinds of questions don’t involve opinion or taste. For questions about world events, history, or even facts, I would rely on known reliable sources such as the BBC, the New York Times, the Washington Post, and the Wall Street Journal. But even then, do a lot of research and don’t rely on one source. If multiple sources say the same thing, then the likelihood increases that the “facts” or “story” are credible. Even so, those sources could be wrong: think about the days before the Iraq War, when most news sources were in favor of the war and yet turned out to be wrong. You have to be skeptical and question everything.
"There is one fundamental law that the dial-a-hate-message people have broken, and for which they can offer no excuse. In the name of freedom, they disregard the truth. And truth, above all else, is what makes possible our American way of life. Democracy could not long survive on a diet of half-truth and vicious innuendo. Neither can the foolish, irresponsible impressions of the freedom ringers long exist in the glaring light of truth."
Fast Company, Steven Melendez, “Before Social Media, Hate Speech and Propaganda Spread By Phone”, April 2, 2018.
Now where have I heard such sentiments before?
The above quote comes from Before Social Media, Hate Speech and Propaganda Spread by Phone, and it turns out that this problem of fake news and racist speech has always been around; it is not unique to our times. The article gives a historical review of hate speech and propaganda since at least the 1940s, and we struggled through it and overcame it for a time. So we just need to learn once again how to fend for ourselves and develop new policies to fit the technologies of our time.