This reading brought to light several issues the author argues technological redlining fosters. Many of these were incredibly eye-opening, evoking a sense of fear and frustration. Safiya Umoja Noble discusses the ways in which the algorithms that control what we see, when we see it, and how we see it are made by individuals. Noble says, “… some of the very people who are developing search algorithms and architecture are willing to promote sexist and racist attitudes openly at work and beyond, while we are supposed to believe that these same employees are developing ‘neutral’ or ‘objective’ decision-making tools. Human beings are developing the digital platforms we use…” (2). These mathematical formulas carry a human imprint, which ultimately leaves the power of bias and discrimination in the hands of the creator. This is a terrifying thought for many reasons: first, the underlying and sometimes prominent racism on the World Wide Web, and second, the sense of powerlessness I felt while reading this. The inventors of these algorithms are not only technologically advanced; they hold power and control over what content is distributed. They ultimately have the final say over what material is shown on search engines and social media, or advertised to users, leaving everyday people with the challenge of sifting through and spotting these inequalities. How do we stop this? If the internet is supposed to be a free and open space for all, how can these issues still exist?
Reading these pages in particular made me think of the documentary I brought up in class last week that discussed Cambridge Analytica. For those who are unfamiliar, Cambridge Analytica is a consulting firm that processes users’ data to influence or sway feelings on certain topics. During the 2016 presidential election here in the United States, it was discovered that Cambridge Analytica had harvested personal data from millions of Facebook users, without their consent, in hopes of targeting certain groups of people for political advertising. If the firm saw that a user supported the Republican Party, or liked a page or article having to do with Donald Trump, Cambridge Analytica would tailor the content that user saw to increase the person’s support of the Republican Party. The same thing would occur for someone who was a Democrat; their content would be altered for the “greater good” of a political campaign. Taking this information from users without consent damages not only their safety but their mindset as well. If people don’t need to look any further than their timelines for information that supports their political beliefs, then they won’t (whether the article or advertisement is true or false). This hinders people’s ability to think freely and gives political candidates an unfair advantage over others running for office. Facebook and its creator, Mark Zuckerberg, are still being investigated and continue to testify in front of Congress.
In a day and age when information is only a click away, one would hope that the content we view would not be influenced as heavily as it is. This book serves as an eye-opener to anyone who reads it; we are never “safe” online, and our search history and information are constantly being monitored and evaluated.
-Adrienne
Discussion Questions:
- What efforts can teachers and professors take to better educate their students about the issue of technological redlining? How can we educate each other to spot and call out racist ideas on the internet?
- Even though studies show that search engines like Google return racist results, why do you think people still use them so frequently? Do you think Google’s popularity will ever decrease?
- Can you think of any other search engines or websites that portray groups of people in a certain way? Or sites that contain algorithms that can be racist?

Work Cited: Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York University Press, 2018).