Are My Eyes Biased?

In “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective,” Donna Haraway tackles the question of feminist objectivity and what it would mean to establish facts and a reality without privileging the identities that tend to be assumed as the default.

Haraway explains objectivity not as seeing everything or seeing from no perspective, but as establishing truths and being held accountable for them. She argues that the act of seeing is not a passive, objective perception onto which we later add judgment; it is an inherently active, unique, and individual process. To truly be objective, Haraway says, you can’t pretend you can see everything in the world perfectly. Vision is a way of modulating knowledge that we can’t escape. Instead, we should take responsibility for what we know and how we know it. She does not simply privilege the voices of oppressed people, for example; she argues that theirs are not perfect or innocent objective positions either. Rather, they are less likely to claim their knowledge is true in spite of their situation. A marginalized person recounting their own experience will speak as their own person while acknowledging how their identities have shaped their view of that experience. As Haraway writes, “they are preferred because in principle they are least likely to allow denial of the critical and interpretive core of all knowledge” (584). This stands in contrast to identities perceived as the default, whose bias and experience don’t get questioned; such a person loses the attachment between their self and how they process knowledge, leading to studies and science that are fundamentally flawed or biased.

The problem is that we’ve stopped questioning where our knowledge comes from and taking accountability for it. A camera or other technology is not objective; it too has its own interpretation of the world: how it gathers its data, what its reach and limitations are. To interpret data from a machine is to do the same work as seeing, or to see from another perspective. But human vision isn’t a perfect, objective thing either. It’s tied to the ideal of knowing and seeing everything, treated as a passive way to acquire knowledge, yet the way we learn is rooted in our identities and experiences. Haraway writes, “We need to learn in our bodies, endowed with primate color and stereoscopic vision, how to attach the objective to our theoretical and political scanners in order to name where we are and are not, in dimensions of mental and physical space we hardly know how to name” (582). We need an objective foundation, something we can agree on, within our perceptions, our bodies, and our histories.

Knowledge becomes, or always was, inherently personal when viewed through this lens. To depersonalize it, to play into the fantasy of knowing without being, inherently privileges those who are seen as having the fewest “othering” ties to themselves. It’s very easy for a cisgender straight white man to see himself only as a ‘person’ when his identities are considered the default, while a trans lesbian of color, for example, has to consider her experiences when putting out knowledge or ideas, because her ideas are seen as special and other while his experiences are almost invisible. And so the person seen as most capable of being objective is the one least likely to be held accountable for their knowledge.

Questions:
Haraway’s text, being mostly theoretical, doesn’t make much use of examples. As you were reading, what examples popped into your mind to relate this text back to your own life? Did that help or complicate your reading of the text?

How can knowledge be affected by identity and experience? How have the other readings for this class so far played with the ideas of bias and knowledge?

Haraway, D. (1988). Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. Feminist Studies, 14(3), 575–599.

Modern Day Suppression

Today people say that racism is disappearing. But technology is subliminally racist in ways people may not even notice unless it is directed toward them. My dad would always say, “Be careful what you’re saying; you never know who you may upset with your words.” So doesn’t that mean that everything is racist in a way, if it is seen in the wrong way by someone? Before the internet was invented, people had to advertise by word of mouth and through the newspapers. Now Google and other search engines and services like Yelp can be racist in ways that will only be seen if you’re looking for them. A hairdresser who thrived in an environment with less technology is not going to be as successful because of the algorithms implemented in these different applications. As Kandis, the hairstylist Noble interviews, puts it, “prior to the changes the Internet brought about, I never had to do much advertising because my name carried weight through the campus and the graduate school” (173). As a Black hairdresser, some of the racism that may occur online is that you may not get the front page or the same attention as other hairdressers who are more privileged. Websites like Yelp are set up so that whoever pays more for better representation gets it, but that’s not always what happens. It seemed a bit unfair to Kandis, because when she first started her business it was all about doing your job well so people would come via word of mouth; nowadays you have to pay to be put at the top of the search page, or you have to have some kind of gold membership to get more views on your website. Kandis said, “The algorithm shouldn’t get to decide whether I exist or not” (175). This furthers my argument: if you were good at your job, you should get the recognition you deserve, not because you signed up first or paid more money for your account.

Google today is starting to monopolize our ideas through what it shows us on the internet. Competition is important if our society is to become less racist. If you look up martinlutherking.org, it will show up as a neo-Nazi white supremacist website run by the group Stormfront. If Google had many more competitors, it would have to fix its algorithm so it was fair for everyone to use, and sites like martinlutherking.org would be what they claim to be. Google hasn’t affected or offended me personally, but I support everyone having equal rights and equal opportunity in life to succeed.

-Ryan

Questions:

Do you feel like information is being suppressed from you now more than ever?

Do you use other search engines besides Google for different questions you have, in order to get an unbiased answer?

Noble’s Concluding Statements…

Noble’s “Conclusion” and “Epilogue” in her book Algorithms of Oppression highlight her research on the inequalities that exist on the internet, in which popular, public algorithms continue to ingrain oppressive and discriminatory ideas in our society. Noble stresses that because these companies are so user-friendly and accessible, people unaware of systemic oppression become even more of a target for biases spread on a platform the public sees as safe, neutral, objective, and reliable. Noble ties up her book with an interview about how platforms create a “lack of identity control” (173). Algorithms affect individuals’ lives, mostly minorities’ lives, outside the computer too: their identities are negatively targeted, pushed away, or taken control of, to make room for the non-marginalized.

Noble’s choice to use an interview for her conclusion was a strong writer’s move because it allows readers to feel close to the issue and reminds them that situations such as Kandis’s can happen to almost anyone. Noble’s argument in presenting this interview is that the African-American community (as well as other minority groups) must create an influence in the field of technology studies to push back against and challenge racist, sexist, and economic biases. Black feminist technology studies (BFTS) is a topic I had not heard of, yet I assumed it was already being practiced in the corporate world of technology. As Noble writes, BFTS is an “epistemological approach to researching gendered and racialized identities in digital and analogue media studies, and it offers a new lens for exploring power as mediated by intersectional identities” (171-172). Noble highlights that Black feminist technology studies are crucial when planning for the growth of equity online. Having African-American feminists, youth, and women work to erase stigmas by becoming a large percentage of the contributors behind the screen of algorithms and other online platforms that carry negative biases allows for the growth of inclusive content. This idea reminds me of the push for students to learn science and math during the race to space. At that time, the United States government was pushing for its students to be experts in these fields so they could contribute to the competition, driven by America trying to be first in technological advancement rather than letting Russia or any other country beat it. If society pushed equity in technological advancement, it would look like students of color and more girls being taught in school how to code, how to make platforms more user-friendly, and how to curate credible information on websites. Is this not just as important, if not more so?
We need to ask ourselves why we aren’t pushing the idea of getting behind the screen to fix the inequalities that it projects back onto us. Technology is going to be here forever, continuously growing whether we decide to admit it or not.

Kandis’s issue with Yelp could have been at least partially solved if Yelp had actual people checking what type of content was being shown regarding her business, or if its algorithms were made to protect the “rights” of its users. Kandis’s personal business started to fall short when the community around her physically changed, along with the online representation and reputation of her hair salon. Kandis used Yelp as a way to get her business name out to the public, since before the age of the internet, word of mouth had gotten her the amazing reviews she needed for the salon’s intense popularity. Yet Kandis found that competitors who weren’t even close to what her salon offers were taking over Yelp, leaving her salon in the shadows of the search pages. Her positive reviews did not carry over to the online representation of her salon, and she was effectively paying for her White competitors to be placed ahead of her! Kandis states, “I can’t find myself and why, when I use certain keywords…they are suggesting that I don’t exist” (Noble 177). Everyone under the Constitution of the United States has rights that can be exercised, and Kandis’s non-existent business and online identity reminded me of the roots this may come from: a seemingly far-fetched comparison to slavery. If people of color aren’t being represented as their individual selves, are considered property online, and are overrun by the controlling power of White companies, what freedom do people of color get? Where are their rights? Why does the work of African-Americans become property, even at this point in history, when today’s society is supposed to be great at recognizing the rights of the individual? Companies such as Yelp hold far greater wealth than their customers, which gives them power over certain individuals in multiple contexts, shaping those individuals’ standing on the internet.

Noble’s “Conclusion” and “Epilogue” leave readers with a heavy responsibility to try to enact change, in any form, against algorithms of oppression. The last two chapters really solidified for me the point of Noble’s research and the immense effect it has on all of society.

Works Cited: 

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018. Print.

Discussion Questions:

  1. How would you feel if the company you worked for constantly put your competitors’ and/or peers’ identities before yours? What does this do to a person’s emotions? How would you try to make a name for yourself if the company you work for owns everything under your name and identity? In what way do you feel this is a violation of property?
  2. Do you agree with Noble’s suggestions on how to “stop” these algorithms of oppression? Why do you think I asked this question with “stop” in quotations? Do you think that it is possible to make progress with the suggestions that Noble explains such as using “alternatives to commercial information platforms”? What do you think we can teach our friends, family, students, and whoever necessary, regarding Noble’s closing chapters?

Who suffers from Google’s Algorithms?

Nowadays, Google is a search engine that individuals often turn to under the notion that the information on the site is factual and that it yields search results that do not marginalize any group in particular. Most people are unaware that the information that appears on the website is a reflection of its users’ beliefs, and that society still holds a variety of sexist ideas about women and stereotypical views about people of color. This means that the search engine is not the only issue; it also serves as a platform that encourages negative outlooks on marginalized populations. Moreover, the internet users searching on Google are the problem, and minority groups are the ones who reap the negative consequences.

Safiya Umoja Noble discusses what we see and when we see it on the search engine, which represents Google’s algorithmic conceptualizations of a variety of people and ideas as a whole. Google is coupled with algorithmic practices of biasing information that reflect advertising interests. This partnership with and influence from advertisers makes search results inherently biased. Noble writes, “Deep machine learning, which is using algorithms to replicate human thinking, is predicated on specific values from specific kinds of people, namely, the most powerful institutions in society and those who control them” (29). In turn, powerful people control what is produced and shown to the public for their own benefit. Often, when people in power benefit from large companies like Google, minorities, whether social, political, or economic, suffer and are left behind. The search results found on Google are rarely called into question, and because of this most internet users have no idea how these ideas come to dominate the first page of results. The public believes that what rises to the top of the search results is the most popular or the most credible or both, which is not always the case. It is scary to learn that Google creates advertising algorithms, not informative ones. One example Noble discusses is how Google represents women as incompetent, dependent on men, or underrepresented in the workforce. This demonstrates how the content and representation of women, and of women of color, in search engines is consistent with the kinds of biased ideas that live in other advertising channels. These misrepresentations become part of the cycle of oppression of already marginalized groups.

The search engine as a whole reflects notions that are often resisted by women and people of color. Throughout this reading I have learned to see Google as an intersectional power, one that harms marginalized people in compounding ways. Black women and girls continue to have their images and representations assaulted in new media environments. This perpetuates the oppression and inequality that ravage underrepresented communities like women and people of color. Another interesting aspect of this book is the internet’s general association with freedom and individual choice. When search engines like Google are analyzed further, Noble shows that the internet is actually a hub of organized and curated content, presented at the will of the creators and controllers of these resources.

Instead of targeting and oppressing minorities, Black women especially, it would be more beneficial to empower them, not only to bring new perspectives and diversity into the tech world but also to reach a broader audience in a positive way. These accepted norms become so ingrained in society that people generally disregard the possibility of change. And with that mindset, these seemingly unavoidable stereotypes keep impacting populations vulnerable to discrimination.

-Jackie

Discussion Questions

  • How can women and people of color benefit from becoming programmers and building alternative search engines?
  • How might those who are of color ever be able to influence or control the way they are represented on Google?
  • How can people of color/ women be misrepresented online? How can this lead to other consequences? 
  • How can teachers educate students how to look past immediate search results to find resources that are uncommonly used but still credible?

Technological Redlining: Who Suffers?

This reading brought to light several issues the author feels technological redlining fosters. Many of these were incredibly eye-opening, evoking a sense of fear and frustration. Safiya Umoja Noble discusses the ways in which the algorithms that control what we see, when we see it, and how we see it are made by individuals. Noble says, “… some of the very people who are developing search algorithms and architecture are willing to promote sexist and racist attitudes openly at work and beyond, while we are supposed to believe that these same employees are developing ‘neutral’ or ‘objective’ decision-making tools. Human beings are developing the digital platforms we use…” (2). These mathematical formulas carry a human impression, which ultimately leaves the power of bias and discrimination in the hands of the creator. This is a terrifying thought for many reasons: first, the underlying and sometimes prominent racism on the World Wide Web, and second, the sense of powerlessness I felt while reading this. The inventors of these algorithms are people who are not only technologically advanced but who hold power and control over what content is distributed. They ultimately have the final say over what material is shown on search engines and social media or advertised to users, leaving everyday people with the challenge of sifting through and spotting these inequalities. How do we stop this? If the internet is supposed to be a free and open space for all, how can these issues still exist?

Reading these pages in particular made me think of the documentary I brought up in class last week about Cambridge Analytica. For those who are unfamiliar, Cambridge Analytica was a consulting firm that processed users’ data to influence or sway feelings on certain topics. During the 2016 presidential election here in the United States, it was discovered that Cambridge had taken personal data from millions of Facebook users, without their consent, in hopes of targeting certain groups of people with political advertising. If the firm saw that a user supported the Republican Party, or liked a page or article having to do with Donald Trump, Cambridge Analytica would tailor the content that user saw to increase their support of the Republican Party. The same thing would occur for a Democrat; their content would be altered for the “greater good” of a political campaign. Taking this information from users without consent is damaging not only to their safety but to their mindset as well. If they don’t need to look any further than their timeline for information that supports their political beliefs, then they won’t (no matter whether the article or advertisement is true or false). This hinders people’s ability to think freely and gives political candidates an unfair advantage over others running for office. Facebook and its creator, Mark Zuckerberg, are still being investigated and continue to testify in front of Congress.

In a day and age where information is only a click away, one would hope that the content we view would not be influenced as heavily as it is. This book serves as an eye opener to anyone who reads it; we are never “safe” online, and our search history and information are constantly being monitored and evaluated.

-Adrienne

Discussion Questions:

  1. What efforts can teachers / professors take to better educate their students about the issue of technological redlining? How can we educate each other to spot and call out racist ideals on the internet?
  2. Even though studies show that search engines like Google are racist, why do you think people still use them so frequently? Do you think Google’s popularity will ever decrease?
  3. Can you think of any other search engines or websites that portray groups of people in a certain way? Or sites that contain algorithms that can be racist?

Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York University Press, 2018).

Big Brother and the Digital Divide

Cody Zimmer

The digital age has completely revolutionized our lives, both positively and negatively. Information is easily accessible, people are reading and writing on a much larger scale, and it is easier than ever to stay connected with friends and family no matter their geographical location. However, with the spread of the digital movement come increased xenophobia, discrimination, and hate. The Eurocentric, heteronormative systems of oppression woven into the fabric of American society are becoming digitized. The internet began predominantly as a space for upper-class white men, and now they hold sway over the information that gets released and the people who get “red-flagged,” through the development of algorithms that closely monitor those viewed as socially threatening because of their race, sexuality, or income status. Virginia Eubanks warns us of the hidden algorithms that monitor every decision we make and illustrates how this contributes to the spread of legal and financial inequality. These algorithms are not only an infringement of our civil rights but inherently immoral. Everything we buy, view, like, or post is recorded and used as information to sort out potential threats to American society. There is no doubt in my mind that prestigious elites utilize these algorithms to maintain the economic immobility of capitalism and to secure their place at the top of the financial food chain. Eubanks elaborates on this idea when she states, “digital tracking and automated decision-making hide poverty from the professional middle-class public and give the nation the ethical distance it needs to make inhuman choices” (15).

In other words, people are being taken advantage of by immoral programs designed to restrain social mobility. We have abandoned any ethical notion of eliminating poverty and racism and have actually taken steps in the other direction, ensuring that they are never eliminated and remain invisible. If these systems of inequality are not acknowledged or revealed to the general public, they are allowed to continue to function. Yet the more light is shone on this topic, the more people will be forced to deal with it, and maybe some social change can occur.

Discussion Questions:

  1. In what ways do you feel that you are being “spied on” and through what social media/websites?
  2. What are some alternatives insurance agencies could use in order to secure against fraud without discriminating against people of low-income?
  3. Do you think people on Welfare should have their purchases monitored? Why or Why Not?

Always Watching, Always Using

Imagine being in a life transition when the worst possible scenario occurs. This is what happened to Virginia Eubanks and her partner, Jason. When you get a new job your insurance switches, and there may be a period of time when you are without coverage. For Eubanks, her and her partner’s insurance had quickly started up — or so they thought. Jason suffered a very unfortunate accident that left them with high medical bills, pricey prescriptions, and physical therapy. They soon learned that the start date listed for their new insurance was after the incident. They had been red-flagged (Eubanks 2-3). Eubanks, determined to set things right, fought back. The thing about Virginia and Jason’s encounter is that she knew what was going on and what to do about it. Unfortunately, most people would not recognize the warning signs of being flagged for fraud.

The truth is, there are an abundance of people who get used all of the time. It is unbelievably easy. The government, along with insurance agencies, knows everything about everyone. This makes us, especially poor people and families, simple targets. We are always being watched. Eubanks even states, “digital tracking and decision-making systems have become routine…I started to hear them described as forces of control, manipulation, and punishment” (9-10). This is exactly what technology has done and is continuing to do. Low-income, working, and poor people and families get it the worst. Typically, if they are in this financial situation, they will not have access to all of the growing technologies. This makes them easy to use and manipulate, because even if it is the company’s fault, most of the time the company won’t take the blame. Chances are these people will not know what’s going on or how to handle it correctly. Therefore, they end up in a predicament where they owe money they don’t have, leaving them drowning in debt or filing for bankruptcy. Even if they do realize what is unfairly happening to them, they most likely do not have the means to pursue it through the court of law. On the other hand, they also do not have the financial income or stability to let insurance agencies, or any company for that matter, take advantage of them when they are already struggling. So, what do they do?

When I was reading this passage by Virginia Eubanks, it really opened my eyes. It made me think of the time when my mom lost her job and we lost our insurance. I remember her scrambling around to find something that would work. This happened again when my sister got her own insurance and we were no longer eligible for the plan we had. Feeling like this for a short period of time was stressful enough; I couldn’t even imagine having to worry all of the time. Eubanks states, “poor and working-class people are targeted by new tools of digital poverty management and face life-threatening consequences as a result” (11). If life wasn’t hard enough, let’s just make it a little more complicated for those who are already struggling. If it weren’t for our access to technologies and resources, my mom wouldn’t have found insurance for us as fast as she did. Not everyone has the ability to access technologies; therefore, they are always several steps behind and lacking necessities for everyday health, for everyday life.

Discussion Questions:

  1. Since there is such a big group associated with digital poverty, what are some other ways information can be given and resources provided? 
  2. Do you think that there are specific groups and people that get targeted more than others? If yes, why?
  3. Do you think companies should reimburse, or work with, people who experience an issue with them due to a technical error or glitch in the system? 
  4. Which do you think is better in relation to companies and handling personal information: technology or actual people?

By, Allison


Source: Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor.

Racism to Glorification

In this day and age, the internet is a very popular place for people to develop and broadcast their own personal identities. In the early nineties, white men mostly dominated the population in cyberspace, and a digital divide was created between white men and other minority groups on the internet. Nakamura gives readers some insight into how racial groups are generated on the internet: “while in Cybertypes I focused on the constraints inherent in primarily textual interfaces that reified racial categories, in this work I locate the Internet as a privileged and extremely rich site for the creation and distribution of hegemonic and counter hegemonic visual images of racialized bodies” (Nakamura 13). The internet does function in ways that promote and create racist ideas toward minority groups; however, it also works in ways that dismantle these racist ideologies. Racism should be spoken about on the internet rather than ignored. Nakamura discusses “genteel racism,” an ongoing issue on the internet and in society that is similar to the idea of color-blind racism: “color blindness is a symptom of racism. Rather than see and acknowledge racial difference, we would rather not see at all. Thus remaining blind to the effects of the sight of race in a racist culture is a symptom of racism” (Nakamura 3). Many people ignore the fact that racism exists and still works in ways where minority groups are continuously being oppressed.

There are more people from different backgrounds and with different identities now using the internet. “There are many Internet spaces, such as pregnancy bulletin boards, blogs, and livejournals that may now assume a default female user, and others such as petition and dating web sites that assume users of color” (Nakamura 14). The internet is more diverse with all of the social media platforms. Women and minority groups are using the internet in greater numbers, which has created a new internet divide. The internet has the power to build and give definitions to different types of people. However, it also works in ways to destroy these preconceived images, and it should continue to do so. In this era, many people are moving away from internalized racist ideas and becoming more aware of different kinds of people and more accepting of other cultures.

Blackfishing is becoming very popular on social media apps such as Instagram and Twitter. This concept is extremely strange to me and others, because there are people on social media pretending to be, and representing themselves as, Black or mixed. This has created a controversy on the internet between Black women and the white women who pose as if they are Black or mixed. Dara Thurmond, a nurse who resides in New York, says her frustration comes when white women who appear to be posing as Black don’t know the struggle that Black women go through just to be accepted as who they are. I hope all of you can see this shift from the early ’90s, when white men dominated the internet, to the twenty-first century, when there are people posing as Black or mixed race. Many of these social media influencers glorify darker skin, braided hairstyles, curly hair, and fuller lips.

The questions I have for you: Have any of you read a post that led you to be more informed on racial issues, or read any posts on social media that changed your ideas on race and/or racism? If so, what was it about? Why do you think there is this shift in society, and why are more people glorifying people from Black or mixed backgrounds? Do any of you see this as an issue?

Nakamura, Lisa. Digitizing Race: Visual Cultural Aspects of the Internet. University of Minnesota Press, 2008.

Virk, Kameron, and Nesta McGregor. Blackfishing: The Women Accused of Pretending to be Black, BBC News, 18 Dec. 2018.

https://www.bbc.com/news/newsbeat-46427180

From AIM Avatar to SMS Bitmoji

Lisa Nakamura’s piece Digitizing Race introduces many interesting and thought-provoking points. As someone who has been a technology user since childhood, I can see the movement from a textual internet to an internet that shares photos and avatars. As Nakamura states, “The primarily textual Internet no longer dominates and in some cases no longer exists: many MOOs, MUDs and listservs have gone offline” (1). The benefit of the textual internet is anonymity. When you share an idea via text, more often than not it is read without any notion of your race, gender, sexuality, etc., unless explicitly mentioned. When avatars come into play, this changes. Avatars began as simple animations, with different subcultures adopting different avatar styles. Nakamura points to “..the popular internet and its depictions of racialized and gendered bodies” (13). The internet offers many platforms for education and idea sharing, among other things. However, the value of those opinions varies based on who is saying them.

On today’s internet, we no longer primarily operate on IM or AIM chat services. Our society engages more with Facebook, Twitter, Instagram, etc. as our chosen means of internet communication. Beyond social media, some may engage with Reddit, where a profile is not prominently visible yet you can still create your own avatar and username. The usernames and pictures we select to represent ourselves are crucial to how seriously we are (or are not) taken in the internet community.

Nakamura brings up the point: “The interface serves to organize race and gendered bodies in categories, boxes and links that mimic the mental structure of a normative consciousness and set of associations often white and male” (17). Before this class I was not aware of this internet bias, but after reading Lisa Nakamura’s excerpt my eyes were opened to the large bias of the internet. I think this is especially clear on opinion-based forums such as Reddit, Quora, and 4chan, which allow us as internet users to share our opinions. Users can select usernames and pictures to express themselves and give the person reading their post a window into who is speaking to them. As we know in our culture, more often than not the white cis male perspective is taken most seriously. I wonder if people commonly try to hide or alter their identity on these platforms for the sake of being heard.

What interests me is the change from internet avatars in the early 2000’s to a more recent form: Bitmoji. Bitmoji, if you are unfamiliar, is a service that allows you to create a digital selfie down to the race, gender, outfits, and even aura. Today, Bitmoji is used in SMS messaging, on Facebook, and even in classroom settings. The app offers forty different skin tones, which is an improvement over its previous twenty-five. This is a step forward in terms of representation, yet in the internet community we exist in, it can still be a flaw. As Nakamura states, “AIM buddies, pregnant avatars, and other user-created avatars allow users to participate in racial formation in direct and personal ways and to transmit these to large, potentially global audiences of users” (18). This suggests that people can use Bitmoji as they once used AIM avatars: to display their image in whatever way they please.

Some questions that this made me think of are ones of personal use. In the classroom, would you encourage the use of avatars to allow children to express themselves freely and celebrate their culture? Or do you think it is more beneficial to leave the default image and allow them to exist anonymously on the internet? Does that help or hurt? I am also curious to know if anyone has felt oppressed in any way by avatars, Bitmoji, etc. Have you felt misrepresented, underrepresented, or content with your experience?

Nakamura, Lisa. Digitizing Race: Visual Cultures of the Internet. University of Minnesota Press, 2008.

Creating A New Education

Technology has brought us many great advancements, but it has also led our generation to lack social skills, communication skills, and focus. Our generation struggles with face-to-face conversations because of texting, so how can we gain those communication skills back? Who can teach those who struggle with this? Society has changed in many ways, but our education system has always stuck to tradition, leaving us with no outlet to learn how to progress along with technology.

“We’re still going to school the way we did in 1993, which is to say, pretty much as we did in 1893” (6). We still follow the traditions Socrates created, focusing mostly on getting good grades on standardized tests and working to get into a university. Although those aspects build work ethic and knowledge, we should also be learning how to function in society. School is the one place we are told not to use technology, and that never succeeds. Our generation is addicted to the advancements we have made in the technology world; it is hard for us to shut that out for hours inside a classroom. “It’s odd and irresponsible that formal education is the one place where we are not using the devices on which we do our learning all the rest of the time” (76). Technology has given us every answer to any question in the palm of our hand, so why would that not be useful inside a place of learning?

Almost all of our assignments are due online, and tablets have become the new loose leaf. When we step outside of the classroom, we use the internet to help us complete our assignments. I think if we could do the same inside the classroom, students would be more confident in participating and would get more out of the class. In one of my classes last semester, I used my laptop as my “go to” to look up many of the questions my teacher raised, and it helped me participate with more confidence.

The issue is that some teachers are technophobic and cannot steer away from their old-school traditions. They think they are helping us because “things were easier back then,” but in reality, it is only regressing the quality of our education. We are only taught how to be successful prior to and during college, but not after. Once we graduate, we may have many science formulas memorized and know ancient history, but we don’t know how to function in society to succeed.

I see this as a very alarming issue and something I would want to see change, but an important question still lingers in my mind: the “how.” How can we create this major shift in learning as a society? Many teachers are much older than us and did not grow up with technology, so they are against it. Say we create a “new education” and curriculum; how can we be sure that those teachers will know how to properly help us succeed, or whether they will still stick to their traditions? I think the reality is that we are too far along in our advancements to try to shut technology out in certain scenarios. Incorporating technology into classrooms and teaching us how to succeed outside of school will benefit us students more than the old traditional ways. Going to school should mean we are being taught what we need to succeed in our futures. “The college education we need today must prepare our students for their epic journey, the mountains and cliff’s edges” (13). We are constantly modernizing and advancing our society, and education should be included in that.

Discussion Questions:

  • How can we steer away from old traditional classes and create more classes on how to succeed in society? Do you think teachers would be upset and still stick to their old ways?
  • Do you believe that incorporating technology into lectures would help or hurt your focus in class?

Davidson, Cathy N. The New Education: How to Revolutionize the University to Prepare Students for a World in Flux. Basic Books, 2017.
