Learning from Timnit Gebru

Reflections on Dr. Timnit Gebru’s work and her talk to the broader Middlebury campus
Author: Katie Macalintal

Published: April 29, 2023


On April 24th, Dr. Timnit Gebru gave a talk and visited our class virtually for further Q&A on her recent work in AI and tech ethics.

Image of Timnit Gebru by Cody O’Loughlin for The New York Times

About Dr. Timnit Gebru

Dr. Timnit Gebru is a well-known advocate for diversity in technology, who was named one of Fortune’s World’s 50 Greatest Leaders in 2021 and one of Time’s 100 Most Influential People in 2022.

While working at Microsoft in 2018, Dr. Gebru co-authored a research paper with Joy Buolamwini called Gender Shades. Gender Shades investigated facial recognition technologies from Microsoft, IBM, and Face++ and found that the models performed significantly better on lighter-skinned males than on darker-skinned females. More recently, in 2021, while working at Google, Dr. Gebru co-authored a paper called On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. The paper questioned the ethics of large language models and raised concerns about their environmental impact. It noted that Big Tech companies were neglecting the biases being built into language models, which could exacerbate existing inequalities, and argued that these companies, including Google, were prioritizing profits over safety. The paper ultimately led to Google firing her.
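The core technique behind Gender Shades is straightforward to illustrate: evaluate a classifier separately for each intersectional subgroup rather than in aggregate, so that a model that looks accurate overall cannot hide a high error rate on, say, darker-skinned females. Below is a minimal Python sketch of that disaggregated-audit idea; the records are made up for illustration and are not the paper’s actual benchmark or methodology.

```python
from collections import defaultdict

# Hypothetical audit records: (predicted label, true label, skin-type bucket).
# A real audit like Gender Shades would use a large, balanced benchmark.
predictions = [
    ("male",   "male",   "lighter"),
    ("female", "female", "lighter"),
    ("male",   "female", "darker"),   # the kind of error a disaggregated audit surfaces
    ("female", "female", "darker"),
]

correct = defaultdict(int)
total = defaultdict(int)

for predicted, actual, skin in predictions:
    subgroup = (skin, actual)          # intersectional subgroup, e.g. ("darker", "female")
    total[subgroup] += 1
    correct[subgroup] += (predicted == actual)

# Report accuracy per subgroup instead of one aggregate number.
for subgroup in sorted(total):
    print(f"{subgroup}: {correct[subgroup] / total[subgroup]:.0%} "
          f"({total[subgroup]} samples)")
```

The design point is the grouping key: aggregating over skin type and gender together, rather than separately, is what reveals the intersectional disparities the paper reported.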

Today, Dr. Timnit Gebru is a co-founder of Black in AI and the founder and leader of the Distributed Artificial Intelligence Research Institute (DAIR). She continues to challenge companies to be more thoughtful in their creations and points out the ways in which technology can fail us if not thought about carefully. Through her work, Dr. Gebru pushes us to ask ourselves, “What are we building? Why are we building it? And who is it impacting?”

Dr. Gebru’s Talk at FATE in Computer Vision

In 2020, Dr. Gebru gave a talk as part of a Tutorial on Fairness, Accountability, Transparency, and Ethics (FATE) in Computer Vision.

In this talk, Dr. Gebru discusses the negative effects of image recognition, including its biases, its inequalities, and the disproportionate harm it inflicts on various groups and cultures. She emphasizes the impact facial recognition can have on our society, highlighting cases such as HireVue’s attempts to detect candidates’ internal states and the Baltimore police’s misuse of facial recognition. She notes that these technologies’ intended and unintended uses not only further inequities but also infringe on our civil rights.

During her talk, Dr. Gebru highlights that inequality in AI and technology is not solely due to a lack of diversity in datasets but is also a problem with the system itself. While more diverse datasets can yield better testing results, representation must go beyond datasets: Dr. Gebru stresses that the people who have been adversely impacted by algorithms must be represented in decisions about these technologies. To address the biases and inequalities in AI, we must acknowledge the social and structural issues embedded in technology.

Dr. Gebru also urges us to remember that no matter how abstract a technology may seem, everything is connected to people in some way. She pushes us to be more critical of the automated decisions made on our behalf and to ask whether automating these tasks is ethical to begin with. She reminds us that we need a system that investigates algorithms and their side effects before they are released into the world.

TL;DR: Due to unrepresentative datasets, systemic problems, and humans’ innate trust in automated decisions, the intended and unintended uses of image recognition are deepening inequities in our society.

Questions

  • What are some effective systems of refusal, not just for facial recognition but for other AI technologies?
  • What would an effective system that investigates algorithmic bias look like?

Dr. Gebru’s Talk “At” Middlebury

On April 24th, Dr. Timnit Gebru gave a virtual talk to Middlebury about the relationship between artificial general intelligence (AGI) and eugenics, urging us to question who this “utopian” AGI future is really for.

In this talk, she examined the concept of artificial general intelligence (AGI). By comparing definitions from Sam Altman, Peter Voss, Russell & Norvig, and others, she argued that AGI is not a well-defined term. Yet whichever definition one refers to, AGI still promotes a “God can do anything” mindset and, with it, the idea of a utopian future in which AGI will be so intelligent that it can figure out what to do in any scenario and even enhance transhuman minds.

To better understand who it would be a utopia for, she examined the history and meaning of eugenics. While the first wave of eugenics sought to “improve human stock” by eliminating those with “undesirable” traits, the second wave turned to “positive eugenics,” in which people focused on “designing” their children based on hereditarian assumptions about who possesses intelligence. She paid particular attention to the TESCREAL ideologies that grew out of this second wave in the 1990s, which include Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. By examining the properties of this TESCREAL bundle, she drew a direct connection between eugenics and AGI.

Eugenics and AGI are intimately connected through transhumanism, for both share a desire to radically modify the human organism. Modifying humans in these ways promises a utopian future but simultaneously produces apocalyptic fears that these technologies will create “clear and future dangers” that may be unprecedented in human history. Both carry discriminatory views and treat intelligence as a means to colonize space and become posthuman. There is also a close financial link between AGI and eugenics, for TESCREAL-ist billionaires are often the ones funding big AGI companies and projects.

Before attending this talk by Dr. Gebru, I had never thought about how eugenics and AGI could be related. While her argument was a little challenging to follow, the line she drew between the two subjects convinced me of the concerning relationship between them. Her talk has encouraged me to be more critical of the “utopian future” that the media tends to portray in such a positive light, for she showed me that this drive toward AGI is inherently unsafe and undefined. For changes to be made, however, we must be critical of the centralization of power in AGI and hold those in power accountable for the harms their technologies inflict on society.

Reflection

Ultimately, I learned a lot from our interactions with Dr. Timnit Gebru. It was empowering to hear from a woman of color doing so much work in the field of AI, and it has encouraged me to ask more questions of the big tech companies dedicated to developing AI. While she has opened my eyes to these companies’ common and unjust practices, I find it frustrating how difficult it can be for these realities to come to the surface: their centralized power lets them cover up or hide these realities from the public and tear down those who try to speak up about them. She has also shown me the importance of knowing the funding sources behind AGI projects, which can offer valuable insight into the potential applications of such technologies. Our interactions with Dr. Gebru have made me more curious about how we can develop AI ethically and when society will implement an effective system to do so.

Resources

  • https://en.wikipedia.org/wiki/Timnit_Gebru
  • https://time.com/6132399/timnit-gebru-ai-google/