Professionalism/Timnit Gebru and Ethical AI at Google


In late 2020, prominent Google AI researcher Timnit Gebru was abruptly fired after being pressured to retract a paper critical of the large language models used by the company.[1] Her dismissal sparked outrage among employees and industry professionals, ultimately leading to a restructuring of leadership within Google's Ethical AI team.[2] The firing of Gebru brought attention to research censorship, oppressive work environments, and abuses of power in the technology industry.

AI at Google

Diversification efforts

Google AI is the artificial intelligence division of Google. The group's stated mission is to build responsible AI systems that are accessible to everyone. The AI principles that govern the work done by Google AI were updated in 2020 to address systemic racial bias, with the objective of "learning from a diverse set of voices internally and externally."[3] Despite these efforts, Black employees make up less than 4% of the company's overall technical workforce.[4]

Ethical AI team

Google Brain is a research team within Google that focuses on machine intelligence, including deep learning and natural language processing.[5] Within Google Brain, the Ethical AI team was created in 2017 by Margaret Mitchell, a senior research scientist working on aligning AI practices to human values and fairness.[6] Gebru joined the team as a co-lead in September 2018, after persistent recruiting efforts by Mitchell and Jeff Dean, the head of the Google Brain research group.[7]

Timnit Gebru's Career and Research


Black in AI

Before joining the Ethical AI team, Gebru co-founded Black in AI, a non-profit organization that addresses the lack of diversity in artificial intelligence. The organization aims to increase the presence of Black people in AI through academic programs, conferences, advocacy, and community building.[8] Gebru has argued that greater diversity among the researchers and developers who build AI systems is an essential component of developing less biased machine learning systems.[9]

Algorithmic bias

In 2018, Gebru co-authored a widely publicized research paper on racial and gender bias in automated facial analysis systems. The study evaluated three commercially available gender classification products on subjects grouped by gender and skin type, and found that all three systems had their lowest error rates on lighter-skinned male subjects and their highest error rates on darker-skinned female subjects.[10] Although it had been widely suspected that AI-based systems did not perform uniformly across different populations, this research was one of the first to provide empirical evidence of such bias.[11] Media coverage of the study was extensive, and Gebru became a prominent voice in the push for fair algorithms.[12]
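
To illustrate the kind of disaggregated evaluation the study relied on, the short Python sketch below computes error rates separately for each gender and skin-type subgroup. It uses pandas and a small set of hypothetical predictions; it is not the authors' actual evaluation code, only a minimal sketch of the approach.

```python
# Minimal sketch of a disaggregated accuracy audit (hypothetical data,
# not the study's own code): group predictions by gender and skin type
# and report the error rate for each subgroup.
import pandas as pd

# Hypothetical evaluation records: true label, predicted label, subgroup.
records = pd.DataFrame({
    "gender":     ["male", "male", "female", "female", "female", "male"],
    "skin_type":  ["lighter", "darker", "lighter", "darker", "darker", "lighter"],
    "true_label": ["male", "male", "female", "female", "female", "male"],
    "predicted":  ["male", "male", "female", "male",   "male",   "male"],
})

# An error is any prediction that disagrees with the true label.
records["error"] = records["true_label"] != records["predicted"]

# Error rate per (gender, skin type) intersection, mirroring the study's
# practice of reporting accuracy separately for each subgroup rather
# than as a single aggregate number.
error_rates = records.groupby(["gender", "skin_type"])["error"].mean()
print(error_rates)
```

Reporting results per intersection, rather than one overall accuracy figure, is what exposes disparities that aggregate numbers can hide.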

Large language models

Around two years after joining the Ethical AI team at Google, Gebru co-authored a paper on the negative consequences of large language models (LMs). The paper, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", did not pass Google's internal review process.[13] It identified the major issues with large LMs as environmental and financial costs, problematic training data, misdirected research effort, and the amplification of bias. The authors made multiple critical references to BERT, a language model that Google uses to improve search results and language understanding. The paper concludes by imploring researchers to carefully consider the risks of large LMs and to re-evaluate the perceived necessity of technologies that mimic human behavior.[14]
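
For readers unfamiliar with BERT, the sketch below shows the basic task such a model performs: predicting a masked word from its surrounding context. It uses the open-source Hugging Face transformers library and the public bert-base-uncased checkpoint; this is only an illustration of masked language modeling, not Google's Search integration or the specific systems criticized in the paper.

```python
# Illustration of masked language modeling with a public BERT checkpoint.
# Requires the open-source transformers library (pip install transformers).
from transformers import pipeline

# Load a fill-mask pipeline backed by the bert-base-uncased model.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT ranks candidate tokens for the masked position by probability.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']:>10}  score={prediction['score']:.3f}")
```

Because such models learn these predictions from massive web-scraped corpora, the paper argues that they can absorb and amplify the biases present in their training data.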

Controversial firing

Paper retraction and Gebru's response

On November 19, 2020, Timnit Gebru was abruptly asked to attend an afternoon meeting without being told what it would be about. At the meeting, then-Vice President of Google Research Megan Kacholia informed Gebru and her co-authors that their paper had not been approved for publication and that they were required to retract it.[15]

Gebru was angered by the decision and emailed a listserv called Google Brain Women and Allies to voice her frustration with the experience and with the environment at Google in general. In the email, Gebru wrote that she had had to request feedback on the paper, and that it was only granted in the form of a confidential document with anonymous authors. When she tried to engage in a discussion about the feedback, she said, she was ignored.[15]

Gebru also implied that the retraction had not been handled according to typical procedures, and that she was treated poorly because of her race and gender. She was infuriated at being excluded from the discussion about whether to retract the paper, writing that from Google's perspective, “You are not worth having any conversations about this, since you are not someone whose humanity...is acknowledged or valued in this company.”[15]

Gebru also discussed other instances of bias at Google, where Black women like herself make up only 1.6% of the workforce.[16] She had previously stopped posting to Google Brain Women and Allies after “all the micro and macro aggressions and harassments [she] received after posting [her] stories.” She wrote that at Google “there is no incentive to hire 39% women: your life gets worse when you start advocating for underrepresented people.”[15] Her account echoes earlier retaliation against activists at Google, such as the retaliation reported after the 2018 Google Walkout.

Gebru's firing

After emailing the listserv, Gebru expressed her disappointment to Jeff Dean and Kacholia and sent them a list of conditions for her continued employment at Google.[17] One condition was that the anonymous authors of the feedback on her paper be revealed.[13] Kacholia responded on December 2, 2020, saying that the demands could not be met and that she accepted Gebru's resignation. From Gebru's perspective, she had been fired, and she believed it was retaliation for the email she sent to Google Brain Women and Allies rather than a response to her demands.[18] She tweeted that Kacholia had asked her not to return to work after her Thanksgiving vacation “because certain aspects of the email [she] sent... in the brain group reflect behavior that is inconsistent with the expectations of a Google manager.”[19]

On December 3, 2020, Jeff Dean sent an email to the Google Research division to provide some clarity about the circumstances of the paper retraction and Gebru's departure. Dean wrote that the paper “was only shared with a day’s notice before its deadline...and then instead of awaiting reviewer feedback, it was approved for submission and submitted,” which went against Google's research procedures. Dean also said that the paper neglected to incorporate recent research on language models that addressed some of the issues it identified. However, another researcher at Google Brain later tweeted that his submissions "were always checked for disclosure of sensitive material, never for the quality of the literature review," implying that the rules Dean cited to justify the retraction are only selectively enforced at Google.[20]

After justifying the paper retraction, Dean stated in his email that Gebru had not been fired but had resigned of her own volition. He also expressed disappointment in the email Gebru sent to the listserv, and implored employees to keep working on internal diversity initiatives.[13]

Aftermath

Initial backlash

Many Google employees were outraged by Gebru's firing. On December 3, 2020, Google Walkout for Real Change created a petition asking Google to explain its response to Gebru's paper, to increase transparency going forward, and to renew its commitment to ethical artificial intelligence. As of April 25, 2021, the petition had 4,302 signatures, 2,695 of them from Google employees.[21] In December, the Google Ethical AI team sent a letter to Google management. The letter, titled “The Future of Ethical AI at Google Research,” asked for engineering Vice President Megan Kacholia to be fired, for more details about the decision to fire Gebru, and for an end to retaliatory punishment against employees who spoke out. The team also requested that Gebru be reinstated.[22]

On December 16, 2020, nine members of Congress sent a letter to Google requesting more information about Gebru's firing. The letter quoted Google's memos and criticized them for being extremely vague. The members also asked Google to adopt more detailed policies to ensure that its research is conducted properly and helps identify discriminatory problems in its models.[23]

As Google remained silent, employees began to take more drastic action. On January 5, David Baker, an engineering director at Google, quit in protest of Gebru's firing. On February 3, Vinesh Kannan, a Google software engineer, also quit, citing in his resignation the firings of Gebru and April Christina Curley, a Google recruiter who had been dismissed in 2020.[24]

Google responds

On February 17, it was announced that the Google Ethical AI team would be restructured; the group was not notified internally of the changes until after the news broke. Vice President Megan Kacholia was removed from the team's management chain, and the team was placed under Marian Croak.[25]

Dr. Margaret Mitchell, the prominent AI researcher who founded the Google Ethical AI team, was especially vocal on Twitter about Gebru's firing. In January 2021, Mitchell's Google email access was blocked and she was placed on administrative leave. On February 19, 2021, Mitchell was officially fired. A Google spokesperson said she was fired because an official review found that she had violated Google's code of conduct and security policies. Asked for her reaction to Google's statement, Mitchell replied, “I don’t know about those allegations actually. I mean most of that is news to me.” Her firing sparked another wave of outrage among artificial intelligence researchers.[26]

On March 8, 2021, Google Walkout for Real Change published an open letter. Rather than addressing Google directly, the letter sought to place outside pressure on the company. It asked academic conferences to require that research submissions also disclose corporate research policies and to forbid lawyers from editing papers. The letter also announced the #RecruitMeNot campaign, in which people interviewing with Google withdraw their applications, specifically citing concerns raised by the firings of Gebru and Mitchell. Researchers were asked to refuse funding from Google, and the letter requested that whistleblower protections be strengthened at the state and national levels.[27]

Continued backlash

On February 26, the FAccT conference suspended Google's sponsorship, citing the recent controversy surrounding the Ethical AI team. The conference was one of the first organizations to take a public stand against Google, a sign that the industry was paying attention to the controversy.[28] On March 12, researchers including Dr. Hadas Kress-Gazit and Dr. Scott Niekum began to withdraw from Google's Machine Learning and Robot Safety Workshop, showing that AI researchers outside of Google were taking a more definitive stance.[29] On April 6, Google AI manager Samy Bengio announced his resignation. While he did not mention the firings of Gebru and Mitchell in his resignation letter, he had previously written on social media, “I stand by you, Timnit.”[30]

Comments on Professionalism

Research censorship

Google had a large degree of control over the Ethical AI team's research. The company could block research papers from being presented at conferences, creating a conflict of interest: research findings may reveal shortcomings in Google's own AI models, and it is important that such discoveries be published so that the same issues can be addressed in AI models elsewhere.

Oppressive work environments

Google management placed a very high priority on the company's image. When forced to make tradeoffs, the company's short-term reputation came before its own employees and before efforts to make text-based AI more ethical.

Abuses of power

Google's policies for submitting research to conferences were only selectively enforced. Management refused to give specific feedback that would have allowed the paper to be improved and approved, and it kept the team in the dark about who was making these decisions and why.

Further Work

This controversy is ongoing. Future work could track further backlash as it occurs and assess whether the petition and open letters have lasting effects, either at Google or elsewhere. These concerns may fade, or they may lead to lasting policy changes at research conferences and technology companies.

References

  1. Bass, D., Banjo, S., & Bergen, M. (2020, December 3). Google’s Co-Head of Ethical AI Says She Was Fired for Email. Bloomberg. https://www.bloomberg.com/news/articles/2020-12-03/google-s-co-head-of-ethical-ai-says-she-was-fired-over-email
  2. Grant, N., & Bass, D. (2021, February 27). Google Revamps AI Teams in Wake of Researcher’s Departure. Bloomberg. https://www.bloomberg.com/news/articles/2021-02-18/google-to-reorganize-ai-teams-in-wake-of-researcher-s-departure?srnd=technology-vp&sref=gni836kR
  3. Google AI. (2020). AI Principles 2020 Progress update. https://ai.google/static/documents/ai-principles-2020-progress-update.pdf
  4. Parker, M. (2020). Google Diversity Annual Report 2020. https://kstatic.googleusercontent.com/files/25badfc6b6d1b33f3b87372ff7545d79261520d821e6ee9a82c4ab2de42a01216be2156bc5a60ae3337ffe7176d90b8b2b3000891ac6e516a650ecebf0e3f866
  5. Dean, J. (2017, September 13). The Google Brain Team’s Approach to Research. Google AI Blog. https://ai.googleblog.com/2017/09/the-google-brain-teams-approach-to.html
  6. Mitchell, M. (n.d.). http://www.m-mitchell.com/mitchell-cv.pdf
  7. Grant, N., Bass, D., & Eidelson, J. (2021, April 21). Google Turmoil Exposes Cracks Long in Making for Top AI Watchdog. Bloomberg. https://www.bloomberg.com/news/articles/2021-04-21/google-ethical-ai-group-s-turmoil-began-long-before-public-unraveling
  8. Black in AI. (n.d.). https://blackinai.github.io/#/about
  9. Snow, J. (2018, February 14). “We’re in a diversity crisis”: cofounder of Black in AI on what’s poisoning algorithms in our lives. MIT Technology Review. https://www.technologyreview.com/2018/02/14/145462/were-in-a-diversity-crisis-black-in-ais-founder-on-whats-poisoning-the-algorithms-in-our/
  10. Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
  11. Lohr, S. (2018, February 9). Facial Recognition Is Accurate, if You’re a White Guy. The New York Times. https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html
  12. Buolamwini, J. (n.d.). Updates. MIT Media Lab. https://www.media.mit.edu/projects/gender-shades/updates/
  13. Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology Review. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
  14. Nayak, P. (2019, October 25). Understanding searches better than ever before. Google Blog. https://blog.google/products/search/search-language-understanding-bert/
  15. Gebru, T. (n.d.). https://www.platformer.news/p/the-withering-email-that-got-an-ethical
  16. Tiku, N. (n.d.). Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it. Washington Post. Retrieved April 25, 2021, from https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/
  17. Oremus, W. (2020, December 3). Noted A.I. Ethicist Timnit Gebru Let Go From Google Following Tense Email Exchange. Medium. https://onezero.medium.com/noted-a-i-ethicist-timnit-gebru-leaves-google-following-tense-email-exchange-f602cdb7fe89
  18. Gebru, T. (2020, December 3). I was fired by @JeffDean for my email to Brain women and Allies. My corp account has been cutoff. So I’ve been immediately fired :-) [Tweet]. @timnitGebru. https://twitter.com/timnitGebru/status/1334352694664957952
  19. Gebru, T. (2020, December 3). However, we believe the end of your employment should happen faster than your email reflects because certain aspects of the email you sent last night to non-management employees in the brain group reflect behavior that is inconsistent with the expectations of a Google manager. [Tweet]. @timnitGebru. https://twitter.com/timnitGebru/status/1334364735446331392
  20. Nicolas Le Roux. (2020, December 3). Now might be a good time to remind everyone that the easiest way to discriminate is to make stringent rules, then to decide when and for whom to enforce them. My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review. [Tweet]. @le_roux_nicolas. https://twitter.com/le_roux_nicolas/status/1334601960972906496
  21. Google Walkout for Real Change. (2020, December 3). Standing with Dr. Timnit Gebru — #ISupportTimnit #BelieveBlackWomen. Google Walkout for Real Change. https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382.
  22. Elias, J. (2020, December 17). Google AI ethics team demands changes after dismissal of renowned researcher. CNBC. https://www.cnbc.com/2020/12/17/googles-ai-ethics-team-makes-demands-of-executives-to-rebuild-trust.html.
  23. Hao, K. (2020, December 17). Congress wants answers from Google about Timnit Gebru's firing. MIT Technology Review. https://www.technologyreview.com/2020/12/17/1014994/congress-wants-answers-from-google-about-timnit-gebrus-firing/.
  24. Schiffer, Z. (2021, February 4). Two Google engineers quit over firing of ethical AI leader Timnit Gebru. The Verge. https://www.theverge.com/2021/2/4/22266513/google-engineers-quit-timnit-gebru-ethical-ai-april-christina-curley-firing.
  25. Schiffer, Z. (2021, February 18). Google is restructuring its AI teams after Timnit Gebru's firing. The Verge. https://www.theverge.com/2021/2/18/22289264/google-restructuring-ethical-ai-team-timnit-gebru-firing.
  26. Metz, R. (2021, February 20). Google is trying to end the controversy over its Ethical AI team. It's not going well. CNN. https://www.cnn.com/2021/02/19/tech/google-ai-ethics-investigation/index.html.
  27. Google Walkout For Real Change. (2021, March 8). The Future Must Be Ethical: #MakeAIEthical. Medium. https://googlewalkout.medium.com/the-future-must-be-ethical-makeaiethical-9eb3edd7cf3c.
  28. Johnson, K. (2021, March 3). AI ethics research conference suspends Google sponsorship. VentureBeat. https://venturebeat.com/2021/03/02/ai-ethics-research-conference-suspends-google-sponsorship/.
  29. Vincent, J. (2021, April 13). Google is poisoning its reputation with AI researchers. The Verge. https://www.theverge.com/2021/4/13/22370158/google-ai-ethics-timnit-gebru-margaret-mitchell-firing-reputation.
  30. Schiffer, Z. (2021, April 6). Google AI manager resigns following controversial firings of two top researchers. The Verge. https://www.theverge.com/2021/4/6/22370372/google-research-manager-resigns-timnit-gebru-margaret-mitchell-firing.