Noted A.I. Ethicist Timnit Gebru Let Go From Google Following Tense Email Exchange
Gebru is known for influential research about bias in facial recognition
Gebru’s contributions to the field have shaped modern understanding of how artificial intelligence fails and the technical underpinnings of how algorithms treat underrepresented people differently. A Twitter thread by Fast.ai co-founder Rachel Thomas lays out how Gebru’s years of scholarship have influenced A.I. research, including her co-authoring a seminal work that showed facial recognition is far less accurate on women of color than on white men.
Gebru helped lead Google’s A.I. ethics team and co-founded Black in A.I., an international organization focused on supporting Black A.I. researchers and expanding access to the traditionally exclusive field.
The email at the center of the dispute alludes to Google censoring one of Gebru’s research papers without consulting her, as well as the poor treatment of those who advocate for underrepresented people at the company. The email was published in full by the outlet Platformer.
After sending the email, Gebru had an exchange with managers and privately threatened to quit unless certain undisclosed conditions were met. Instead, she was immediately fired, Gebru told OneZero’s Will Oremus.
According to Platformer, the email reads, in part:
Imagine this: You’ve sent a paper for feedback to 30+ researchers, you’re awaiting feedback from PR & Policy who you gave a heads up before you even wrote the work saying “we’re thinking of doing this”, working on a revision plan figuring out how to address different feedback from people, haven’t heard from PR & Policy besides them asking you for updates (in 2 months). A week before you go out on vacation, you see a meeting pop up at 4:30pm PST on your calendar (this popped up at around 2pm). No one would tell you what the meeting was about in advance. Then in that meeting your manager’s manager tells you “it has been decided” that you need to retract this paper by next week, Nov. 27, the week when almost everyone would be out (and a date which has nothing to do with the conference process). You are not worth having any conversations about this, since you are not someone whose humanity (let alone expertise recognized by journalists, governments, scientists, civic organizations such as the electronic frontiers foundation etc) is acknowledged or valued in this company.
Then, you ask for more information. What specific feedback exists? Who is it coming from? Why now? Why not before? Can you go back and forth with anyone? Can you understand what exactly is problematic and what can be changed?
And you are told after a while, that your manager can read you a privileged and confidential document and you’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything. You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored. And you’re met with, once again, an order to retract the paper with no engagement whatsoever.
Then you try to engage in a conversation about how this is not acceptable and people start doing the opposite of any sort of self reflection — trying to find scapegoats to blame.
Silencing marginalized voices like this is the opposite of the NAUWU principles which we discussed. And doing this in the context of “responsible AI” adds so much salt to the wounds.