Fired Google Researcher Starts Her Own A.I. Research Center

By B.N. Frank

Artificial Intelligence (A.I.) inaccuracies and vulnerabilities are nothing new (see 1, 2, 3, 4, 5). People have been accused of and convicted of crimes based on these inaccuracies (see 1, 2), including cases involving A.I.-based ShotSpotter technology. Of course, experts have warned for years about using A.I. for these reasons. In fact, one researcher was fired from Google last year for voicing her concerns (see 1, 2). Now she has opened her own A.I. research center.

From Wired:


Ex-Googler Timnit Gebru Starts Her Own AI Research Center

The researcher, who says Google fired her a year ago, wants to ask questions about responsible use of artificial intelligence.

One year ago, Google artificial intelligence researcher Timnit Gebru tweeted, “I was fired” and ignited a controversy over the freedom of employees to question the impact of their company’s technology. On Thursday, she launched a new research institute to ask the questions about responsible use of artificial intelligence that Gebru says Google and other tech companies won’t.

“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” says Gebru, who is founder and executive director of Distributed Artificial Intelligence Research (DAIR). The first part of the name is a reference to her aim to be more inclusive than most AI labs—which skew white, Western, and male—and to recruit people from parts of the world rarely represented in the tech industry.

Gebru was ejected from Google after clashing with bosses over a research paper urging caution with new text-processing technology enthusiastically adopted by Google and other tech companies. Google has said she resigned and was not fired, but acknowledged that it later fired Margaret Mitchell, another researcher who with Gebru co-led a team researching ethical AI. The company placed new checks on the topics its researchers can explore. Google spokesperson Jason Freidenfelds declined to comment but directed WIRED to a recent report on the company’s work on AI governance, which said Google has published more than 500 papers on “responsible innovation” since 2018.

The fallout at Google highlighted the inherent conflicts in tech companies sponsoring or employing researchers to study the implications of technology they seek to profit from. Earlier this year, organizers of a leading conference on technology and society canceled Google’s sponsorship of the event. Gebru says DAIR will be freer to question the potential downsides of AI and will be unencumbered by the academic politics and pressure to publish that she says can complicate university research.

DAIR will also work on demonstrating uses for AI unlikely to be developed elsewhere, Gebru says, aiming to inspire others to take the technology in new directions. One such project is the creation of a public data set of aerial imagery of South Africa to examine how the legacy of apartheid is still etched into land use. A preliminary analysis of the images found that in a densely populated region once restricted to non-white people, where many poor people still live, most vacant land developed between 2011 and 2017 was converted into wealthy residential neighborhoods.

A paper on that project will mark DAIR’s formal debut in academic AI research later this month at NeurIPS, the world’s most prominent AI conference. DAIR’s first research fellow, Raesetje Sefala, who is based in South Africa, is lead author of the paper, which includes outside researchers.

Safiya Noble, a professor at UCLA who researches how tech platforms shape society, serves on DAIR’s advisory board. She says Gebru’s project is an example of the kind of new and more inclusive institutions needed to make progress on understanding and responding to technology’s effects on society.

“Black women have been major contributors to helping us understand the harms of big tech and different kinds of technologies that are harmful to society, but we know the limits in corporate America and academia that Black women face,” says Noble. “Timnit recognized harms at Google and tried to intervene but was massively unsupported—at a company that desperately needs that kind of insight.”

Noble recently launched a nonprofit of her own, Equity Engine, to support the ambitions of Black women. She is joined on DAIR’s advisory board by Ciira wa Maina, a lecturer at Dedan Kimathi University of Technology in Nyeri, Kenya.

DAIR is currently a project of nonprofit Code for Science and Society but will later incorporate as a nonprofit in its own right, Gebru says. Her project has received grants totaling more than $3 million from the Ford, MacArthur, Rockefeller, and Open Society foundations, as well as the Kapor Center. Over time, she hopes to diversify DAIR’s financial support by taking on consulting work related to its research.


DAIR joins a recent flourishing of work and organizations taking a broader and critical view of AI technology. New nonprofits and university centers have sprung up to study and critique AI’s effects in and on the world, such as NYU’s AI Now Institute, the Algorithmic Justice League, and Data for Black Lives. Some researchers in AI labs also study the impacts and proper use of algorithms, and scholars from other fields such as law and sociology have turned their own critical eyes on AI.

The White House Office of Science and Technology Policy this year hired two prominent academics who work on algorithmic fairness research and is working on a “bill of rights” to guard against AI harms. The Federal Trade Commission last month hired three people from AI Now to serve as advisers on AI technology.

Despite those shifts, Baobao Zhang, an assistant professor at Syracuse University, says the US public still seems to broadly trust tech companies to guide development of AI.

Zhang recently surveyed AI researchers and the US public on whom they trusted to shape development of the technology in the public interest. The results were starkly different: The public was most trusting of university researchers and the US military. Tech companies as a group ranked slightly behind, similar to international or nonprofit research institutions such as CERN, but ahead of the US government. AI researchers reported less trust than the general public in the US military and some tech companies, notably Facebook and Amazon, but more in the UN and non-governmental scientific organizations.


Activist Post reports regularly about A.I. and unsafe technology. For more information, visit our archives.

Image: Pixabay
