Empower users, workers to tackle AI threats - Signal president


Reuters | Updated: 05-06-2023 14:33 IST | Created: 05-06-2023 14:30 IST

Privacy laws and labor organizing offer the best chance to curb the growing power of big tech and tackle artificial intelligence's main threats, said a leading AI researcher and executive.

Current efforts to regulate AI risk being overly influenced by the tech industry itself, said Meredith Whittaker, president of the Signal Foundation, ahead of RightsCon, a major digital rights conference in Costa Rica this week. "If we have a chance at regulation that is meaningful, it's going to come from building power and making demands by the people who are most at risk of harm," she told the Thomson Reuters Foundation. "To me, these are the front lines."

More than 350 top AI executives, including OpenAI CEO Sam Altman, last week joined experts and professors in warning of the "risk of extinction from AI", urging policymakers to treat it on a par with the risks posed by pandemics and nuclear war. But for Whittaker, these doomsday predictions overshadow the harms that certain AI systems are already causing.

"Many, many researchers, have been carefully documenting these risks, and have been piling up the receipts," she said, pointing to work by AI researchers such as Timnit Gebru and Joy Buolamwini, who first documented racial bias in AI-powered facial recogntion systems over five years ago. A recent report on AI harms from the Electronic Privacy Information Center (EPIC), lists labor abuse of AI annotators in Kenya who help build predictive models, the environmental cost of the computing power to build AI systems, and the proliferation of AI-generated propaganda, among other concerns.

CURBING POWER

When Whittaker left her job as an AI researcher at Google in 2019, she wrote an internal note warning about the trajectory of AI technology.

"The use of AI for social control and oppression is already emerging," said Whittaker, who had clashed with Google over the company's AI contract with the U.S. military, as well as over the company's handling of sexual harassment claims. "We have a short window in which to act, to build in real guardrails for these systems, before AI is built into our infrastructure and it's too late."

Google did not respond to a request for comment.

Whittaker sees the current AI boom as part of the "surveillance derivative" business, which has monetized the vast collection of user-generated information on the internet to create powerful predictive models for a small set of companies.

Popular generative AI tools like ChatGPT are trained on vast troves of internet data - everything from Wikipedia entries to patent databases and World of Warcraft player forums, according to a Washington Post investigation. Social media companies and other tech firms also build AI and predictive systems by analyzing their own users' behavior.

Whittaker hopes that the encrypted messaging app Signal and other projects that do not collect or harvest their users' data can help curb the concentration of power among a few dominant AI developers. For her, the rise of powerful AI tools reflects that growing concentration: only a small group of technology companies can make the sizable investments in data collection and computing power that such systems require.

"We have a handful of companies that have ... arguably more power than many nation states," said Whittaker, who will be speaking about privacy-centric apps and encryption at RightsCon, which is hosted by digital rights group Access Now. "We are sort of ceding more and more decision making power, more and more power over our futures — who will benefit and who will lose — to a small group of companies."

PUSHING BACK

Whittaker is hopeful about greater regulatory oversight of AI - but wary of regulators being overly influenced by the industry itself.

In the U.S., a group of federal agencies announced in April that they would be policing the emerging AI space for instances of bias in automated systems, as well as deceptive claims about the capabilities of AI systems. The EU in May agreed tougher draft legislation, known as the AI Act, that would categorize certain kinds of AI as "high-risk" and require companies to share data and risk assessments with regulators.

"I think everyone is scrambling," said Whittaker, who served as a senior advisor on AI to the U.S. Federal Trade Commission before joining Signal in 2022. She sees promise in privacy-centric regulation that seeks to limit the amount of data that companies can collect and therefore deprive AI models of the raw materials they need to build ever more powerful systems.

Whittaker also pointed to the work of labor organizers, such as the recent calls from the Writers Guild of America (WGA) and Screen Actors Guild (SAG) to limit the use of generative AI technologies like ChatGPT in their workplaces.

