AI workers warn of technology risks, call for sweeping company changes

A handful of current and former employees at OpenAI and other prominent artificial intelligence companies warned in a letter Tuesday that the technology poses grave risks to humanity, calling on the companies to implement sweeping changes to ensure transparency and foster a culture of public debate.

The letter, signed by 13 people including current and former employees at Anthropic and Google’s DeepMind, said AI could exacerbate inequality, increase disinformation and allow AI systems to become autonomous and cause significant loss of life. Though these risks could be mitigated, the corporations in control of the software have “strong financial incentives” to limit oversight, they said.

Because artificial intelligence is only loosely regulated, accountability rests with company insiders, the employees wrote, calling on corporations to lift non-disclosure agreements and give workers protections that allow them to raise concerns anonymously.

The move comes as OpenAI faces a staff exodus. Many critics have seen prominent departures, including those of OpenAI co-founder Ilya Sutskever and senior researcher Jan Leike, as a rebuke of company leaders, who some employees argue are pursuing profit at the expense of making OpenAI’s technologies safer.

Daniel Kokotajlo, a former employee at OpenAI, said he left the start-up because of the company’s disregard for the dangers of artificial intelligence.

“I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence,” he said in a statement, referring to a contested term for computers that match the power of the human brain.

“They and others have taken the ‘move fast and break things’ approach, and that’s the opposite of what’s needed for such a powerful and poorly understood technology,” Kokotajlo said.

Liz Bourgeois, a spokeswoman at OpenAI, said the company agrees that “rigorous debate is crucial given the importance of this technology.” Representatives from Anthropic and Google did not immediately respond to a request for comment.

Employees said that in the absence of government oversight, AI workers are among “the few people” who can hold corporations accountable. They said they are hampered by “broad confidentiality agreements” and that ordinary whistleblower protections are “inadequate” because they focus on illegal activity, while the risks they are warning about are not yet regulated.

The letter calls on AI companies to commit to four principles to allow greater transparency and whistleblower protections: a commitment not to enter into or enforce agreements that prohibit criticism over risks; a call to create an anonymous process for current and former employees to raise concerns; support for a culture of open criticism; and a promise not to retaliate against current and former employees who share confidential information to raise alarms “after other processes have failed.”

The Washington Post reported in December that senior leaders at OpenAI raised fears of retaliation from CEO Sam Altman, warnings that preceded the chief’s temporary ouster. In a recent podcast interview, former OpenAI board member Helen Toner said that part of the nonprofit board’s decision to remove Altman as CEO late last year was his failure to communicate honestly about safety.

“He gave us inaccurate information about the small number of formal safety processes the company did have in place, meaning it was essentially impossible for the board to know how well those safety processes were working,” she said on “The TED AI Show” in May.

The letter was endorsed by AI luminaries, including Yoshua Bengio and Geoffrey Hinton, who are considered the “godfathers” of AI, and renowned computer scientist Stuart Russell.
