OpenAI, Anthropic and Google DeepMind workers warn of the dangers of AI

Some current and former employees of OpenAI and other major artificial intelligence companies warned in a letter on Tuesday that the technology poses a grave threat to humanity.

The letter, signed by 13 people, including current and former employees of Anthropic and Google’s DeepMind, said AI could deepen inequality, spread misinformation and allow autonomous AI systems to cause potentially lethal harm. While these risks can be mitigated, AI companies have “strong financial incentives” to limit oversight, they said.

Because AI is loosely regulated and accountability rests with the companies themselves, the employees called on companies to eliminate broad non-disclosure agreements and provide workers with whistleblower protections.

The move comes amid a string of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, critics who have condemned company leaders, arguing that the company is chasing profits at the expense of making OpenAI’s technology safer.

Daniel Kokotajlo, a former OpenAI employee, said he left the start-up because the company was disregarding the risks of artificial intelligence.


“As they pursue artificial general intelligence in particular, I have lost faith that they will act responsibly,” he said in a statement, referring to an industry term for computers that match the power of the human brain.

“They and others have adopted a ‘move fast and break things’ approach, and that’s the opposite of what this powerful and poorly understood technology needs.”

Liz Bourgeois, a spokeswoman for OpenAI, said the company agrees that “robust discussion is important given the importance of this technology.” Representatives for Anthropic and Google did not immediately respond to a request for comment.


In the absence of government oversight, AI workers are among the “few people” who can hold companies accountable, the employees said. They noted that they are hampered by “extensive non-disclosure agreements” and that ordinary whistleblower protections are “inadequate” because they focus on illegal activities, while the risks they are warning about are not yet regulated.

The letter calls on AI companies to commit to four principles that would allow for greater transparency and whistleblower protections. Those principles include a commitment not to enter into or enforce agreements that prohibit criticism of risks; an anonymous process for current and former employees to raise concerns; support for a culture of open criticism; and a pledge not to retaliate against current and former employees who share confidential information to raise alarms “after other processes have failed.”

The Washington Post reported in December that senior leaders at OpenAI raised fears of retaliation from CEO Sam Altman, warnings that preceded his temporary ouster. In a recent podcast interview, former OpenAI board member Helen Toner said part of the nonprofit board’s decision to fire Altman as CEO late last year was his lack of honest communication about safety.

“He gave us false information about the small number of formal safety processes the company had in place, which meant it was essentially impossible for the board to know how well those safety processes were working,” she said on the “TED AI Show” podcast in May.

AI luminaries including Yoshua Bengio and Geoffrey Hinton, considered the “godfathers” of AI, and renowned computer scientist Stuart Russell endorsed the letter.
