
AI risks: OpenAI, Google DeepMind’s current and former employees warn about AI risks

A group of current and former employees at artificial intelligence (AI) companies, including Microsoft-backed OpenAI and Alphabet's Google DeepMind, on Tuesday raised concerns about risks posed by the emerging technology. An open letter signed by 11 current and former OpenAI employees, along with one current and one former Google DeepMind employee, said the financial motives of AI companies hinder effective oversight.


“We do not believe bespoke structures of corporate governance are sufficient to change this,” the letter added.

It further warns of risks from unregulated AI, ranging from the spread of misinformation to the loss of control of autonomous AI systems and the deepening of existing inequalities, which it says could ultimately result in "human extinction."

Researchers have found examples of image generators from companies including OpenAI and Microsoft producing photos with voting-related disinformation, despite policies against such content.

AI companies have "weak obligations" to share information with governments about the capabilities and limitations of their systems, the letter said, adding that these firms cannot be relied upon to share that information voluntarily.

The open letter is the latest to raise safety concerns around generative AI technology, which can quickly and cheaply produce human-like text, imagery and audio. The group has urged AI firms to facilitate a process for current and former employees to raise risk-related concerns and not enforce confidentiality agreements that prohibit criticism.

Separately, the Sam Altman-led OpenAI said on Thursday that it had disrupted five covert influence operations that sought to use its AI models for "deceptive activity" across the internet.