Several heads of artificial intelligence (AI) companies, including OpenAI CEO Sam Altman, joined experts and academics in the field on Tuesday in warning about the risk of human extinction posed by AI, Reuters and Agerpres report.


“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads an open letter signed by 350 people and published by the nonprofit Center for AI Safety (CAIS).

Along with Altman, the letter was signed by the chief executives of the AI companies DeepMind and Anthropic, as well as executives from Microsoft and Google.

Also on the list of signatories are Geoffrey Hinton and Yoshua Bengio, two of the so-called “godfathers of artificial intelligence,” who received the 2018 Turing Award for their work in the field, along with numerous professors from universities such as Harvard and China’s Tsinghua.

The CAIS press release singles out Meta, where the third “godfather” of artificial intelligence, Yann LeCun, works; no representative of the company was willing to sign the open letter.

Sam Altman and Elon Musk say they are worried about artificial intelligence, yet keep working on it

In April, Elon Musk and other experts in the field issued their own open letter warning about the risks this technology poses to society as a whole, calling for a six-month moratorium on the development of AI systems more powerful than the GPT-4 model.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” the signatories wrote.

However, less than a month after signing that letter, Musk founded an artificial intelligence company of his own.

As for Sam Altman, he warned in January, amid the huge success of ChatGPT, that AI could mean “lights out” for humanity.

“The bad case, and I think this is important to say, is like lights out for all of us,” Altman said.

However, the OpenAI CEO said he is most concerned about accidental misuse of artificial intelligence in the short term, which could have dire consequences, rather than about its longer-term evolution.

“I think it’s impossible to overstate the importance of AI safety and alignment work. I would like to see much, much more happening,” he added.

Article photo: © Horacio Selva | Dreamstime.com