Elon Musk and a group of artificial intelligence experts and industry executives are calling in an open letter for a six-month pause in the training of systems more powerful than OpenAI's recently released GPT-4 model, citing potential risks to society and humanity, Reuters reports.

ChatGPT. Photo: Jonathan Raa / Zuma Press / Profimedia Images

The letter, issued by the nonprofit Future of Life Institute and signed by more than 1,000 people, including Musk, Stability AI CEO Emad Mostaque, researchers at Alphabet's DeepMind, and AI heavyweights Yoshua Bengio and Stuart Russell, calls for a pause in the development of advanced artificial intelligence until shared safety protocols for such models are developed, implemented, and independently audited by experts.

“Profound risks to society and humanity”

“Powerful artificial intelligence systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the document says.

“Artificial intelligence systems with human-competitive intelligence can pose profound risks to society and humanity, as extensive research shows and as leading artificial intelligence laboratories acknowledge,” the signatories state.

The letter details the risks that advanced artificial intelligence systems could pose to society and civilization, including economic and political disruption, and urges the developers of such systems to work with policymakers and regulators.

It comes as EU police force Europol on Monday joined a wave of ethical and legal concerns about advanced artificial intelligence such as ChatGPT, warning of potential abuse of the system in phishing, disinformation and cybercrime attempts.

Musk, whose automaker Tesla uses artificial intelligence for its Autopilot system, has been vocal about his concerns about artificial intelligence.

“We need to slow down until we better understand the implications”

Since its release last year, OpenAI's Microsoft-backed ChatGPT has prompted rivals to accelerate the development of similar large language models and pushed companies to integrate generative AI models into their products.

Sam Altman, chief executive of OpenAI, did not sign the letter, a spokesperson for the Future of Life Institute told Reuters. OpenAI did not immediately respond to a request for comment.

“The letter isn’t perfect, but the spirit is right: We need to slow down until we better understand the implications,” said Gary Marcus, a professor emeritus at New York University who signed the letter.

“They can do serious damage … the big players are becoming more secretive about what they’re doing, making it harder for society to protect against any negative consequences that might materialize,” he added.
