The White House announced on Thursday the creation of a US consortium of more than 200 companies, including those conducting the most advanced research in the field of artificial intelligence, aimed at supporting the safe development and use of AI, Reuters reported.


It was Gina Raimondo, the US Secretary of Commerce, who announced the creation of the Artificial Intelligence Safety Institute Consortium (AISIC), which will include OpenAI, Alphabet (Google’s parent company), Microsoft, Anthropic, Apple, Amazon, Nvidia, Palantir, Intel, Cisco, IBM, Hewlett Packard and Meta Platforms, the company founded and run by Mark Zuckerberg.

“The United States government plays an important role in setting the standards and developing the tools needed to address the risks and harness the enormous potential of artificial intelligence,” Raimondo said in a press release.

Biden talked about the risks of AI for US national security

The consortium will also include non-tech companies such as Bank of America, JPMorgan Chase, Northrop Grumman and British Petroleum, as well as higher education institutions and US government agencies.

AISIC will operate under the auspices of the US Artificial Intelligence Safety Institute (USAISI), a federal government body tasked with implementing President Joe Biden’s goals of creating standards for the safe development of artificial intelligence systems.

“Artificial intelligence is a huge, huge prospect that offers incredible opportunities, but also poses risks to our society, our economy and our national security,” the US president said last July after a meeting with the heads of major US technology companies.

Among other things, Biden ordered relevant federal agencies to establish standards for testing and creating scenarios that prevent the use of AI in connection with the launch of chemical, biological, radiological or cyber attacks.

The success of ChatGPT has sparked new debate about a possible “Terminator” scenario.

The huge success of OpenAI’s ChatGPT chatbot, as well as the rapid development of artificial intelligence technologies by other companies, has provoked public debate over whether artificial intelligence may pose a threat to the existence of humanity, a hypothesis dubbed the “Terminator scenario” after the famous film of the same name.

Among those who have issued warnings is Sam Altman, CEO of OpenAI, the company that developed ChatGPT, along with other top Silicon Valley executives, advanced technology experts and academics.

The US Congress, in turn, has tried to pass legislation that regulates the development of this technology, in parallel with the efforts of the Biden administration to establish clear rules related to the safety of artificial intelligence.

Legislative efforts in Washington, however, have stalled despite several bills and high-level meetings with representatives of the technology sector.