US President Joe Biden said on Tuesday that it remains to be seen whether artificial intelligence is dangerous, but stressed that technology firms have a responsibility to ensure the safety of their products before releasing them to the public, Reuters reported.


Biden told his science and technology advisers that artificial intelligence could help tackle disease and climate change, but it was also important to address potential risks.

“Technology companies have a responsibility to make sure their products are safe before they go to market,” he said at the start of a White House meeting of the President’s Council of Advisors on Science and Technology.

Asked whether artificial intelligence is dangerous, he said: “That remains to be seen. Maybe.”

Biden will urge Congress to pass bipartisan privacy legislation to protect children and limit the personal data that tech companies collect on all of us.

Artificial intelligence is becoming a hot topic for politicians

The tech ethics group Center for Artificial Intelligence and Digital Policy last month asked the US Federal Trade Commission to stop OpenAI from releasing new commercial versions of GPT-4, which has stunned some users and alarmed others with its human-like ability to generate written responses to queries.

The letter, sent by the nonprofit Future of Life Institute and signed by more than 1,000 people, including Musk, Stability AI CEO Emad Mostaque, researchers at Alphabet’s DeepMind, and AI heavyweights Yoshua Bengio and Stuart Russell, called for a pause in the development of advanced artificial intelligence until common safety protocols for such models are developed, implemented and independently audited by experts.

US Democratic Senator Chris Murphy urged a pause so that society can consider the implications of AI.

Last year, the Biden administration released a blueprint for an AI Bill of Rights intended to ensure that user rights are protected as technology companies design and develop artificial intelligence systems.

“Deep risks for society and humanity”

“Powerful artificial intelligence systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter says.

“Artificial intelligence systems with human-competitive intelligence can pose profound risks to society and humanity, as numerous studies show and as leading artificial intelligence laboratories acknowledge,” the signatories state.

The letter details the potential risks to society and civilization from artificial intelligence systems that could cause economic and political disruption, and the document urges the developers of such systems to work with policymakers and regulators.

It comes as the EU’s police agency, Europol, on Monday joined a chorus of ethical and legal concerns about advanced artificial intelligence such as ChatGPT, warning of the potential misuse of the system in phishing, disinformation and cybercrime attempts.

Musk, whose automaker Tesla uses artificial intelligence for its Autopilot system, has been vocal about his concerns about artificial intelligence.

“We need to slow down until we better understand the implications”

Since its release last year, Microsoft-backed OpenAI’s ChatGPT has prompted competitors to accelerate the development of similar large language models and companies to integrate generative AI models into their products.

Sam Altman, chief executive of OpenAI, did not sign the letter, a spokesman for Future of Life told Reuters. OpenAI did not immediately respond to a request for comment.

“The letter isn’t perfect, but the spirit is right: We need to slow down until we better understand the implications,” said Gary Marcus, a professor emeritus at New York University who signed the letter.

“They can do serious damage … the big players are becoming more secretive about what they’re doing, making it harder for society to protect against any negative consequences that might materialize,” he added.