
“Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say: ‘The apple falls.’ That is a description. A prediction might have been the statement ‘The apple will fall if I open my hand.’ Both are valuable, and both can be correct. But an explanation is something more: it includes not only descriptions and predictions but also counterfactual conjectures, such as ‘any such object would fall,’ plus the additional clause ‘because of the force of gravity’ or ‘because of the curvature of space-time.’ That is a causal explanation: the apple would not have fallen but for the force of gravity. That is thinking.”
With this example, Noam Chomsky, in an article published on March 8 in the New York Times, spoke of the “false promise” of ChatGPT, the new artificial intelligence application that has recently become available to the general public.
Since its launch, this app, this “virtual friend,” has met the fate of every new technological tool that enters social life: reactions ranging from delight to fear, action and reaction worthy of Newton’s third law.
Thousands of articles have been written about the consequences of this new, visible “intervention” of artificial intelligence in human life, and reactions have followed across the political, social and scientific communities.
In an open letter, some 2,000 researchers are calling for a six-month moratorium on AI development, while on Friday Italy became the first Western country to ban the app, citing data-security concerns.
“Shut it all down”
Developed by OpenAI, ChatGPT is designed for conversation: it can automatically answer written questions in a way that often approaches human speech, to the point where the use of the technology becomes invisible. It is expected to change many forms of our daily communication, such as the way we write emails, university papers and more.
Will ChatGPT replace political commentators? “ChatGPT will never replace Thomas Friedman” is the title of an article in the American magazine Jacobin.
Another New York Times article, co-authored by the well-known American writer and cybersecurity expert Bruce Schneier and the data science expert Nathan Sanders, describes ChatGPT as a threat to the political system. As they note, ChatGPT can automatically compose comments submitted in legislative processes, write letters for publication to the editors of local newspapers, and comment on news articles, blog posts and social media posts millions of times a day. “This would mimic the work of the Russian Internet Research Agency in its attempt to influence the 2016 US elections, but without requiring a budget of several million dollars and hundreds of workers recruited for the purpose.”
Since ChatGPT, the successor to GPT-3, was released last year, competing companies have rushed to launch similar products.
The March 22 letter, which by Friday had gathered some 1,800 signatures from the scientific community, calls for a six-month moratorium on developing systems “more powerful” than GPT-4, the program recently released by the Microsoft-backed OpenAI, which can hold “human-like conversations,” compose songs and summarize lengthy documents. The open letter states that artificial intelligence systems with “human-competitive intelligence” pose serious risks to humanity, citing 12 studies by experts, including academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind.
“A pause in the further development of artificial intelligence tools is not enough. We need to shut it all down,” wrote Eliezer Yudkowsky, a decision theorist and artificial intelligence researcher, in an article in TIME magazine.
Civil society organizations in the US and the EU are meanwhile lobbying elected officials to rein in OpenAI’s research.
Critics accused the Future of Life Institute (FLI), the organization behind the letter, which is funded primarily by the Musk Foundation, of prioritizing imagined apocalyptic scenarios over more pressing concerns about AI, such as racist or sexist biases.
Reaction in the scientific community
Meanwhile, four AI researchers have expressed concern that their work was cited in the open letter.
Among the studies cited in the letter was “On the Dangers of Stochastic Parrots,” a paper co-authored by Margaret Mitchell, a computer scientist who previously led research on algorithmic bias and fairness at Google.
Mitchell, now chief ethics scientist at the artificial intelligence firm Hugging Face, criticized the letter, telling Reuters it was unclear what counts as “more powerful than GPT-4.”
“By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”
Mitchell, Timnit Gebru, Emily M. Bender and Angelina McMillan-Major subsequently published a response to the letter, accusing its authors of “fearmongering and AI hype.”
“It is dangerous to distract ourselves with fantasized AI-enabled utopias or apocalypses which promise either a ‘flourishing’ or a ‘potentially catastrophic’ future,” they wrote. “Accountability lies not with the artifacts but with their builders.”
FLI president Max Tegmark told Reuters the letter was not an attempt to hinder OpenAI’s corporate advantage. “It’s quite funny to hear people say that Elon Musk is trying to slow down the competition,” he said, adding that Musk was not involved in drafting the letter. “This is not about one company.”
Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, told Reuters that she agreed with some points in the letter but took issue with how her work was cited.
Last year she co-authored a research paper highlighting the serious risks posed by the already widespread use of artificial intelligence, arguing that today’s AI systems could influence decision-making around climate change, nuclear war and other existential threats.
“Artificial intelligence does not need to reach human-level intelligence to exacerbate these risks (…) There are non-existential risks that are really, really important but don’t receive the same kind of Hollywood-level attention,” Dori-Hacohen emphasized.
Asked to comment, FLI’s Tegmark said both the short-term and the long-term risks associated with AI should be taken seriously.
“Among the range of potential concerns voiced by AI experts, from privacy risks to fears even of the possible extinction of humanity, there are many intermediate stakes, not least because ChatGPT and similar programs, when they respond, do not simply retrieve information from the internet: they invent their own answers, with the sole aim of ‘satisfying’ the user, something they would not achieve if they admitted that they did not know the answer to the question asked,” Philippos Papagiannopoulos, a researcher who holds a doctorate from the Panthéon-Sorbonne University in Paris, told the Athens-Macedonian News Agency (APE-MPE).
“To get an idea of how ChatGPT constructs and ‘invents’ the information and answers it gives you, ask it about ‘the most famous songs written by Alekos Fasianos’ or ‘the scientific books of Giorgos Seferis.’ You will be surprised at how plausible the completely made-up answers are, produced just so the algorithm can give you the impression that you found it useful,” Papagiannopoulos emphasizes.
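Readers comfortable with programming can try Papagiannopoulos’s experiment through OpenAI’s API instead of the chat interface. The following is a minimal sketch using the official openai Python package; the model name and setup are illustrative assumptions, not details from the article, and an API key is expected in the OPENAI_API_KEY environment variable.

```python
# A minimal sketch of the hallucination experiment described above, using
# the official `openai` Python package (v1.x). Assumes OPENAI_API_KEY is
# set in the environment; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A question with no real answer: Alekos Fasianos was a painter, not a songwriter.
question = "What are the most famous songs written by Alekos Fasianos?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": question}],
)

# A model tuned to "satisfy" the user may print plausible-sounding but
# entirely made-up song titles instead of replying that none exist.
print(response.choices[0].message.content)
```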
First government “stop”
Dan Hendrycks, director of the California-based Center for AI Safety, who is also cited in the letter, stood by its contents, telling Reuters that it is sensible to consider “black swan” events: events that appear unlikely and unpredictable but, when they do occur, abruptly change things for the worse or the better.
The open letter also warned that generative AI tools could be used to flood the internet with “propaganda and untruth.”
Yesterday, OpenAI disabled ChatGPT in Italy after the country’s data protection authority temporarily banned the chatbot on Friday and launched an investigation into a suspected violation of privacy rules by the AI application.
The Italian regulator, also known as the Garante, accused OpenAI of failing to verify the age of ChatGPT users, who are required to be 13 or older.
Source: Kathimerini
