Artificial intelligence: resignation, many questions

In 2012, Geoffrey Hinton and two of his students at the University of Toronto created a technology that was meant to change how the world perceives machines, and how machines perceive the world. They laid the foundations for building artificial intelligence (AI) systems. Yet fifty years after beginning his career in the field, Hinton has switched sides. On Monday he officially joined a growing group of critics who argue that the frenzied pace at which artificial intelligence is being pushed to market poses unimaginable risks.

Hinton says he quit his job at Google, where he worked for more than ten years, so that he can speak freely about these dangers. Part of him now regrets his life's work; it would have been better, he suggests, never to have brought something so dangerous into the world. "On the other hand, I console myself with the usual excuse: if I hadn't done it, someone else would have," Hinton says in a lengthy interview in the living room of his Toronto home, a few meters from where he and his students made their revolutionary invention.

Radical changes

Global industry leaders believe that the introduction of new artificial intelligence systems could be as important to our lives as the advent of the Internet in the early 1990s, and could lead to radical changes in areas such as pharmaceutical research and education.

But people like Hinton are haunted by the fear that the industry is releasing something very dangerous into the world. AI is dangerous not only because it can produce misinformation in a quantity and quality that could completely change our relationship to the truth; soon it may also threaten people's jobs. And beyond that lurks something greater still: AI could become a danger to humanity itself. "It's hard for me to see exactly how we're going to prevent bad actors from using AI for malicious purposes," Hinton commented.

“No” to the Pentagon

In the 1980s Hinton was a professor of computer science at Carnegie Mellon University in Pittsburgh, but he moved to Canada because he did not want to accept funding from the Pentagon. At the time the US was eager to fund artificial intelligence research, while he was categorically opposed to the creation of what he called "robot soldiers." "Few people back then believed that this thing could one day become smarter than humans," he said. "Even I thought it was far off. I thought it would take 50 years or more to get there. Obviously, I no longer believe that," he commented. Asked how he could have worked on a potentially dangerous technology, he paraphrased Robert Oppenheimer, the "father" of the first atomic bomb: "When you have an interesting technological challenge, you just do it." He no longer says the same.

Hinton, who left Google, says part of him regrets his life’s work. Photo by CHLOE ELLINGSON / NEW YORK TIMES

1,000 technology leaders warn

At the end of March, more than 1,000 technology leaders and researchers signed an open letter warning of the dangers of the rapidly expanding use of artificial intelligence.

The group, which includes entrepreneurs such as Elon Musk alongside prominent scientists, asked all companies working on these systems to "freeze" their work for at least six months so that the scientific community can investigate more deeply the risks they could pose. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.

Yoshua Bengio, a professor and artificial intelligence researcher at the University of Montreal, said that "our ability to predict what might go wrong with an artificial intelligence system is extremely limited. We have to be very careful." Bengio is perhaps the most important person to have signed the letter. Working alongside Geoffrey Hinton and Yann LeCun, chief AI scientist at Meta, Bengio spent four decades developing the neural network technology that powers today's text-generating AI applications. A neural network is essentially a mathematical system that develops abilities by analyzing data. Text generation applications built on neural networks are "fed" enormous amounts of data.

By recognizing patterns in this data, they learn to reproduce them, eventually generating original text of their own. While the technology still has many shortcomings, Bengio warns that as researchers make these systems ever more powerful, they also create new risks.
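For readers who want to see the idea in miniature, the sketch below (ours, not from the article, and vastly simpler than any real language model) is a tiny neural network in Python that repeatedly adjusts its numeric weights until it reproduces a pattern found in its training data, here the classic XOR relation.

import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR pattern the network must discover on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of randomly initialized weights: the network's adjustable "knowledge".
W1 = rng.normal(0.0, 1.0, (2, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: run the inputs through both layers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: nudge every weight to shrink the prediction error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]

Systems like ChatGPT rest on the same principle of pattern-fitting by weight adjustment, only with billions of weights and mountains of text instead of four binary examples.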

Truth or lie

Because these systems answer questions with complete confidence, it is hard to tell whether the information they provide is true or false. Many experts fear that in the future people will trust them for medical advice, psychological support, or simply information that could influence their personal choices. "We now have systems that communicate with us in natural language, and it will not be easy to distinguish the real from the fake," Bengio emphasized.

The designers of ChatGPT already claim that their system can replace some workers, such as those who proofread texts on the Internet. The technology may not yet be powerful enough to replace lawyers or doctors, but it could soon replace personal assistants and translators. According to a study by OpenAI researchers, 80% of the US workforce could see at least 10% of their work tasks affected by AI.

Some of the letter's signatories do not hide the fact that they fear the most catastrophic scenario: that this technology will slip out of human control and destroy humanity. They fear that as AI systems process ever more specialized data, they may learn to write their own code. This still seems far-fetched, but if it happened, the consequences would be truly unpredictable.

Author: CADE METZ / THE NEW YORK TIMES

Source: Kathimerini
