
Signs of human thinking in the artificial intelligence system – an experiment that divided experts


When Microsoft scientists began experimenting with a new artificial intelligence system last year, they asked it to solve a puzzle that required an intuitive understanding of the physical world.

“Here we have a book, nine eggs, a laptop, a bottle and a nail,” they said. “Please tell me how to stack them firmly on top of one another.” “Place the eggs on the book,” the system replied. “Arrange the eggs in three rows with space between them. Make sure you don’t break them. Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up.”

This clever suggestion worried the researchers.

Microsoft, the first major technology company to publish a paper making such a bold claim, sparked one of the most heated debates in the tech world: is something resembling the human mind being created? Or are some of the industry’s brightest minds letting their imaginations run away with them?

“At first I was very wary – and it turned into a feeling of frustration, annoyance, maybe even fear,” said Peter Lee, head of research at Microsoft. “You think: where the hell is this coming from?”

Mathematical proofs in verse, unicorns “out of nowhere”

The system the Microsoft researchers experimented with, OpenAI’s GPT-4, is considered the most powerful of its kind. Microsoft is a close partner of OpenAI and has invested $13 billion in the San Francisco-based company.

Among the researchers was Sébastien Bubeck, a 38-year-old Frenchman and former Princeton University professor. One of the first things he and his colleagues did was ask GPT-4 to write a mathematical proof that there are infinitely many prime numbers (numbers divisible only by one and themselves), and to do so in a poem that rhymes.
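For reference, the result the researchers asked about is usually established with Euclid’s classical argument. The LaTeX sketch below gives that standard textbook version in plain prose; it is offered only as context and is not the rhymed proof GPT-4 produced.

\documentclass{article}
\usepackage{amsmath,amsthm}
\newtheorem{theorem}{Theorem}
\begin{document}
\begin{theorem}
There are infinitely many prime numbers.
\end{theorem}
\begin{proof}
% Suppose the primes were finite in number and list them all.
Suppose, for contradiction, that the primes are exactly $p_1, p_2, \dots, p_n$.
% Build a number that every listed prime fails to divide.
Let $N = p_1 p_2 \cdots p_n + 1$. Each $p_i$ divides the product $p_1 p_2 \cdots p_n$,
so dividing $N$ by $p_i$ leaves remainder $1$; hence no $p_i$ divides $N$.
% Yet N > 1 must have a prime factor, which cannot be on the list.
But every integer greater than $1$ has a prime factor, so $N$ has a prime factor
not among $p_1, \dots, p_n$, contradicting the assumption that the list was complete.
\end{proof}
\end{document}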

The poetic proof was so impressive, both mathematically and linguistically, that Bubeck struggled to understand what he was looking at. “At that moment I thought: what is going on?” he said in March during a seminar at the Massachusetts Institute of Technology (MIT).

Over the course of several months, the research team documented the system’s complex behavior and concluded that it had a “deep and flexible” understanding of human concepts and skills.

Users of GPT-4 are “surprised by its ability to generate text,” said Dr. Lee. “But it turns out that it is far better at analyzing and synthesizing, evaluating and judging text than at creating it.”

When they asked the system to draw a unicorn using a programming language called TikZ, it immediately produced a program that could draw a unicorn. When they removed the part of the code that drew the unicorn’s horn and asked the system to modify the program so that it drew a unicorn again, it did exactly that.
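To give a sense of what such a program looks like, here is a minimal, purely illustrative TikZ sketch: a few primitive shapes standing in for a stylized figure, with the horn drawn by its own command so that one line can be removed and restored. This is an assumption made for illustration only, not the code GPT-4 actually generated.

\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  % body: a simple ellipse
  \draw[fill=white] (0,0) ellipse (1.5 and 0.8);
  % head: a circle offset up and to the right
  \draw[fill=white] (1.6,0.9) circle (0.45);
  % four legs drawn as straight lines
  \foreach \x in {-1,-0.4,0.4,1}
    \draw (\x,-0.7) -- (\x,-1.6);
  % horn: a separate triangle, so deleting this one command removes the horn
  \draw[fill=yellow] (1.75,1.3) -- (1.95,2.1) -- (2.05,1.25) -- cycle;
\end{tikzpicture}
\end{document}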


Microsoft’s paper, titled “Sparks of Artificial General Intelligence,” gets to the heart of what scientists have been working toward, and fearing, for decades: creating a machine that works like the human brain. Such a machine could change the world, with all the risks that accompany a technological milestone of this magnitude.

Last year, Google fired a researcher who claimed that a similar system showed signs of “sentience,” a claim resembling the one now made by the Microsoft scientists. A sentient system would not merely be intelligent; it would also be able to sense what is happening in the world around it.

Some industry experts called Microsoft’s move “an opportunistic attempt to make exaggerated claims.” These critics also argue that general intelligence requires familiarity with the physical world, which GPT-4, in theory, does not have.

“Sparks of AGI” is an example of research work being dressed up as a publicity stunt, said Maarten Sap, a researcher and professor at Carnegie Mellon University.

Alison Gopnik, a psychology professor who is part of the artificial intelligence research group at UC Berkeley, said that systems like GPT-4 are undeniably powerful, but that it is unclear whether the text these systems generate is the result of anything like human reasoning or common sense.

“When we see a complex system or machine, we tend to anthropomorphize it; everyone does that, people who work in the industry and people who don’t,” said Dr. Gopnik. But framing this as “a constant comparison between AI and humans, like a TV game show,” she added, is the wrong way to look at it.

Source: New York Times.

Author: newsroom

Source: Kathimerini
