
For decades, the idea of an artificial intelligence (AI) that destroys humanity has been debated, dramatized in science fiction novels and films, and studied by leading scientists, who have asked whether we could ever control such a system. The answer? Almost certainly not.
The problem: controlling a superintelligence far beyond human understanding would require simulating that superintelligence so that its behavior could be analyzed and constrained. And if we are unable to understand it, such a simulation is impossible.
Rules such as “do no harm to humans” cannot be established unless we understand the scenarios an AI is capable of producing, the authors of the study say. Once a computer system operates at a level beyond the reach of its programmers, we can no longer set limits on it.
“Superintelligence poses a fundamentally different problem than those usually studied under the heading of ‘robot ethics.’ And that’s because superintelligence is multifaceted and, therefore, potentially capable of achieving goals that are incomprehensible to humans, let alone controllable,” the researchers write.
Part of the team’s reasoning draws on the “halting problem” posed by Alan Turing in 1936: the question of whether a computer program will reach a conclusion and produce an answer (and halt), or simply loop forever trying to find one.
As Turing proved, while we can answer this question for some specific programs, it is logically impossible to find a general method that answers it for every potential program that could ever be written.
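To make the argument concrete, here is a minimal Python sketch of Turing’s diagonalization proof. The names `halts` and `paradox` are illustrative, and `halts` is assumed to exist only so the contradiction can be drawn:

```python
def halts(program, input_data):
    """Hypothetical decider: True iff program(input_data) would halt.
    Assumed for the sake of contradiction; it cannot actually be built."""
    raise NotImplementedError

def paradox(program):
    """Do the opposite of whatever `halts` predicts about program(program)."""
    if halts(program, program):
        while True:       # predicted to halt, so loop forever
            pass
    else:
        return            # predicted to loop forever, so halt immediately

# Feeding `paradox` its own source is the diagonal step: paradox(paradox)
# halts exactly when halts(paradox, paradox) says it does not. The
# contradiction shows that no general-purpose `halts` can exist.
```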
And this brings us back to AI, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.
Any program written to stop an AI from harming humans and destroying the world, for example, may or may not reach a conclusion (and halt). It is mathematically impossible to be certain either way, which means such an AI cannot be contained.
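The study’s containment argument follows the same pattern. A rough sketch, under the assumption of a perfect harm-checker (the names `is_harmful`, `do_harm`, and `halts_via_containment` are illustrative, not taken from the paper):

```python
def is_harmful(program):
    """Hypothetical containment check: True iff `program` would ever
    harm humans. Assumed perfect, to show why it cannot exist."""
    raise NotImplementedError

def do_harm():
    """Placeholder for any action the containment rule forbids."""
    pass

def halts_via_containment(program, input_data):
    """If `is_harmful` existed, it would decide the halting problem."""
    def wrapper():
        program(input_data)   # runs forever if `program` never halts
        do_harm()             # reached only if `program` halts
    # `wrapper` is harmful exactly when program(input_data) halts, so a
    # perfect harm-checker would solve the halting problem, which Turing
    # proved impossible.
    return is_harmful(wrapper)
```

In other words, a perfect “will this AI cause harm?” test would double as a halting-problem solver, and no such solver can exist.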
An alternative would be to limit the capabilities of the superintelligence. It could be cut off, for example, from parts of the Internet or from certain networks.
The researchers rejected this idea as well, arguing that it would limit the AI’s reach. The argument goes: if we are not going to use it to solve problems beyond human capabilities, then why create it at all?
If we push ahead in the search for artificial intelligence, we may not even know when a superintelligence beyond our control has arrived, so incomprehensible will it be. That means we need to start asking serious questions about the direction we are heading in.
“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently, without their programmers fully understanding how they learned them,” said computer scientist Manuel Cebrian of the Max Planck Institute for Human Development. “So the question is whether this could at some point become uncontrollable and dangerous for humanity.”
The study was published in the Journal of Artificial Intelligence Research.
Source: Hot News RO
