
I am also happy to have lived to see the day when "climate crisis", "code red for humanity", "climate Armageddon", "climate extinction", and other such linguistic inventions have left the list of inexorable human-caused disasters. What happened?
On May 30, 2023, the Center for AI Safety published a short statement that was intended to provoke:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Oho! Is that so? So, for the first time, humanity's global priorities no longer include human-caused climate change? What about the IPCC?! And what about Greta?! What will they do from now on?!
This is a statement that deserves attention, if only for the names of its more than 350 signatories: founders and employees of OpenAI, Google DeepMind, Anthropic, and other artificial-intelligence laboratories; billionaire Bill Gates; visionary Ray Kurzweil; astrophysicist Martin Rees, founder of the Centre for the Study of Existential Risk at the University of Cambridge; climate activist Bill McKibben; researchers, university professors, and others. The statement was also signed by Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won the Turing Award for pioneering work on neural networks and who are often considered the "godfathers" of the modern artificial-intelligence movement.
Dan Hendrycks, the director of the Center for AI Safety, coordinated the collection of signatures. He is also the author of a series of related papers, including one entitled "Natural Selection Favors AIs over Humans", from which I borrowed the title of my contribution (with a modification I will explain later).
Like hydraulic fracturing, invented in 1947 but brought to public attention only in 2010 by the fake documentary Gasland, artificial intelligence has a long prehistory: it can be traced back to 1935, to the machine proposed by Alan Turing, and to the test that bears his name (1950). The development of artificial intelligence has advanced in smaller or larger leaps over the past nine decades, but public opinion and the blogosphere took off in earnest only after the arrival of ChatGPT (November 2022) and especially of its improved version, GPT-4, introduced just four months later (March 2023).
While ChatGPT could process only text, the new GPT-4 showed amazing progress: it created a fully functional website from a simple sketch on a piece of paper, highlighting the AI's multimodal ability to process images, video, audio, and text, and to write code. I won't repeat here the things you have (probably) already read or seen in the news and on social media. I would just like to point out that, 40 years after its formulation by the mathematician and writer Vernor Vinge, we are entering a period whose first signs have already appeared: the technological singularity:
A technological singularity is a hypothetical event that occurs when artificial intelligence surpasses human intelligence, leading to an exponential acceleration of technological progress and an unprecedented and unpredictable transformation of human and machine civilization.
Vinge also wrote that, as the new superintelligence continues to self-improve and to advance at a rate technologically unfathomable today, the human era will eventually come to an end. He then added that he would be surprised if this event happened before 2005 or after 2030. We are now in the year 2023.
The superintelligence Vinge envisioned is today called AGI, Artificial General Intelligence. One proposed definition starts from the fact that human intelligence is limited (and not just to the "jellyfish" described in the motto!), while machine intelligence has the potential to surpass human capabilities in any field. AGI is thus theoretically capable of performing any cognitive task a human can perform: understanding natural language, reasoning, learning, problem solving, and decision making. An AGI entity would probably pass the Turing Test. Those who watched the movie Ex Machina (2014) have seen an example of the human-like qualities of a highly developed humanoid artificial intelligence. We are not quite there yet with GPT-4, but, surprisingly, it shows more and more behaviors and capabilities suggesting we are quite close.
In an essay published on May 6, 2023, Dan Hendrycks makes a bold hypothesis about how artificial intelligence could surpass human intelligence: natural selection may cause AIs to "behave selfishly" in trying to survive (see Richard Dawkins' "selfish gene" theory, 1976). More precisely, he argues that natural selection creates incentives for AI agents to act against human interests:
Competitive pressures among corporations and militaries will give rise to artificial-intelligence agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future.
Natural selection operates on systems that compete and vary, and selfish species usually have an advantage over altruistic ones. The same Darwinian logic can be applied to artificial agents: those that behave selfishly and pursue their own interests with little regard for humans may ultimately be better able to survive, which could pose catastrophic risks.
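To make this Darwinian logic concrete, here is a minimal sketch (my own illustration, not code from Hendrycks' paper) of replicator dynamics with Prisoner's Dilemma payoffs, the textbook setting in which selection favors the selfish strategy over the altruistic one; the payoff values are illustrative assumptions:

```python
# A minimal sketch of natural selection favoring selfish agents.
# Two strategies compete: "altruistic" (cooperate) and "selfish" (defect),
# with classic Prisoner's Dilemma payoffs T > R > P > S.
T, R, P, S = 5.0, 3.0, 1.0, 0.0

x = 0.99      # initial share of altruistic agents in the population
dt = 0.01     # integration step for the replicator equation

for _ in range(100_000):
    f_coop = x * R + (1 - x) * S   # expected payoff of an altruist
    f_self = x * T + (1 - x) * P   # expected payoff of a selfish agent
    f_avg = x * f_coop + (1 - x) * f_self
    # Replicator dynamics: strategies with above-average payoff grow.
    x += dt * x * (f_coop - f_avg)

print(f"Share of altruistic agents after selection: {x:.4f}")  # -> ~0.0000
```

Because the selfish strategy earns more against any mix of the population, its share grows until the altruists disappear, even when they start at 99 percent; this is the intuition behind the "catastrophic risk" argument in miniature.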
For now, these predictions can be contested, because we do not yet know how the changes driven by natural selection will materialize. Presumably, the new AIs will simply be improved versions of today's chatbots, or programs that can beat any human, at any time and at any level, at chess or Go (see Artificial Intelligence as a Black Swan and Its Impact on the Future, 2017).
Moreover, AIs can be combined using various machine-learning methods or, why not, they can create a new existential paradigm. Humans, by combining their cognitive abilities, have created collective intelligences (language, culture, the Internet) that made them the dominant species on the planet, and these collective intelligences have a higher IQ than any individual. But artificial intelligences can also form a collective intelligence with a higher IQ. Two aspects should be mentioned here.

A) The computer "brain" is faster than the human mind and is getting faster. Microprocessors work at least a million times faster than human neurons; that is, what an artificial intelligence can "think" in one second would take a person at least 11 days (a quick check of this arithmetic follows below).

B) When it comes to forming collective intelligences, people find it difficult to work in large groups and can be indoctrinated and manipulated into groupthink or other forms of collective idiocy.[1] In addition, as I presented in The Paradox of Wisdom, the Gossip Trap, and Some Climate Change, the organization and functioning of our brains do not allow us to maintain stable social relationships with more than about 150 people (the so-called Dunbar number). Artificial intelligences, by contrast, can act in groups of thousands or even millions of entities simultaneously, as our Internet-connected computers already do. This capacity for assembly creates important prerequisites for self-organizing forms of artificial intelligence, helping them surpass human collective intelligence.
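Here is the quick check of the arithmetic in point A); the factor of one million is the article's assumption, not a measured value:

```python
# Back-of-the-envelope check of the speed claim in point A) above.
speedup = 1_000_000             # assumed machine "thinking" speed vs. human neurons
seconds_per_day = 24 * 60 * 60  # 86,400 seconds in a day

human_days = speedup / seconds_per_day
print(f"1 second of machine thought ~ {human_days:.1f} human days")  # ~11.6
```

One million seconds is about 11.6 days, consistent with the "at least 11 days" figure above.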
Given the uncertainties and the (as yet) unknown elements of future AI designs, can we still make informed predictions about the future of AI?
The answer is YES. Hendrycks states:
In the past, people could successfully predict lunar eclipses and planetary motions without a full understanding of gravity. They worked out the dynamics of chemical reactions without a correct theory of quantum physics. They formulated the theory of evolution long before DNA was known. In the same way, we can predict whether natural selection will apply in a given situation and which traits it will favor.
The future of artificial intelligence under natural selection starts from two postulates, related to each other in different ways depending on the conditions in which they unfold (Fig. 1):
- Natural selection may be the dominant force in the development of AI. Competition and power struggles can blunt the effects of protective measures, allowing "natural" forces to select the surviving AI agents.
- Evolution by natural selection tends to produce selfish behavior. Although there are examples of cooperative behavior in some species (ants, bees), the artificial intelligences that emerge will tend to be selfish rather than altruistic, which will weaken human control.
Evolutionary pressures often lead to selfish behavior among organisms: manipulation, violence, or deception. A well-known example of the latter category is the female cuckoo, which lays her eggs in another bird's nest; the host is tricked into treating them as her own.
After about 4 billion years during which Earth has known only organic life forms, we could be witnessing the emergence of the first inorganic life forms, or at least of inorganic AI agents. It is no secret that people have feared artificial intelligence since the dawn of the computer age, and various cultural media (films, novels) have exploited this fear. Science-fiction films such as 2001: A Space Odyssey, Terminator, Minority Report, WALL-E, or The Matrix are only a few examples.
Source: Hot News
