A growing number of people believe that humanity could be destroyed by the very artificial intelligence technologies we humans are creating. At many companies developing generative AI software, there is a conflict between those who say we should move forward cautiously and those who say we should release the best technology as soon as possible. What is “effective altruism”? Why do so many people believe in this idea?


The movement called “effective altruism” originated in philosophy, but after 2014 it spread to the field of artificial intelligence.

In short, proponents of this movement believe that intelligence and data can be used to do as much good as possible. In recent years, they have warned that artificial intelligence systems are becoming too powerful, could get out of control, and could pose an existential risk to humanity. The number of people who believe in this idea has increased dramatically since the success of ChatGPT.

Proponents of the “effective altruism” movement say they can build much safer AI systems because they are willing to invest in what they call “alignment”: ensuring that people retain full control over the technology they develop and can be fully confident that these technologies are compatible with a set of human values.

The people who fear that artificial intelligence will “wipe” us off the face of the Earth are called “doomers”. Once very few in number, in recent years they have entered the mainstream: they are increasingly vocal, have started petitions, and have asked the authorities to take measures and establish clear rules for the development of AI.

In OpenAI’s “soap opera”, the “effective altruism” movement played an important role: the two board members who fired Sam Altman had ties to “altruistic” groups, and Ilya Sutskever, OpenAI’s chief scientist, supports the movement’s ideas and initially appeared to be Altman’s No. 1 opponent.

Also, OpenAI, the company that launched ChatGPT in 2022, was founded in 2015 on ideas drawn from effective altruism. At the time, no one suspected that OpenAI would create and release one of the world’s most popular programs, or that it would eventually be valued at $80 billion.

One of the movement’s biggest supporters is disgraced cryptocurrency “tycoon” Sam Bankman-Fried, who was convicted of fraud.

What is “effective altruism”

The effective altruism movement tackles the dangers associated with the development of artificial intelligence (AI), offering what it presents as a rational approach to managing potential risks.

The movement believes that the development of artificial intelligence capable of surpassing human intelligence could pose an existential risk. As such, it promotes the idea that AI research and development should be accompanied by particular attention to safety and ethics.

It encourages the development of technologies that ensure the control and transparency of AI and prevent possible negative consequences.

The movement also promotes international cooperation to address the risks associated with AI, suggesting measures such as establishing ethical standards, adopting relevant regulations, and sharing information between countries and organizations.