ChatGPT and generative artificial intelligence have taken the world by storm. They have given the broad public direct, mass contact with a technology that was previously hidden from view. Solutions based on machine learning and big data have long been part of our lives, from auto-correction and the recommendation algorithms of Netflix and Amazon to screening résumés for jobs or interpreting medical images. Until now, however, artificial intelligence stayed behind the scenes, inaccessible to the general public. In recent months, young and old alike, from children to grandparents, we have experimented with artificial intelligence, played with it and reacted to it in a lively, emotional way. Inevitably, this has also changed the public debate. We worry about the future of schools and jobs. Will AI become smarter than humanity? Will we become slaves to machines? I think that in the heat of the discussion we risk losing sight of some essential stakes.

Razvan Ruginis. Photo: personal archive

The current debate begins from the premise that turning to AI is inevitable. Wider development and more frequent use of artificial intelligence seem necessary given the supposedly inexorable march of technological progress. How could we stop, or even slow down, when others would simply overtake us? A frequent variant of this "fear of missing out" on a planetary scale is the idea that, unless we introduce artificial intelligence into every area of society, China will become the master of the future. Now or never, we are told, we must make the most of artificial intelligence in order to shape a different destiny.

The huge variety of possible applications of artificial intelligence is thus reduced to the common denominator of the geopolitical conflict between the West and China. This argument rests on the false premise that AI is homogeneous, independent of the society that creates it. In reality, artificial intelligence is always embedded in specific technological products, trained on the data of a particular society and optimized according to the rules that define that social model. In a surveillance society, people adapt their behavior to the local structure of punishments and rewards. Artificial intelligence models identify, reproduce and optimize the specific correlates of these social adaptations. By contrast, in societies built on citizens' broad autonomy in relation to the state, people make decisions according to norms of personal responsibility. An AI model trained in China will not work in Europe or the US, because the rules of the social game and the dominant strategies are very different.

Tech giants, the big AI manufacturers and big data collectors, promise and threaten that artificial intelligence is the unique key to increased productivity and social well-being. But who reaps the benefits of this productivity? And what are the hidden costs of the shift to universal automation? We risk becoming a society in which citizens are merely a resource for training large AI models: the population reduced to a collective of data generators, and professionals to data cleaners and error fixers. Yet it is entirely possible to put technology, and artificial intelligence in particular, at the service of society's needs, rather than the other way around.

What kind of artificial intelligence is useful in the society we want? Image created with Midjourney

When you have a hammer in your hand, everything around you looks like a nail. We are becoming a society obsessed with finding ever more uses for artificial intelligence, with little time left to reflect on and understand our own problems. Technology should be a means to an end, but it has become an answer in search of a question. It is important to identify the needs for which particular AI solutions provide a net benefit, as well as the situations in which the benefits of AI cannot offset the privacy, social, financial or energy costs of the new technologies.

This is also the approach of European legislators. After the GDPR curbed surveillance intrusions based on the collection of personal data, two other laws have clarified the legitimate and the problematic scenarios for the use of digital technologies. The Digital Services Act and the Digital Markets Act, already approved, anchor the digital space in respect for fundamental human rights and fair economic competition. The AI Act is currently under negotiation in the European Parliament until the end of the month.

The European Union has cut the Gordian knot of treating AI as a life-saving, unique and inevitable solution. The law distinguishes several scenarios for the use of artificial intelligence according to their level of risk, establishing clear, risk-proportionate pathways along which innovation can proceed. What would this mean for a society in which each of us is constantly evaluated on the basis of continuous data about our actions? The "social scoring" practice introduced by the Chinese state opens the horizon of a dystopia in which human rights are quantified by the "trust score" citizens receive from the state or from corporations. The AI Act in its current form prohibits social scoring both by state authorities and by private organizations. Other strictly limited use cases involve categorizing individuals on the basis of biometric data. For example, in European societies the real-time facial recognition of passers-by will not be allowed. Neither the automatic detection of people's emotions nor the use of subliminal persuasion techniques will be legal. All these restrictions aim to preserve the autonomy of European citizens in relation to the state and to any other entity that would try to manipulate our choices through AI.

Read the whole article and comment on Contributors.ro