The European Parliament has given the green light to a law on artificial intelligence that aims to balance the powers and responsibilities of the developers of one of today's most influential technologies. The essence of the law is to establish a common language for discussing the different degrees of risk posed by AI models, and to adjust transparency and oversight requirements accordingly. AI models, however intelligent, cannot be held legally liable, so they cannot bear direct responsibility for the consequences of their decisions. It is therefore clear that the people and organizations who "pull the strings" of these technologies must take responsibility. This is what the AI Act proposes: a new social model for the development of these technologies. In a society dominated by technological giants, the AI Act restores the role of experts and non-governmental organizations as a counterbalance to the power accumulated by large companies. The law decentralizes the oversight of AI, demanding transparency in the face of independent scrutiny and radically increasing public disclosure about how these systems are built and how they actually perform in real life.

Razvan Ruginis (Photo: Personal archive)

The law defines a simple, perhaps somewhat textbook, classification with four levels of risk: low, moderate, high, and unacceptable. The focus, however, falls on models with a high risk of harm and injustice, such as those used in autonomous vehicles, medicine, education, recruitment, policing, and justice. Those who build and market such AI systems must be held accountable for the training data, the energy consumed, and the actual impact of the models' decisions. There are provisions on copyright compliance, operational transparency, risk assessment and monitoring, verification by non-governmental organizations, experts, and government institutions, as well as the reporting and mitigation of negative consequences and the protection of our fundamental rights.

The AI Act also aims to stimulate small and medium-sized companies and to create a free market for AI development that will not be crushed by the tech giants. National state bodies are responsible for creating so-called "sandboxes": environments for experimentation, development, and testing of AI technologies before they are released to the market. This will level the playing field among competitors in the AI market and also make room for smaller but better-targeted models.

However, the law also has two major limitations. The first notable exception is the use of AI in warfare, which remains unaddressed. Given the current international context, this is no small omission.

We are already experiencing the second big risk that the AI Act does not cover: the large-scale replacement of human work by algorithms. While the law addresses some work-related risks, such as discrimination in hiring or the abuse of algorithmic management, it does not address the replacement of human creativity, thinking, and labor by algorithmic versions. This affects a wide variety of professions, from artists to journalists and, paradoxically, even programmers. Will European societies find a new balance in which quality work is valued, societies in which artists, journalists, and programmers still have a place in the cities of the future? AI promises significant increases in labor productivity, but who will benefit from these increases? Read the rest of the article on Contributors.ro