EU lawmakers have agreed the terms of landmark legislation to regulate artificial intelligence, moving forward with the world’s most restrictive regime for the technology’s development.


European Commissioner Thierry Breton called the agreement “historic”.

“The EU becomes the first continent to establish clear rules for the use of artificial intelligence,” he wrote. “The AI Act is much more than that – it is a springboard for EU startups and researchers to be at the forefront of the global AI race.”

The agreement was reached after lengthy discussions between member states and members of the European Parliament on how AI should be regulated, the FT reports.

Breton also said lawmakers agreed to a two-tiered approach with “transparency requirements for all general-purpose AI models (such as ChatGPT)” as well as “stricter requirements for robust models with systemic effects” across the EU.

Breton said the rules would introduce safeguards for the use of artificial intelligence technology, avoiding an “undue burden” on companies.

Among the new rules, lawmakers agreed to strict restrictions on the use of facial recognition technology, except in narrowly defined cases.

The legislation also prohibits the use of artificial intelligence to “manipulate human behavior to circumvent free will.”

The use of artificial intelligence to exploit people who are vulnerable due to age, disability or economic status is also prohibited.

Companies that do not follow the rules risk fines of up to 35 million euros or 7% of global revenue.

European companies have expressed concern that overly restrictive regulation of the fast-growing technology, which surged in prominence after the launch of OpenAI’s ChatGPT, will stifle innovation. In June, some of Europe’s biggest companies, such as France’s Airbus and Germany’s Siemens, said the rules as drafted were too rigid and would hold back innovation.

Last month, the UK hosted an AI Safety Summit, which led to a broad commitment from 28 countries to work together to address the risks posed by advanced artificial intelligence. The technology’s development has attracted the attention of leading tech figures such as OpenAI’s Sam Altman, who has previously criticized the EU’s plans to regulate it.

In the end, a “political agreement” was reached on the text, which should promote innovation in Europe while limiting possible abuses of these highly advanced technologies, according to AFP and Agerpres.

The process was influenced at the end of last year by the appearance of ChatGPT, a text generator from the California company OpenAI capable of writing essays, poems or translations in a few seconds. This system, like those capable of creating sounds or images, revealed to the general public the enormous potential of artificial intelligence, but also certain risks. The spread of fake photos on social networks drew attention to the danger of manipulating public opinion.

This phenomenon of generative artificial intelligence has been included in the current negotiations at the request of MEPs, who are pushing for special oversight of this type of high-impact technology. In particular, they called for more transparency about the algorithms and giant databases that are at the heart of these systems.

The political agreement reached on Friday evening must still be complemented by technical work to finalize the text. “We will carefully analyze today’s compromise and ensure in the coming weeks that the text preserves Europe’s ability to develop its own AI technologies and preserves its strategic autonomy,” said France’s Minister for Digital Affairs, Jean-Noël Barrot.

The technology sector, however, remains wary. “It seems that speed prevails over quality, which could lead to catastrophic consequences for the European economy,” said Daniel Friedländer, head of CCIA Europe, one of the sector’s main lobby groups. He added that “technical work” on important details is still “necessary”.

The core of the draft is a set of rules that apply only to systems considered “high risk”, essentially those used in sensitive areas such as critical infrastructure, education, human resources or law enforcement.

These systems will be subject to a series of obligations, such as ensuring human control over the machine, drawing up technical documentation or even creating a risk management system.

The legislation provides special oversight for AI systems that interact with humans and will require them to inform the user that they are interacting with a machine.

Outright bans will be rare, reserved for applications that run counter to European values, such as the citizen-scoring and mass-surveillance systems used in China, as well as remote biometric identification of people in public places, prohibited to prevent mass surveillance of the population. On the latter point, however, states have been granted exemptions for certain law enforcement missions, such as counter-terrorism.

Unlike the voluntary codes of conduct adopted in some countries, the European legislation will come with means of supervision and sanctions, including the creation of a European AI Office within the European Commission. For the most serious violations, fines of up to 35 million euros or 7% of turnover can be imposed.