The European Parliament on Wednesday approved the Artificial Intelligence Act, a regulation on the use of artificial intelligence that aims to guarantee safety and respect for fundamental rights, as well as to stimulate innovation.

[Photo: Artificial intelligence (AI) regulated in the European Union. Credit: Budrul Chukrut/SOPA Images / Shutterstock Editorial / Profimedia]

The regulation, agreed with member states in negotiations in December 2023, was adopted by MEPs with 523 votes in favour, 46 against and 49 abstentions, according to the European Parliament's press release.

The regulation aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk artificial intelligence systems. It also aims to encourage innovation and to secure a leading role for Europe in this field. The rules impose obligations on AI systems depending on their potential risks and expected impact.

Some uses are prohibited

The new rules prohibit certain uses of artificial intelligence that threaten the rights of citizens, including biometric classification systems based on sensitive characteristics of individuals.

The untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases is also among the prohibited uses.

Emotion recognition in the workplace and schools, social scoring, predictive policing (when based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people's vulnerabilities will also be banned.

Law enforcement exceptions

The use of biometric identification systems by law enforcement agencies is prohibited in principle, with the exception of exhaustively listed and strictly defined situations.

“Real-time” biometric identification systems can be deployed only under strict safeguards, for example if their use is limited in time and geographic scope and subject to prior approval by a judicial or administrative authority. They can be used, for instance, to find a missing person or to prevent a terrorist attack.

Using such systems after the fact (“post-remote biometric identification”) is considered a high-risk use case and is possible only with judicial authorization connected to a criminal offence.

Liability for high-risk systems

The text also sets out clear obligations for other high-risk AI systems, so classified because of their significant potential to harm health, safety, fundamental rights, the environment, democracy and the rule of law.

Examples of high-risk uses of AI include critical infrastructure, education and vocational training, essential public and private services (healthcare, banking, etc.), certain systems used by law enforcement agencies, migration and border management, and justice and democratic processes (e.g. influencing elections).

These systems must assess and mitigate risks, maintain so-called “log files” that automatically record events, be transparent and accurate, and be subject to human oversight.

Citizens will have the right to file complaints against AI systems and receive explanations for decisions based on high-risk AI systems that affect their rights.

Requirements for transparency

General-purpose AI systems, and the general-purpose AI models they rely on, must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used to train the models.

More powerful general-purpose AI models that could pose systemic risks must meet additional requirements, including model evaluations, assessment and mitigation of systemic risks, and incident reporting.

In addition, artificial or manipulated (“deepfake”) images, audio and video content must be clearly labeled as such.

Measures to support innovation and SMEs

At the national level, regulatory sandboxes and real-world testing will have to be established and made accessible to SMEs and startups, so they can develop and train innovative AI before placing it on the market.

“Finally the world’s first binding law on AI”

During Tuesday’s plenary debate, Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said:

  • “We finally have the world’s first binding AI law that reduces risks, creates opportunities, fights discrimination and ensures transparency. Thanks to the Parliament, unacceptable AI practices will be banned in Europe, and the rights of workers and citizens will be protected. An Office of Artificial Intelligence will now be created to help companies start complying with the rules before they come into effect. We made sure that European people and values were at the heart of the development of artificial intelligence.”

Civil Liberties Committee co-rapporteur Dragoș Tudorache (Renew, Romania) said:

  • “The EU has achieved results. We connected the concept of artificial intelligence with the fundamental values that underlie our societies. However, we will have a lot of work to do beyond the law itself regarding AI. AI will force us to rethink the social contract that underpins our democracies, our educational models, our labor markets, and how we conduct military operations. The AI Act is the starting point for a new governance model built around technology. We must now focus on enforcing this law.”

Next steps

The regulation is still undergoing a final check by lawyer-linguists and is expected to be definitively adopted before the end of the legislative term (through the so-called corrigendum procedure). The legislation also still needs to be formally endorsed by the EU Council.

It will enter into force 20 days after publication in the Official Journal and will be fully applicable 24 months after entry into force, with the following exceptions: bans on prohibited practices apply six months after entry into force; codes of practice, nine months after; rules on general-purpose AI, including governance, 12 months after; and obligations for high-risk systems, 36 months after.