The EU agrees on the world's first artificial intelligence law

The institutions of the European Union have agreed on an artificial intelligence law that permits or prohibits uses of the technology depending on the risk they pose to people, and that seeks to strengthen European industry against giants such as China and the United States.

“The EU artificial intelligence law is a pioneer in the world. A single legal framework for the development of artificial intelligence that can be trusted,” said the President of the European Commission, Ursula von der Leyen, in a message on the social network X (formerly Twitter).

The agreement was reached after 36 hours of negotiations and will still have to be ratified by the European Parliament and the Council of the EU, the institution that represents European governments.

Real-time surveillance

One of the most sensitive points of the negotiations was how law enforcement agencies may use biometric identification cameras in public spaces to protect national security.

The cameras may be used, with prior judicial authorization, to prevent a “genuine and foreseeable” or a “genuine and present” terrorist threat, that is, one that is unfolding at that moment.

They may also be used to locate or identify a person who has committed crimes such as terrorism, human trafficking, sexual exploitation or environmental crimes, and to search for the victims of these crimes.

During the negotiations, governments have pushed to expand the list of crimes, while the European Parliament has tried to limit it as much as possible and obtain strong safeguards for fundamental rights.

Prohibited systems

The rule also prohibits all biometric categorization systems based on political, religious, or philosophical beliefs or based on race and sexual orientation.

Nor may systems be used that score people based on their behavior or personal characteristics, or artificial intelligence capable of manipulating human behavior.

Systems to expand or create facial databases by indiscriminately capturing data through the Internet or audiovisual recordings will also be prohibited.

Artificial intelligence systems that recognize emotions will also be prohibited in workplaces and schools.

Generative AI

The other big issue dominating the negotiations was the regulation of generative artificial intelligence systems, which underpin models such as ChatGPT, from OpenAI, or Bard, from Google.

These systems will have to meet transparency criteria, such as specifying whether a text, song or photograph has been generated by artificial intelligence and guaranteeing that the data used to train them respect copyright.

Initially, the law was not intended to regulate these types of systems, because they had not yet become popular when Brussels proposed the law in April 2021, but EU institutions have seen the need to legislate on them since ChatGPT burst onto the scene last year.

The regulation does not prohibit their use but has established a series of criteria to detect models that may generate a high risk depending on the context in which they are used and requires their developers to comply with stricter safeguards before putting them on the market.

“Passionate” negotiations

The negotiations were “passionate”: the law aims to regulate a technology with great potential for society, but one that also raises doubts and questions that artificial intelligence developers themselves still cannot answer, according to sources familiar with the debates.

The regulation allows or prohibits the use of artificial intelligence depending on the risk it generates for people and identifies high-risk systems that can only be used if it is demonstrated that they respect fundamental rights.

Examples include systems that could be used to influence the outcome of an election, or those used by financial institutions to assess solvency and set credit ratings.

With the agreement on this standard, the first of its kind in the world, the Spanish presidency of the EU Council has achieved one of its main objectives for the semester.

“We have achieved an important milestone, that citizens can decide what can and cannot be done with artificial intelligence,” said the Spanish Secretary of State for Telecommunications and Digital Infrastructure, Carme Artigas.

European Artificial Intelligence Office

The regulation provides for the creation of the European Artificial Intelligence Office, which will coordinate the use of technology between national authorities and will be advised by a panel of scientists and civil society organizations.

The law is scheduled to come into force in 2026 but will be applied in phases: the European office will be created immediately, the ban on prohibited artificial intelligence systems will take effect after six months, and the requirements for generative AI systems and models after 12 months.

The regulation provides for fines ranging from 7.5 million euros or 1.5% of a company’s global turnover up to 35 million euros or 7% of global turnover.

By Ch Fahad khan Janda