The European Union has reportedly agreed on the AI Act, the world's first comprehensive legislation regulating artificial intelligence, to be implemented across the 27-nation bloc.

Time reports that the process saw member countries and negotiators overcome significant divisions on contentious issues such as police use of facial recognition surveillance and generative AI.

Now that the AI rules have officially reached a provisional agreement, they stand to have a broader impact worldwide. The Associated Press offers a closer look at the AI Act and how it works.


The AI Act reportedly focuses on regulating applications of AI rather than the technology itself, adopting a "risk-based approach" to goods and services that leverage AI. The laws uphold democracy, the rule of law, and fundamental rights such as freedom of expression, while promoting investment and innovation.

Under the regulations, the riskier an AI application is, the stricter the rules it faces. For example, content recommendation systems and spam filters need only abide by minimal requirements, such as disclosing that they use artificial intelligence.

Medical devices are examples of high-risk systems that must adhere to stricter standards, such as using high-quality data and giving consumers clear information.  


AI Act's Banned Applications

The European Union has also banned several applications because of their "potential threat to citizens' rights and democracy." The prohibited uses include:

  • Biometric categorization systems that exploit sensitive characteristics (such as political, religious, or philosophical beliefs, sexual orientation, or race).
  • Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
  • Emotion recognition in the workplace and educational institutions.
  • Social scoring based on an individual's social behavior or personal traits.
  • AI systems that manipulate human behavior to circumvent free will.
  • AI systems used to exploit people's vulnerabilities (due to their age, disability, or social or economic situation).

The ban includes exemptions for law enforcement, which may use biometric identification systems subject to prior judicial authorization, and only for a strictly defined list of crimes.

"Post-remote" biometric identification systems may be used only in the targeted search of a person convicted of, or suspected of committing, a serious crime.

Real-time biometric identification systems, meanwhile, are permitted only for the following uses:

  • The prevention of a specific and present terrorist threat.
  • The localization or identification of an individual suspected of having committed one of the specific crimes listed in the regulation (e.g., terrorism, trafficking, sexual exploitation, murder).
  • Targeted searches of victims (abduction, trafficking, sexual exploitation).

High-Risk AI Systems

Clear requirements were established for high-risk AI systems because of their substantial potential to endanger human health, safety, fundamental rights, the environment, democracy, and the rule of law. AI systems used to influence voter behavior and election outcomes are also classified as high-risk. People can reportedly file complaints against AI systems and request explanations of decisions made by high-risk AI systems that affect their legal rights.

Negotiators also agreed that general-purpose AI (GPAI) systems, and the GPAI models they are built on, must comply with the transparency requirements initially proposed by Parliament, to account for the wide range of tasks AI systems can perform and the rapid expansion of their capabilities.

Depending on the infringement and the size of the company, noncompliance with the rules can result in fines ranging from 7.5 million euros, or 1.5% of turnover, to 35 million euros, or 7% of global turnover.


Written by Aldohn Domingo

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.