ARKANSAS, Dec 10 (Future Headlines)- European Union (EU) policymakers and lawmakers reached a deal on Friday on the world's first comprehensive set of rules governing the use of artificial intelligence (AI). The regulations cover a broad spectrum of AI applications, including systems akin to ChatGPT and biometric surveillance. The development marks a major step in shaping the ethical and legal framework surrounding AI technologies, with the potential to set a precedent globally. The rules address the risks associated with AI deployment, emphasize transparency, and prohibit certain applications outright.
Voluntary AI Pact: While the legislation is expected to enter into force early next year and apply from 2026, companies are encouraged to join a voluntary AI Pact. This initiative calls on them to implement key obligations outlined in the rules even before they become mandatory.
High-Risk AI Systems: AI systems categorized as high-risk, with the potential to cause significant harm to health, safety, fundamental rights, the environment, democracy, elections, and the rule of law, will be subject to stringent requirements. This includes undergoing a fundamental rights impact assessment and meeting obligations to access the EU market.
Transparency for Limited-Risk AI Systems: AI systems posing limited risks will be subject to lighter transparency obligations. This includes disclosure labels indicating that the content is AI-generated, providing users with information to make informed decisions about its use.
Use of Biometric Surveillance: The use of real-time remote biometric identification systems in public spaces by law enforcement will be restricted. Permitted applications include identifying victims of kidnapping, human trafficking, and sexual exploitation, and preventing specific and present terrorist threats. These systems can also be used to locate individuals suspected of certain serious crimes.
GPAI and Foundation Models: General-purpose AI (GPAI) systems and foundation models will be subject to transparency requirements, including technical documentation, compliance with EU copyright law, and publication of detailed summaries of the content used to train their algorithms.
Requirements for Systemic Risk and High-Impact GPAI: Foundation models categorized as posing systemic risk and high-impact GPAI will need to conduct model evaluations, perform risk assessments and mitigations, carry out adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on energy efficiency.
Harmonized EU Standards: Until harmonized EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulations.
The regulations prohibit various AI applications, including biometric categorization systems using sensitive characteristics, untargeted scraping of facial images for facial recognition databases, emotion recognition in workplaces and educational institutions, social scoring based on behavior or personal characteristics, AI systems manipulating human behavior against free will, and exploitation of vulnerabilities due to age, disability, social or economic situations.
The regulations introduce sanctions for violations based on the severity of the infringement and the size of the company involved. Fines range from 7.5 million euros ($8 million) or 1.5% of global annual turnover up to a maximum of 35 million euros or 7% of global turnover.
The EU’s landmark deal on AI regulations carries significant implications for the technology landscape, global AI ethics, and the responsible deployment of AI systems. By establishing clear guidelines and restrictions, the EU aims to strike a balance between fostering innovation and safeguarding against potential harms associated with AI technologies.
The voluntary AI Pact provides companies with an opportunity to proactively align with the forthcoming regulations, emphasizing a commitment to ethical AI practices. The focus on high-risk AI systems reflects a nuanced approach to ensuring that technologies with the potential for substantial impact undergo thorough assessments and adhere to stringent obligations.
The restrictions on biometric surveillance, particularly in public spaces, underscore the importance of safeguarding individual privacy and civil liberties. The regulations acknowledge specific scenarios where such surveillance is justifiable, such as addressing serious crimes or threats.
The comprehensive prohibitions on certain AI applications, including emotion recognition in sensitive environments and the manipulation of human behavior, reflect a commitment to ethical considerations in the development and deployment of AI technologies. The regulations also address potential discriminatory practices by prohibiting biometric categorization systems based on sensitive characteristics.
The sanctions for violations provide a clear framework for accountability, with fines linked to the severity of the infringement and the size of the company involved. This serves as a deterrent against non-compliance and encourages companies to prioritize adherence to ethical AI standards.
While the regulations are a groundbreaking development, the EU acknowledges that details will be further thrashed out in the coming weeks. This allows for additional refinements and clarifications to address specific nuances and emerging challenges in the dynamic field of AI.
Reporting by Alireza Sabet; Editing by Sarah White