By: Lisa Iantorno
In April 2021, the European Commission (“EC”) proposed the first European Union (EU) artificial intelligence law, the EU Artificial Intelligence Act (“EU AI Act”), which establishes a risk-based AI classification system and constitutes the world’s first comprehensive AI framework. AI systems are analyzed and classified according to the risks they pose to users, and the resulting risk level determines the compliance requirements, with riskier systems requiring more stringent measures.
The EC framed the EU AI Act to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. Another policy underpinning the EU AI Act is that systems should be overseen by people, rather than by automation, to prevent harmful outcomes.
The new rules establish obligations for providers and users depending on the level of risk an application or system poses. While many AI systems pose minimal risk, they still need to be assessed. The rules also define “unacceptable risks.” For instance, banned AI applications in the EU include the following:
- Cognitive behavioral manipulation of people or specific vulnerable groups, such as voice-activated toys that encourage dangerous behavior in children.
- Social scoring AI applications that classify people based on behavior, socio-economic status, or personal characteristics.
- Biometric identification and categorization of people.
- Real-time and remote biometric identification systems, such as facial recognition in public spaces.
Some exceptions may be allowed for law enforcement purposes. For example, “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed for the prosecution of serious crimes (subject to court approval).
In addition, the EU AI Act designates AI systems that negatively affect safety or fundamental rights as “high risk” and divides them into two categories:
- AI systems that are used in products falling under the EU’s product safety legislation, such as toys, aviation, cars, and medical devices.
- AI systems falling into specific areas that must be registered in an EU database, such as:
- Management and operation of critical infrastructure;
- Education and vocational training;
- Employment, worker management, and access to self-employment;
- Access to and enjoyment of essential private services and public services and benefits;
- Law enforcement;
- Migration, asylum, and border control management;
- Assistance in legal interpretation and application of the law.
High-risk AI systems will be assessed at inception and throughout their lifecycle, and people will have the right to file complaints about AI systems to designated national authorities.
With respect to transparency, generative AI is not classified as high-risk, but must comply with transparency requirements and EU copyright law by:
- Disclosing that AI generated the content.
- Designing the model to prevent it from generating illegal content.
- Publishing summaries of copyrighted data used for training.
High-impact general-purpose AI models that might pose systemic risk would have to undergo thorough evaluations, and any serious incidents would have to be reported to the EC.
Content that is either generated or modified with the help of AI, such as deepfakes, must be clearly labeled as AI-generated so that users are aware when they encounter such content.
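The tiered obligations described above can be summarized in a small sketch. This is purely illustrative: the tier names follow the four-level framing commonly used to describe the Act, and the one-line obligation summaries are paraphrases of this article, not statutory text.

```python
from enum import Enum

# Illustrative mapping of the Act's risk tiers to compliance postures.
# The summaries paraphrase the article above; they are not legal text.
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., social scoring, manipulation of vulnerable groups)"
    HIGH = "assessed at inception and throughout the lifecycle; registered where required"
    LIMITED = "transparency duties (e.g., disclose and label AI-generated content)"
    MINIMAL = "no special obligations, but systems still need to be assessed"

def obligations(tier: RiskTier) -> str:
    """Return the compliance posture associated with a risk tier."""
    return tier.value

for tier in RiskTier:
    print(f"{tier.name}: {obligations(tier)}")
```

A real compliance program would, of course, turn on the Act's detailed annexes rather than a four-line enum, but the tiering logic itself is this simple: classify first, then apply the obligations attached to the tier.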
The EU AI Act, which was adopted in June 2024, promulgates the world’s first rules on AI use. The EU AI Act will be fully applicable 24 months after implementation, but some provisions are already in force as follows:
- The ban of AI systems posing unacceptable risks began to apply on February 2, 2025.
- Codes of practice began to apply nine months after implementation.
- Rules on general-purpose AI systems that need to comply with transparency requirements began to apply 12 months after implementation.
- High-risk systems will have more time to comply with the requirements, i.e., 36 months after implementation.
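The staggered timeline above can be sanity-checked with simple date arithmetic. A minimal sketch, assuming milestones are counted in whole months from a reference date of 2 August 2024 (an assumption chosen here because it reproduces the 2 February 2025 ban date cited above; the article itself only counts "months after implementation"):

```python
from datetime import date

# Assumed reference date for the sketch (not stated in the article).
REFERENCE_DATE = date(2024, 8, 2)

def months_after(start: date, months: int) -> date:
    """Add whole calendar months to a date (day-of-month clamped to 28 for simplicity)."""
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, min(start.day, 28))

# Milestones as described in the article, in months after implementation.
MILESTONES = {
    "Ban on unacceptable-risk systems": 6,
    "Codes of practice": 9,
    "General-purpose AI transparency rules": 12,
    "Act fully applicable": 24,
    "High-risk system requirements": 36,
}

for name, months in MILESTONES.items():
    print(f"{name}: {months_after(REFERENCE_DATE, months)}")
```

Under that assumption, the six-month milestone lands on 2025-02-02, matching the ban date noted above, and full applicability falls in August 2026.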
The EU AI Act could have sweeping global implications for the use of AI applications by businesses and ordinary users alike. AI technology affects many aspects of life and is expected to become even more entrenched. For instance, AI applications currently influence what information users see online by predicting what content will engage them. Applications can also use facial recognition to enforce laws or personalize advertisements, and are used to diagnose and treat diseases such as cancer.
Like the EU’s General Data Protection Regulation (GDPR), which took effect in 2018, the EU AI Act could become a de facto global standard, regardless of where a person or business is physically located.
Developments in the United States
While the United States has not yet adopted a comparable law at the federal level, most states (Alaska and Oklahoma being exceptions, along with the District of Columbia) have adopted their own laws and regulations, some of which are currently in force. Other provisions will come into effect later in 2026.
It is important to note that state-level penalties for non-compliance range from substantial fines (some of which are applied per violation) to criminal penalties, underscoring the need to consult with the appropriate advisers to ensure compliance before undertaking any new AI business initiatives. Controls should also be tested and implemented before AI is adopted in workflows, and oversight needs to include a robust governance framework that considers processes as well, rather than focusing exclusively on models.