The European Union has taken a significant step by enacting its AI Act, a comprehensive regulation governing the development and use of artificial intelligence. The legislation introduces a tiered approach to AI regulation, with obligations scaled to the level of risk posed by different AI applications.
The AI Act is now in force, but its provisions will not apply in full until mid-2026. The first major deadline, six months away, targets specific prohibited AI uses. For example, the Act bars authorities from using real-time remote biometric identification in public spaces.
Risk-Based Classification
- Low/No Risk: Most AI applications fall into this category and are not subject to the Act’s regulations.
- High Risk: AI systems used in critical areas such as biometric identification, medical software, and educational tools face stringent compliance requirements. Developers must undergo conformity assessments before bringing these systems to market and may be subject to audits. High-risk systems used by public authorities must also be registered in an EU database.
- Limited Risk: Technologies like chatbots and deepfake generators must adhere to transparency requirements to prevent user deception.
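To make the tiers concrete, here is a minimal sketch of how an organization might record the classification of its systems. The tier names follow the Act's structure, but the example use cases and the lookup approach are illustrative assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act (simplified for illustration)."""
    UNACCEPTABLE = "prohibited"  # banned outright
    HIGH = "high"                # conformity assessments, possible audits
    LIMITED = "limited"          # transparency obligations
    MINIMAL = "minimal"          # largely unregulated

# Hypothetical mapping of example use cases to tiers. A real
# classification requires legal analysis of the Act's annexes.
EXAMPLE_TIERS = {
    "public-space remote biometric identification": RiskTier.UNACCEPTABLE,
    "medical diagnosis software": RiskTier.HIGH,
    "exam-scoring tool for schools": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; default to MINIMAL when unlisted (illustrative only)."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for case, tier in EXAMPLE_TIERS.items():
        print(f"{case}: {tier.value}")
```

A lookup table like this is only useful as an internal triage starting point; the actual boundaries between tiers turn on legal analysis of the Act's text.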
Penalties for non-compliance vary based on the severity of the breach:
- Up to 7% of global annual turnover for violations involving banned applications.
- Up to 3% for breaches of other obligations.
- Up to 1.5% for supplying incorrect information to regulators.
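For a sense of scale, the sketch below applies those percentage caps to a hypothetical company. The €2 billion turnover figure is invented for illustration, and the Act also sets fixed-euro amounts alongside the percentages, which this sketch omits.

```python
# Illustrative only: the article's percentage caps applied to an
# invented global annual turnover. Real fines also involve fixed-euro
# amounts and regulator discretion, which this sketch omits.
TURNOVER_EUR = 2_000_000_000  # hypothetical €2B global annual turnover

PENALTY_CAPS = {
    "prohibited AI practices": 0.07,               # up to 7%
    "other obligations": 0.03,                     # up to 3%
    "incorrect information to regulators": 0.015,  # up to 1.5%
}

for breach, rate in PENALTY_CAPS.items():
    print(f"{breach}: up to €{TURNOVER_EUR * rate:,.0f}")
```

On these assumed numbers, the caps work out to €140 million, €60 million, and €30 million respectively, which shows how quickly exposure scales with turnover.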
Developers of general-purpose AI (GPAI), such as GPT models, face lighter transparency requirements but must still provide summaries of training data and comply with copyright rules. The most powerful GPAI models, defined by cumulative training compute exceeding 10^25 FLOPs, will also need to implement risk assessment and mitigation strategies.
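To gauge what the 10^25 FLOP threshold means in practice, here is a back-of-the-envelope sketch using the common approximation that training compute is roughly 6 × parameters × training tokens. The model sizes and token counts are invented examples, not figures from the Act.

```python
# Back-of-the-envelope check against the Act's 10^25 FLOP threshold,
# using the common heuristic: training compute ≈ 6 * params * tokens.
# Model sizes and token counts below are illustrative assumptions.
THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * params * tokens

examples = [
    ("7B params, 2T tokens", 7e9, 2e12),        # ~8.4e22 FLOPs
    ("70B params, 15T tokens", 70e9, 15e12),    # ~6.3e24 FLOPs
    ("400B params, 15T tokens", 400e9, 15e12),  # ~3.6e25 FLOPs
]

for label, p, t in examples:
    flops = training_flops(p, t)
    status = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{label}: ~{flops:.1e} FLOPs ({status} threshold)")
```

Under these assumptions, only very large training runs cross the threshold, which is consistent with the rule targeting the most powerful models.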
Compliance requirements, particularly for high-risk AI systems, are still being finalized. European standards bodies are expected to deliver detailed requirements by April 2025, after which the EU will review and endorse these standards.
OpenAI, the creator of ChatGPT, has indicated its commitment to working closely with EU authorities to align with the AI Act, offering guidance to developers on compliance.
What’s Next?
U.S. organizations using AI technologies that might be affected by the AI Act should start by classifying their AI systems under the new regulations. It's crucial to determine whether an application falls into the high-risk category and to prepare the necessary compliance measures. Consulting legal experts can help organizations navigate these complex requirements.
For the latest updates on global AI regulations and other technology news, follow TechHub.