# EU AI Act Guide
This guide provides context on the EU AI Act and how AIGovHub helps with compliance.
## What is the EU AI Act?
The EU Artificial Intelligence Act (AI Act) is the world's first comprehensive legal framework for AI. It:
- Entered into force on August 1, 2024
- Applies to AI systems placed on the EU market or used in the EU
- Establishes a risk-based approach to AI regulation
- Requires transparency, documentation, and human oversight
## Key Dates
| Date | Milestone |
|---|---|
| August 2024 | AI Act enters into force |
| February 2025 | Prohibited AI systems banned |
| August 2025 | General purpose AI obligations |
| August 2026 | High-risk AI requirements apply |
| August 2027 | Full enforcement |
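The phased timeline above can be checked programmatically. Below is a minimal sketch; the helper name is hypothetical, and the exact day within each month is an assumption based on commonly cited application dates:

```python
from datetime import date

# Milestones from the table above; exact days are assumptions
MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Prohibited AI systems banned"),
    (date(2025, 8, 2), "General purpose AI obligations"),
    (date(2026, 8, 2), "High-risk AI requirements apply"),
    (date(2027, 8, 2), "Full enforcement"),
]

def milestones_in_effect(today: date) -> list[str]:
    """Return every milestone that has already taken effect on a given date."""
    return [label for d, label in MILESTONES if d <= today]

print(milestones_in_effect(date(2025, 3, 1)))
# ['AI Act enters into force', 'Prohibited AI systems banned']
```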
## Risk Categories
The AI Act classifies AI systems into four risk categories:
### 1. Prohibited AI (Unacceptable Risk)
Banned entirely in the EU:
| Practice | Description |
|---|---|
| Social scoring | Government social scoring systems |
| Cognitive manipulation | Exploiting vulnerabilities of specific groups |
| Biometric categorization | Inferring sensitive attributes (e.g., race, political opinions) from biometric data |
| Real-time remote biometric identification | In publicly accessible spaces for law enforcement (with narrow exceptions) |
| Emotion recognition | In workplaces and educational institutions |
| Predictive policing | Based solely on profiling |
| Facial recognition databases | Untargeted scraping for facial recognition |
### 2. High-Risk AI (Annex III)
Subject to strict requirements:
| Domain | Examples |
|---|---|
| Biometrics | Remote biometric identification |
| Critical infrastructure | Safety components in transport, utilities |
| Education | Admission decisions, assessment scoring |
| Employment | CV screening, hiring decisions, task allocation |
| Essential services | Credit scoring, insurance risk assessment |
| Law enforcement | Risk assessment, evidence evaluation |
| Migration | Asylum processing, border control |
| Justice | Research assistance for judges |
| Democratic processes | Election influence detection |
High-risk obligations include:
- Risk management systems
- Data governance
- Technical documentation
- Human oversight
- Accuracy and robustness requirements
- Cybersecurity measures
### 3. Limited Risk (Transparency Obligations)
Requires disclosure to users:
| System Type | Requirement |
|---|---|
| Chatbots | Users must know they're interacting with AI |
| Emotion recognition | Users must be informed |
| Biometric categorization | Users must be informed |
| Deepfakes/synthetic content | Must be labeled as AI-generated |
### 4. Minimal Risk (No Specific Obligations)
Most AI systems fall here:
- Spam filters
- Video game AI
- Inventory management
- General recommendation systems
No AI-specific regulatory requirements apply, but general product safety laws still do.
## How AIGovHub Helps

### 1. AI System Discovery
AIGovHub automatically identifies AI systems in your codebase:
```shell
aigovhub scan /path/to/repository
```

This creates an inventory of AI systems - the first step for any compliance program.
### 2. Documentation Generation
The aigovhub.yaml artifact provides:
- Audit trail: Version-controlled record of AI systems
- Technical documentation: Source files, dependencies, model files
- Classification placeholder: Structure for risk categorization
### 3. CI/CD Integration
Continuous monitoring for new AI systems:
```yaml
# .github/workflows/ai-compliance.yml
on: [push]
jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install aigovhub-cli
      - run: aigovhub scan . --no-llm
      - run: aigovhub validate --strict
```

### 4. Future Risk Classification (Roadmap)
AIGovHub will help with:
- Automated risk category suggestions
- High-risk domain detection
- Compliance checklist generation
## Determining Risk Category

### Decision Tree
```
Is the AI system on the prohibited list?
├─ Yes → PROHIBITED (remove from EU market)
└─ No
   │
   Is it listed in Annex III (high-risk areas)?
   ├─ Yes → HIGH RISK (full compliance needed)
   └─ No
      │
      Does it interact directly with people?
      ├─ Yes → LIMITED RISK (transparency required)
      └─ No → MINIMAL RISK (no specific obligations)
```
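The decision tree above can be expressed as a small function. This is a sketch only - the boolean inputs are hypothetical simplifications, and a real classification requires legal review:

```python
def classify_risk(prohibited: bool, annex_iii: bool,
                  interacts_with_people: bool) -> str:
    """Walk the decision tree above, top to bottom."""
    if prohibited:
        return "prohibited"
    if annex_iii:
        return "high_risk"
    if interacts_with_people:
        return "limited_risk"
    return "minimal_risk"

# A CV-screening tool: not prohibited, but listed in Annex III (employment)
print(classify_risk(False, True, True))  # high_risk
```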
### Common AI System Classifications
| AI System | Typical Risk Category |
|---|---|
| Customer sentiment analysis | Minimal |
| Product recommendations | Minimal |
| Chatbot / virtual assistant | Limited |
| Document classification | Minimal |
| Fraud detection | High (if used for credit) |
| CV screening | High |
| Medical diagnosis | High |
| Facial recognition (access control) | High |
| Content moderation | Limited |
| Predictive maintenance | Minimal |
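As a rough starting point, the typical categories above could be encoded as a lookup. This is an illustrative sketch, not an AIGovHub feature - the keys and the fallback value are invented for this example, and context can change the category:

```python
# Illustrative mapping of the table above; anything unknown falls
# back to manual review rather than guessing a category.
TYPICAL_RISK = {
    "customer_sentiment_analysis": "minimal",
    "product_recommendations": "minimal",
    "chatbot": "limited",
    "document_classification": "minimal",
    "cv_screening": "high",
    "medical_diagnosis": "high",
    "facial_recognition_access_control": "high",
    "content_moderation": "limited",
    "predictive_maintenance": "minimal",
}

def suggested_category(system_type: str) -> str:
    """Return a suggested risk category, or flag the system for manual review."""
    return TYPICAL_RISK.get(system_type, "needs_manual_review")

print(suggested_category("chatbot"))         # limited
print(suggested_category("credit_scoring"))  # needs_manual_review
```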
### Recording Risk Classification
After determining the risk category, update your artifact:
```yaml
ai_systems:
  - id: "ai-001"
    name: "customer-chatbot"
    type: "llm_integration"
    detection_confidence: 0.95
    classification:
      risk_category: "limited_risk"  # Updated from null
      intended_purpose:
        description: "Customer support chatbot for FAQ handling"
        domain: "customer_service"
```

## High-Risk Compliance Requirements
If your AI system is classified as high-risk, you must:
### Technical Documentation
| Requirement | AIGovHub Support |
|---|---|
| General system description | name, type, intended_purpose |
| Design specifications | source.files, source.dependencies |
| Model information | model_card |
| Risk management | Manual addition required |
| Data governance | Manual addition required |
| Monitoring measures | Manual addition required |
### Ongoing Obligations
- Conformity assessment before market placement
- CE marking and EU declaration of conformity
- Quality management system
- Post-market monitoring
- Incident reporting to authorities
## Resources
### Standards
- ISO/IEC 42001 - AI Management System
- CycloneDX ML-BOM - ML Bill of Materials
## Disclaimer
This guide provides general information and should not be considered legal advice. For compliance decisions, consult with legal professionals familiar with the EU AI Act and your specific use cases.