FAQ AI Act 2025: The 10 Key Questions for Italian Companies
Introduction — The AI Act Gets Real: What Changes for Companies
The EU AI Act (Regulation (EU) 2024/1689), entering into application in stages between 2025 and 2027, is the world's first comprehensive legal framework for the responsible and safe use of artificial intelligence. Its risk-based approach (Articles 6 and 7) defines tiered obligations depending on the system's impact.
Here are the most frequently asked questions we receive from Italian companies, with updated answers based on the latest EU guidance.
1. Which Systems Are Classified as "High-Risk"?
According to Annex III of the AI Act, high-risk systems include those used in:
- HR/Recruiting (selection, evaluation, promotions)
- Finance (credit scoring, fraud detection, KYC)
- Healthcare (diagnosis, triage, medical devices)
- Education, law enforcement, critical infrastructure
- Note: obligations for high-risk systems apply from August 2, 2026; systems already on the market before that date are covered once they undergo significant design changes (Art. 111).
2. What Are the Penalties for Non-Compliance?
Administrative fines can reach up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices (Art. 99). These are among the highest in EU law, exceeding the GDPR maximums of €20 million or 4%.
3. How Should Companies Start Preparing?
- Gap analysis against the AI Act, mapping all AI systems in use
- AI governance policies: appoint a compliance officer, update data processing registers, activate internal audits
- Track key deadlines: obligations on AI literacy and GPAI start February–August 2025; high-risk systems from August 2026
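The first step above, mapping all AI systems in use, typically takes the form of an internal AI register. A minimal sketch of such a register in Python (all names, fields, and example systems here are hypothetical, not prescribed by the Regulation):

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class RiskTier(Enum):
    """Risk tiers following the AI Act's risk-based approach."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in the company's internal AI inventory."""
    name: str
    owner: str          # internal business owner (e.g., HR, Finance)
    purpose: str
    risk_tier: RiskTier
    vendor: Optional[str] = None
    dpia_done: bool = False

def high_risk_systems(inventory: List[AISystemRecord]) -> List[AISystemRecord]:
    """Filter the register for systems needing high-risk treatment (Art. 6 / Annex III)."""
    return [s for s in inventory if s.risk_tier is RiskTier.HIGH]

inventory = [
    AISystemRecord("cv-screening", "HR", "candidate ranking", RiskTier.HIGH, vendor="AcmeHR"),
    AISystemRecord("chatbot-faq", "Marketing", "customer FAQ", RiskTier.LIMITED),
]
print([s.name for s in high_risk_systems(inventory)])  # prints ['cv-screening']
```

A register like this makes the gap analysis concrete: every high-risk entry maps directly to a set of obligations and a deadline.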
4. What Is Post-Market Monitoring?
It means continuously monitoring the performance, safety, and potential incidents of AI systems after deployment (Art. 72). From August 2, 2026, providers of high-risk systems must log incidents, anomalies, and technical updates, and report serious incidents to the competent authorities (Art. 73).
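In practice, post-market monitoring starts with structured, timestamped incident records. A minimal sketch of such logging in Python (the function name, fields, and severity labels are illustrative assumptions, not terms from the Regulation):

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_post_market")
logging.basicConfig(level=logging.INFO)

def log_ai_incident(system_id: str, severity: str,
                    description: str, model_version: str) -> dict:
    """Append a structured incident record to the monitoring log.

    Records flagged as serious should additionally trigger the
    regulator-notification workflow (Art. 73).
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "severity": severity,      # e.g., "anomaly", "incident", "serious"
        "description": description,
        "model_version": model_version,
    }
    logger.info(json.dumps(record))
    return record

rec = log_ai_incident("credit-scoring-v2", "anomaly",
                      "score drift above threshold", "2.3.1")
```

Keeping records machine-readable (JSON here) simplifies the audits and reporting that the monitoring obligations require.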
5. Is Specific Training Required? Who Should Be Involved?
Yes. Article 4 requires providers and deployers to ensure a sufficient level of AI literacy and risk awareness among all roles involved in managing AI systems (e.g., Data Owners, IT, Legal, Compliance, HR). This obligation applies from February 2, 2025.
6. How to Address Bias and Fairness?
- Regular audits of models (technical and legal)
- Use fairness metrics (e.g., disparate impact, equalized odds)
- Implement logging and explainability (Arts. 12 and 13)
- Recommended tools: IBM AIF360, Google What-If Tool
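The fairness metrics mentioned above are straightforward to compute by hand before reaching for a full toolkit. A minimal sketch of disparate impact and the equalized-odds gap in plain NumPy (the group encoding 0 = unprivileged, 1 = privileged, and the sample data, are illustrative assumptions):

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of positive-outcome rates, unprivileged (0) over privileged (1).
    The common 'four-fifths rule' flags values below 0.8."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group difference in TPR or FPR (0 = perfectly fair)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for label in (1, 0):  # label 1 -> TPR comparison, label 0 -> FPR comparison
        mask = y_true == label
        r0 = y_pred[mask & (group == 0)].mean()
        r1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(disparate_impact(y_pred, group))  # prints 3.0
```

Values this far from 1.0 (disparate impact) or 0.0 (equalized-odds gap) are exactly what the regular audits in the list above are meant to surface.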
7. What’s the Impact on SMEs?
The regulation provides simplified regimes and dedicated support for SMEs and startups (Arts. 57 and 62), including simplified technical documentation, priority access to regulatory sandboxes, and tailored guidance.
8. How Does It Relate to the GDPR?
They are complementary: the AI Act applies alongside the GDPR, and high-risk AI systems must embed privacy-by-design principles (Arts. 5 and 25 GDPR). Where an AI system processes personal data with high risk to individuals, a data protection impact assessment (DPIA, Art. 35 GDPR) is also required.
9. Compliance Timelines: By When Must You Be Ready?
- Prohibited practices and AI literacy: from February 2, 2025
- GPAI and transparency obligations: from August 2, 2025
- High-risk systems: from August 2, 2026
- AI embedded in regulated products: from August 2, 2027
10. How to Verify AI Vendors' Compliance?
- Request updated technical documentation (Art. 11 and Annex IV)
- Demand audit trails, performance metrics, impact assessments
- Assess ISO/IEC 42001 certification, alignment with the NIST AI RMF, and CE marking under the AI Act
Conclusion & Resources
The AI Act is more than a compliance checklist: it’s a framework to strengthen governance, transparency, and reliability in AI systems — bringing both risks and competitive opportunities.
Try our free assessment to evaluate your company’s AI maturity and contact our experts for tailored support.