Navigating the EU AI Act: Compliance Strategies for Businesses
Tuesday, 19 August, 12:00–14:00 CEST
Online
The European Union has taken a historic step by formally adopting the AI Act — the world’s first comprehensive and binding legal framework regulating artificial intelligence. This groundbreaking legislation will shape how AI technologies are developed, marketed, and used across all EU Member States — and its ripple effects will be felt globally. Whether your business builds AI models, integrates third-party systems, or simply uses AI in daily operations — this law applies to you.
Unlike previous soft law initiatives or ethical guidelines, the AI Act introduces legally enforceable obligations and strict compliance requirements, grounded in a tiered, risk-based structure. The regulation defines and categorizes AI systems by the level of risk they pose to health, safety, and fundamental rights — from “unacceptable risk” (prohibited outright) to “high-risk” (subject to rigorous controls), “limited-risk” (subject to transparency and labeling rules), and “minimal-risk” applications (largely left unregulated).
This is not a niche regulation for Big Tech. The AI Act has broad scope and cross-sectoral application. It applies to companies of all sizes, including startups, SMEs, SaaS providers, platform operators, public sector institutions, and multinational conglomerates — whether they are located in the EU or operate within its digital market. It affects industries as diverse as healthcare, education, employment, finance, transport, law enforcement, and public administration.
The obligations cover the entire AI lifecycle — from data collection and model training to deployment and post-market monitoring. This means companies will need to implement new procedures for risk management, data governance, human oversight, technical documentation, conformity assessments, CE marking, transparency notifications, and more.
The AI Act is already in force, and key compliance deadlines are rapidly approaching. Certain provisions — such as the ban on prohibited practices, including manipulative and exploitative systems — took effect in February 2025. Obligations for providers and deployers of high-risk systems will apply from August 2026, but businesses are encouraged to begin compliance preparations immediately to avoid legal, financial, and reputational risks.
This webinar is a must-attend session for any organization that develops, integrates, or uses AI systems within the EU market. It will help you understand how the AI Act will affect your products, services, business model, and compliance strategy. You will gain clarity on your obligations — whether you're a provider, deployer, importer, or distributor — and what practical steps you need to take now to prepare.
If you are unsure how the AI Act applies to your company — or what it demands of your teams — this session will provide a comprehensive breakdown of the regulation’s key provisions, timelines, roles, and risks. The AI Act is not just a legal framework — it is a transformative shift in how AI is governed in the digital economy. The time to act is now. Join us.
What we’ll cover:
🔍 WHY IS THE EU REGULATING AI NOW?
Understand the drivers and goals of this landmark regulation:
- Addressing fundamental rights and safety risks
- Preventing market fragmentation and legal uncertainty
- Creating a harmonized internal market for trustworthy AI
- Supporting innovation — especially for SMEs and startups
📅 COMPLIANCE TIMELINE – WHAT’S COMING WHEN
Navigate key milestones to stay ahead:
- Prohibited Practices – banned from February 2025
- GPAI and AI Office obligations – start August 2025
- Full compliance for high-risk systems – required by August 2026
- Real-world testing, sandboxing, CE marking, and post-market monitoring – phased in 2025–2026
📌 FOUR CORE AREAS EVERY AI PLAYER MUST UNDERSTAND:
1. AI RISK CLASSIFICATION AND LEGAL TIERS
- Understand the four levels: prohibited, high-risk, limited-risk, and minimal-risk AI
- Which use cases fall into each category?
- Learn how this impacts your systems and products
2. HIGH-RISK AI SYSTEM OBLIGATIONS
- Applies to areas like employment, credit scoring, healthcare, education, and law enforcement
- Learn what's required: risk management, human oversight, data quality, technical documentation, and CE marking
- What happens if you're a deployer vs. a provider?
3. GPAI MODELS AND SYSTEMIC RISK
- New rules for general-purpose AI (GPAI) models and foundation models
- Special obligations if your model is considered to pose “systemic risk”
- Documentation, transparency, copyright compliance, and model governance essentials
4. THE ROLE OF THE NEW AI OFFICE
- A new EU-level watchdog for AI enforcement and coordination
- Powers of investigation, model evaluations, and issuing binding decisions
- How it supports and supervises Member States and GPAI providers
🚨 BIG PICTURE: WHAT THIS MEANS FOR YOUR BUSINESS
- Mandatory transparency and labeling for certain AI outputs
- Real-world testing and regulatory sandboxes — what’s allowed?
- New documentation and audit trails across the AI lifecycle
- Prepare for fines of up to €35 million or 7% of global annual turnover, whichever is higher
👥 Who should attend:
This session is critical for:
- AI and software product teams
- CTOs, CIOs, and digital transformation leads
- Compliance and legal professionals
- EU and global startups scaling AI products
- Tech advisors, consultants, and regulators
The AI Act is not just a legal text — it’s a paradigm shift for AI governance and business models. Get briefed, get ready, and stay compliant.