Webinar "Navigating the EU AI Act: Compliance Strategies for Businesses" + TOOLKIT
Tuesday, 19 August, 12:00–14:00 CEST
Online
The European Union has enacted its most far-reaching technology regulation to date: the Artificial Intelligence Act (AI Act). This landmark law is the world's first comprehensive AI regulation, and it will transform how AI is developed, deployed, and governed across the EU market.
This webinar is a must-attend for any organization that builds, buys, integrates, or relies on AI systems — whether you’re a provider, deployer, platform, SME, public sector institution, or enterprise innovator. These aren’t piecemeal updates — the AI Act introduces entirely new legal obligations tied to AI system risk levels, cross-border responsibilities, and product lifecycle management. And they are enforceable, with major penalties for non-compliance.
The regulation applies whether you are based in the EU or simply operate in it, and whether your AI is embedded in physical products, accessed via APIs, or used in services ranging from recruitment and credit scoring to healthcare and customer service.
AI compliance is no longer optional. The deadlines are set. Your systems will need to be classified, documented, tested, risk-mapped, registered, and, in many cases, CE-marked. A new EU-level AI Office will supervise general-purpose AI models, while Member State authorities oversee high-risk systems and roll out national sandboxes, audits, and databases.
Whether you’re a startup building with open-source models, a SaaS firm integrating general-purpose AI, or an enterprise optimizing internal workflows — this regulation affects you. And it comes with significant new roles and responsibilities for providers, deployers, and GPAI developers alike.
This webinar and toolkit combo is your strategic entry point into full compliance.
What we’ll cover:
🔍 WHY IS THE EU REGULATING AI NOW?
Get clear on the EU’s motivations and goals:
- Prevent legal fragmentation and promote trust in AI
- Protect fundamental rights, democracy, and safety
- Ensure transparency and accountability in high-risk AI
- Strengthen innovation pathways for startups and SMEs
- Become a global standard-setter for ethical AI
📅 AI ACT TIMELINE – WHEN KEY RULES START TO APPLY
Your rollout and readiness schedule (a simple date-tracking sketch follows this list):
- February 2025 – Prohibited practices banned (e.g. manipulative or discriminatory systems)
- August 2025 – GPAI transparency obligations apply and AI Office enforcement begins
- August 2026 – Full compliance required for most high-risk AI systems (AI embedded in regulated products has until August 2027)
- 2025–2026 – Regulatory sandboxes, CE marking, conformity assessments, real-world testing
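For teams that want to operationalize these dates, here is a minimal tracking sketch in Python. The MILESTONES table and the obligations_in_force helper are hypothetical names invented for illustration; the dates reflect the application dates listed above.

```python
from datetime import date

# Illustrative milestone table; MILESTONES and obligations_in_force are
# hypothetical names for this sketch, not terms defined by the AI Act.
MILESTONES = {
    date(2025, 2, 2): "Prohibited practices banned",
    date(2025, 8, 2): "GPAI transparency obligations apply; AI Office enforcement begins",
    date(2026, 8, 2): "Full compliance for most high-risk AI systems",
}

def obligations_in_force(today: date) -> list[str]:
    """Return every milestone whose application date has already passed."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

# Example: what already applies at the start of 2026?
print(obligations_in_force(date(2026, 1, 1)))
```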
📌 FOUR PILLARS OF THE AI ACT EVERY BUSINESS MUST UNDERSTAND:
1. AI SYSTEM RISK CLASSIFICATION
- What counts as "unacceptable", "high-risk", "limited-risk", or "minimal-risk"?
- See how systems used in HR, finance, healthcare, and law enforcement are classified
- Learn to evaluate the AI lifecycle: from training to post-market monitoring (a first-pass triage sketch follows these four pillars)
2. PROVIDER VS. DEPLOYER OBLIGATIONS
- Understand the legal difference between developing and using an AI system
- What you're responsible for: documentation, transparency, oversight, and risk controls
- Special rules for importers, distributors, and third-party integration
3. GENERAL-PURPOSE AI MODELS (GPAI) AND SYSTEMIC RISK
- Key transparency and copyright requirements for GPAI providers
- When GPAI models are designated as posing "systemic risk"
- What to disclose, document, and report — even for open-source models
4. THE EU AI OFFICE – SUPERVISION, INVESTIGATION, AND GUIDANCE
- Meet the new central enforcement authority for GPAI oversight
- Powers include compliance evaluations, risk audits, model documentation reviews
- How it supports cross-border enforcement and national implementation
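As referenced under pillar 1, below is a minimal first-pass triage sketch in Python. The four tier names mirror the Act's risk levels, but RiskTier, EXAMPLE_USE_CASES, and triage are illustrative assumptions: real classification requires legal analysis against the Act's annexes and the context of use, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "full conformity obligations (documentation, CE marking, registration)"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Hypothetical examples only; actual classification depends on the Act's
# annexes and the context of use, not on a keyword match.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "credit scoring of natural persons": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """First-pass triage; anything unknown defaults to HIGH pending legal review."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)

print(triage("CV screening for recruitment"))  # RiskTier.HIGH
```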
🚨 BIG PICTURE: WHAT THIS MEANS FOR YOUR ORGANIZATION
- Legal exposure for non-compliant AI systems, with fines up to €35M or 7% of global annual turnover, whichever is higher (a worked example follows this list)
- Documentation and technical audits become mandatory for high-risk uses
- CE marking and conformity assessments required for critical use cases
- Deployers must inform users, train staff, and assess fundamental rights impacts
- GPAI model creators must track training data summaries, copyright risk, and downstream use
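To make the penalty ceiling concrete, here is a short worked example (max_fine_eur is an illustrative name, not anything from the Act). For the most serious infringements the cap is EUR 35 million or 7% of total worldwide annual turnover, whichever is higher, so at EUR 1 billion in turnover the 7% figure governs.

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Penalty ceiling for the most serious infringements:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_worldwide_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70 million > EUR 35 million.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # -> 70,000,000
```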
👥 Who should attend:
This session is essential for:
- CTOs, CIOs, CDOs and product leaders
- Heads of AI, data science, and ML operations
- In-house legal and compliance teams
- Software, SaaS, platform, and embedded AI developers
- Startups and scaleups operating in regulated sectors
- Public sector institutions and digital service providers
- Technology consultants and digital transformation advisors
Whether you're developing AI models, integrating third-party systems, or managing compliance — this is the regulatory moment you can’t ignore.
PLUS: YOUR AI ACT COMPLIANCE TOOLKIT INCLUDES:
✔️ Legal Summary & Annotated Regulation – Get the full AI Act with expert annotations and business-friendly summaries
✔️ High-Risk System Checklists – Step-by-step guides for identifying, classifying, documenting, and registering high-risk AI systems
✔️ Deployment Readiness Templates – Rights impact assessment templates, model documentation sheets, deployer duty checklists
✔️ Internal Training Slide Deck – Pre-made slides to onboard legal, tech, data, and executive teams with clear roles and responsibilities
✔️ Timeline & Roadmap – Track what’s mandatory now vs. what’s coming in 2025 and 2026
🛡️ Why this matters now:
By the time the full compliance deadline arrives in August 2026, it will be too late to start preparing. Documentation, technical controls, internal awareness, model evaluations, and vendor due diligence all take time — and your competitors are already adapting.
This webinar + toolkit is your fastest, clearest route to strategic compliance — and a way to show customers, partners, and regulators that your business is AI-ready, responsible, and resilient.