The EU AI Act is coming. Is your team prepared?

We train management, legal, compliance, and tech teams in responsible AI, fairness testing, and EU AI Act requirements. From awareness to deep dive. We want you to be able to use the EU AI Act to your advantage.

Even the best compliance strategy will fail if teams don't understand what needs to be done. We bring Responsible AI, the EU AI Act, and practical tools to your teams: hands-on, interactive, and in German.

In 4 hours to 2 days, we provide the knowledge your teams need, from C-level overviews to hands-on bias testing for data scientists.

⏱️ 4 hours – 2 days | 👥 Up to 25 participants | 💰 From €2,000 | 📍 Remote or on-site

✓ 25+ years of experience in training and change management | ✓ Practical examples from real projects | ✓ Interactive (no lectures) | ✓ Materials in German | ✓ 4 weeks of post-workshop support

Why Responsible AI workshops for compliance projects?

The biggest hurdle to successful AI compliance is not a lack of technology or budget; it is a lack of understanding. Different stakeholders need different knowledge, speak different languages, and have different priorities.

Compliance requires expertise

📜 The EU AI Act requires not only processes, but also competent people:

  • Art. 4 (AI Literacy): All stakeholders must understand AI
  • Art. 9: Risk management requires trained risk officers
  • Art. 14: Human oversight requires qualified operators

Without training: Processes remain paper tigers
With training: Teams can implement independently

Internal expertise is cheaper

💰 External consultants are expensive

ROI calculation:

  • Workshop: €5,000 (one-time fee)
  • Empowers 15 employees
  • Saves 50-100 hours of external consulting per year (€15k-€50k)

Plus: Internal expertise = faster decisions, less dependency

Change management

🧠 AI compliance means change, and people need to be brought along:

Without workshops:

  • "Even more compliance bureaucracy" (resistance)
  • "Why is that important?" (incomprehension)
  • "That doesn't concern me" (lack of ownership)

With workshops:

  • Understanding "why" (acceptance)
  • Practical tools (self-efficacy)
  • Common language (team alignment)

Three Responsible AI workshops for different stakeholders

We do not offer "one-size-fits-all" training courses. Our workshops are modular, target-group-specific, and practice-oriented. Each stakeholder receives the knowledge they need for their role, in their language and at their level.

Executive Briefing
(Half-Day, 4-6 hours)

Participants:

C-level, board, senior management

Objective:

Strategic understanding: What is the EU AI Act, what does it mean for us, what do we want to do?

Contents:

EU AI Act Overview

  1. What is the AI Act? (Goals, structure, timeline)
  2. Alignment with corporate goals
  3. Risk categories (Prohibited, High-Risk, Limited, Minimal)
  4. What applies to us? (Quick Assessment)

Business Impact & Strategy

  1. Compliance requirements for our systems
  2. Costs vs. risks (ROI of compliance)
  3. Competitive advantage through responsible AI

Roadmap & Next Steps

  1. What needs to happen by when?
  2. Who is responsible?
  3. Budgets & Resources
  4. Q&A

Deliverables:

  • Workshop slides (PDF)
  • Documented Miro boards/whiteboards
  • Executive Summary (5 pages)
  • High-level roadmap
  • 2 weeks of post-workshop support (email/calls)

Responsible AI Fundamentals
(Full Day, 8 hours)

Participants:

Product managers, legal, compliance, risk officers, AI project managers

Objective:

Practical understanding: How do we implement the EU AI Act and Responsible AI?

Contents:

Responsible AI Fundamentals

  1. What is Responsible AI? (Fairness, Transparency, Accountability)
  2. EU AI Act Deep Dive (most important articles)
  3. Case studies (Apple Card, Amazon HR Tool)

Fairness & Bias

  1. What is bias? (Types, examples)
  2. How do you measure fairness? (Overview of metrics)
  3. Hands-on: Calculating Disparate Impact (Excel Exercise)
  4. Mitigation strategies

Transparency & Explainability

  1. What does Article 13 require? (Transparency obligations)
  2. Explainability methods (SHAP, LIME overview)
  3. Creating model cards (template exercise)

Implementation & Governance

  1. How do you build an AI governance framework?
  2. Roles & Responsibilities
  3. Documentation requirements
  4. Monitoring & Post-Market Surveillance

Workshop & Action Planning

  1. Group exercise: Roadmap for our company
  2. Q&A
  3. Next Steps

Deliverables:

  • Workshop slides (PDF)
  • Excel templates (bias calculation, risk assessment)
  • Model Card Template
  • Checklists (data governance, documentation)
  • Case study collection (10 documented bias cases)
  • 4 weeks of post-workshop support

Technical Deep Dive
(Two Days, 16 hours)

Participants:

Data scientists, ML engineers, AI developers

Objective:

Hands-on expertise: fairness testing, explainability tools, mitigation code

Contents:

Part 1: Fairness & Bias Testing

Fairness Theory & Metrics

  • 70+ fairness metrics (which ones when?)
  • Trade-offs (demographic parity vs. equal opportunity)
  • When to use what

Hands-on: IBM AIF360

  • Setup and installation
  • Dataset loading
  • Calculate bias metrics
  • Coding exercise (own data)

Bias mitigation

  • Pre/in/post-processing strategies
  • AIF360 Mitigation Algorithms
  • Trade-off analyses (fairness vs. accuracy)

Fairlearn

  • Constraint-based optimization
  • Grid search for fair models
  • Comparison of AIF360 vs. Fairlearn

Part 2: Explainability & Data Governance

SHAP Deep Dive

  • Shapley Values explained
  • SHAP Installation & Usage
  • Global vs. Local Explanations

Hands-on: SHAP

  • SHAP Summary Plots
  • SHAP Force Plots
  • Feature Interactions
  • Coding Exercise

Data Quality with Great Expectations

  • Data governance requirements (EU AI Act Art. 10)
  • Great Expectations Framework
  • Automated Data Testing

Integration & Best Practices

  • How do you integrate fairness testing into CI/CD?
  • Monitoring setup (Alibi Detect overview)
  • Production-ready workflows
  • Q&A & Closing

Deliverables:

  • Workshop slides (PDF)
  • Jupyter Notebooks (all exercises, executable)
  • Code repository (GitHub)
  • Data samples (for exercises)
  • Tool Installation Guides
  • Best practice documentation
  • 4 weeks of post-workshop support (including code reviews)

💡 Need something more specific?

We can customize workshops:

  • Industry-specific (finance, healthcare, HR)
  • Use case-focused (credit scoring only, chatbots only)
  • Multi-level (different target groups, different days)
  • In English (if desired)

Examples of detailed workshop agendas

Here you will find examples of the complete agendas for the three workshop formats:

Executive Briefing (half-day)

09:00 – 10:30 | EU AI Act overview (90 min)

What is the EU AI Act? (30 min)

  • History & political context (why does it exist?)
  • Objectives: Harmonization, fundamental rights, innovation
  • Structure: Titles, chapters, articles, annexes
  • Timeline: 2024-2027, important deadlines

Understanding risk categories (30 min)

  • Prohibited AI (Art. 5): What is prohibited?
  • High-risk AI (Art. 6 + Annex III): The 8 areas in detail
  • Limited risk AI (Art. 50): Transparency obligations
  • Minimal risk AI: What is outside the scope?

Quick assessment: What applies to us? (30 min)

  • Interactive exercise: Classify your systems
  • Which ones fall under Annex III?
  • Initial assessment: High risk yes/no?
  • Q&A

10:30 – 10:45 | Break (15 min)

10:45 – 12:00 | Business Impact & Strategy (75 min)

Compliance requirements for your systems (30 min)

  • If high risk: What are the obligations? (Overview of Articles 9-15)
  • Risk Management, Data Governance, Transparency, Human Oversight
  • Documentation & Record-Keeping
  • Conformity Assessment & CE Marking

Costs vs. risks: ROI of compliance (20 min)

  • Costs: assessment, testing, monitoring, documentation
  • Risks: fines of up to €35 million or 7% of global annual turnover, damage to reputation, product recall
  • ROI calculation: When does compliance pay for itself?

Competitive advantage through responsible AI (25 min)

  • Early compliance = market differentiation
  • Trust as an asset (customers, investors, regulators)
  • Case studies: Who benefits from responsible AI?

12:00 – 13:00 | Roadmap & next steps (60 min)

What needs to happen by when? (20 min)

  • EU AI Act Timeline: 2024-2027
  • Your critical deadlines
  • Phases: Assessment β†’ Testing β†’ Implementation β†’ Monitoring

Who is responsible? (15 min)

  • Governance structure: AI Officer, Risk Committee
  • Roles & responsibilities (legal, compliance, tech, product)
  • RACI matrix for AI compliance

Budgets & resources (10 min)

  • Typical costs (assessment, testing, monitoring)
  • Build vs. buy (internal capacity vs. external consultants)
  • Resource planning (FTE, budget)

Q&A & Next Steps (15 min)

  • Open questions
  • Concrete actions for the next 4 weeks
  • Follow-up: Who does what by when?

Responsible AI Fundamentals (full day)

09:00 – 10:30 | Responsible AI Fundamentals (90 min)

What is Responsible AI? (30 min)

  • Definition & Core Principles (Fairness, Transparency, Accountability)
  • Why is it important? (ethical, legal, business)
  • History: From OECD Principles to EU AI Act

EU AI Act Deep Dive (40 min)

  • Structure & key articles (6, 9, 10, 13, 14, 15, 72-73)
  • High-risk categories (Annex III in detail)
  • Obligations for providers & deployers
  • Sanctions & enforcement

Case studies: When AI goes wrong (20 min)

  • Apple Card gender bias (2019)
  • Amazon HR Tool (2018)
  • COMPAS Recidivism Algorithm
  • Lessons Learned

10:30 – 10:45 | Break

10:45 – 12:30 | Fairness & Bias (105 min)

What is bias? (30 min)

  • Types: Data Bias, Algorithmic Bias, User Bias
  • Where does bias arise? (Training data, features, labels, model)
  • Examples from different domains

Understanding fairness metrics (35 min)

  • Demographic parity (equal approval rates)
  • Equal opportunity (equal true positive rates)
  • Equalized odds (equal error rates)
  • Trade-offs between metrics (you can't fulfill them all!)
  • Which metric when? (Depending on the use case)

Hands-on: Calculating disparate impact (40 min)

  • Excel exercise with sample data
  • Calculation: selection rate ratio (see the sketch after this list)
  • Interpretation: a ratio below 0.80 is problematic (80% rule)
  • Group work: Analyze your use cases
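
For orientation, here is a minimal Python sketch of the same four-fifths-rule calculation that the Excel exercise walks through; the column names and sample data are illustrative placeholders.

    import pandas as pd

    # Illustrative placeholder data: one row per applicant,
    # "selected" = 1 if the favorable decision was made.
    df = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "selected": [1,   1,   1,   0,   1,   0,   0,   0],
    })

    # Selection rate per group, then the ratio of the lowest to the highest rate.
    rates = df.groupby("group")["selected"].mean()
    disparate_impact = rates.min() / rates.max()

    print(rates)
    print(f"Disparate impact ratio: {disparate_impact:.2f}")
    # Below 0.80, the four-fifths rule flags the outcome as potentially discriminatory.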

12:30 – 13:30 | Lunch break

13:30 – 15:00 | Transparency & Explainability (90 min)

What does Art. 13 (transparency) require? (20 min)

  • Transparency requirements for high-risk systems
  • What must be communicated? (To whom?)
  • User information requirements

Overview of explainability methods (30 min)

  • Black box vs. glass box models
  • Post-hoc explanations (SHAP, LIME)
  • Global vs. local explanations
  • Feature Importance Rankings

Hands-on: Creating Model Cards (40 min)

  • What is a model card? (Google/TensorFlow standard)
  • Template exercise: Model card for your system
  • Sections: Intended Use, Training Data, Performance, Limitations
  • EU AI Act Compliance: Model Card = Part of Technical Documentation

15:00 – 15:15 | Break

15:15 – 16:45 | Implementation & Governance (90 min)

Building an AI Governance Framework (30 min)

  • Governance structures (AI board, risk committee)
  • Roles & Responsibilities (who does what?)
  • Policies & processes (risk assessment, approval, monitoring)

Documentation requirements (25 min)

  • What needs to be documented? (Articles 11, 12, 13)
  • Technical documentation (extensive!)
  • Logs & Records (Art. 12: automatic logging)
  • Templates & Checklists (practical examples)

Monitoring & Post-Market Surveillance (35 min)

  • Art. 72: Post-market monitoring obligation
  • What to monitor? (Performance, fairness, drift)
  • How to monitor? (tools, processes, frequency)
  • Incident reporting (Art. 73: "Serious Incidents")

16:45 – 17:30 | Workshop & Action Planning (45 min)

Group exercise: Create a roadmap (30 min)

  • Teams: Outline a roadmap for your company
  • What are your first 3 steps?
  • Timeline: Who does what by when?
  • Presentation & Feedback

Q&A & next steps (15 min)

  • Open questions
  • Resources & further reading
  • Post-workshop support

Technical Deep Dive (two days)

DAY 1: Fairness & Bias Testing

09:00 – 10:30 | Fairness Theory & Metrics (90 min)

Deep dive: 70+ fairness metrics (45 min)

  • Group Fairness: Demographic Parity, Equal Opportunity, Equalized Odds
  • Individual Fairness: similar people → similar outcomes
  • Calibration: Predicted probabilities = actual probabilities
  • Trade-offs: Why it is impossible to satisfy all metrics simultaneously
  • Impossibility Theorems (Kleinberg et al.)

Which metric when? (45 min)

  • Credit scoring: Equal opportunity often important
  • HR recruiting: Demographic parity for diversity
  • Medical Diagnosis: Calibration critical
  • Decision Framework: use case → stakeholder → metric

10:30 – 10:45 | Break

10:45 – 12:30 | Hands-on: IBM AIF360 (105 min)

Setup & Installation (15 min)

  • Python Environment Setup
  • pip install aif360
  • Dependencies & Troubleshooting

AIF360 Basics (30 min)

  • Dataset Loading (StandardDataset, BinaryLabelDataset)
  • Calculating Metrics (ClassificationMetric)
  • Code example: disparate impact, equal opportunity (see the sketch after this list)
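
A minimal sketch of how these dataset-level metrics can be computed with AIF360; the data, protected attribute, and group definitions are placeholders, and equal opportunity additionally needs model predictions via ClassificationMetric.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Placeholder data: "sex" is the protected attribute (1 = privileged group),
    # "label" is the favorable outcome (1 = approved).
    df = pd.DataFrame({
        "sex":    [1, 1, 1, 0, 0, 0, 1, 0],
        "income": [50, 60, 55, 40, 45, 38, 70, 52],
        "label":  [1, 1, 0, 0, 1, 0, 1, 0],
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=["label"],
        protected_attribute_names=["sex"],
        favorable_label=1,
        unfavorable_label=0,
    )

    metric = BinaryLabelDatasetMetric(
        dataset,
        privileged_groups=[{"sex": 1}],
        unprivileged_groups=[{"sex": 0}],
    )

    print("Disparate impact:", metric.disparate_impact())
    print("Statistical parity difference:", metric.statistical_parity_difference())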

Coding exercise: Your own bias test (60 min)

  • Sample dataset or your data (anonymized)
  • Calculate bias metrics
  • Interpreting the results
  • Troubleshooting & Q&A

12:30 – 13:30 | Lunch break

13:30 – 15:00 | Bias mitigation (90 min)

Mitigation strategies overview (20 min)

  • Pre-processing (data level): reweighting, sampling
  • In-Processing (Training Level): Prejudice Remover, Adversarial Debiasing
  • Post-processing (prediction level): calibrated equalized odds, reject option

Hands-on: AIF360 Mitigation Algorithms (50 min)

  • Implementing reweighing (see the sketch after this list)
  • Before/after comparison
  • Trade-off analysis: Fairness improvement vs. accuracy drop
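
A rough sketch of the before/after comparison, reusing the placeholder dataset and group definitions from the AIF360 sketch above:

    from aif360.algorithms.preprocessing import Reweighing
    from aif360.metrics import BinaryLabelDatasetMetric

    privileged   = [{"sex": 1}]
    unprivileged = [{"sex": 0}]

    # Reweighing only adjusts instance weights; features and labels stay untouched.
    rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
    dataset_rw = rw.fit_transform(dataset)

    before = BinaryLabelDatasetMetric(dataset,    unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    after  = BinaryLabelDatasetMetric(dataset_rw, unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)

    print("Mean difference before:", before.mean_difference())
    print("Mean difference after: ", after.mean_difference())

The trade-off analysis then retrains the model on the reweighted data and compares accuracy before and after.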

Best practices (20 min)

  • When to use which mitigation?
  • Consider the business context
  • Communicating trade-offs to stakeholders

15:00 – 15:15 | Break

15:15 – 17:00 | Fairlearn (105 min)

Why Fairlearn? (15 min)

  • Microsoft's approach: constraint-based optimization
  • Comparison of AIF360 vs. Fairlearn
  • When to use which?

Hands-on: Grid Search for Fair Models (60 min)

  • Fairlearn installation
  • GridSearch with fairness constraints (see the sketch after this list)
  • Pareto front visualization (fairness vs. accuracy)
  • Model selection: Which trade-off is acceptable?
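
A compact sketch of the Fairlearn sweep on synthetic placeholder data; the model choice, grid size, and feature construction are illustrative.

    import numpy as np
    from fairlearn.reductions import GridSearch, DemographicParity
    from fairlearn.metrics import MetricFrame, selection_rate
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Synthetic placeholder data: X = features, A = sensitive feature, y = label.
    rng = np.random.RandomState(0)
    X = rng.normal(size=(500, 3))
    A = rng.randint(0, 2, size=500)
    y = ((X[:, 0] + 0.8 * A + rng.normal(scale=0.5, size=500)) > 0.5).astype(int)

    # Sweep over models that trade accuracy against a demographic-parity constraint.
    sweep = GridSearch(LogisticRegression(max_iter=1000),
                       constraints=DemographicParity(), grid_size=10)
    sweep.fit(X, y, sensitive_features=A)

    for i, model in enumerate(sweep.predictors_):
        y_pred = model.predict(X)
        mf = MetricFrame(metrics=selection_rate, y_true=y, y_pred=y_pred,
                        sensitive_features=A)
        print(f"Model {i}: accuracy={accuracy_score(y, y_pred):.3f}, "
              f"selection-rate gap={mf.difference():.3f}")

Plotting accuracy against the selection-rate gap for each candidate model gives the Pareto front used for model selection.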

Coding exercise: Your use case (30 min)

  • Applying Fairlearn to your data
  • Create a trade-off matrix
  • Formulate recommendations for stakeholders

DAY 2: Explainability & Data Governance

09:00 – 10:30 | SHAP Deep Dive (90 min)

Shapley Values explained (30 min)

  • Cooperative game theory basics
  • Why Shapley Values are fair
  • From Game Theory to ML Explainability

SHAP Installation & Setup (15 min)

  • pip install shap
  • Model Compatibility (Tree-based, Neural Networks, Linear)

SHAP in practice (45 min)

  • Global explanations: summary plot, feature importance
  • Local Explanations: Force Plot, Waterfall Plot
  • Feature Interactions: Dependence Plots
  • Code examples for different model types (see the sketch after this list)
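
For reference, a minimal SHAP sketch on a tree-based regression model; the dataset and model are stand-ins for the examples used in the workshop.

    import numpy as np
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Stand-in model: a random forest on the scikit-learn diabetes dataset.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer is the fast path for tree-based models.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Global explanation: which features drive predictions across the whole dataset?
    shap.summary_plot(shap_values, X)

    # Local explanation: why did the model score the first case the way it did?
    base_value = float(np.ravel(explainer.expected_value)[0])
    shap.force_plot(base_value, shap_values[0, :], X.iloc[0, :], matplotlib=True)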

10:30 – 10:45 | Break

10:45 – 12:30 | Hands-on: SHAP (105 min)

Coding Exercise 1: SHAP Summary Plots (35 min)

  • Load sample model (Scikit-Learn)
  • Calculate SHAP values
  • Create and interpret summary plot

Coding exercise 2: SHAP force plots (35 min)

  • Explain individual predictions
  • Visualize force plot
  • Counterfactual analysis: What would need to change?

Coding Exercise 3: Your Models (35 min)

  • Apply SHAP to your production models (if available)
  • Extract insights
  • Deliverable: Explanation report for stakeholders

12:30 – 13:30 | Lunch break

13:30 – 15:00 | Data quality with Great Expectations (90 min)

EU AI Act Art. 10: Data governance (20 min)

  • "Relevant, representative, free of errors, complete"
  • What does that mean in technical terms?
  • Measurable metrics for every requirement

Great Expectations Framework (30 min)

  • What is Great Expectations?
  • Installation & setup
  • Define expectations (e.g., "Missing Values <5%")
  • Automated testing in pipelines

Hands-on: Data Quality Tests (40 min)

  • Load sample dataset
  • Create expectations
  • Run validation suite (see the sketch after this list)
  • Generate data quality scorecard
  • Failing tests: What to do?
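
A minimal sketch using the classic pandas-based Great Expectations API (newer releases use a different, context-based API); the file name, columns, and thresholds are placeholders.

    import great_expectations as ge
    import pandas as pd

    # Placeholder training data; in the workshop this is the sample dataset.
    df = ge.from_pandas(pd.read_csv("training_data.csv"))

    # Art. 10-style requirements expressed as testable expectations.
    df.expect_column_values_to_not_be_null("age", mostly=0.95)            # <5% missing
    df.expect_column_values_to_be_between("age", min_value=18, max_value=100)
    df.expect_column_values_to_be_in_set("gender", ["male", "female", "other"])

    results = df.validate()
    print("All expectations passed:", results["success"])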

15:00 – 15:15 | Break

15:15 – 17:00 | Integration & best practices (105 min)

CI/CD integration (30 min)

  • Fairness tests in deployment pipeline
  • Automated testing: Pre-deployment checks
  • Fail-safe: Deployment stop in case of critical bias
  • Code examples (Jenkins, GitLab CI, GitHub Actions); see the sketch after this list
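
One way to wire a fairness check into any of these CI systems is a small pytest-style gate that fails the pipeline when a bias threshold is breached; the file path, columns, and threshold below are illustrative.

    # test_fairness_gate.py - runs as a pre-deployment check in Jenkins,
    # GitLab CI, or GitHub Actions; a failing assert blocks the deployment.
    import pandas as pd

    DISPARATE_IMPACT_FLOOR = 0.80                         # four-fifths rule
    SCORED_DATA = "artifacts/holdout_predictions.csv"     # placeholder path


    def disparate_impact(df, group_col, pred_col):
        rates = df.groupby(group_col)[pred_col].mean()
        return rates.min() / rates.max()


    def test_disparate_impact_above_threshold():
        df = pd.read_csv(SCORED_DATA)
        di = disparate_impact(df, group_col="gender", pred_col="prediction")
        assert di >= DISPARATE_IMPACT_FLOOR, (
            f"Disparate impact {di:.2f} is below {DISPARATE_IMPACT_FLOOR}; "
            "deployment blocked until the bias is investigated."
        )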

Monitoring setup (Alibi Detect overview) (25 min)

  • Post-deployment: Continuous monitoring required
  • Alibi Detect for drift detection (see the sketch after this list)
  • Alert setup for performance/fairness degradation
  • Link to monitoring service
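
A small sketch of drift detection with Alibi Detect's KSDrift detector; the reference and production windows are simulated here.

    import numpy as np
    from alibi_detect.cd import KSDrift

    # Reference window: data the model was trained and validated on.
    x_ref = np.random.RandomState(0).normal(loc=0.0, size=(1000, 5))

    # Production window: simulated here with a shifted distribution.
    x_prod = np.random.RandomState(1).normal(loc=0.5, size=(200, 5))

    detector = KSDrift(x_ref, p_val=0.05)   # Kolmogorov-Smirnov test per feature
    result = detector.predict(x_prod)

    print("Drift detected:", bool(result["data"]["is_drift"]))
    print("Feature-level p-values:", result["data"]["p_val"])

In production, the same call runs on each monitoring window and feeds the alerting for performance and fairness degradation described above.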

Production-ready workflows (25 min)

  • From notebook to production code
  • Best practices: logging, error handling, documentation
  • Team workflows: data scientists ↔ ML engineers

Q&A and conclusion (25 min)

  • Open questions
  • Next steps for your projects
  • Post-workshop support (4 weeks, including code reviews)
  • Resources & Community

Who would benefit from Responsible AI workshops?

✅ Workshops make sense when:

  • You develop or operate AI systems
  • The EU AI Act affects you (high-risk systems)
  • Teams have little AI/compliance experience
  • Change management is required (new processes)
  • The budget for external consultants is limited

✅ Ideal for:

  • Medium-sized companies (50-500 employees) – the perfect size
  • Startups (AI-focused) – establish the culture early on
  • Corporations (>1,000 employees) – multiple waves, different teams
  • Public sector – administrative digitization

❌ Less useful if:

  • Teams are already AI experts
  • Only 1-2 people are affected
  • No AI systems in the foreseeable future

Typical company situations

Situation 1: "We're starting with AI"
→ Executive briefing (management buy-in)
→ Fundamentals (product/engineering)
→ Establish the culture from the outset

Situation 2: "The EU AI Act is coming, we need to become compliant"
→ Fundamentals (compliance/legal)
→ Technical deep dive (data science)
→ Everyone knows what needs to be done

Situation 3: "We have AI, but little expertise in fairness"
→ Technical deep dive (data scientists)
→ Learn the tools hands-on
→ Independent implementation becomes possible

Situation 4: "We are growing and need an onboarding process"
→ Ongoing education package
→ Quarterly workshops for new employees
→ Consistent knowledge transfer

Frequently asked questions about Responsible AI workshops

Are the workshops held remotely or on-site?

Both are possible; the choice is yours.

Remote (Zoom/Teams):
Advantages: More affordable, more flexible, more participants possible
Disadvantages: Less interaction, distractions
Recommended for: Executive briefings, follow-ups

On-site (at your location):
Advantages: More interaction, better team building, more focused
Disadvantages: More expensive (travel costs), room setup required
Recommended for: Fundamentals, technical deep dive

Hybrid:
Also possible (e.g., management remote, tech on-site)

Can the workshops be customized?

Yes, absolutely.

The standard workshops cover about 80% of the content; we customize the remaining 20%:

  • Your industry (finance, healthcare, etc.)
  • Your use cases (credit scoring, HR, etc.)
  • Your challenges (specific pain points)
  • Your language (German/English)

Custom content development:

  • €150/hour for new slides/exercises
  • Typically 4-8 hours for significant customization
  • Let's discuss this in a preliminary meeting.

Example:
"Fundamentals workshop, but only for the HR recruiting use case"
→ 4 hours of custom development (€600) + workshop fee

What do we need to provide?

Minimum:

  • Room (with projector/screen, for on-site)
  • List of participants (names, roles)
  • Your use cases/systems (rough description)

Ideal (for technical workshops):

  • Laptops for participants (Python installed)
  • Access to your AI systems (for realistic exercises)
  • Sample data (anonymized, for hands-on purposes)

We bring:

  • All slides
  • Templates/Checklists (digital)
  • Exercise materials
  • For technical workshops: pre-configured notebooks

Book your Responsible AI Workshop

💬 Consultation (30 min)

We will discuss your situation and recommend a suitable format.

Free of charge & non-binding

💼 Request a workshop directly

Briefly describe your needs, and we will contact you promptly to discuss options and next steps (non-binding inquiry).