Comprehensive responsible AI expertise (technical, regulatory, business strategy) for your AI success
We combine technical excellence in responsible AI with a deep understanding of the EU AI Act, standards, and policy approaches—for compliance that works.
Responsible AI requires both technical expertise in testing, mitigation, and monitoring AND a deep understanding of the regulatory landscape.
We cover the entire spectrum—from bias detection algorithms to deriving EU AI Act requirements from fundamental principles.
What this means for you: AI compliance that exists not just on paper, but is actually implemented and remains verifiable.
✓ Doctor of Business Administration in Data Science | ✓ 25+ years of digitalization experience | ✓ 11-tool RAI toolchain | ✓ Comprehensive knowledge of the EU AI Act and ISO 42001 | ✓ Member: AI Federal Association, BVMW e.V.
Our core competencies: AI, technology, and regulation
Successful AI compliance rests on two pillars. We have mastered both—and, above all, their interaction.
🛠️ From testing to monitoring
We test, correct, and monitor AI systems for fairness, transparency, and robustness—using scientifically validated methods and production-ready tools.
1. Testing (analysis & diagnosis)
We test for …
- Fairness & bias (70+ metrics) in algorithms and training data
- Explainability (why does the system make decisions?)
- Data quality (is training data representative, complete, error-free?)
- Robustness (how does the system react to unexpected inputs?)
How do we test?
- Quantitative metrics (disparate impact, equal opportunity, etc.; see the sketch below)
- Qualitative analyses (root cause for bias)
- Subgroup analyses (intersectional fairness)
- Benchmark against best practices
Output:
Detailed test reports with specific findings, severity ratings, and recommendations
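To make the quantitative metrics above concrete: a minimal sketch computing disparate impact and the equal opportunity gap with plain pandas. Column names, data, and encoding (1 = favorable decision) are illustrative; client projects use dedicated fairness libraries.

```python
import pandas as pd

# Illustrative decisions for two demographic groups (1 = favorable outcome).
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 0, 0, 1, 1, 0],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Disparate impact: ratio of favorable-outcome rates between groups.
rates = df.groupby("group")["y_pred"].mean()
disparate_impact = rates.min() / rates.max()

# Equal opportunity: gap in true-positive rates among truly positive cases.
tpr = df[df["y_true"] == 1].groupby("group")["y_pred"].mean()
equal_opportunity_gap = tpr.max() - tpr.min()

print(f"Disparate impact: {disparate_impact:.2f} (four-fifths rule: >= 0.80)")
print(f"Equal opportunity gap: {equal_opportunity_gap:.2f}")
```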
2. Mitigation (correction & optimization)
Bias found – what now?
Pre-processing (data):
- Data reweighting (rebalancing sample weights so that group membership no longer predicts the label; see the sketch at the end of this subsection)
- Sampling strategies (balancing)
- Feature engineering (removing/adjusting problematic features)
In-processing (training):
- Fairness constraints (e.g., “Demographic Parity ≥ 0.80”)
- Adversarial debiasing (the model learns to be fair)
- Optimization for multiple objectives such as accuracy and fairness
Post-processing (outputs):
- Threshold optimization (different thresholds per group)
- Calibration (output calibration for fairness)
Trade-off analyses:
Every mitigation measure has a cost (usually accuracy). We present all options with clear trade-offs.
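One pre-processing example in code: a minimal sketch of the classic Kamiran & Calders reweighing scheme behind "data reweighting" above. Each (group, label) combination receives the weight P(group) · P(label) / P(group, label), which removes the statistical dependence between group membership and label. Data and column names are illustrative; production work would typically use a maintained implementation such as AIF360's Reweighing.

```python
import pandas as pd

# Illustrative training data with a protected attribute and a binary label.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1, 1, 0, 0, 0, 0, 1, 0],
})

n = len(df)
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / n

def reweigh(row):
    # Weight = expected joint probability / observed joint probability.
    expected = p_group[row["group"]] * p_label[row["label"]]
    return expected / p_joint[(row["group"], row["label"])]

df["sample_weight"] = df.apply(reweigh, axis=1)
print(df)
# Most learners accept these weights directly, e.g.:
# model.fit(X, y, sample_weight=df["sample_weight"])
```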
3. Monitoring & reporting (continuous oversight)
AI systems change during use – monitoring is mandatory (EU AI Act Art. 72).
What do we monitor?
- Data drift (are input data changing?)
- Concept drift (are patterns changing?)
- Performance drift (is accuracy declining?)
- Fairness drift (does fairness deteriorate over time?)
How do we monitor?
- Automated monitoring pipelines (Alibi Detect; see the sketch below)
- Real-time alerting (for critical thresholds)
- Dashboard visualization (Evidently AI)
- Quarterly reports (for compliance verification)
Output:
Live dashboards, alerts, quarterly compliance reports
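What such an automated pipeline can look like, as a minimal sketch with Alibi Detect (API as of alibi-detect 0.11; verify against the current docs). The data, window sizes, and alert action are illustrative.

```python
import numpy as np
from alibi_detect.cd import KSDrift

rng = np.random.default_rng(42)
X_ref = rng.normal(0.0, 1.0, size=(1000, 5))   # reference window (training data)
X_live = rng.normal(0.5, 1.0, size=(1000, 5))  # shifted production window

# Feature-wise Kolmogorov-Smirnov tests with multiple-testing correction.
detector = KSDrift(X_ref, p_val=0.05)
result = detector.predict(X_live)

if result["data"]["is_drift"]:
    # In production, this is where the real-time alerting described above fires.
    print("Data drift detected: notify the compliance team")
```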
💡 We cover the entire lifecycle: from initial diagnosis to correction and continuous monitoring.
⚖️ Comprehensive regulatory expertise
We understand not only WHAT the EU AI Act requires, but also WHY, and how to implement it in practice. From fundamental ethical principles to audit-proof documentation.
1. Derivation (Why is regulation necessary?)
We understand the philosophical and ethical foundations:
- EU Charter of Fundamental Rights (dignity, non-discrimination, data protection)
- OECD AI Principles (transparency, fairness, accountability)
- High-Level Expert Group Ethics Guidelines (Trustworthy AI)
- Political goals (European Digital Strategy, Green Deal)
Why is this important?
Only those who understand the derivation can anticipate new requirements and interpret them meaningfully.
Example:
“Why does Article 10 require ‘representative’ data?”
→ Derivation: Fundamental right to non-discrimination + OECD Fairness Principle
→ Understanding: It’s about avoiding group bias through skewed training data.
→ Implementation: Distribution matching against target population
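A minimal sketch of what "distribution matching against the target population" can look like in code: a chi-square goodness-of-fit test comparing group shares in the training data against known population shares. All numbers are illustrative.

```python
from scipy.stats import chisquare

# Illustrative group shares in the target population vs. training-data counts.
population_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
training_counts  = {"group_a": 620,  "group_b": 250,  "group_c": 130}

n = sum(training_counts.values())
observed = [training_counts[g] for g in population_share]
expected = [population_share[g] * n for g in population_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.05:
    print(f"Training data deviates from the target population (p={p_value:.4f})")
```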
2. Objective (What does the legislator want to achieve?)
We know the intentions behind every article:
Key objectives of the EU AI Act:
- Risk-based approach (different requirements depending on risk)
- Harmonization (single EU internal market for AI)
- Protection of fundamental rights (dignity, non-discrimination, data protection)
- Promotion of innovation (through legal certainty)
- Human oversight (humans remain in control)
Example articles:
Art. 14 (Human Oversight) → Goal: people can understand and override system output
Art. 72 (Post-Market Monitoring) → Goal: continuous improvement & risk management
Why is this important?
Understanding the goal enables pragmatic solutions instead of ineffective box-ticking compliance.
3. Implementation (How do you meet requirements technically?)
We translate legal texts into specific technical requirements:
Translation methodology:
Legal text → Interpretation → Technical requirements → Measurable metrics → Tools/processes
Example Art. 15 (Accuracy, Robustness, Cybersecurity):
Accuracy:
→ Performance metrics across all subgroups
→ Tool: Fairness Indicators (subgroup performance matrix)
→ Threshold: max. 5% performance gap between groups (see the sketch after this example)
Robustness:
→ Adversarial testing (how does the system react to manipulated inputs?)
→ Tool: IBM ART (Adversarial Robustness Toolbox)
→ Threshold: <10% performance drop under standard attacks
Cybersecurity:
→ Security testing (can the system be hacked?)
→ Tool: Penetration testing + security audit
→ Standard: OWASP Top 10 for ML
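The subgroup performance matrix behind the accuracy threshold in the example above can be sketched with Fairlearn's MetricFrame (fairlearn >= 0.7); data, groups, and the 5% threshold are illustrative.

```python
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Accuracy broken down by subgroup, plus the worst-case gap between groups.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
gap = mf.group_max() - mf.group_min()

print(mf.by_group)
print(f"Performance gap: {gap:.2%} (threshold: 5%)")
```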
4. Use (How can this be implemented in the company?)
From compliance requirements to operational practice:
Governance structures:
- AI Governance Board (who makes the decisions?)
- Roles & Responsibilities (who is responsible for what?)
- Escalation paths (for incidents)
- Review cycles (how often to review?)
Processes:
- AI system lifecycle management
- Risk assessment workflows
- Incident response procedures
- Documentation standards
Integration:
- Integration into existing processes (GDPR, ISO 27001, etc.)
- Toolchain integration (CI/CD pipelines)
- Training & awareness (empowering teams)
Example:
Art. 9 (Risk Management System):
→ Governance: Risk Committee (monthly meetings)
→ Process: Risk assessment for each release
→ Integration: Risk scores in JIRA tickets
→ Training: Quarterly risk assessment workshops
💡 We think from principle to practice: Why → What → How → Who/When/Where
The interplay: Why both pillars are essential
Technology without regulatory understanding leads to inefficient compliance. Regulatory knowledge without technical implementation remains theory. We combine both. Three examples of the interplay between technology (AI) knowledge and regulatory understanding:
Legal text (Art. 10: data quality)
Training data shall be relevant, representative, free of errors, and complete.
↓
Our understanding (regulatory expertise)
- Derivation: Fundamental right of non-discrimination
- Goal: Avoiding data-induced bias
- Implementation: 4 separate requirements, each measurable
↓
Technical implementation (RAI expertise)
- Testing: Great Expectations (data quality assessment; see the sketch after this example)
- Metrics: Completeness Score, Representativeness Index
- Mitigation: Data augmentation, resampling
- Monitoring: Data Drift Detection (Alibi Detect)
- Reporting: Data Quality Report (quarterly)
↓
Result
✓ Compliant with the law
✓ Audit-proof documentation
✓ Technically implemented
✓ Continuously monitored
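For illustration, a minimal sketch of the Great Expectations step from this example, using the classic pandas API (pre-1.0 releases; newer versions changed the interface). Column names and data are illustrative.

```python
import great_expectations as ge
import pandas as pd

df = pd.DataFrame({
    "age":    [34, 45, 29, None, 52],              # one missing value
    "income": [42000, 58000, 39000, 61000, 47000],
})
gdf = ge.from_pandas(df)

# "Complete": no missing values. "Free of errors": values in a plausible range.
completeness = gdf.expect_column_values_to_not_be_null("age")
plausibility = gdf.expect_column_values_to_be_between("age", 18, 100)

print(completeness["success"])  # False, because of the missing age
print(plausibility["success"])  # True, all present ages are plausible
```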
Legal text (Art. 13: transparency)
High-risk AI systems shall be designed to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output.
↓
Regulatory level
- What does “sufficiently transparent” mean? → Context-dependent
- Who are “users”? → Operators vs. Affected Persons
- What does “interpret” mean here? → Understanding the reasons behind decisions
↓
Technical level
- Tool: SHAP (SHapley Additive exPlanations; see the sketch after this example)
- Output example: “Loan denied due to: Income (−15 points), Age (−8), History (−5)”
- Documentation: Model Card (standardized)
- User Interface: Explanation Dashboard
↓
Integration
- Training for operators (how to interpret SHAP values?)
- User-facing explanations (non-technical language)
- Escalation process (if explanation is unclear)
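How such an explanation is produced, as a minimal sketch with SHAP; the model, feature names, and training data are illustrative stand-ins for a real credit-scoring system.

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative stand-in for a credit-scoring model.
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
X = pd.DataFrame(X, columns=["income", "age", "history"])
model = GradientBoostingClassifier().fit(X, y)

# Explain a single decision: signed per-feature contributions, as in the
# "Loan denied due to ..." example above.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])[0]

for feature, value in zip(X.columns, contributions):
    print(f"{feature}: {value:+.2f}")
```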
Legal text (Art. 72: post-market monitoring)
Providers shall establish and document a post-market monitoring system.
↓
Regulatory level
- Goal: Continuous improvement and risk detection
- Scope: What needs to be monitored? (Performance, fairness, safety)
- Frequency: How often? (Depends on risk, typically quarterly)
- Documentation: What needs to be documented?
↓
Technical level
- Monitoring stack: Alibi Detect + Evidently AI (see the sketch after this example)
- Metrics: Performance, Fairness, Data Drift, Concept Drift
- Alerting: Real-time for critical thresholds
- Dashboards: Live visualization for stakeholders
↓
Process
- Quarterly review meetings (Risk Committee)
- Annual Comprehensive Audit
- Incident Response Playbook (for critical findings)
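A minimal sketch of generating the drift part of such a quarterly report with Evidently (API as of evidently 0.4; newer releases changed the interface, so verify against the current docs). The dataframes are illustrative.

```python
import numpy as np
import pandas as pd
from evidently.metric_preset import DataDriftPreset
from evidently.report import Report

rng = np.random.default_rng(0)
reference = pd.DataFrame({"income": rng.normal(50000, 8000, 1000),
                          "age": rng.integers(18, 70, 1000)})
current = pd.DataFrame({"income": rng.normal(46000, 8000, 1000),
                        "age": rng.integers(18, 70, 1000)})

report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("q3_drift_report.html")  # artifact for the quarterly review
```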
💡 Our strength: With our comprehensive responsible AI expertise, we speak three languages—legal (“Art. 10 requires…”), technical (“Great Expectations measures…”), and economic.
How we differ from:
- Law firms (can interpret laws, but cannot implement them technically)
- Tech consultants (can operate tools, but cannot justify them from a regulatory perspective)
- Big Consulting (can do both superficially, but not in depth)
Scientifically sound, tried and tested in practice
Our expertise is based on scientific research and practical experience:
Academic background:
- PhD in Data Science (Dr. Valentin [last name])
- Research focus: Responsible AI, AI governance, algorithmic fairness
- Peer-reviewed publications
Methodology:
✓ Evidence-based: Every recommendation backed by research
✓ Reproducible: Documented analyses, repeatable
✓ Statistical rigor: Significance tests, confidence intervals
✓ Continuous learning: 20% of time spent on research and tool evaluation
Research partners:
German universities, European research projects, industry-academic collaborations
Why waveImpact?
| Area | Big Consulting | Law firms | waveImpact |
|---|---|---|---|
| Focus | Broad, shallow | Purely legal focus | Testing/monitoring, responsible AI expertise, AI business strategy |
| Consulting objective | Push for AI & follow-up projects | Legal protection | Competitive advantage through effective and compliant use of AI |
| Contact person/team | Key account, junior & offshore teams | Lawyers and staff | Senior AI consultant & compliance specialist |
| Consulting content | Slide decks | Legal memos | Implementation and documentation |
Our differentiation
✓ Double depth: RAI technology AND regulation (not just one)
✓ End-to-end: From legal analysis to code implementation
✓ Scientific: Methods from research, not from slide decks
✓ Practical: Production-ready tools, not just concepts
✓ Fair: SME prices, no big consulting markups
✓ Focused: Only AI compliance, no generic IT consulting
Further details on our Responsible AI expertise
🔧 Responsible AI Expertise: Development, Testing, Mitigation, Monitoring
Complete description of our toolchain, detailed methodology, and composite scenarios from real-world practice.
⚖️ Regulatory knowledge: EU AI Act, standards, policy
Article-by-article analysis, integration with GDPR & ISO 42001, all relevant standards, compliance roadmaps.
Take advantage of our responsible AI expertise. Let's talk about how you can gain a competitive advantage in AI with our support, so you can focus on innovation while we take care of compliance.
