Why many companies are struggling with the EU AI Act and ISO 42001 – and how you can do better
The EU AI Act is gradually coming into force, and ISO 42001 is establishing itself as the international standard for AI management—but many companies are massively underestimating the effort involved. Those who repeat the same mistakes made during the introduction of the GDPR risk fines of up to €35 million or 7 percent of global annual turnover. This article analyzes the ten most serious pitfalls in AI compliance preparation.
The regulatory landscape is undergoing fundamental change
With the EU AI Act, the European Union has created the world’s first comprehensive set of rules for artificial intelligence. The regulation follows a risk-based approach: the higher the risk potential of an AI system, the stricter the requirements. At the same time, ISO has published ISO 42001, an international standard for AI management systems that helps organizations develop and use AI responsibly. Both sets of regulations complement each other – and both are underestimated by many companies.
The parallels to the introduction of the GDPR in 2018 are striking: back then, experts also warned early on about the requirements, but many companies reacted too late and ended up paying hefty fines. With AI compliance, there is a risk of history repeating itself—with even higher penalties and more complex technical requirements.
Mistake 1: "That doesn't affect us" – A fatal misjudgment
The most common and dangerous mistake is to assume that one's own company is not affected by the EU AI Act. This misjudgment usually results from an overly narrow understanding of what constitutes an "AI system." The EU AI Act defines AI systems very broadly: any machine-based system that operates with varying degrees of autonomy and infers, from the inputs it receives, how to generate outputs such as predictions, content, recommendations, or decisions potentially falls under the regulation.
In concrete terms, this means that even a seemingly simple recommendation system in an online shop, an automated applicant pre-selection process, or a predictive maintenance tool in manufacturing can be subject to regulatory obligations. What is particularly tricky is that companies that use AI systems from third-party providers are classified as "deployers" (operators) under the EU AI Act and bear their own responsibilities – regardless of whether they developed the system themselves.
Definition of terms – Deployer: In the EU AI Act, the term "deployer" refers to a natural or legal person who uses an AI system under their authority – as opposed to a "provider," who develops the system and places it on the market. Deployers have their own compliance obligations, such as ensuring human oversight and informing affected parties.
Mistake 2: No systematic AI inventory
Before a company can understand its compliance obligations, it needs to know which AI systems it is actually using. In practice, however, most organizations lack a comprehensive view of their AI landscape. AI components are often embedded in purchased software solutions, procured independently by individual departments, or created informally using cloud APIs.
A structured AI inventory should include at least the following for each system:
- the purpose and area of application
- the type of data processed
- the provider or developer
- the user groups affected
- the current governance measures
- the deployment status
Without this basis, meaningful risk classification under the EU AI Act is impossible.
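To make the inventory actionable, it helps to capture these fields in a machine-readable form. The following is a minimal sketch of what one record could look like; the field names, the example system, and the Python representation are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class DeploymentStatus(Enum):
    PLANNED = "planned"
    PILOT = "pilot"
    PRODUCTION = "production"
    RETIRED = "retired"

@dataclass
class AISystemRecord:
    """One inventory entry per AI system; field names mirror the list above."""
    name: str
    purpose: str                     # purpose and area of application
    data_categories: list[str]       # type of data processed
    provider: str                    # provider or developer
    affected_groups: list[str]       # user groups affected
    governance_measures: list[str]   # current governance measures
    status: DeploymentStatus         # deployment status
    risk_class: str | None = None    # filled in after EU AI Act classification

# Hypothetical example entry
cv_screening = AISystemRecord(
    name="CV pre-screening",
    purpose="Automated pre-selection of job applicants",
    data_categories=["CV data", "personal data"],
    provider="Third-party SaaS vendor",
    affected_groups=["job applicants", "HR staff"],
    governance_measures=["vendor contract", "data processing agreement"],
    status=DeploymentStatus.PRODUCTION,
)
```

Whether this lives in a spreadsheet, a GRC tool, or a small script matters less than that every system – including embedded and shadow-IT deployments – ends up with such a record.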
Mistake 3: Superficial risk classification
The EU AI Act distinguishes between four risk levels: prohibited AI practices, high-risk AI systems, AI systems with limited risk (such as chatbots with transparency requirements), and AI systems with minimal risk. Correct classification is crucial, as it determines the overall compliance effort.
Many companies underestimate the complexity involved. For example, an AI system is considered "high risk" if it is used in one of the areas listed in Annex III of the regulation:
- Biometric identification
- Critical infrastructure
- Education and vocational training
- Employment and human resources management
- Access to essential services
- Law enforcement
- Migration and asylum
- Administration of justice and democratic processes
A seemingly harmless tool for automated CV pre-selection thus falls into the high-risk category, as does an AI-supported scoring system for credit decisions. Classification requires a deep understanding of both your own business processes and regulatory requirements.
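A first-pass triage can be scripted against the Annex III areas listed above. The sketch below is a deliberately simplified illustration – the area identifiers and the triage logic are assumptions – and borderline cases, as well as the exceptions in the regulation, still require a legal assessment.

```python
# Rough first-pass triage against the Annex III areas listed above.
# Area identifiers and logic are simplified illustrations, not legal advice.
ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_training",
    "employment_and_hr",
    "access_to_essential_services",
    "law_enforcement",
    "migration_and_asylum",
    "justice_and_democratic_processes",
}

def triage_risk(area_of_use: str, prohibited_practice: bool = False) -> str:
    """Return a rough EU AI Act risk bucket for a system's area of use."""
    if prohibited_practice:
        return "prohibited"
    if area_of_use in ANNEX_III_AREAS:
        return "high_risk"           # full high-risk obligations apply
    return "limited_or_minimal"      # transparency duties may still apply

print(triage_risk("employment_and_hr"))   # -> high_risk
```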
Definition – GPAI (General Purpose AI): General-purpose AI models such as GPT-4 or Claude are subject to specific rules in the EU AI Act. Providers of such models must provide technical documentation, supply information to downstream providers, and put a policy in place to comply with EU copyright law. For models classified as posing "systemic risk" (indicatively, cumulative training compute above 10²⁵ FLOPs), additional obligations apply, including model evaluations and cybersecurity measures.
Mistake 4: Documentation as an afterthought
Both the EU AI Act and ISO 42001 impose extensive documentation requirements. For high-risk AI systems, the EU AI Act requires technical documentation that includes, among other things:
- A general description of the system and its purpose
- Detailed information on the development methodology and the algorithms used
- Descriptions of the training data and its origin
- Validation and test protocols
- Information on human oversight
- Details on cybersecurity measures
ISO 42001 supplements this with requirements for a documented AI management system, including an AI policy, a statement of applicability, documented risk assessments and impact assessments, and records covering the entire AI life cycle.
The critical point: this documentation cannot be created retrospectively. Anyone who has not implemented documentation processes during the development or procurement of an AI system faces an enormous amount of reconstruction work – or the impossibility of providing certain evidence at all. The documentation must also be retained for ten years after the system is placed on the market.
Mistake 5: Underestimating human oversight
Article 14 of the EU AI Act requires that high-risk AI systems be designed to be effectively supervised by natural persons throughout their service life. This "human oversight" is much more than a formal checkbox: it requires fundamental design decisions.
Specifically, supervisors must be able to:
- understand the relevant capabilities and limitations of the AI system
- recognize anomalies and unexpected behavior
- correctly interpret the system’s output
- decide not to use the system in a particular situation, or disregard, override, or reverse its output
- interrupt the system using a "stop" button or a similar procedure
Many companies underestimate the training effort involved. The supervisor must not only be able to operate the system, but also understand how it works and its limitations, a requirement that goes far beyond typical user training. This is compounded by the risk of "automation bias": the regulation explicitly requires supervisors to be aware of the tendency to automatically trust the outputs of an AI system.
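In practice, human oversight usually takes the shape of an explicit review step before an AI output takes effect. The following minimal sketch illustrates such a decision gate; the names, the three review actions, and the example are illustrative assumptions, not a reference design.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewAction(Enum):
    ACCEPT = "accept"      # adopt the system's output as the decision
    OVERRIDE = "override"  # replace the output with the reviewer's own decision
    STOP = "stop"          # halt the system entirely ("stop" capability)

@dataclass
class HumanReview:
    action: ReviewAction
    reviewer_id: str
    rationale: str         # recording a rationale counteracts automation bias

def apply_review(model_output: str, review: HumanReview,
                 manual_decision: str | None = None) -> str | None:
    """Return the final decision after oversight; None means the run was stopped."""
    if review.action is ReviewAction.STOP:
        return None
    if review.action is ReviewAction.OVERRIDE:
        return manual_decision
    return model_output

# Example: the reviewer overrides the model's recommendation
final = apply_review(
    "reject",
    HumanReview(ReviewAction.OVERRIDE, "r.meier", "model misread a gap in the CV"),
    manual_decision="invite to interview",
)
```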
Mistake 6: Neglecting data governance
Article 10 of the EU AI Act sets out detailed requirements for the data used to train, validate, and test high-risk AI systems. The training, validation, and testing data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete: a requirement that poses significant challenges in practice.
Avoiding bias is particularly critical. The regulation recognizes that bias can be inherent in historical data and amplified in real-world environments. Companies must therefore be able to demonstrate what measures they have implemented to detect and correct bias—a requirement that is difficult to meet without the appropriate technical tools and expertise.
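One common starting point for such measures is a simple selection-rate comparison between groups. The sketch below computes a disparate-impact ratio; the example data, the group definitions, and any threshold applied to the result are illustrative assumptions and no substitute for a full bias audit.

```python
# Illustrative selection-rate comparison (disparate-impact ratio) between two
# groups; example data and thresholds are assumptions, not a method prescribed
# by the EU AI Act.

def selection_rate(outcomes: list[int]) -> float:
    """Share of positive outcomes (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 30% vs. 50% selection rate -> ratio 0.6, a signal worth investigating
print(disparate_impact([1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
                       [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]))
```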
ISO 42001 supplements these requirements with comprehensive controls for data governance, including data quality, data provenance (traceability of origin), data preparation, and handling of sensitive data. Anyone who does not take these aspects into account from the outset when developing or procuring an AI system will face significant compliance gaps.
Mistake 7: No integration with existing compliance framework
AI compliance does not exist in a vacuum. The EU AI Act must be considered in conjunction with the GDPR, sector-specific regulations (e.g., in the financial or medical sectors), and existing quality management systems. ISO 42001 was deliberately designed to integrate with other management system standards, such as ISO 27001 (information security), ISO 9001 (quality management), and ISO 14001 (environmental management).
In practice, however, this integration is often lacking. AI compliance is treated as an isolated project instead of being embedded in existing governance structures. The result is redundant processes, inconsistent documentation, and unnecessary overhead. The lack of integration with the GDPR is particularly problematic: where AI systems process personal data, the requirements of both sets of regulations must be met jointly.
Mistake 8: Risk management as a one-time event
Article 9 of the EU AI Act requires a continuous risk management system for high-risk AI systems. This must be understood as an iterative process planned and implemented throughout the system’s life cycle, requiring regular, systematic reviews and updates.
However, many companies treat risk management as a one-time exercise when introducing a system. This misunderstands the nature of AI risks: they can change during operation, for example, due to data drift (when the input data changes over time), changes in the context of use, or new insights into potential harmful effects. The EU AI Act explicitly calls for a post-market monitoring system to detect and respond to such changes.
Definition – Data drift: Refers to the phenomenon whereby the statistical properties of the input data of an AI system change over time. A model trained on historical data may lose accuracy if the real environment changes. Classic example: A fraud detection system trained on fraud patterns from 2020 may not recognize new forms of fraud from 2024.
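Drift of this kind can be monitored with standard statistical tests. The sketch below compares a training-time feature distribution against recent production values using a two-sample Kolmogorov–Smirnov test from SciPy; the feature, the sample sizes, and the significance level are illustrative assumptions.

```python
# Compare a training-time feature distribution against recent production data
# with a two-sample Kolmogorov–Smirnov test (requires SciPy). Feature, sample
# sizes, and significance level are illustrative.
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha: float = 0.01) -> bool:
    """Flag drift when the two samples are unlikely to share a distribution."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

train = [0.1, 0.2, 0.2, 0.3, 0.4, 0.4, 0.5, 0.5, 0.6, 0.7]
live  = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 1.0, 1.1, 1.2]
print(feature_drifted(train, live))   # True: the live values have shifted upward
```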
Mistake 9: Lack of technical logging infrastructure
Article 12 of the EU AI Act requires that high-risk AI systems must be technically capable of automatically recording events (logs) throughout their entire lifetime. This logging capability must enable the recording of events that are relevant to:
- Identifying situations in which the system could pose a risk
- Facilitating post-market monitoring
- Monitoring system operation
Even stricter requirements apply to biometric identification systems: Here, at least the start and end times of each use, the reference database used, the input data that led to a match, and the identity of the persons involved in verifying the results must be logged.
Many existing AI systems lack the necessary logging infrastructure. Adding such functions retrospectively can be technically complex or even impossible with purchased systems. Companies should therefore take this requirement into account when procuring or developing new systems.
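Where the system is developed in-house, a structured, timestamped audit record per model invocation is a reasonable baseline. The sketch below uses only the Python standard library; the field names, the hashing of inputs, and the log destination are illustrative assumptions rather than what Article 12 prescribes.

```python
# One structured, timestamped record per model invocation, using only the
# Python standard library; field names and storage are illustrative and not
# prescribed by Article 12.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system.audit")

def log_inference(system_id: str, model_version: str,
                  input_payload: str, output: str) -> None:
    """Emit a JSON audit record for one invocation of the AI system."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        # Hash the raw input so personal data stays out of the log stream
        "input_sha256": hashlib.sha256(input_payload.encode()).hexdigest(),
        "output": output,
    }
    logger.info(json.dumps(record))

log_inference("cv-screening", "2.4.1", "applicant profile ...", "shortlisted")
```

Hashing the raw input keeps personal data out of the log stream while still allowing individual invocations to be matched against separately stored inputs.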
Mistake 10: Underestimating the timelines
The EU AI Act is coming into force in stages:
| Date | Requirement |
|---|---|
| February 2025 | Prohibited AI practices in effect |
| August 2025 | GPAI requirements apply |
| August 2026 | Key requirements for high-risk AI |
| August 2027 | High-risk AI as product components |
At first glance, these deadlines appear generous—but don’t underestimate the effort involved. A complete compliance readiness assessment for a medium-sized company typically takes six to eight weeks. Depending on the complexity, implementing the necessary measures can take six to eighteen months. Given that qualified AI compliance consultants are increasingly booked up, it is critical to get started early.
What is at stake: An overview of the sanctions
The fines under the EU AI Act are tiered and can threaten the existence of a company:
| Violation | Maximum penalty |
|---|---|
| Prohibited AI practices | €35 million or 7% of global annual turnover |
| Other violations of requirements | €15 million or 3% of global annual turnover |
| Incorrect or incomplete information | €7.5 million or 1.5% of global annual turnover |
When determining fines, supervisory authorities consider the nature, severity, and duration of the violation, as well as the company's size. For SMEs, including start-ups, the regulation provides for proportionate administrative penalties, but even these can be painfully high.
The path to compliance: Specific recommendations for action
In view of these sources of error, a structured approach in three phases is recommended:
Phase 1: Inventory (4–8 weeks)
Create a complete inventory of all AI systems in use. Perform an initial risk classification in accordance with the EU AI Act. Identify compliance gaps in documentation, governance, and technical infrastructure. Evaluate integration with existing compliance frameworks (GDPR, sector-specific regulations).
Phase 2: Roadmap development (2–4 weeks)
Prioritize measures according to risk and implementation effort. Define clear responsibilities and schedules. Estimate resource requirements and budget realistically. Plan training measures for relevant employees.
Phase 3: Implementation (6–18 months)
Implement technical requirements (logging, monitoring, human oversight interfaces). Develop documentation and processes. Implement continuous risk management. Establish regular reviews and audits.
Conclusion: Compliance as a strategic investment
The AI compliance requirements of the EU AI Act and ISO 42001 are complex but manageable. The key lies in taking early, systematic action. Companies that invest now will not only gain regulatory certainty but also a competitive advantage: customers and business partners will increasingly value AI that is demonstrably responsible.
The parallels with the introduction of the GDPR should serve as a warning: those who acted too late at the time not only paid fines, but also lost valuable time and resources in frantic attempts to rectify the situation. The situation is even more acute for AI compliance: technical requirements are more complex, fines are higher, and time pressure from staggered implementation deadlines is real.
The good news is that, unlike with the GDPR, companies now have the opportunity to learn from the mistakes of others. Take advantage of that opportunity before the supervisory authorities force the issue.
Checklist: First steps toward AI compliance
☐ Create a complete inventory of all AI systems
☐ Perform risk classification according to the EU AI Act
☐ Check documentation status
☐ Evaluate human oversight mechanisms
☐ Analyze logging infrastructure
☐ Check integration with GDPR compliance
☐ Clarify responsibilities and budget
☐ Create a schedule for compliance implementation

