EU AI Act 2026: what really changes for companies and professionals
Three weeks ago, the CFO of a manufacturing company with 80 employees called us and said: “We have software that analyses CVs. The vendor says it’s compliant. Our lawyer says we should run checks. HR says they don’t have time. I don’t know if this is a real problem or something I can postpone.”
This conversation is being repeated in dozens of companies. 2026 is being described as the turning-point year for the European AI Act, yet nobody really knows what to do right now. Waiting feels prudent (the rules might change); moving feels risky (what if I invest in the wrong things?).
The real problem is not the technical complexity of the law. It’s that it forces you to face uncomfortable organisational questions: who decides whether a system is high‑risk? Who takes responsibility if we classify it wrongly? How am I supposed to “effectively oversee” an AI system if the people using it don’t even understand how it works?
The AI Act is not a compliance issue you can just hand over to the legal team. It’s a governance issue: who knows what, who decides what, who is accountable for what.
In our training courses, we often see situations like this. Our aim in this article is not to explain every detail of the AI Act, but to help you understand where you stand, which questions you should be asking, and above all which decisions you cannot postpone even if the regulatory landscape is not yet fully settled.
Let’s start from a fixed point: the AI Act entered into force in August 2024, but its rules become applicable in stages between 2025 and 2027. From 2 February 2025, the bans on certain prohibited practices and the first general provisions (definitions, AI literacy, etc.) are already in force.
From 2 August 2026, most of the rules apply for high‑risk systems that fall under the areas listed in Annex III (for example employment, credit, education, certain public‑sector uses), as well as several transparency obligations.
From 2 August 2027, the rules apply to high‑risk systems embedded in products that are already regulated (Annex I, such as certain medical devices or machinery).
So when we talk about the “deadline for high‑risk systems”, in 2026 we are mainly talking about Annex III systems, whereas those embedded in regulated products have a separate deadline in 2027.
The Digital Omnibus and the possible postponement
The European Commission has proposed the so‑called “Digital Omnibus”, a regulatory simplification package presented in November 2025. Among other measures, it introduces the idea of linking the application of certain rules for high‑risk systems to the availability of technical standards and supporting tools.
In practice, if the proposal is approved, the application of certain rules for some Annex III high‑risk systems could be postponed by up to 16 months compared to the original timeline (from August 2026 to December 2027 at the latest), while for high‑risk systems embedded in regulated products (Annex I) the proposed flexibility would be up to 12 months (from August 2027 to August 2028).
For this to happen, the negotiations between the European Parliament and the Council must be concluded, and in any case this is not a blanket “discount” for all high‑risk systems, but targeted flexibility with clear limits. If no agreement is reached, the original timeline remains in place.
So far, everything is clear on paper. The real problem lies elsewhere.
What no one tells you: decision‑making timelines
In the projects we follow, we often hear something like this: “If you tell me I have until August 2026, I’ll schedule the project for June. But then who tells me whether my system is really high‑risk? The vendor? Legal? An external consultant? What if each one says something different?”
That’s the key point: the deadline is not your real problem. The problem is that before the deadline you need to:
- Have all AI systems mapped (and discover that you have more than you thought).
- Have someone actually decide on the classification (and discover that nobody wants to take that responsibility).
- Change entrenched operational processes (and discover resistance you didn’t expect).
- Train people who don’t have time (and discover that “human oversight” is a more complex concept than it sounds).
All of this requires decision‑making cycles, budget, and cross‑functional alignment. Not weeks: months. Often a full year.
On top of that, many organisations are still waiting for guidelines and standards that will make it easier to classify systems and implement obligations in practice. The result is that anyone who waits for “perfect, definitive guidance” risks being late anyway, because internal decision‑making and change processes cannot be compressed at the last minute.
A concrete example: a financial services company took seven months just to decide whether its scoring system fell into the high‑risk category or not. Not to bring it into compliance – just to decide how to classify it – because IT, legal, risk management and business were all involved, and each had a different interpretation.
Sanctions: what’s at stake if you don’t comply
While many people are looking ahead to 2026, the ban on certain prohibited practices has applied since 2 February 2025: these prohibitions are not “future” ones, they are already here.
In practice, if you are using systems for generalised social scoring, certain forms of subliminal behavioural manipulation or certain types of real‑time biometric identification, you already risk being in breach of the law, not “at risk of being in breach in a few years’ time”.
The sanctions the AI Act provides for the most serious infringements are severe: for prohibited practices, fines can reach EUR 35 million or 7% of worldwide annual turnover, whichever is higher. On top of that comes reputational and operational risk: having to switch off a critical system quickly, or rebuild it from scratch, often hurts more than the fine itself.
How do I know whether an AI system is “high‑risk”?
We often find ourselves explaining that not all AI is the same. The AI Act divides systems into four risk levels: unacceptable, high, limited and minimal. In theory, you look at Annex III (and, for embedded systems, Annex I) and check whether you fall into one of those categories. In practice, it is more complicated. Rather than repeating the text of the law, here are the operational questions we suggest asking at each level.
Unacceptable risk
Case: A retailer installs a system that analyses customers’ facial expressions to “adapt offers to their emotional state”. Marketing is thrilled; legal has it switched off within 48 hours. Reason: behavioural manipulation.
Question: “Does this system manipulate people, exploit vulnerabilities, perform generalised social scoring, or use techniques that disproportionately affect fundamental rights?”
If yes: turn it off immediately. You are not “playing it safe in advance”; you are avoiding practices that already fall within the scope of the prohibitions.
High risk
Case: CV screening software. The vendor says: “It doesn’t decide, it just ranks.” Legal says: “It has a major influence, this is high‑risk.” HR says: “We process 300 CVs a month, we always follow the system. In practice, it does decide.”
Who is right? All three of them. And that’s where the process gets stuck.
Question: “Does this system affect hiring, credit, health, safety, education, access to essential services or other areas similar to those listed in Annex III?”
If yes: you need to document everything (risk management, datasets, testing, logging, effective human oversight). The cost is not just technical: it means months of cross‑functional work.
If you’re “not sure”: you have a governance problem. The uncertainty is not just legal; it is organisational. It means nobody has the authority or the tools to make a clear decision. That needs to be fixed even before you get to the technical classification.
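A concrete slice of that documentation work is logging. If, as in the HR example above, people “always follow the system”, then a log that records each AI‑assisted decision, who reviewed it, and whether the recommendation was ever overridden is what turns “effective human oversight” from a claim into evidence. Below is a minimal sketch in Python; the fields and the CSV format are our illustrative assumptions, not a format prescribed by the AI Act.

```python
import csv
from datetime import datetime, timezone


def log_decision(path: str, system: str, case_id: str,
                 ai_recommendation: str, final_decision: str,
                 reviewer: str, overridden: bool) -> None:
    """Append one AI-assisted decision to a simple audit log (CSV).

    The point is not the file format but the record itself: who reviewed
    what, and whether human outcomes ever differ from the AI's suggestion.
    """
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # when
            system, case_id,                         # which system, which case
            ai_recommendation, final_decision,       # what the AI said vs. what was done
            reviewer, overridden,                    # who reviewed, did they override
        ])
```

A log in which `overridden` is always False is itself a finding: it suggests the system is deciding in practice, whatever the vendor brochure says.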
Limited risk
Case: Customer service chatbot. 73% of users think they are talking to a human. The company added a note in the footer saying “This service may use automated assistants”. That is not enough.
Question: “Do users immediately and clearly know that they are interacting with an AI system?”
If not: you need an explicit notice at the start of the interaction, not hidden in the footer. The same goes for generated content (images, video, text): it must be recognisable as AI‑generated.
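To make the difference concrete, here is a minimal sketch; the wording and function names are ours, since the AI Act requires that users know they are interacting with AI, not any particular phrasing.

```python
# Not enough: a disclosure buried where nobody looks.
FOOTER_NOTE = "This service may use automated assistants."


def opening_message(bot_name: str) -> str:
    """Disclose the AI system up front, in the first message the user reads."""
    return (
        f"Hi, I'm {bot_name}, an automated assistant (an AI system). "
        "I can answer most questions, and you can ask for a human colleague "
        "at any time."
    )
```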
Minimal risk
Case: AI to optimise sheet‑metal cutting, reducing waste by 12%. No direct impact on people or on decisions affecting fundamental rights. Legal steps in: “The system uses operator codes and work shifts. That’s personal data. GDPR still applies.”
Question: “Even though this system is not high‑risk under the AI Act, does it process personal data?”
If yes: the AI Act does not impose special requirements here, but GDPR still fully applies (legal basis, notice, DPIA where needed, contract with the vendor, etc.).
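Taken together, the four questions above form a rough triage that many teams end up encoding in their system inventory. The sketch below (Python, purely illustrative: the field names and yes/no questions are our simplifications, not the legal tests in the Regulation) shows one way to record the answers per system. It cannot make the judgment calls for you, but it forces every system to have a named owner and a documented answer to each question.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices: switch off
    HIGH = "high"                  # Annex III areas: full documentation duties
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # no AI Act duties, but GDPR may still apply


@dataclass
class AISystemRecord:
    name: str
    owner: str                     # who answers for this classification
    manipulates_or_scores_people: bool
    annex_iii_area: bool           # hiring, credit, education, essential services...
    user_facing_interaction: bool
    processes_personal_data: bool
    notes: list[str] = field(default_factory=list)


def triage(system: AISystemRecord) -> RiskTier:
    """Order matters: prohibited practices first, then Annex III, then transparency."""
    if system.manipulates_or_scores_people:
        return RiskTier.UNACCEPTABLE
    if system.annex_iii_area:
        return RiskTier.HIGH
    if system.user_facing_interaction:
        return RiskTier.LIMITED
    if system.processes_personal_data:
        system.notes.append("Minimal risk under the AI Act, but GDPR still applies.")
    return RiskTier.MINIMAL
```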
Classification is not a technical exercise you can just delegate. It is a strategic decision that requires alignment between IT, legal and business. If these functions don’t speak the same language in your organisation, you have a bigger problem than the AI Act itself.
This is exactly where our training programmes focus: helping organisations build a common language across different functions, turning norms and frameworks into operational decisions.
Don’t wait for absolute certainty
The AI Act is not yet a perfectly paved road, but the direction is clear: more transparency, more accountability, and less AI being used “at random” without understanding how it works.
The real risk is not just financial penalties, but operational risk: discovering too late that you need to switch off a critical system, or rebuild it from scratch in a hurry. The fines are high, but the reputational and organisational damage from a late‑discovered non‑compliance can be even worse.
Those who move now turn an obligation into a competitive advantage: they clean up their portfolio, renegotiate contracts calmly, and arrive prepared. Those who wait until the last minute end up in a bottleneck with overbooked consultants and tripled costs.
The point is not to “anticipate the law” in the abstract, but to equip yourself with a common language and clear criteria to govern data and AI before they become an operational problem.
This is the goal of FIT Academy’s AI Governance and CDMP programmes: to bring structure where there are currently doubts, diffuse responsibilities and postponed decisions.