The EU AI Act: What it really means for organisations on the ground

The EU AI Act marks a significant moment in the global conversation about artificial intelligence. It is the first comprehensive attempt to regulate AI at scale, and it reflects Europe’s determination to balance innovation with protection for individuals, workers and society more broadly.

Much of the early commentary has focused on the ambition of the legislation and its potential to set global standards. Whilst that discussion is valid, I would argue it risks overlooking a much more immediate question:

What does the EU AI Act actually mean for organisations that are already using AI in everyday business decisions?

As you’ll see throughout this piece, the answer is more complex than it might first appear.

Moving beyond headlines and high-level summaries

“We are building systems that are increasingly powerful, yet increasingly difficult to fully understand.” – Demis Hassabis

At a high level, the EU AI Act categorises AI systems based on risk. You’ll find that certain uses are prohibited outright, whereas others that are deemed high risk are subject to strict obligations around transparency, oversight and accountability.

Whilst this might sound straightforward, I can assure you that in practice it is anything but.

Many of the AI systems already embedded in organisations sit squarely within the high-risk category. Tools used for CV screening, candidate shortlisting, performance evaluation, productivity monitoring and workforce planning are all likely to fall under closer scrutiny. The reality is that these aren’t experimental technologies; they are already shaping real decisions around hiring, promotions and pay brackets.

The challenge? These systems haven’t been adopted as part of a coherent AI strategy. Instead, they’ve arrived incrementally through vendors, platforms or bolt-on solutions. As a result, visibility is often limited, leaving leaders without a clear picture of where AI is being used, what data it relies on or how its decisions can be explained.

Compliance is not just a legal exercise

“The real challenge of AI is not making systems more intelligent, but making them more understandable and accountable.” – Fei-Fei Li

A critical risk I’ve seen organisations overlook is the assumption that compliance can be handled in the same way as other regulatory requirements. I’m afraid that updating policies, reviewing contracts and adding oversight mechanisms won’t, on their own, keep you protected. That approach underestimates what the regulation is trying to address.

The EU AI Act centres on accountability, not simply documentation. If an organisation cannot explain how an AI-driven decision was made, who is responsible for it, and how bias or error is identified and corrected, then it is exposed, regardless of how well written its policies are.

This becomes especially pertinent in employment decisions. Workers, candidates, regulators and the courts are paying closer attention to how algorithms influence outcomes, and the shift is away from theoretical compliance towards demonstrable fairness and transparency.

The operational reality for HR teams

HR functions are likely to feel the impact of the EU AI Act first and most acutely. AI intersects directly with hiring, performance management, reward and workforce analytics. These are areas where decisions carry legal, ethical and reputational weight.

In many organisations, HR has not traditionally owned technology governance. AI tools may be sourced by procurement, implemented by IT and sold as efficiency gains by vendors. The result is a fragmented picture where responsibility is unclear.

Under the EU AI Act, that ambiguity becomes a problem.

HR leaders will need to be confident not only that AI systems comply with the law, but that they align with organisational values and can be defended if questioned. That requires closer collaboration across HR, legal, IT and procurement, and a much clearer understanding of how AI operates in practice.

Cross-border complexity in EMEA

For organisations operating across EMEA, the challenge is amplified. While the EU AI Act provides a common regulatory framework, employment law, enforcement priorities and cultural expectations still vary widely by country.

A tool that appears acceptable in one jurisdiction may raise concerns in another. Worker consultation requirements, data protection rules and expectations around transparency are not uniform. Managing AI responsibly across borders will require nuance, not one-size-fits-all solutions.

This is where many global organisations are vulnerable. Centralised decisions made with good intentions can quickly run into local compliance and trust issues if regional realities are not properly considered.

Preparing for scrutiny, not just enforcement

One mistake organisations often make with new regulation is focusing solely on enforcement timelines and penalties. While those matter, the more immediate risk is scrutiny.

Employees, candidates, unions, regulators and journalists are increasingly asking questions about how AI is used at work. Organisations that cannot answer those questions clearly and confidently will struggle, regardless of whether formal enforcement action follows.

Preparation, then, is not just about avoiding fines. It’s about building internal understanding, governance and confidence before problems arise.

One area that is often underestimated is explainability. Under the EU AI Act, organisations will be expected to explain not only what decision was made, but how it was reached.

For HR teams, this represents a fundamental shift. It is no longer enough to rely on vendor assurances or technical documentation. Leaders must be able to stand behind outcomes in plain language, particularly when those outcomes affect careers, pay or progression.

This will require new skills, clearer governance structures and, in some cases, difficult conversations about whether certain tools should be paused or redesigned until understanding catches up with adoption.

A moment for realism

The EU AI Act is an important step forward. It creates a framework where none existed before and sets clear expectations about unacceptable practices. But it will not, on its own, make organisations responsible users of AI.

That work still has to happen internally. It requires honest assessment of current practices, investment in skills and a willingness to slow down deployment where understanding has not kept pace.

For organisations that take that work seriously, the EU AI Act can be a useful guide. For those that treat it as another compliance hurdle, it may expose weaknesses they did not realise were there.

AI is already part of the workplace. Regulation has now caught up.

The question facing organisations?

Whether their understanding, governance and accountability are ready to follow.

Author – Connor Heaney, President, EMEA, CXC Global

Connor Heaney is President of CXC Global EMEA, where he helps organisations navigate the complexities of global talent engagement, compliance and workforce transformation. With deep expertise in contingent workforce strategy and the future of work, he focuses on how technology and AI reshape leadership, skills and organisational readiness. Connor is also the host of the Open Talent Report, exploring emerging trends in the modern workforce.
