Learning to Work with the EU AI Act

I used to avoid EU regulation. Now I’m learning to work with it. The AI Act isn’t perfect, but it’s shaping how I think about risk, trust, and tech.

A bit of a deep dive into the closed-off world of regulation.

From Frustration to Foundation

When the AI Act was first announced, I wasn’t thrilled. Another regulation from Brussels. Another compliance burden. Another list of things we couldn’t do. I’ve always leaned into experimentation, iteration, and autonomy, and legal frameworks often felt like the opposite of that. Especially in AI, where the pace of development outruns most policy by years.

But something shifted. Not overnight, and not because the regulation got more exciting. It didn’t. What changed was my understanding of what the EU is trying to do. And how that might actually align with the kind of digital future I care about.

This article is the beginning of that reorientation. An attempt to ground myself, and others, in what the AI Act really asks of us, and how we might respond. Not reluctantly, but intelligently.

What the AI Act Is and Isn’t

The AI Act is the first major attempt by a democratic region to comprehensively regulate artificial intelligence. It doesn’t regulate all AI. Instead, it focuses on systems that pose risks to fundamental rights, health, safety, or democratic values. That includes things like biometric surveillance, automated hiring tools, credit scoring, and deepfake generation.

The Act introduces a four-tier risk framework:

  1. Unacceptable Risk – Completely banned (e.g. social scoring, real-time remote biometric identification in publicly accessible spaces).
  2. High Risk – Heavily regulated (e.g. education, healthcare, recruitment, law enforcement).
  3. Limited Risk – Subject to transparency obligations (e.g. chatbots).
  4. Minimal Risk – Allowed with no extra obligations (e.g. spam filters).

That structure makes it more navigable than it looks — especially once you accept that not all AI needs to be treated as dangerous.
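
To make the tiers concrete for myself, I sketched them as a small Python structure. This is illustrative only: `RiskTier`, `EXAMPLE_TIERS`, and `classify` are my own hypothetical names, and a real classification depends on the Act's annexes and on how a system is actually deployed, not on a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment, documentation, oversight
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no extra obligations

# Hypothetical examples only; real classification follows the Act's
# annexes and the system's actual context of use.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a known use case; anything unknown needs a proper assessment."""
    if use_case not in EXAMPLE_TIERS:
        raise ValueError(f"No tier on file for {use_case!r}; run a real risk assessment.")
    return EXAMPLE_TIERS[use_case]

print(classify("customer_chatbot"))  # RiskTier.LIMITED
```

The point of the sketch is the failure mode: a system you haven't assessed should be treated as unclassified, not assumed minimal.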

What It Means in Practice

If you’re deploying AI in or for the EU, the law will affect how you work, even if your system isn’t high-risk. You’ll be expected to:

  • Classify your system according to the risk tiers.
  • Maintain documentation about how the system was trained, validated, and monitored (a minimal sketch follows this list).
  • Ensure transparency in user-facing systems, for example, letting people know they’re interacting with a bot.
  • Be audit-ready, especially if you’re offering AI systems at scale or in sensitive domains.
  • Avoid prohibited practices entirely, such as manipulative or exploitative AI.
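
As a way to hold myself to the documentation and transparency points above, here is a minimal sketch of what an audit-ready record for one system could look like. `AISystemRecord` and every field on it are my own invention, a starting shape rather than anything the Act literally prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """A hypothetical audit-ready record for one deployed AI system."""
    name: str
    risk_tier: str              # one of: unacceptable, high, limited, minimal
    purpose: str                # intended use, in plain language
    training_data_summary: str  # what the model was trained or fine-tuned on
    validation_notes: str       # how performance was checked before release
    monitoring_plan: str        # how the system is watched in production
    last_reviewed: date = field(default_factory=date.today)
    user_disclosure: str = "You are chatting with an AI assistant."

record = AISystemRecord(
    name="support-chatbot",
    risk_tier="limited",
    purpose="Answer customer FAQs; hand off to a human on request.",
    training_data_summary="Fine-tuned on our public help-centre articles.",
    validation_notes="Manual review of 200 sampled answers per release.",
    monitoring_plan="Weekly sample audits; users can flag any answer.",
)
```

Even if a regulator never asks for it, keeping a record like this per system is what "knowing where your AI sits" looks like in practice.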

The fines for non-compliance are significant: up to €35 million or 7% of global annual turnover, whichever is higher. For most organisations, though, the real cost is reputational and strategic. Not knowing where your AI sits on the risk ladder makes you vulnerable.

Knowing puts you in a position of strength.

Accepting the Premise

I still believe regulation can be slow, and sometimes clumsy. But I’ve also come to believe that European AI law, with all its imperfections, is built on a set of ideas worth engaging with. That technology should serve people. That risk should be understood, not dismissed. That transparency isn’t a constraint, but a way to build trust.

These are not slogans; they’re working principles. And they require work. I’m not writing this from the position of someone who has fully figured it out. This article isn’t the end of a journey. It’s the start of a shift. A move from compliance avoidance to proactive orientation. From skimming the surface to building real understanding.

🚧
Update (June 2025):
The EU AI Act is already facing pressure for delay and revision. Industry groups and U.S. officials have called for more time to comply, while the European Commission is considering postponing parts of the law due to missing technical guidelines. Critics warn this uncertainty risks undermining trust in the EU’s regulatory leadership.

Where I’m Headed

I’m treating the AI Act as a foundation, something to build on, not just work around. Over the coming months I’ll explore what that means in practice: for chatbot development, business automation, digital agency work, and AI-powered content. I’ll write about risk classification, practical tooling, small-team compliance, and what it means to be “trustworthy” in a regulatory environment.

If you’re on a similar path, building digital systems that touch real people in a European context, you’ll likely face the same questions. This series is an attempt to think through them, in public, with a mix of strategy, practice, and curiosity.

I’m not trying to win the EU game. I’m learning how to play it. On my own terms.

