AI literacy: from definition to practice

AI literacy rarely shows up as a skill gap. It appears when organisations need to justify decisions, manage risk, and remain accountable.

For weeks, I found myself circling the same question: what is AI literacy, really?

Not in the abstract sense. I could list skills, risks, competencies. Plenty of reports do that already. But in conversations with organisations, that approach consistently led to friction. People would nod, agree in principle, and then stall.

The more I tried to pin AI literacy down as a clearly defined thing, the more it seemed to slip away.

What eventually helped me get unstuck was a shift in perspective: I stopped trying to define AI literacy as a stable concept and instead asked how it actually shows up when organisations try to work with AI in practice.

That changed everything.

Where AI literacy actually appears

AI literacy rarely enters organisations through enthusiasm. It appears at pressure points.

Procurement conversations. Legal reviews. Governance discussions. Moments where someone asks: “Are we comfortable with this?” or “Can we explain this if needed?”

It is not introduced as a learning objective. It emerges as a requirement for making decisions that feel defensible.

That already tells us something important: AI literacy is not primarily about knowledge acquisition. It is about judgement under uncertainty.

The Hot Potato of Compliance
From GDPR to the EU AI Act, a recurring pattern emerges: European regulation lands in procurement, spreading responsibility, caution, and friction.

The unease behind the term

Much of the discomfort around AI literacy comes from its vagueness. There is no licence to obtain, no test to pass, no moment where you are officially “done”.

And yet, there is a growing sense that organisations will be held to account for how they use AI.

That creates a particular kind of anxiety: fear without a clear trigger.

People imagine regulators, enforcement actions, penalties. In reality, that picture is usually exaggerated. There is no AI police. There are no random inspections looking for insufficient literacy.

What does exist is retrospective scrutiny.

When something goes wrong, when a decision is challenged, when a system becomes visible to the outside world, organisations are expected to explain themselves. AI literacy is inferred afterwards, from how decisions were made, documented, and governed.

That is uncomfortable, but it is also familiar. This is how responsibility works in many European regulatory contexts.

Regulation without instruction

The EU AI Act reinforces this pattern rather than replacing it.

It raises expectations, but it does not prescribe competence in procedural terms. Article 4 of the Act expects providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff, yet it does not tell organisations exactly how to be “AI-literate”. Instead, it assumes that organisations deploying AI take responsibility for understanding what they are doing.

That ambiguity is intentional. Fixed definitions would age badly. Context matters too much.

The result is not clarity, but latitude. And latitude shifts responsibility inward.

A practical reality check

One of the most grounding ways to understand this dynamic is to look at how risk is treated outside regulation.

There is no dedicated “AI literacy insurance”. AI-related risk is absorbed through existing instruments: professional liability, errors and omissions, cyber and legal expenses coverage.

Insurers do not ask whether an organisation is literate in the abstract. They look for signs of seriousness: governance structures, training efforts, decision documentation, escalation paths.

Not because they care about ethics, but because they need to price uncertainty.

That tells us something important. AI literacy is already being interpreted operationally, even if it is not labelled as such. It affects insurability, risk exposure, and the ability to respond when challenged.

This is not about avoiding risk. It is about keeping risk bounded.
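
As a purely illustrative sketch of what “interpreted operationally” can mean, a single decision-documentation entry might be captured in a structure like the one below. The Python form, the field names, and the values are assumptions made for this example only, not a prescribed or regulatory schema.

from dataclasses import dataclass, field
from datetime import date

# Hypothetical example: every field name below is illustrative, not a required schema.
@dataclass
class AIDecisionRecord:
    system_name: str                 # which AI system the decision concerns
    purpose: str                     # what the system is used for
    accountable_owner: str           # who remains accountable for outcomes
    risk_assessment_done: bool       # whether a formal risk assessment was performed
    known_limits: list[str] = field(default_factory=list)  # documented limits and failure modes
    escalation_contact: str = ""     # where concerns are routed
    decided_on: date = field(default_factory=date.today)   # when the decision was taken

# A minimal, made-up entry showing the kind of trail an organisation can point to later.
record = AIDecisionRecord(
    system_name="cv-screening-assistant",
    purpose="pre-sorting incoming job applications",
    accountable_owner="HR operations lead",
    risk_assessment_done=True,
    known_limits=["may rank non-standard CVs lower", "no rejection without human review"],
    escalation_contact="ai-governance@example.org",
)
print(record)

None of this guarantees a good outcome. The point is simply that a trail like this is the kind of signal insurers and reviewers read as seriousness.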

From fear to agency

Once that becomes visible, something shifts.

AI literacy stops feeling like an undefined threat and starts to look like a way of regaining agency. Not control, but orientation.

It becomes possible to say: we understand the system’s role, we know its limits, we have thought about failure modes, and we remain accountable for outcomes.

That does not make organisations safe. But it makes them defensible.

And defensibility is often what determines whether experimentation feels possible at all.

Literacy as organisational maturity

Recent research reinforces this view. Studies such as the EY European AI Barometer 2025 show that while AI adoption is accelerating across Europe, AI-specific governance, risk management, and training remain uneven and underdeveloped.

Only a minority of organisations conduct formal AI risk assessments, even as many expect substantial operational and financial impact.

What is often framed as a skills gap is, in reality, a maturity gap.

AI literacy, in this sense, is not about educating individuals in isolation. It is about aligning ambition, capability, governance, and responsibility across the organisation.

It is also a signal. For vendors, partners, and regulators alike, it indicates how much foundational work has already been done, and how much remains.

EY European AI Barometer 2025
The EY European AI Barometer 2025 report reveals the dynamic impact of Artificial Intelligence on different sectors and the labour market, highlighting both the opportunities and the challenging obstacles of AI adoption. Discover how organisations can navigate this transformative landscape and harness AI’s potential for growth.

Why I am comfortable with where this is heading

After spending time with the darker scenarios first, I find myself broadly comfortable with the direction AI literacy is taking in Europe.

Not because it is well defined, but because it reflects something reasonable: that powerful systems should not be deployed casually, and that responsibility should not dissolve into tooling.

AI literacy is not a brake on innovation. It is the condition under which responsible experimentation becomes possible.

By forcing organisations to think through their use of AI, it slows things down just enough to allow understanding to catch up with capability.

That feels less like a burden, and more like a necessary recalibration.

Where this leaves us

AI literacy is not something we can neatly define and move on from. It is something that evolves as practice evolves.

Trying to pin it down too early leads to paralysis. Watching how it manifests in real organisational behaviour brings clarity.

For me, that shift from definition to practice was the key to getting unstuck.

And it opens the space to engage with AI more openly, more confidently, and, paradoxically, more creatively than before.

