Exploring ISO 42001 and AI governance

As I dive deeper into EU AI regulation, ISO 42001 keeps surfacing. This piece explores what it is, how certification works, and why it matters for vendors.

It is the start of a new year, and like many people working with AI, I have been spending time reading things that are not models, prompts, or product updates.

Instead, I have been reading regulation.

The EU AI Act. Privacy law. Compliance language. Procurement questionnaires. The kinds of documents that do not describe how AI works, but how organisations are expected to live with it.

When you follow those threads for a while, you start to notice a different layer of the AI conversation. Less about innovation, more about responsibility. Less about what AI can do, more about who is accountable when it does.

That is how I recently ran into ISO 42001.

I am not writing this to argue for or against it. At this stage, I am simply trying to understand what it is, why it exists, and why it is starting to appear in conversations around AI, compliance, and procurement.

What ISO standards are, briefly

ISO is the International Organization for Standardization. It does not regulate, and it does not certify organisations itself. What it does is define shared standards that describe how organisations can structure their work around specific themes.

Many of the better-known ISO standards focus on management systems. ISO 9001 for quality management. ISO 27001 for information security. ISO 27701 for privacy.

These standards do not say “your outcomes must be good”. They say: if you care about this topic, here is how responsibility, risk, documentation, and continuous improvement are typically organised.

Certification is handled by accredited third parties, not by ISO itself. Organisations are audited against the standard by certification bodies, which are in turn overseen by national accreditation authorities.

It is a layered system. Procedural. Institutional. Familiar to anyone who has worked with large organisations or public-sector procurement.

ISO - International Organization for Standardization
We’re ISO, the International Organization for Standardization. We develop and publish International Standards.

What ISO 42001 adds to that landscape

ISO 42001 is the first ISO management system standard dedicated to AI. More precisely, it defines the requirements for an Artificial Intelligence Management System.

That phrasing matters.

This is not a technical AI standard. It does not evaluate models, measure bias, or certify that an AI system is “ethical”. Instead, it asks whether an organisation has a structured way to govern AI across its lifecycle.

Things like:

  • defining which AI systems are in scope,
  • assigning responsibility and oversight,
  • assessing risks and potential impacts,
  • managing data, suppliers, and third-party components,
  • handling incidents and continuous improvement.

In other words, it standardises how questions about AI are handled, not the answers themselves.

If you are familiar with ISO 27001, the logic will feel recognisable. ISO 42001 follows the same management-system structure, just applied to AI.

ISO/IEC 42001:2023
Information technology — Artificial intelligence — Management system

Why this shows up in procurement, not product design

Formally, ISO 42001 applies to “organisations”. In practice, it attaches itself most strongly to vendors.

That is not because vendors are morally more responsible, but because procurement needs certainty.

Large organisations, public bodies, and regulated sectors often prefer not to deeply inspect every supplier’s internal practices. Certification becomes a shortcut. A way to say: an independent party has checked that there is a governance system in place.

In that sense, ISO 42001 functions a bit like insurance.

Not because it prevents problems, but because it redistributes risk and responsibility. It creates a shared baseline that allows organisations to move forward without fully understanding each other’s internal complexity.

This also explains why such standards are often more relevant for AI vendors than for advisory or exploratory work. The closer you are to operating AI systems in production, the more likely these questions become unavoidable.

The Hot Potato of Compliance
From GDPR to the EU AI Act, a recurring pattern emerges: European regulation lands in procurement, spreading responsibility, caution, and friction.

How this relates to the EU AI Act

ISO 42001 is not law. It does not replace the EU AI Act, and it does not guarantee compliance.

But the two are clearly moving in the same direction.

The AI Act introduces a risk-based regulatory framework. ISO 42001 introduces a risk-based governance framework. One is legal, the other procedural.

What ISO 42001 offers is not legal compliance, but organisational readiness. A way for companies to show that they have thought about responsibility, oversight, and control in a structured way.

For regulators, clients, and procurement teams, that distinction often matters less than one might expect.

Learning to Work with the EU AI Act
I used to avoid EU regulation. Now I’m learning to work with it. The AI Act isn’t perfect, but it’s shaping how I think about risk, trust, and tech.

Where this fits into my own exploration

For me, this sits alongside a broader research question: how responsibility for AI is being redistributed.

On the one hand, there is a strong emphasis on AI literacy. Organisations are encouraged to understand AI better, make informed decisions, and build internal competence.

On the other hand, there is a parallel move to push complexity upstream. To require vendors to demonstrate that they have governance systems in place, so that not every organisation has to reinvent that wheel.

ISO 42001 clearly belongs to the second category.

It does not replace understanding. But it does change where questions are asked, and who is expected to answer them.

In my own work, this is not a primary goal right now. It is something I am keeping in my peripheral vision. Something to be aware of as I look at my own organisation, the vendors we work with, and the institutional environment forming around AI.

Not as a decision to make, but as a signal.

Why it is worth noticing, even if you do nothing with it

You do not need to pursue ISO 42001 to find it interesting.

Its value, at least for me, lies in what it reveals about the direction things are moving. AI is no longer just a technical or creative concern. It is becoming part of the same governance machinery that already exists for quality, security, and privacy.

Whether that is reassuring, constraining, or simply inevitable is not something I am trying to resolve here.

For now, it is enough to recognise that this layer exists, to understand roughly how it works, and to notice when it starts to appear in conversations.

Sometimes orientation is more useful than opinion.
