ISO/IEC 42001:2023 — AI Management Systems

ISO/IEC 42001:2023 is the first certifiable international standard for AI management systems. It defines governance, risk, ethics, and accountability requirements for any organisation that develops or uses AI, integrates with ISO 27001 via the shared Annex SL structure, and covers roughly 40-50% of the EU AI Act's governance obligations.

Published December 2023. The first internationally recognised standard that lets an organisation obtain third-party certification for how it develops, deploys, and governs AI systems. Analogous to ISO 27001 for information security or ISO 9001 for quality management — it is a management system standard (MSS), not a product specification.


What It Is

ISO/IEC 42001 specifies requirements for an Artificial Intelligence Management System (AIMS): the policies, processes, roles, and controls an organisation puts in place to ensure responsible, accountable, and trustworthy use of AI across its operations.

Key characteristics:

  • Certifiable: an accredited third-party certification body (CB) audits and issues certification. ANAB launched its AIMS accreditation programme in January 2024; CBs including DNV, BSI, and SGS now issue certificates.
  • Plan-Do-Check-Act structure: same lifecycle as ISO 27001 and ISO 9001.
  • Technology-agnostic: covers any AI system the organisation provides or uses — not limited to LLMs.
  • Risk-based: controls are selected based on assessed risk, not applied uniformly.

Scope

Applies to any organisation that:

  • Designs, develops, or trains AI systems (providers)
  • Deploys or operates AI systems in products or services (deployers)
  • Uses AI systems as part of internal operations (users)

Coverage spans the full AI lifecycle: design, development, deployment, operation, monitoring, and maintenance. Organisations scope their AIMS to a defined set of AI systems — they do not need to cover every AI touchpoint on day one.


Core Requirements (Clauses 4-10)

The standard follows the Annex SL high-level structure, identical to ISO 27001 and ISO 9001 at the clause level:

Clause   Name                          AI-specific content
4        Context of the organisation   AI policy; stakeholder expectations; AI system inventory
5        Leadership                    Top management accountability; AI roles and responsibilities; AI ethics commitments
6        Planning                      AI risk assessment; AI impact assessment; AIMS objectives
7        Support                       AI competence requirements; AI literacy training; AI-specific documentation requirements
8        Operation                     AI system lifecycle controls; data governance; supply chain and third-party AI controls
9        Performance evaluation        AIMS monitoring; internal audit; management review of AI risks
10       Improvement                   Nonconformity handling; corrective action; continual improvement of the AIMS

Annex A defines the specific controls — organisations select applicable controls based on their risk assessment. Controls cover: AI system impact assessment, bias testing, human oversight mechanisms, model documentation, data quality and lineage, transparency to affected parties, and incident response for AI failures.
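A minimal sketch of this risk-based control selection, assuming hypothetical control names and risk categories (Annex A's actual numbering and titles differ):

```python
# Illustrative mapping from assessed risk categories to applicable
# controls. These identifiers are hypothetical, not Annex A's own.
RISK_TRIGGERS = {
    "bias": ["impact_assessment", "bias_testing", "human_oversight"],
    "data_quality": ["data_lineage", "data_quality_checks"],
    "opacity": ["model_documentation", "transparency_notice"],
}

def select_controls(assessed_risks):
    """Return the set of controls triggered by the assessed risks."""
    controls = set()
    for risk in assessed_risks:
        controls.update(RISK_TRIGGERS.get(risk, []))
    return controls

# A system assessed with bias and opacity risks:
print(sorted(select_controls({"bias", "opacity"})))
```

The point of the sketch is the direction of the dependency: the risk assessment drives the control set, rather than every control applying uniformly.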


Relationship to Other Standards

ISO 27001 (Information Security)

The most important integration. Both standards share the Annex SL clause structure (4-10), so an existing ISMS can be extended with AIMS controls rather than building a parallel system. ISO 27001 protects the infrastructure (servers, networks, data stores); ISO 42001 governs the AI logic running on top of it — bias behaviour, decision accountability, model transparency. Organisations with an existing ISO 27001 certification typically reach ISO 42001 readiness substantially faster than those starting from scratch, since much of the management-system machinery is already in place.

ISO 9001 (Quality Management)

Same Annex SL structure. Quality processes already documented for ISO 9001 (process control, nonconformity, audit) apply directly to the AIMS clauses.

ISO 31000 (Risk Management)

ISO 42001's risk assessment clause references ISO 31000 methodology. Organisations using ISO 31000 already have a compatible risk framework; they extend it to cover AI-specific risks (bias, model failure, explainability gaps).


Relationship to the EU AI Act

The EU AI Act defines legal obligations (what must be achieved). ISO 42001 defines an operational framework (how to run, evidence, and continuously improve governance). They are complementary, not substitutes.

Overlap is roughly 40-50% of the high-level requirements, concentrated in:

  • Risk management: the Act's risk-tier classification parallels ISO 42001's risk assessment requirements
  • Data governance: Article 10 data requirements (categorisation, bias detection) map directly to ISO 42001 Annex A controls
  • Transparency and documentation: both require technical documentation, instructions for use, and record-keeping
  • Human oversight: both mandate mechanisms for human intervention in AI decisions

ISO 42001 certification does not grant EU AI Act compliance. For high-risk AI systems under the Act, specific conformity assessment obligations apply that go beyond what ISO 42001 covers (CE marking, notified body assessment for certain categories). However, a certified AIMS substantially satisfies the governance, risk management, and documentation obligations — particularly for GPAI model providers subject to the transparency duties that began applying from August 2025.


Relationship to SOC 2

SOC 2 Type II covers security, availability, processing integrity, confidentiality, and privacy of a service organisation's systems. It has no AI-specific controls. ISO 42001 fills the gap for:

  • Model bias and fairness
  • AI decision accountability and explainability
  • AI lifecycle governance (training data lineage, version control, model retirement)
  • Human oversight of automated decisions

In enterprise procurement, ISO 42001 is increasingly appearing alongside SOC 2 Type II in vendor questionnaires, particularly in finance, healthcare, and government contracts. Neither replaces the other; they address different risk surfaces.


Certification Process

Five stages — total elapsed time for most small-to-mid organisations is 4-9 months:

  1. Gap assessment (2-4 weeks): map current AI governance practices against ISO 42001 clauses and Annex A controls; produce a prioritised remediation list.
  2. Implementation (1-3 months): write documentation, implement controls, run AI impact assessments, establish audit trails.
  3. Internal audit (2-4 weeks): verify controls are operating as designed before inviting external auditors.
  4. Certification audit — Stage 1 (1-2 days): documentation review; auditor confirms the AIMS is adequately documented and ready for Stage 2.
  5. Certification audit — Stage 2 (3-9+ days depending on scope): auditors interview personnel, review evidence, test controls in operation; nonconformities must be closed before certificate is issued.

Annual surveillance audits maintain the certificate. Full recertification every three years.


Practical Implementation for an AI Team

Engineers implement the controls even if they are not responsible for the certification process. The controls that require engineering work:

Documentation to produce:

  • AI system inventory: for each AI system in scope, document purpose, inputs, outputs, training data sources, and risk classification
  • AI impact assessment: structured analysis of potential harms, affected populations, bias risks, and mitigations — produced before deployment, reviewed at major updates
  • Data governance policy: data lineage, quality checks, permitted use, retention, and bias detection procedures
  • AI incident log: structured record of model failures, unexpected outputs, bias incidents, and corrective actions
  • Training records for AI literacy: evidence that staff interacting with AI systems understand their limitations and oversight responsibilities
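The inventory and impact-assessment items above can be captured in a simple record schema. A sketch using a dataclass, where the field names are illustrative rather than prescribed by the standard:

```python
from dataclasses import dataclass

# Hypothetical minimal schema for one AI system inventory entry.
@dataclass
class AISystemRecord:
    name: str
    purpose: str
    inputs: list
    outputs: list
    training_data_sources: list
    risk_classification: str          # e.g. "low", "limited", "high"
    impact_assessment_done: bool = False
    owner: str = "unassigned"

record = AISystemRecord(
    name="loan-triage-model",
    purpose="Rank loan applications for manual review",
    inputs=["application form fields"],
    outputs=["priority score"],
    training_data_sources=["historical applications 2019-2023"],
    risk_classification="high",
)
print(record.risk_classification, record.impact_assessment_done)
```

Defaulting `impact_assessment_done` to `False` makes the gap visible: a high-risk record with the flag unset is exactly what an auditor will look for.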

Controls to implement:

  • Human oversight hooks: decision points where a human can review, override, or halt an AI decision — particularly for consequential outputs (credit decisions, triage, hiring)
  • Bias testing: systematic evaluation of model outputs across demographic slices before deployment and after significant retraining
  • Model versioning and rollback: ability to identify which model version produced any given output, and to roll back to a prior version
  • Decision explainability: for each AI system in scope, a defined approach for explaining decisions to affected parties (not necessarily full mechanistic interpretability — SHAP scores or rule-based post-hoc explanations often suffice)
  • Supply chain controls: evidence that third-party AI components (foundation model providers, embedding APIs, vector store vendors) meet equivalent governance standards — typically via their ISO 42001 or SOC 2 certifications
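As a sketch of the bias-testing control, one common approach is to compare positive-outcome rates across demographic slices. The four-fifths ratio threshold used here is a convention borrowed from employment-law practice, not an ISO 42001 requirement:

```python
# Illustrative bias check: compare selection rates across groups and
# compute a disparate-impact ratio. Group labels and data are made up.

def selection_rates(outcomes):
    """outcomes: {group: list of 0/1 model decisions} -> {group: rate}"""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],   # 5/8 = 0.625
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 3/8 = 0.375
}
ratio = disparate_impact_ratio(outcomes)
print(round(ratio, 2))  # 0.375 / 0.625 = 0.6, below a 0.8 threshold
```

In practice this runs as a pre-deployment gate and again after significant retraining, with results filed as audit evidence.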

Market Adoption

ISO 42001 is becoming a procurement gate in regulated industries:

  • Financial services: regulators in multiple jurisdictions are referencing ISO 42001 as a recognised governance framework for AI in credit, fraud detection, and customer-facing systems
  • Healthcare: AI-assisted clinical decision support vendors are being asked to demonstrate AIMS certification alongside existing quality management certifications
  • Government contracts: EU procurement increasingly specifies ISO 42001 for AI suppliers handling personal data or supporting public-facing decisions

Enterprise sales cycles in 2025-2026 routinely include ISO 42001 in AI vendor questionnaires, typically paired with SOC 2 Type II. Platform providers such as AWS and Microsoft Azure have both achieved ISO 42001 certification for their AI services, establishing an expectation that serious AI infrastructure providers hold it.


Why Engineers Need to Know This

Certification is owned by compliance and legal, but the controls are owned by engineering. A certified AIMS is only as good as what runs in production. The audit will check:

  • Whether the AI system inventory reflects what is actually deployed
  • Whether bias test results exist and are acted upon
  • Whether human oversight mechanisms are operational, not just documented
  • Whether the incident log has real entries — not a blank template
  • Whether model versioning and audit trails are present in the CI/CD pipeline
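A minimal version of the audit-trail check above: tie every output to a fingerprint of the exact model artefact that produced it. Hashing the weights file is one common convention, assumed here rather than mandated by the standard:

```python
import datetime
import hashlib
import json

def model_fingerprint(weights_bytes):
    """Stable identifier for the exact model artefact in use."""
    return hashlib.sha256(weights_bytes).hexdigest()[:12]

def audit_entry(model_id, request_id, output):
    """One structured log line linking an output to its model version."""
    return json.dumps({
        "model_version": model_id,
        "request_id": request_id,
        "output": output,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

mid = model_fingerprint(b"weights-v3")  # stand-in for the real weights file
entry = audit_entry(mid, "req-001", {"score": 0.87})
print(entry)
```

With entries like this in place, "which model version produced this output?" becomes a log query rather than a forensic exercise, which is what the rollback control depends on.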

Engineers who understand ISO 42001 can build these controls in from the start rather than retrofitting them under audit pressure. The standard is also useful as a design checklist: if you cannot write an AI impact assessment for a system you're building, that is a signal the design is not mature enough to deploy.


Connections

  • landscape/regulation — EU AI Act risk tiers, GPAI obligations, compliance timeline
  • security/owasp-llm-top10 — OWASP LLM Top 10 2025 and Agentic Top 10 2026 — the security complement to ISO 42001's governance controls
  • safety/alignment — RSP, red teaming, scalable oversight — the research side of AI safety that ISO 42001 operationalises at the organisational level
  • agents/practical-agent-design — layered guardrails and human oversight patterns for agentic systems
  • evals/methodology — evaluation pipelines that generate the evidence ISO 42001 audits require

Open Questions

  • How have CBs (BSI, DNV, SGS) differed in their Annex A control interpretation during Stage 2 audits, and is there emerging consensus?
  • For agentic AI systems with no fixed training data lineage, how should the data governance controls in Annex A be applied?
  • Is ISO 42001 likely to require a major revision once the EU AI Act's delegated acts and standards are finalised post-2026?