AI Governance and Regulations: From EU AI Act to ISO 42001

AI governance is where models meet the law: they leave the lab and enter a world of risk tiers, audits, and named obligations. This guide maps the major frameworks and what they require teams to actually build.

12 min read · March 17, 2026
AI Governance · EU AI Act · NIST RMF · ISO 42001 · Compliance · Risk

The regulatory map

Three frameworks, one intent: accountable AI at scale

Global AI regulation is converging on a risk-based approach: higher-risk applications face stricter requirements, while minimal-risk applications remain largely unregulated. Three frameworks dominate: the EU AI Act (legally binding in the EU, in force since August 2024 with most obligations applying from August 2026), the NIST AI RMF (voluntary in the US but rapidly becoming a procurement requirement), and ISO/IEC 42001 (the auditable management system standard).

Legally binding

EU AI Act

The world's first comprehensive AI law. Enforced via a four-tier risk classification system. High-risk systems (medical devices, hiring tools, credit scoring, critical infrastructure) face mandatory conformity assessments, registration, and ongoing monitoring obligations.

Risk management

NIST AI RMF

A voluntary but widely adopted US framework organized around four functions: Govern, Map, Measure, Manage. Designed to complement existing risk management practices rather than replace them. Increasingly required by US federal procurement.

Auditable

ISO/IEC 42001

An AI Management System standard analogous to ISO 27001 for information security. Provides an auditable framework for organizations to demonstrate responsible AI development and deployment. Third-party certification available.

EU AI Act risk tiers

Four tiers, clear obligations — the pyramid that governs AI in Europe

The EU AI Act classifies AI systems into four mutually exclusive risk tiers, often drawn as a pyramid: a narrow band of banned applications at the top, high-risk systems with significant regulatory burden below, limited-risk systems with transparency obligations next, and minimal-risk systems with no obligations at the base. Understanding where your system lands is the first compliance step. Most consumer AI falls in the bottom two tiers.

1

Unacceptable Risk — Banned

Prohibited

Social scoring by governments, real-time remote biometric identification in public spaces (with narrow exceptions), subliminal manipulation techniques, and AI that exploits vulnerable groups. These applications have been prohibited outright since February 2025.

2

High Risk — Heavy obligations

Mandatory compliance

Recruitment tools, credit scoring, medical devices, critical infrastructure, law enforcement, education assessment, and migration management. Requires conformity assessment, technical documentation, human oversight mechanisms, and EU registration.

3

Limited Risk — Transparency

Disclosure only

Chatbots, deepfakes, emotion recognition systems. Users must be informed they are interacting with AI. Deepfake content must be labeled as artificially generated.

4

Minimal Risk — No obligations

Voluntary

Spam filters, AI in video games, product recommendation engines. No regulatory obligations, though the Act encourages voluntary codes of conduct.

Up to €15M or 3% of global annual turnover

Maximum fine for breaching high-risk obligations

August 2, 2026

Most provisions fully applicable
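The tier logic above can be sketched as a first-pass triage helper. This is a hypothetical illustration only: the use-case keywords are invented stand-ins for the Act's Article 5 prohibitions and Annex III high-risk categories, and a real classification requires legal review of the full text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned under Article 5
    HIGH = "high"                  # Annex III: conformity assessment required
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no obligations

# Illustrative keyword sets, NOT the Act's legal definitions.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"recruitment", "credit_scoring", "medical_device"}
TRANSPARENCY_USES = {"chatbot", "deepfake", "emotion_recognition"}

def classify(use_case: str) -> RiskTier:
    """First-pass triage of an AI use case into an EU AI Act risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Even a rough helper like this is useful as a forcing function: it makes teams name the use case explicitly before shipping, which is exactly what the Act's classification step demands.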

NIST AI RMF deep dive

Govern, Map, Measure, Manage: a practical risk cycle

The NIST AI RMF structures AI risk management as a continuous cycle across four core functions. Unlike the EU AI Act, which prescribes specific technical requirements, the RMF is outcome-focused: it tells you what to achieve, not how. This makes it adaptable to any technology stack or organization size.

Foundation

Govern

Establish organizational policies, roles, and accountability structures for AI risk. Define risk tolerance thresholds. Assign AI risk owners. Create a cross-functional AI review board with representation from engineering, legal, and affected communities.

Inventory

Map

Identify AI systems in use, categorize their risk level, and understand their context of use. Document intended and unintended uses. Map data flows, model dependencies, and potential harm scenarios per system.
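The Map function boils down to keeping a structured inventory. A minimal record per system might look like the sketch below; the field names are an assumed schema, not anything the RMF prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (hypothetical schema)."""
    name: str
    owner: str              # accountable risk owner, per the Govern function
    intended_use: str
    risk_level: str         # e.g. "high", "limited", "minimal"
    data_sources: list = field(default_factory=list)
    model_dependencies: list = field(default_factory=list)
    harm_scenarios: list = field(default_factory=list)
```

Keeping this as structured data rather than a wiki page means the inventory can feed the Measure and Manage functions programmatically, e.g. to schedule audits for every record with `risk_level == "high"`.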

Quantify

Measure

Quantify identified risks using established metrics: fairness benchmarks, robustness tests, privacy assessments, explainability scores. Conduct red-team exercises and third-party audits on high-risk systems.
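One of the fairness benchmarks mentioned above, demographic parity difference, is simple enough to sketch directly: the gap between the highest and lowest positive-prediction rates across groups. This is a minimal from-scratch version for illustration; production systems would typically use a maintained library.

```python
def demographic_parity_difference(y_pred, groups):
    """Max gap in positive-prediction rate across demographic groups.

    y_pred: iterable of 0/1 predictions
    groups: iterable of group labels, same length
    Returns 0.0 for perfect parity, up to 1.0 for maximal disparity.
    """
    rates = {}  # group -> (count, positives)
    for p, g in zip(y_pred, groups):
        n, pos = rates.get(g, (0, 0))
        rates[g] = (n + 1, pos + (1 if p else 0))
    ratios = [pos / n for n, pos in rates.values()]
    return max(ratios) - min(ratios)
```

A common pattern is to gate deployment on a threshold, e.g. fail the release pipeline when the difference exceeds an agreed tolerance from the Govern function.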

Control

Manage

Implement controls, prioritize risk response based on severity and likelihood, monitor continuously, and update risk assessments as systems change. Maintain incident response plans for AI-specific failure modes.
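Continuous monitoring in the Manage function often uses the population stability index (PSI) to detect distribution drift between a baseline and live scores. A minimal sketch, assuming equal-width bins and a small smoothing constant; the common rule of thumb that PSI above roughly 0.2 warrants review is a convention, not an RMF requirement.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (expected) and live (actual) score sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def bin_fractions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Smooth so empty bins do not produce log(0).
        return [(c + 1e-6) / (len(xs) + bins * 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Wired into a scheduled job, this turns "monitor continuously" from a policy statement into an alert that fires when live traffic stops resembling the data the risk assessment was based on.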

What this means for engineers

Regulatory requirements translate directly into technical artifacts

Compliance is not just a legal-team concern: every requirement in the EU AI Act and NIST RMF maps to a specific engineering artifact or process. Building compliance-ready AI from the start is far cheaper than retrofitting it. The checklist below maps requirements to deliverables.

Checklist

  • Model cards: document intended use, training data sources, evaluation metrics, known limitations, and demographic performance breakdowns.
  • Data lineage: every training dataset version must be traceable to its source with processing history and quality checks documented.
  • Conformity assessment: for high-risk EU AI Act systems, conduct a formal conformity assessment (self-assessment or third-party audit) before market placement.
  • Human oversight mechanisms: high-risk AI decisions must allow a human to override, correct, or reject the AI output with access to the underlying reasoning.
  • Incident logging: log model decisions for high-risk systems; retain technical documentation for 10 years after the system is placed on the market, and automatically generated logs for at least six months, or longer where other law requires.
  • Continuous monitoring: implement drift detection and performance monitoring; report serious incidents to the relevant market surveillance authority no later than 15 days after becoming aware of them.
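The model-card item in the checklist is ultimately structured data. A minimal sketch follows; the field names and every value are illustrative inventions loosely following common model-card practice, not an official schema.

```python
import json

# Hypothetical model card as structured data (all values are made up
# for illustration; field names follow common model-card practice).
model_card = {
    "model_name": "credit-scoring-v3",
    "intended_use": "Consumer credit risk scoring in the EU",
    "out_of_scope_uses": ["employment decisions"],
    "training_data": {
        "sources": ["internal_loans_2018_2024"],
        "version": "2024-11",
    },
    "evaluation": {
        "auc": 0.81,
        "demographic_parity_difference": 0.04,
    },
    "known_limitations": ["thin-file applicants underrepresented"],
    "human_oversight": "Loan officers can override any automated decline",
}

print(json.dumps(model_card, indent=2))
```

Storing the card as JSON next to the model weights, and failing CI when required fields are missing, is one straightforward way to make the documentation obligation enforceable rather than aspirational.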

Regulatory requirement to engineering artifact map

  • EU AI Act Article 10 (data governance) → data lineage graph
  • Article 13 (transparency) → model card
  • Article 14 (human oversight) → override UI + audit log
  • Article 17 (quality management) → CI/CD quality gate