A compliance officer recently asked me, "If we follow NIST, do we still need to worry about the EU AI Act?" The answer, unfortunately, is: it's complicated. But it doesn't have to be overwhelming.
Let's walk through the major AI governance frameworks, what they require, and how to build an approach that satisfies multiple regulatory regimes without drowning in bureaucracy.
The Major AI Governance Frameworks
NIST AI Risk Management Framework (AI RMF)
Origin: U.S. National Institute of Standards and Technology (January 2023)
Status: Voluntary guidance for U.S. organizations
Key Focus: Risk-based approach to AI development and deployment
NIST AI RMF provides a flexible framework built around four core functions: Govern, Map, Measure, and Manage. It's not prescriptive—instead, it gives organizations a structure for identifying and addressing AI risks appropriate to their context.
EU AI Act
Origin: European Union (entered into force August 2024)
Status: Mandatory law with significant penalties
Key Focus: Risk-tiered regulation based on AI system type
The EU AI Act is the world's most comprehensive AI regulation. It categorizes AI systems as unacceptable risk (banned), high-risk (heavily regulated), limited risk (transparency obligations), and minimal risk (unregulated). Penalties can reach €35 million or 7% of global annual turnover, whichever is higher.
ISO/IEC 42001
Origin: International Organization for Standardization / International Electrotechnical Commission (December 2023)
Status: Certification standard
Key Focus: AI management system specification
ISO 42001 provides requirements for establishing, implementing, and maintaining an AI Management System (AIMS). It's certifiable, meaning organizations can undergo audits to demonstrate compliance—useful for enterprises that need to prove governance to customers or partners.
SOC 2 + AI Controls
Origin: AICPA (evolving)
Status: Industry standard certification
Key Focus: Trust service criteria applied to AI systems
SOC 2 audits are already common for cloud services. Auditors increasingly expect AI-specific controls covering data quality, model governance, and output reliability. This isn't a separate framework; it's AI-specific criteria applied within existing SOC 2 audits.
Understanding the EU AI Act Categories
The EU AI Act is the most consequential regulation for most organizations, so let's dig deeper:
Unacceptable Risk (Banned)
- Social scoring systems
- Real-time biometric identification in public spaces (with limited exceptions)
- Manipulative AI that exploits vulnerabilities
- Emotion recognition in workplaces and schools
High-Risk
AI systems in these areas face the strictest requirements:
- Critical infrastructure
- Education and vocational training
- Employment and worker management
- Essential services (credit, insurance, public benefits)
- Law enforcement
- Border control and asylum
- Justice administration
High-risk systems must implement:
- Risk management systems
- Data governance requirements
- Technical documentation
- Logging and traceability (see the sketch after this list)
- Human oversight mechanisms
- Accuracy and robustness standards
- Conformity assessments
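The logging and traceability obligation translates directly into engineering work. As a rough illustration (the record fields and file format here are assumptions, not terms defined by the Act), a high-risk system might emit an append-only, structured audit record for every inference:

```python
import hashlib
import json
import time
import uuid
from dataclasses import dataclass, asdict
from typing import Optional

# Illustrative audit record; field names are assumptions, not terms from the Act.
@dataclass
class InferenceAuditRecord:
    record_id: str
    timestamp: float
    model_id: str            # which model version produced the output
    input_hash: str          # hash instead of raw input, to limit personal data in logs
    output_summary: str      # abbreviated output kept for later review
    human_reviewer: Optional[str]  # who, if anyone, reviewed the decision

def log_inference(path: str, record: InferenceAuditRecord) -> None:
    """Append one JSON line per inference; append-only logs support traceability."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

raw_input = "applicant_id=123; income=...; requested_amount=..."
log_inference("audit.jsonl", InferenceAuditRecord(
    record_id=str(uuid.uuid4()),
    timestamp=time.time(),
    model_id="credit-scoring-v3",
    input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
    output_summary="application declined; top factor: debt-to-income ratio",
    human_reviewer=None,
))
```

Hashing inputs rather than storing them raw keeps the audit trail useful without turning it into a second store of personal data.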
Limited Risk
Chatbots, deepfakes, and emotion recognition systems (outside the banned workplace and school contexts) must disclose their AI nature to users. This sounds simple, but implementation requires care: "made by AI" disclaimers need to be meaningful, not buried in terms of service.
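One way to keep the disclosure meaningful is to attach it to the response payload itself, so every client surface has to render it. A minimal sketch; the field names and wording are hypothetical, not anything the Act prescribes:

```python
from typing import TypedDict

class ChatResponse(TypedDict):
    text: str
    ai_generated: bool   # machine-readable flag clients can act on
    disclosure: str      # human-readable notice shown alongside the reply

def with_disclosure(model_output: str) -> ChatResponse:
    """Wrap model output so the AI disclosure travels with the content,
    rather than living only in the terms of service."""
    return {
        "text": model_output,
        "ai_generated": True,
        "disclosure": "This response was generated by an AI system.",
    }

print(with_disclosure("Your order ships Tuesday."))
```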
Minimal Risk
Most AI applications fall here—spam filters, recommendations, internal tools. No specific requirements, though general data protection and consumer protection laws still apply.
The NIST AI RMF Deep Dive
NIST organizes AI risk management around four functions:
GOVERN
Establish the organizational culture and structure for AI risk management:
- Define roles and responsibilities
- Establish AI policies aligned with organizational values
- Create accountability mechanisms
- Integrate AI risk into enterprise risk management
MAP
Identify and document AI system characteristics and risks:
- Understand the AI system's context of use
- Catalog AI system components and dependencies (see the inventory sketch after this list)
- Identify potential impacts and harms
- Engage stakeholders affected by the AI
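A minimal sketch of what such an inventory entry might look like. The fields are assumptions about what a catalog could usefully track; NIST does not prescribe a schema:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical inventory entry for the MAP function; not a NIST-defined format.
@dataclass
class AISystemRecord:
    name: str
    owner: str       # accountable team or role (ties back to GOVERN)
    purpose: str     # context of use, in plain language
    dependencies: List[str] = field(default_factory=list)    # models, datasets, vendors
    affected_groups: List[str] = field(default_factory=list) # stakeholders to engage
    known_risks: List[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="resume-screener",
        owner="talent-platform-team",
        purpose="Rank inbound applications for recruiter review",
        dependencies=["embedding-model-v2", "applicant-db", "vendor:ats-api"],
        affected_groups=["job applicants", "recruiters"],
        known_risks=["disparate impact across demographic groups"],
    ),
]
```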
MEASURE
Assess identified risks:
- Evaluate AI system performance and reliability
- Test for bias and fairness issues (see the sketch after this list)
- Assess security vulnerabilities
- Monitor for emerging risks
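To make the bias-testing item concrete, here is a minimal sketch that computes per-group selection rates and a demographic-parity gap. The groups, data, and any alert threshold are illustrative assumptions:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def selection_rates(outcomes: List[Tuple[str, bool]]) -> Dict[str, float]:
    """outcomes: (group, was_selected) pairs; returns the selection rate per group."""
    totals: Dict[str, int] = defaultdict(int)
    selected: Dict[str, int] = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")  # flag for review if gap > 0.10 (assumed threshold)
```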
MANAGE
Treat risks based on priority:
- Implement controls for identified risks
- Document residual risks and acceptance decisions (see the register sketch after this list)
- Establish incident response procedures
- Plan for AI system decommissioning
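The documentation items in MANAGE map naturally onto a lightweight risk register. A sketch with assumed fields; the point is that residual-risk acceptance gets a named owner and a review date:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical risk-register entry for the MANAGE function.
@dataclass
class RiskEntry:
    risk: str
    system: str
    severity: str              # e.g. "low" / "medium" / "high"
    control: Optional[str]     # mitigation applied, if any
    residual_accepted_by: Optional[str]  # named owner who signed off on residual risk
    review_by: date            # forces periodic re-assessment, not set-and-forget

register = [
    RiskEntry(
        risk="model drift degrades accuracy on new applicant pool",
        system="resume-screener",
        severity="medium",
        control="monthly performance benchmark against holdout set",
        residual_accepted_by="head-of-talent-platform",
        review_by=date(2025, 6, 30),
    ),
]
```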
Operationalizing Framework Requirements
Frameworks tell you what to do, but not how to do it. Prime AI Guardrails provides the technical controls that implement framework requirements, from the logging and traceability required by the EU AI Act to the risk management processes outlined in the NIST AI RMF.
Building a Unified Governance Approach
You don't need separate programs for each framework. Here's how to build one approach that satisfies multiple requirements:
1. Start with Risk Classification
Create a risk tiering system that maps to EU AI Act categories while supporting NIST risk assessment. Every AI system gets classified, which determines what controls apply.
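In code, the tiering can start as a simple decision function over a few system attributes. This is a deliberately simplified sketch, not a legal analysis; real classification of edge cases needs counsel:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned under the EU AI Act
    HIGH = "high"                   # full control set applies
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # baseline controls only

# Simplified, illustrative tiering logic; domain labels are assumptions.
HIGH_RISK_DOMAINS = {"employment", "credit", "education", "critical-infrastructure",
                     "law-enforcement", "border-control", "justice"}

def classify(domain: str, is_banned_practice: bool, interacts_with_users: bool) -> RiskTier:
    if is_banned_practice:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_users:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("employment", False, True))  # RiskTier.HIGH -> strictest controls apply
```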
2. Build Common Controls
Most requirements overlap. A single control library can satisfy multiple frameworks:
- Documentation: Required by all frameworks
- Logging: EU AI Act, NIST, SOC 2 all require it
- Human oversight: Central to EU AI Act and NIST
- Incident response: Expected by all
- Data governance: Universal requirement
3. Map to Multiple Frameworks
Create a mapping that shows how each control satisfies requirements across frameworks. This demonstrates coverage and identifies gaps.
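Steps 2 and 3 together can live in one small structure: tag each control with the frameworks it satisfies, and gap analysis becomes mechanical. The control and framework names below are shorthand assumptions, not official clause references:

```python
from typing import Dict, Set

# Illustrative control-to-framework mapping; names are shorthand, not clause IDs.
CONTROL_MAP: Dict[str, Set[str]] = {
    "structured-audit-logging":  {"EU-AI-Act", "NIST-AI-RMF", "SOC2"},
    "human-oversight-workflow":  {"EU-AI-Act", "NIST-AI-RMF"},
    "model-documentation":       {"EU-AI-Act", "NIST-AI-RMF", "ISO-42001", "SOC2"},
    "incident-response-runbook": {"EU-AI-Act", "NIST-AI-RMF", "ISO-42001", "SOC2"},
    "data-governance-policy":    {"EU-AI-Act", "NIST-AI-RMF", "ISO-42001", "SOC2"},
}

def coverage(framework: str) -> Set[str]:
    """Controls that count toward a given framework."""
    return {c for c, fws in CONTROL_MAP.items() if framework in fws}

def gaps(framework: str, required: Set[str]) -> Set[str]:
    """Required controls the library does not yet satisfy."""
    return required - coverage(framework)

print(coverage("SOC2"))
print(gaps("EU-AI-Act", {"structured-audit-logging", "conformity-assessment"}))
# -> {'conformity-assessment'}: a gap to close before a high-risk deployment
```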
4. Implement Technical Enforcement
Policies need teeth. Runtime guardrails enforce what governance documents prescribe. Without technical enforcement, you have governance theater.
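As a generic illustration of the pattern (not Prime AI Guardrails' actual API), a runtime check might run every model output through a list of policies and hold anything that violates one:

```python
from typing import Callable, List, Optional

# A policy returns a violation message, or None if the output passes.
Policy = Callable[[str], Optional[str]]

def no_unreviewed_denials(output: str) -> Optional[str]:
    # Assumed example rule: adverse decisions must go to human review first.
    if "declined" in output.lower():
        return "adverse decision requires human review before release"
    return None

def enforce(output: str, policies: List[Policy]) -> str:
    """Run every policy; block the output and escalate on the first violation."""
    for policy in policies:
        violation = policy(output)
        if violation is not None:
            # In a real system: log the event, notify a reviewer, return a safe fallback.
            return f"[held for review: {violation}]"
    return output

print(enforce("Your application was declined.", [no_unreviewed_denials]))
```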
5. Establish Continuous Monitoring
Compliance isn't a point-in-time assessment. Build dashboards showing governance metrics: policy violations, incident rates, human override frequency, etc.
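Those metrics can come straight out of the same audit and guardrail logs. A minimal aggregation sketch, assuming a JSONL log with an "event" field per line (a schema assumption, not a standard):

```python
import json
from collections import Counter

def governance_metrics(log_path: str) -> Counter:
    """Count event types (policy_violation, human_override, incident, ...) in a JSONL log.
    The 'event' field name is an assumed log-schema convention."""
    counts: Counter = Counter()
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            counts[json.loads(line).get("event", "unknown")] += 1
    return counts

# Feed the counts into whatever dashboard you already run, e.g.:
# governance_metrics("guardrail_events.jsonl")
# -> Counter({'policy_violation': 12, 'human_override': 3, 'incident': 1})
```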
Common Pitfalls to Avoid
- Paper compliance: Policies nobody follows don't satisfy regulators, who look for evidence of implementation.
- Over-classification: Labeling everything "high risk" creates unsustainable overhead. Be realistic about risk levels.
- Ignoring existing controls: You probably have data governance, security, and risk programs already. Extend them for AI rather than starting from scratch.
- Waiting for clarity: Regulations are still evolving, but core principles are clear. Start now and adapt.
- Treating it as purely legal: Governance requires engineering, product, and operations involvement—not just legal and compliance.
What This Means for Your Organization
If you're operating globally, assume you need to comply with the EU AI Act: it applies to AI systems placed on the EU market or whose outputs are used in the EU, regardless of where your company is based. Use NIST AI RMF as your operational framework for how to manage risks. Consider ISO 42001 certification if you need to demonstrate governance to enterprise customers.
If you're U.S.-only, NIST AI RMF is your starting point, but watch regulatory developments closely. U.S. AI regulation is evolving rapidly at both federal and state levels.
The good news: organizations that build strong governance now will be ahead of requirements, not scrambling to catch up. The frameworks exist. The tools exist. It's a matter of prioritization and execution.