NIST AI RMF for tech: from Govern to Manage in 30 days.
NIST AI RMF (1.0, January 2023) is voluntary. It's also the de facto American AI governance standard: referenced in procurement requirements, echoed in FTC consent orders, and cross-walked to the EU AI Act. AI-first tech companies that ignore it usually adopt it the hard way 12 months later. Here's the practical version.
This guide is built from engagements with hyperscalers and AI-first tech companies onboarding NIST AI RMF as their internal AI governance baseline. The case study referenced on the tech vertical page covers one such program.
What NIST AI RMF actually is
The framework has two parts: a core with four functions (Govern, Map, Measure, Manage) and a set of profiles tailored to specific use cases or sectors. The four functions are not sequential — they run continuously and inform each other.
- Govern — culture, accountability, policies, processes that cultivate AI risk management.
- Map — context, classification, and risk framing for each AI system.
- Measure — quantitative + qualitative tracking of identified risks.
- Manage — prioritization, response, and resource allocation around the measured risks.
Each function decomposes into categories and subcategories. RMF 1.0 has 19 categories and 72 subcategories total. NIST publishes a Companion Playbook with suggested actions per subcategory — that's the operational gold mine.
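If you track alignment in tooling rather than spreadsheets, the core is small enough to model directly. A minimal Python sketch, using the real category counts from RMF 1.0 (the category IDs are NIST's; everything else here is illustrative):

```python
# The RMF 1.0 core: four functions, 19 categories. Subcategory text and
# Playbook actions are omitted; attach them per category as you onboard.
RMF_CORE = {
    "GOVERN":  [f"GOVERN {i}" for i in range(1, 7)],   # 6 categories
    "MAP":     [f"MAP {i}" for i in range(1, 6)],      # 5 categories
    "MEASURE": [f"MEASURE {i}" for i in range(1, 5)],  # 4 categories
    "MANAGE":  [f"MANAGE {i}" for i in range(1, 5)],   # 4 categories
}

assert sum(len(cats) for cats in RMF_CORE.values()) == 19
```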
Why tech companies need it
Three reasons:
- Procurement. US federal contracts (and increasingly state and large-enterprise contracts) reference NIST AI RMF. Vendors that can show alignment shorten the security-questionnaire cycle.
- FTC enforcement. Recent FTC consent orders (2023–2025) impose RMF-aligned obligations: model documentation, bias testing, post-market monitoring. The framework is voluntary, but the agency treats it as the reference point for reasonable AI practices.
- EU AI Act bridge. The Act's Annex IV documentation requirements map cleanly to NIST AI RMF's Map and Measure functions. Doing RMF well front-loads much of the EU AI Act readiness work.
What "doing RMF" actually means
Across the four functions, an AI-first tech company needs to produce roughly:
- Govern (6 categories): AI governance policies, accountability mappings, training records, vendor / third-party AI policies, incident-response plans for AI failures.
- Map (5 categories): system inventory, intended-use documentation, contextual risk register, stakeholder-impact analyses.
- Measure (4 categories): model evaluation reports, fairness / bias test results, monitoring metrics, drift-detection outputs.
- Manage (4 categories): risk-treatment decisions, prioritization rationale, mitigation plans, communication records.
For a hyperscaler with hundreds of in-scope models, this is a significant documentation effort. For a 50-engineer AI startup, it's manageable in 30 days if you scope tightly.
The 30-day onboarding plan
Days 1–7: Govern foundations
Before mapping any individual system, set the policy baseline. The minimum:
- An AI Governance Policy that names the AI Risk Officer or equivalent.
- A Risk-Tolerance Statement (per use case or per risk class).
- A defined escalation path for AI risks above tolerance.
- An AI Code of Conduct that engineers actually read.
RegAI ingests the six Govern categories (GOVERN 1–6) and their subcategories, then surfaces gaps in your existing policy stack. Most tech companies have ~70% of this from existing security / privacy programs; ~30% is new.
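As a back-of-the-envelope version of that gap check (the topic tags and policy names below are placeholders for this sketch, not RegAI's actual schema):

```python
# Required Govern artifacts from the baseline above vs. what already
# exists in the policy stack. The set difference is the ~30% that's new.
REQUIRED_GOVERN = {
    "ai_governance_policy",      # names the AI Risk Officer
    "risk_tolerance_statement",
    "escalation_path",
    "ai_code_of_conduct",
}

existing = {"security_policy", "privacy_policy", "escalation_path"}

print("covered:", sorted(REQUIRED_GOVERN & existing))
print("gaps:   ", sorted(REQUIRED_GOVERN - existing))
```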
Days 8–14: Map the inventory
Build the system inventory. For each AI system in scope:
- Intended use and known misuse cases.
- Stakeholders (developers, deployers, users, affected non-users).
- Risk class (per your internal taxonomy, often mapped to EU AI Act tiers for export readiness).
- Data flow (training data sources, inference data flows, retention).
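To make the record concrete, here's a minimal inventory entry as a Python dataclass. The field names are assumptions for this sketch, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    intended_use: str
    known_misuse: list[str]
    stakeholders: list[str]           # developers, deployers, users, affected non-users
    risk_class: str                   # internal taxonomy, often mapped to EU AI Act tiers
    training_data_sources: list[str]
    inference_data_flows: list[str]
    retention: str
    externally_facing: bool           # drives day-1 scoping (see the tip below)

record = AISystemRecord(
    name="support-triage-llm",
    intended_use="route and summarize customer support tickets",
    known_misuse=["automated adverse decisions without human review"],
    stakeholders=["support engineers", "customers"],
    risk_class="limited",
    training_data_sources=["historical tickets (anonymized)"],
    inference_data_flows=["ticket text -> model -> CRM"],
    retention="inference logs kept 30 days",
    externally_facing=True,
)
```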
Practical tip: don't try to inventory everything. Start with externally facing systems (anything customers or regulators see) and AI used in employment, credit, or sensitive-data contexts. That's the FTC and Annex III risk surface.
Days 15–21: Measure what matters
Pick the metrics that actually track risk. Common starter set:
- Accuracy and reliability across demographic slices (fairness measure).
- Drift signals on training-vs-production input distribution.
- Adversarial robustness for systems exposed to user inputs.
- Incident rate and time-to-detection for AI-specific failures.
Don't drown in metrics. Five well-tracked measures beat fifty unmaintained ones. RegAI maps your existing observability stack (Datadog, Splunk, internal eval pipelines) to RMF Measure subcategories so you're not building duplicate monitoring.
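As one example from the starter set, drift on the training-vs-production input distribution is cheap to compute. A minimal sketch using the Population Stability Index, one common choice (the thresholds are industry rules of thumb, not from the framework):

```python
import numpy as np

def psi(train: np.ndarray, prod: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index for one numeric feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 investigate."""
    edges = np.histogram_bin_edges(train, bins=bins)
    expected, _ = np.histogram(train, bins=edges)
    actual, _ = np.histogram(prod, bins=edges)  # prod values outside edges are dropped
    eps = 1e-6                                  # avoid log(0) and division by zero
    expected = expected / expected.sum() + eps
    actual = actual / actual.sum() + eps
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
print(psi(rng.normal(0, 1, 10_000), rng.normal(0.3, 1, 10_000)))  # shifted production
```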
Days 22–30: Manage and document
Set the operating cadence:
- Quarterly AI-risk review with the named owner from Govern.
- Risk-acceptance documentation when tolerance is intentionally crossed (always document, always justify).
- Mitigation playbook for the top 5 risks per system.
- Stakeholder communication template for AI incidents.
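For the risk-acceptance bullet above, a record structure that enforces "always document, always justify" is more durable than a wiki page. A sketch (field names and values are illustrative):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class RiskAcceptance:
    system: str
    risk: str
    tolerance_exceeded: str    # which threshold, and by how much
    justification: str         # "always justify"
    accepted_by: str           # the named owner from Govern
    accepted_on: date
    review_by: date            # acceptances expire; no indefinite waivers

acceptance = RiskAcceptance(
    system="support-triage-llm",
    risk="false-negative rate on escalation detection",
    tolerance_exceeded="5.2% observed vs 5.0% tolerance",
    justification="mitigation ships next release; human review in place",
    accepted_by="AI Risk Officer",
    accepted_on=date(2025, 3, 1),
    review_by=date(2025, 6, 1),
)
```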
By day 30, a small team should have: a documented governance baseline, an inventory of in-scope systems with risk classification, a measure-set running on the highest-risk systems, and a manage-cadence on the calendar.
Where AI helps
RegAI ingests:
- NIST AI RMF 1.0 core + Companion Playbook (for the suggested actions per subcategory).
- Your existing AI governance policies, security policies, and engineering standards.
- The model inventory (typically a pull from an MLOps platform).
It then maps each RMF subcategory to your internal corpus, scores coverage, and drafts replacement policy language for partial matches. It also cross-maps each subcategory to EU AI Act Annex IV requirements, so the same documentation effort serves both regimes.
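To show the shape of that mapping problem (and only the shape: this toy bag-of-words overlap is not RegAI's method), each subcategory gets a best-matching policy passage and a coverage score:

```python
def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def coverage(subcategory: str, corpus: dict[str, str]) -> tuple[str, float]:
    """Best-matching document and the fraction of subcategory terms it covers."""
    want = tokens(subcategory)
    best_doc, best_score = "", 0.0
    for doc, body in corpus.items():
        score = len(want & tokens(body)) / len(want)
        if score > best_score:
            best_doc, best_score = doc, score
    return best_doc, best_score

corpus = {
    "security_policy.md": "incident response plan escalation on-call rotation",
    "ai_policy.md": "model documentation bias testing intended use monitoring",
}
print(coverage("incident response for AI failures", corpus))
# -> ('security_policy.md', 0.4)
```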
Common pitfalls
- Over-scoping the inventory. Trying to document every internal-only ML model on day 1 stalls the program. Externally visible systems first.
- Confusing Govern with policy theater. RMF Govern is operational accountability: who specifically owns AI risk for each system. A stack of policies isn't governance.
- Treating Measure as an afterthought. The framework's Measure function assumes you can produce evidence on demand. If your measurement tooling can't, your RMF claims are aspirational.
- Mistaking voluntary for optional. The framework is voluntary; the consequences of not aligning are increasingly not voluntary.
What this looks like at scale
On a recent engagement with an AI-first US tech company: 60% time savings during FTC audit cycles, 30+ analysts using the agent weekly, and an inventory of in-scope AI systems that now refreshes automatically every release. See the tech vertical →
Get started
If you're standing up an AI governance program — or quietly fixing one before a regulator notices — we run a 45-minute walkthrough on a slice of your AI inventory and a sample policy. Book a demo →
