Practical guide · Pharma & Life Sciences

FDA 21 CFR Part 11 compliance with AI — electronic records, signatures, and the audit trail.

Published April 30, 2026 · 11-minute read · By Sia

21 CFR Part 11 was published in 1997, before half the people running pharma quality programs today were in school. It's also the rule that decides whether your electronic records and signatures are legally equivalent to ink on paper. Adding AI to a Part-11 environment makes auditors nervous for good reasons. Here's how to do it without breaking the validated state.

This guide is built from engagements with pharma manufacturers, biotechs, and CDMOs adding AI to GxP-validated workflows. The technical premise: AI can produce 21 CFR Part 11-compliant outputs as long as the system around the AI satisfies the rule's requirements. The model is not the system; the controls around it are.

What Part 11 actually demands

21 CFR Part 11 establishes the criteria under which electronic records and electronic signatures are considered trustworthy, reliable, and equivalent to paper records. Subpart B (records) and Subpart C (signatures) carry the operational obligations. The most-cited requirements:

  • §11.10(a) — Validation of systems to ensure accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records.
  • §11.10(b) — Generate accurate and complete copies of records in human-readable and electronic form.
  • §11.10(c) — Protect records to enable accurate and ready retrieval throughout the records retention period.
  • §11.10(d) — Limit system access to authorized individuals.
  • §11.10(e) — Use of secure, computer-generated, time-stamped audit trails to independently record the date and time of operator entries and actions that create, modify, or delete electronic records.
  • §11.10(g) — Use of authority checks to ensure that only authorized individuals can use the system, electronically sign a record, access the operation or computer system input or output device, alter a record, or perform the operation at hand.
  • §11.50, §11.70, §11.100–§11.300 — Electronic signature requirements: components, controls, identity verification, link to the record.

The FDA also publishes Computer Software Assurance (CSA) guidance that softens the historical Computer System Validation (CSV) burden for many software categories, endorsing risk-based, least-burdensome assurance that accommodates agile development and AI-augmented validation. That guidance is the bridge document for adopting AI in a Part 11 context.

Where AI fits — and where it doesn't

Part 11 doesn't ban AI. It requires that the system producing electronic records and signatures meet specific control objectives. AI is one input to that system. Where it fits:

Helpful AI uses (well within Part 11 expectations):

  • Pre-drafting batch records, deviation reports, change controls, CAPAs — followed by validated human review and signature.
  • Document classification and routing into validated EDMS workflows.
  • Summary generation for management review (where the underlying data is the record, not the summary).
  • Anomaly detection on environmental monitoring or in-process data, surfaced to qualified reviewers.
  • Suggested gap analyses against quality manual procedures (with human-in-the-loop sign-off).

AI uses that need careful design:

  • Direct generation of electronic records that need a signature. The AI's draft can become a record only after a qualified human signs it. The signature is the point of legal equivalence; pre-signature, the AI output is a working artifact.
  • Retraining on production data. Models that change autonomously break the "consistent intended performance" requirement of §11.10(a). Use frozen models in production; retrain offline, validate, redeploy.
  • Autonomous decisions on regulated outputs (e.g., lot disposition). Don't. The signatory has to be a person, identified, and accountable.

The five Part-11 control objectives mapped to AI

1. Validation (§11.10(a))

The FDA's Computer Software Assurance (CSA) guidance is the framework: assess intended use → identify risk → apply assurance activities proportionate to risk. For AI components:

  • Intended use: document precisely what the AI is allowed to do (suggest, draft, classify) and what it's not (sign, decide, deploy).
  • Risk: classify by impact on product quality and patient safety. AI summarizing investigator-brochure sections is low-risk; AI proposing CAPA root cause is medium; AI involved in lot release is high.
  • Assurance activities: for low-risk, documented testing on representative inputs. For higher risk, formal IQ/OQ/PQ-style validation with frozen model versions and a defined re-validation trigger.

The deliverable is a validation summary that names the model version, the test set, the acceptance criteria, the human-review controls, and the change-control trigger.
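
To make that concrete, here is a minimal sketch of the kind of structured summary that could sit behind the deliverable. The class and field names are illustrative assumptions, not a RegAI schema or an FDA-prescribed format.

```python
from dataclasses import dataclass

@dataclass
class ValidationSummary:
    """Illustrative CSA-style validation summary for one AI component."""
    intended_use: str                  # what the AI is allowed to do
    out_of_scope: list[str]            # what it must never do (sign, decide, deploy)
    risk_class: str                    # "low" | "medium" | "high"
    model_version: str                 # frozen model identifier used in production
    prompt_template_version: str
    test_set_ref: str                  # pointer to the representative test set
    acceptance_criteria: list[str]
    human_review_controls: list[str]
    revalidation_triggers: list[str]   # events that force re-validation

summary = ValidationSummary(
    intended_use="Draft deviation-report narratives for QA review",
    out_of_scope=["electronic signature", "lot disposition", "record release"],
    risk_class="medium",
    model_version="model-2026-03-frozen",
    prompt_template_version="deviation-draft-v4",
    test_set_ref="QA-TS-0112",
    acceptance_criteria=["no unsupported factual claims on the test set",
                         "every cited source resolves to the QMS"],
    human_review_controls=["qualified reviewer accepts, edits, or rejects before signature"],
    revalidation_triggers=["model version change", "prompt template change",
                           "retrieval corpus update"],
)
```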

2. Audit trail (§11.10(e))

This is where AI integration is most often done badly. The audit trail must record every action that creates, modifies, or deletes a regulated record — and AI suggestions that get accepted are exactly such actions.

The minimum viable AI audit trail:

  • Each AI suggestion timestamped, with model version and prompt template version recorded.
  • Reviewer accept / edit / reject recorded with user identity, timestamp, and rationale.
  • Diff between AI proposal and accepted version stored, so the regulator can see exactly what changed.
  • The chain "source data → AI suggestion → human edit → final record → electronic signature" intact and queryable.
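
A minimal sketch of what one such audit-trail entry could look like, assuming a JSON event written to an append-only store. The helper and field names are illustrative, not a prescribed Part-11 format.

```python
import difflib
import hashlib
import json
from datetime import datetime, timezone

def audit_event(record_id, ai_suggestion, final_text, model_version,
                prompt_version, reviewer_id, action, rationale):
    """Build one append-only audit-trail entry for an AI-assisted edit.

    'action' is the reviewer's decision: "accepted", "edited", or "rejected".
    """
    diff = "\n".join(difflib.unified_diff(
        ai_suggestion.splitlines(), final_text.splitlines(),
        fromfile="ai_suggestion", tofile="accepted_record", lineterm=""))
    return {
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt_template_version": prompt_version,
        "ai_suggestion_sha256": hashlib.sha256(ai_suggestion.encode()).hexdigest(),
        "reviewer": reviewer_id,
        "action": action,
        "rationale": rationale,
        "diff_vs_suggestion": diff,
        "final_record_sha256": hashlib.sha256(final_text.encode()).hexdigest(),
    }

event = audit_event(
    record_id="DEV-2026-0417",
    ai_suggestion="Root cause: operator error during line clearance.",
    final_text="Root cause: incomplete line-clearance checklist (SOP step skipped).",
    model_version="model-2026-03-frozen",
    prompt_version="deviation-draft-v4",
    reviewer_id="j.doe",
    action="edited",
    rationale="Replaced unsupported attribution with the documented checklist gap.",
)
print(json.dumps(event, indent=2))
```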

This is the same citation-graph mechanic we covered for compliance work in Citation graphs for compliance. The Part-11 version adds the electronic-signature endpoint at the bottom of the chain.

3. Authority checks (§11.10(g)) and access (§11.10(d))

AI doesn't change the access model. The user signing the record must be authenticated to the same standard as in any Part-11 system. Where AI complicates things:

  • If the AI runs in a separate tenant or service, the access boundary between AI and the validated record system must be explicit. The AI can read source data with the user's permissions; it can't write the record without going through the validated workflow.
  • API keys and service accounts that let the AI act on the system are themselves "authorized individuals" for §11.10(g) purposes — they need provisioning, rotation, and revocation procedures the same as any user.
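
As an illustration of that boundary, here is a sketch of an authority check in which the AI service account can read source data but cannot write or sign the regulated record. The role names and functions are assumptions for the sketch, not a real permission API.

```python
# Roles granted to the AI service account versus the validated record workflow.
AI_SERVICE_ROLES = {"read:source_data"}
HUMAN_REVIEWER_ROLES = {"read:source_data", "write:regulated_record", "sign:regulated_record"}

def authorize(principal_roles: set[str], action: str) -> None:
    """Authority check in the spirit of §11.10(g): fail closed if the role is missing."""
    if action not in principal_roles:
        raise PermissionError(f"not authorized for '{action}'")

authorize(AI_SERVICE_ROLES, "read:source_data")             # allowed: AI reads with scoped access

try:
    authorize(AI_SERVICE_ROLES, "write:regulated_record")   # denied: writes go through the validated workflow
except PermissionError as exc:
    print("blocked:", exc)

authorize(HUMAN_REVIEWER_ROLES, "sign:regulated_record")    # allowed: an authenticated human signs
```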

4. Accurate and complete copies (§11.10(b))

Records must be retrievable in human-readable and electronic form for the retention period. For AI-touched records, that means archiving:

  • The final signed record (always).
  • The AI suggestion that fed into it (so the lineage is reproducible).
  • Enough metadata about the model and prompt to reconstruct the suggestion if needed.

You don't need to archive the model weights, but you do need a frozen reference to the model version and the reproducibility properties (deterministic decoding parameters, fixed retrieval corpus version) so a reviewer can re-run the same input and get a consistent answer.
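
A sketch of the kind of reproducibility reference that could be archived alongside the signed record. The keys, URI scheme, and values are illustrative assumptions, not a required layout.

```python
archive_manifest = {
    "record_id": "DEV-2026-0417",
    "signed_record_uri": "edms://records/DEV-2026-0417/v3",         # final signed record (always archived)
    "ai_suggestion_uri": "edms://records/DEV-2026-0417/ai-draft-1", # the suggestion that fed into it
    "model_version": "model-2026-03-frozen",                        # frozen reference, not the weights
    "prompt_template_version": "deviation-draft-v4",
    "decoding": {"temperature": 0.0, "top_p": 1.0, "seed": 12345},  # deterministic decoding parameters
    "retrieval_corpus_version": "sop-library-2026-02",              # fixed corpus version
}
```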

5. Electronic signatures (Subpart C)

Nothing about AI changes signature requirements. Two-component signatures for non-biometric methods (§11.200), the signature-to-record link (§11.70), and identity verification and credential controls (§11.100, §11.300) all stay the same. The signature is on the record the human reviewed and accepted, not on the AI suggestion.
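
One common way to make the signature-to-record link tamper-evident is to bind the signature to a hash of the exact content the reviewer accepted. The sketch below assumes an HMAC over that hash; it is one illustrative mechanism, not the only acceptable design under §11.70.

```python
import hashlib
import hmac
from datetime import datetime, timezone

def sign_record(record_text: str, signer_id: str, meaning: str, signing_key: bytes) -> dict:
    """Bind a signature manifestation (§11.50) to the reviewed record content (§11.70)."""
    record_hash = hashlib.sha256(record_text.encode()).hexdigest()
    payload = f"{record_hash}|{signer_id}|{meaning}".encode()
    return {
        "record_sha256": record_hash,
        "signer": signer_id,
        "signed_at": datetime.now(timezone.utc).isoformat(),
        "meaning": meaning,                    # e.g. "approved", "reviewed"
        "signature": hmac.new(signing_key, payload, hashlib.sha256).hexdigest(),
    }

manifestation = sign_record(
    record_text="Final deviation report text as reviewed and accepted ...",
    signer_id="j.doe",
    meaning="approved",
    signing_key=b"demo-key-not-for-production",
)
```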

The CSA shift — why this is more workable than five years ago

The FDA's 2022 Computer Software Assurance guidance was a meaningful shift away from the historical CSV mindset of "test everything, document everything, freeze everything." CSA explicitly endorses:

  • Risk-based testing scope. High-risk components get formal validation; lower-risk components get focused assurance activities.
  • Vendor leverage. Documented vendor testing can replace some IQ/OQ activities for non-product-quality-impacting components.
  • Agile and continuous-deployment models. With proper change-control and assurance, software updates don't require full re-validation.
  • Unscripted testing. Exploratory testing by qualified personnel is acceptable for lower-risk validation activities.

Translation: a well-designed AI integration for low- and medium-risk pharma workflows is now a normal CSA exercise, not a 12-month CSV ordeal. The high-risk frontier (lot disposition, batch release, regulatory filings) still gets the full treatment. The bulk of the validation budget that historically went to over-validating low-risk software now goes to risk-appropriate work.
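
As a rough sketch of how that proportioning could be encoded in a planning tool, here is an illustrative mapping from risk to assurance activities. The category boundaries and activity lists are assumptions, not CSA text.

```python
def assurance_activities(impacts_product_quality: bool, human_in_loop: bool) -> list[str]:
    """Illustrative risk-to-assurance mapping in the spirit of CSA."""
    if impacts_product_quality and not human_in_loop:
        # High risk: e.g. lot disposition, batch release, regulatory filings.
        return ["formal scripted validation (IQ/OQ/PQ-style)",
                "frozen model version", "defined re-validation triggers"]
    if impacts_product_quality:
        # Medium risk: AI proposes, a qualified human decides and signs.
        return ["scripted testing on representative inputs",
                "documented human-review controls", "change-control trigger"]
    # Low risk: no direct impact on product quality or patient safety.
    return ["unscripted exploratory testing by qualified personnel",
            "documented test summary"]

print(assurance_activities(impacts_product_quality=True, human_in_loop=True))
```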

Where RegAI helps

RegAI ingests the FDA Part-11 text, the supporting CSA guidance, EU Annex 11 (the European equivalent), and PIC/S Annex 11. It maps each obligation to your QMS and SOP library, scores coverage, and drafts policy and procedure updates for gaps. The agent supports the Part-11 control mapping and produces the validation deliverables (intended-use statements, risk assessments, assurance summaries) you need for the AI components themselves. Citation graph at every step; Part-11-grade audit trail by construction.

For a typical engagement, the deliverables are:

  • A Part-11 / Annex 11 / PIC/S obligation matrix scored against your current QMS.
  • Drafted SOPs for AI use within validated workflows (intended-use, change-control, assurance).
  • An audit-trail design document that satisfies §11.10(e) for AI-augmented workflows.
  • A validation pack ready for internal QA review and external audit.

Common pitfalls

  • Treating AI as a separate validation problem from Part 11. It isn't. Part 11 is the framework; AI is one input. Validate the system, not the model in isolation.
  • Over-validating low-risk uses. The historical reflex is "more is safer." CSA explicitly disagrees. Document the risk basis and proportion the assurance to it.
  • Letting models retrain in production. The "consistent intended performance" clause doesn't accommodate quietly drifting models. Freeze in production; retrain offline; revalidate.
  • Skimping on the audit trail. If a regulator asks "what did the AI propose, what did the reviewer change, and why?" — and the answer is "we don't store that," you're outside §11.10(e).

Closing

AI in pharma quality systems is not an AI problem demanding an AI-specific answer. It's a system-design problem with a Part-11 answer: design the workflow so that AI is an input, the human is the signatory, the audit trail captures the chain, and the validation is risk-proportionate. CSA gives you the framework; RegAI gives you the obligation mapping, the drafted procedures, and the audit trail to defend it.

The pharma industry's relationship with software has changed slowly because the cost of being wrong is so high. AI doesn't change that relationship — it works inside it.

Run RegAI on your Part-11 scope.

A 45-minute walkthrough on a slice of your QMS and a sample SOP. We bring the platform.