⏱ Estimated time: 15–20 minutes total
Section 1 of 10

Organizational Profile

Provide foundational information about your organization to contextualize the AI risk assessment.

GOVERN · MAP · MEASURE · MANAGE
1.1 Organization Information GOVERN 1.1
1.2 Primary Contact Internal
Section 2 of 10

AI System Inventory & Classification

Identify and classify the AI system being assessed. Complete one assessment per AI system, or provide consolidated answers for an organization-wide assessment.

MAP 1 — Context & Use Classification
2.1 AI System Identification MAP 1.1

Provide 3–5 sentences describing the system's primary function, its inputs, and its outputs.

2.2 Purpose and Use Context MAP 1.1, 1.3, 1.4
2.3 AI Lifecycle Responsibility GOVERN 1.4
Section 3 of 10

Data Governance

Document the data inputs, training data characteristics, and quality controls for this AI system.

MAP 2 — Data Context & Risk
3.1 Training & Input Data MAP 1.1, 2.3, GOVERN 6.1
3.2 Data Quality Controls MEASURE 2.5
Section 4 of 10

Stakeholder & Impact Mapping

Identify affected populations and characterize the potential harms and benefits of this AI system.

MAP 1 · MAP 5 — Impact & Harm Assessment
4.1 Affected Populations MAP 5.1, 5.2, 1.2
4.2 Impact Characterization MAP 5.1, 5.2

For each harm type listed below, rate the likelihood and severity of that harm occurring due to this AI system. (A sketch showing one way to combine these ratings follows the table.)

Harm Type · Likelihood · Severity
Harm to civil liberties or legal rights (Rights & Legal)
Discriminatory or disparate outcomes across demographic groups (Fairness)
Financial harm to individuals or the organization (Financial)
Physical safety risk to individuals (Safety)
Psychological harm to individuals (Wellbeing)
Harm to the organization's reputation (Reputational)
Harm to operational continuity (Operational)
Environmental or societal harm (Societal)
Harm from security breach or data exfiltration via the AI system (Security)
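Where a numeric prioritization is useful, the likelihood and severity ratings above can be combined into a single risk score per harm type. A minimal sketch, assuming a 1–4 ordinal scale and a multiplicative scoring rule (neither is prescribed by this assessment):

```python
# Combine ordinal likelihood and severity ratings into a risk score for
# prioritization. The 1-4 scale and the "likelihood x severity" rule are
# illustrative assumptions, not part of the assessment form.

def risk_score(likelihood: int, severity: int) -> int:
    if not (1 <= likelihood <= 4 and 1 <= severity <= 4):
        raise ValueError("ratings must be in 1..4")
    return likelihood * severity

# Hypothetical ratings for three of the harm types, ranked by score.
ratings = {"Fairness": (3, 4), "Security": (2, 3), "Reputational": (2, 2)}
for harm, (lik, sev) in sorted(
    ratings.items(), key=lambda kv: risk_score(*kv[1]), reverse=True
):
    print(f"{harm}: likelihood={lik} severity={sev} score={risk_score(lik, sev)}")
```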
4.3 Benefits Characterization MAP 5.2
Section 5 of 10

Trustworthiness Characteristics Assessment

Rate the current implementation maturity of each trustworthiness characteristic for this AI system.

MEASURE 2 — Trustworthiness Evaluation
Rate each item on a four-point scale. These ratings directly inform the risk posture sections of your generated policy.
5.1 Valid and Reliable AI RMF Section 3.1
Are accuracy metrics defined and documented for this system?
Is the system tested against representative datasets that reflect real deployment conditions?
Are false positive/false negative rates tracked and reported?
Is there a defined acceptable performance threshold below which the system triggers a review? (See the sketch after this list.)
Is model robustness tested across varied conditions and edge cases?
Is ongoing post-deployment performance monitoring in place?
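Two of the questions in 5.1 concern tracked error rates and a review-triggering threshold. A minimal sketch of that monitoring loop, assuming binary classifications and an illustrative 0.90 accuracy floor (the actual threshold should come from your policy):

```python
# Compute false positive / false negative rates from logged outcomes and
# flag a review when performance falls below a defined floor. The 0.90
# accuracy floor is an assumed, illustrative threshold.

def confusion_rates(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    accuracy = (tp + tn) / len(y_true)
    return fpr, fnr, accuracy

ACCURACY_FLOOR = 0.90  # assumed review trigger

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical ground truth
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical system outputs
fpr, fnr, acc = confusion_rates(y_true, y_pred)
print(f"FPR={fpr:.2f} FNR={fnr:.2f} accuracy={acc:.2f}")
if acc < ACCURACY_FLOOR:
    print("Performance below floor: trigger the review defined in 5.1")
```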
5.2 Safe AI RMF Section 3.2
Has a formal safety risk assessment been conducted for this system?
Are there defined conditions under which the system will be shut down or overridden?
Is there a human intervention mechanism for safety-critical outputs?
Are safety incidents documented and tracked?
Does the system have a graceful degradation mode if it exceeds its knowledge limits?
5.3 Secure and Resilient AI RMF Section 3.3
Has the AI system been assessed for adversarial attack vulnerabilities (data poisoning, model evasion, model extraction)?
Are access controls in place to prevent unauthorized use or modification of the model?
Is the system included in the organization's incident response plan?
Are the model, training data, and outputs protected against exfiltration?
Has a threat model specific to this AI system been developed?
Is the system subject to penetration testing or red-teaming?
5.4 Accountable and Transparent AI RMF Section 3.4
Is documentation available describing how the system works, what data it uses, and how outputs are generated?
Are end users or affected individuals informed that an AI system is involved in decisions that affect them?
Is there a mechanism for individuals to understand why a specific decision was made?
Are audit logs maintained for system inputs, outputs, and decision events? (See the sketch after this list.)
Is there a designated accountable owner for this AI system?
Are there documented escalation paths when the system produces disputed or harmful outputs?
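For the audit-log question in 5.4, a minimal sketch of an append-only, timestamped decision log. The JSON-lines format, field names, and input hashing are assumptions, not requirements of this assessment:

```python
# Append-only decision log: timestamp, system, accountable owner, a hash
# of the inputs (so the log is verifiable without storing raw PII), and
# the output. File path and schema are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone

def log_decision(path, system_id, owner, inputs, output):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "accountable_owner": owner,
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("audit.jsonl", "loan-scoring-v2", "risk-office",
             {"income": 52000, "region": "NW"}, {"decision": "refer"})
```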
5.5 Explainable and Interpretable AI RMF Section 3.5
Can the system produce explanations of its outputs in terms understandable to non-technical users?
Are explanation methods (e.g., SHAP, LIME, rule extraction) in place? (See the sketch after this list.)
Are explanations provided to decision-makers using the system's outputs?
Is there documentation mapping system outputs to the inputs or features that drove them?
Does the organization distinguish between explainability (how the system works) and interpretability (what the output means in context)?
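For the explanation-methods question in 5.5, a minimal sketch using the `shap` package with a scikit-learn classifier; this is one illustrative option among those named, not a mandated approach, and the model and dataset are stand-ins:

```python
# Per-feature attributions for individual predictions using SHAP's
# model-agnostic explainer. The model and dataset are placeholders.

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Explain the model's predict function against a background sample.
explainer = shap.Explainer(model.predict, X.iloc[:100])
explanation = explainer(X.iloc[:5])    # attributions for 5 decisions
print(explanation.values.shape)        # (rows, features)
```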
5.6 Privacy-Enhanced AI RMF Section 3.6
Has a Privacy Impact Assessment (PIA) been conducted for this system?
Are data minimization practices in place (only collecting data necessary for the system's purpose)?
Are de-identification or anonymization techniques applied where applicable?
Are privacy-enhancing technologies (PETs) employed (e.g., differential privacy, federated learning)? (See the sketch after this list.)
Does the system comply with applicable privacy regulations (GDPR, CCPA, HIPAA, etc.)?
Are individuals provided with rights to access, correct, or opt out of AI-driven decisions about them?
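For the PETs question in 5.6, a minimal sketch of one named technique: a Laplace mechanism for a differentially private counting query. The epsilon value and example data are illustrative assumptions:

```python
# Laplace mechanism: a counting query has sensitivity 1 (adding or
# removing one record changes the count by at most 1), so noise drawn
# from Laplace(0, sensitivity/epsilon) gives epsilon-DP for the count.

import numpy as np

rng = np.random.default_rng(0)

def dp_count(values, predicate, epsilon=1.0, sensitivity=1.0):
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = [34, 41, 29, 55, 62, 47]             # hypothetical records
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```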
5.7 Fair — With Harmful Bias Managed AI RMF Section 3.7
Has bias testing been conducted on this system?
Are fairness metrics defined and tracked (e.g., demographic parity, equalized odds)? (See the sketch after this list.)
Are outputs disaggregated by demographic group to assess disparate impact?
Is there a process for surfacing and addressing bias complaints from users or affected individuals?
Has a disparity analysis been conducted comparing system performance across protected classes?
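For the fairness-metrics question in 5.7, a minimal sketch that disaggregates favorable-outcome rates by group and computes a demographic-parity ratio; the 0.8 flag threshold follows the common "four-fifths rule" convention and is an assumption here:

```python
# Favorable-outcome ("selection") rate per group, plus the ratio of the
# lowest to highest rate. Group labels and outcomes are hypothetical.

from collections import defaultdict

def selection_rates(groups, outcomes):
    pos, tot = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, outcomes):
        tot[g] += 1
        pos[g] += y
    return {g: pos[g] / tot[g] for g in tot}

groups   = ["A", "A", "A", "B", "B", "B", "B"]
outcomes = [1, 1, 0, 1, 0, 0, 0]   # 1 = favorable decision

rates = selection_rates(groups, outcomes)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"parity ratio={ratio:.2f}")
if ratio < 0.8:   # assumed four-fifths-rule threshold
    print("Potential disparate impact: investigate per 5.7 process")
```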
Section 6 of 10

Governance Structure

Assess the organizational policies, roles, risk tolerance, and workforce practices governing AI risk management.

GOVERN 1 · GOVERN 2 · GOVERN 6
6.1 Policies and Procedures GOVERN 1–6
6.2 Roles and Accountability GOVERN 2
6.3 Risk Tolerance MAP 1.5
6.4 Third-Party and Supply Chain Risk GOVERN 6
6.5 Workforce and Culture GOVERN 3, 4
Section 7 of 10

Testing, Evaluation, Verification & Validation

Document pre-deployment testing practices and ongoing production monitoring for this AI system.

MEASURE 2 · MEASURE 4 — TEVV & Monitoring
7.1 Pre-Deployment Testing MEASURE 1
7.2 Ongoing Monitoring MEASURE 2.4, MANAGE 4.1
7.3 Metrics in Use MEASURE 2
Section 8 of 10

Incident History and Risk Response

Document any known adverse outcomes and the organization's risk treatment approach for this AI system.

MANAGE 2 · MANAGE 3 — Incident & Risk Response
8.1 Known Incidents MANAGE 1–4
8.2 Risk Treatment MANAGE 1.2, 1.3
Section 9 of 10

Legal and Regulatory Compliance

Identify applicable legal requirements and document your organization's compliance posture for this AI system.

GOVERN 1 · GOVERN 4 — Legal & Regulatory
Section 10 of 10

Open-Ended Supplemental Questions

These questions capture nuances not covered by the structured assessment. Your responses directly shape the custom policy provisions generated for your organization.

All RMF Functions — Supplemental Context

Include emerging risks, novel use cases, or gaps you have identified.

This is your opportunity to add context, priorities, or specific policy language requirements.

You have completed all 10 assessment sections. Click Submit Assessment below to save your responses and begin policy generation.