Key AI Frameworks and Principles
Reference Guide
This document provides a concise overview of major AI frameworks and principles referenced in the presentation.
NIST AI Risk Management Framework (AI RMF)
The National Institute of Standards and Technology (NIST) AI Risk Management Framework is a voluntary framework that provides a structured approach to managing risks throughout the AI system lifecycle.
Core Functions
- Govern: Establish governance structures and processes
  - Cultivate a risk management culture
  - Define roles and responsibilities
  - Allocate resources appropriately
  - Establish policies and procedures
- Map: Identify and document contexts and risks
  - Define the context in which the AI system will operate
  - Identify potential risks and impacts
  - Document AI system characteristics and requirements
  - Understand relevant regulations and standards
- Measure: Quantify and qualify AI risks
  - Assess likelihood and impact of identified risks
  - Evaluate risks against organizational tolerance
  - Prioritize risks based on assessment
  - Document assessment methodologies and results
- Manage: Implement mitigation strategies
  - Develop and implement controls
  - Test effectiveness of controls
  - Monitor for new or changing risks
  - Continuously improve risk management processes
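The Measure and Manage functions above can be sketched as a simple risk-register scoring pass. This is an illustrative sketch only: the RMF does not prescribe a scoring formula, and the likelihood/impact scales, tolerance threshold, and example risks below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- hypothetical scale
    impact: int      # 1 (negligible) to 5 (severe) -- hypothetical scale

    @property
    def score(self) -> int:
        # Simple likelihood x impact product; the RMF does not mandate this formula.
        return self.likelihood * self.impact

def prioritize(risks: list[Risk], tolerance: int = 10) -> list[Risk]:
    """Return risks exceeding the organization's tolerance, highest score first."""
    return sorted((r for r in risks if r.score > tolerance),
                  key=lambda r: r.score, reverse=True)

# Hypothetical register entries for illustration
risks = [
    Risk("Training-data bias", likelihood=4, impact=4),
    Risk("Model drift in production", likelihood=3, impact=3),
    Risk("Prompt-injection attack", likelihood=2, impact=5),
]
for r in prioritize(risks):
    print(f"{r.name}: {r.score}")  # prints: Training-data bias: 16
```

In practice an organization would replace the numeric product with whatever assessment methodology it documents under the Measure function; the point is only that tolerance comparison and prioritization are mechanical once likelihood and impact are assessed.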
ISO/IEC Standards for AI
ISO/IEC 23894: AI Risk Management
This standard provides guidelines for managing risks related to AI systems, including:
- Risk identification methodologies
- Risk assessment techniques
- Risk treatment options
- Integration with existing risk management processes
ISO/IEC 42001: AI Management Systems
This standard specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system within an organization, addressing:
- Organizational context and leadership
- Planning and support
- Operation and performance evaluation
- Improvement processes
OECD AI Principles
The Organisation for Economic Co-operation and Development (OECD) AI Principles provide an internationally recognized framework for the responsible stewardship of trustworthy AI.
- Inclusive growth, sustainable development and well-being
  - AI should benefit people and the planet by driving inclusive growth and sustainable development
- Human-centered values and fairness
  - AI systems should respect human rights, democratic values, and diversity
  - They should be designed to promote fairness and non-discrimination
- Transparency and explainability
  - There should be transparency and responsible disclosure around AI systems
  - People should understand AI outcomes and be able to challenge them
- Robustness, security and safety
  - AI systems should function in a robust, secure, and safe way throughout their lifecycles
  - Potential risks should be continually assessed and managed
- Accountability
  - Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning
UNESCO Recommendation on the Ethics of AI
UNESCO’s Recommendation, adopted in 2021, is the first global standard-setting instrument on the ethics of artificial intelligence.
Core Values
- Human rights and human dignity
  - Respect, protection, and promotion of human rights and fundamental freedoms
- Living in peaceful, just, and interconnected societies
  - Promoting peaceful, inclusive societies
- Ensuring diversity and inclusiveness
  - Ensuring cultural and social diversity in AI development and use
- Environment and ecosystem flourishing
  - Promoting environmental sustainability and ecosystem health
Ten Core Principles
- Proportionality and Do No Harm
- Safety and Security
- Right to Privacy and Data Protection
- Multi-stakeholder and Adaptive Governance & Collaboration
- Responsibility and Accountability
- Transparency and Explainability
- Human Oversight and Determination
- Sustainability
- Awareness & Literacy
- Fairness and Non-Discrimination
EU AI Act Risk Categories
The EU AI Act categorizes AI systems based on their potential risk level:
1. Unacceptable Risk (Prohibited Practices)
- Harmful manipulation and deception
- Exploitation of vulnerabilities
- Social scoring
- Risk assessment of individuals to predict criminal offenses based solely on profiling
- Untargeted facial recognition database creation
- Emotion recognition in workplaces and education
- Biometric categorization for protected characteristics
- Real-time remote biometric identification in public spaces
2. High Risk
- AI in critical infrastructure
- AI in education
- AI safety components in products
- AI in employment and worker management
- AI for essential services access
- AI in law enforcement
- AI in migration and border control
- AI in justice administration
3. Transparency Risk
- Chatbots (disclosure of AI nature)
- Generative AI (content identification)
- Deep fakes (clear labeling)
- AI-generated text on public interest matters (labeling)
4. Minimal or No Risk
- Most current AI applications (e.g., spam filters, video games)
- No specific regulatory requirements
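The tiered structure above can be modeled as a most-restrictive-tier-wins classification. This is an illustrative sketch, not a legal tool: the use-case tags and the tag-to-tier mapping below are assumptions for demonstration, and real classification under the Act requires legal analysis of the specific system.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    TRANSPARENCY = "transparency"
    MINIMAL = "minimal"

# Hypothetical tag sets loosely following the Act's categories.
PROHIBITED_TAGS = {"social_scoring", "untargeted_face_scraping"}
HIGH_RISK_TAGS = {"critical_infrastructure", "employment", "law_enforcement"}
TRANSPARENCY_TAGS = {"chatbot", "deepfake", "generative_ai"}

def classify(tags: set[str]) -> RiskTier:
    """Assign the most restrictive tier matched by any tag."""
    if tags & PROHIBITED_TAGS:
        return RiskTier.UNACCEPTABLE
    if tags & HIGH_RISK_TAGS:
        return RiskTier.HIGH
    if tags & TRANSPARENCY_TAGS:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

print(classify({"chatbot"}).value)      # transparency
print(classify({"spam_filter"}).value)  # minimal
```

Note the ordering of checks: a system that is both a chatbot and used for social scoring falls under the prohibited tier, mirroring the Act's principle that the strictest applicable category governs.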