Key AI Frameworks and Principles

Reference Guide

This document provides a concise overview of major AI frameworks and principles referenced in the presentation.

NIST AI Risk Management Framework (AI RMF)

The National Institute of Standards and Technology (NIST) AI Risk Management Framework provides a structured approach to managing AI risks throughout the AI lifecycle.

Core Functions

  1. Govern: Establish governance structures and processes
    • Cultivate a risk management culture
    • Define roles and responsibilities
    • Allocate resources appropriately
    • Establish policies and procedures
  2. Map: Identify and document contexts and risks
    • Define the context in which the AI system will operate
    • Identify potential risks and impacts
    • Document AI system characteristics and requirements
    • Understand relevant regulations and standards
  3. Measure: Quantify and qualify AI risks
    • Assess likelihood and impact of identified risks
    • Evaluate risks against organizational tolerance
    • Prioritize risks based on assessment
    • Document assessment methodologies and results
  4. Manage: Implement mitigation strategies
    • Develop and implement controls
    • Test effectiveness of controls
    • Monitor for new or changing risks
    • Continuously improve risk management processes
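The Measure and Manage functions above can be illustrated with a small risk-register sketch. The following Python is purely illustrative and is not part of the NIST framework; the class names, the 1–5 scales, and the tolerance threshold are all hypothetical choices. It scores each mapped risk by likelihood × impact, a common qualitative technique, and flags those exceeding an organizational tolerance for treatment:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a hypothetical AI risk register."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Common qualitative scoring: likelihood x impact
        return self.likelihood * self.impact

def prioritize(risks: list[Risk], tolerance: int = 8) -> list[Risk]:
    """Measure -> Manage: keep risks above tolerance, highest score first."""
    flagged = [r for r in risks if r.score > tolerance]
    return sorted(flagged, key=lambda r: r.score, reverse=True)

# Example register (illustrative risks only)
register = [
    Risk("Training-data bias", likelihood=4, impact=4),
    Risk("Model drift in production", likelihood=3, impact=3),
    Risk("Prompt-injection attack", likelihood=2, impact=5),
]

for risk in prioritize(register):
    print(f"{risk.name}: score {risk.score}")
```

In practice an organization would document its own scales, tolerance, and assessment methodology, as the Measure function requires; the point here is only the ordering of steps: assess, compare against tolerance, prioritize.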

ISO/IEC Standards for AI

ISO/IEC 23894: AI Risk Management

This standard provides guidelines for managing risks related to AI systems, extending the general risk management principles of ISO 31000 to the specific characteristics of AI.

ISO/IEC 42001: AI Management Systems

This standard specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system (AIMS) within an organization.

OECD AI Principles

The Organisation for Economic Co-operation and Development (OECD) AI Principles provide an internationally recognized framework for the responsible development of trustworthy AI.

  1. Inclusive growth, sustainable development and well-being
    • AI should benefit people and the planet by driving inclusive growth and sustainable development
  2. Human-centered values and fairness
    • AI systems should respect human rights, democratic values, and diversity
    • They should be designed to promote fairness and non-discrimination
  3. Transparency and explainability
    • There should be transparency and responsible disclosure around AI systems
    • People should understand AI outcomes and be able to challenge them
  4. Robustness, security and safety
    • AI systems should function in a robust, secure, and safe way throughout their lifecycles
    • Potential risks should be continually assessed and managed
  5. Accountability
    • Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning

UNESCO Recommendation on the Ethics of AI

UNESCO’s Recommendation, adopted in 2021, is the first global standard-setting instrument on the ethics of artificial intelligence.

Core Values

  1. Human rights and human dignity
    • Respect, protection, and promotion of human rights and fundamental freedoms
  2. Living in peaceful, just, and interconnected societies
    • Promoting peaceful, inclusive societies
  3. Ensuring diversity and inclusiveness
    • Ensuring cultural and social diversity in AI development and use
  4. Environment and ecosystem flourishing
    • Promoting environmental sustainability and ecosystem health

Ten Core Principles

  1. Proportionality and Do No Harm
  2. Safety and Security
  3. Right to Privacy and Data Protection
  4. Multi-stakeholder and Adaptive Governance & Collaboration
  5. Responsibility and Accountability
  6. Transparency and Explainability
  7. Human Oversight and Determination
  8. Sustainability
  9. Awareness & Literacy
  10. Fairness and Non-Discrimination

EU AI Act Risk Categories

The EU AI Act categorizes AI systems based on their potential risk level:

1. Unacceptable Risk (Prohibited Practices)

  • Practices considered a clear threat to safety or fundamental rights, such as social scoring by public authorities, are banned outright.

2. High Risk

  • Systems used in sensitive domains such as critical infrastructure, education, employment, and law enforcement; these must meet strict requirements (risk management, data governance, human oversight) before being placed on the market.

3. Transparency Risk

  • Systems such as chatbots and generators of synthetic content, which must make clear to users that they are interacting with AI or viewing AI-generated output.

4. Minimal or No Risk

  • All other systems, such as spam filters or AI in video games, which face no additional obligations under the Act.
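The four tiers above can be sketched as a simple classification structure. The Python below is an illustrative sketch only: the enum labels and the example mapping are hypothetical, and real classification under the Act depends on its annexes and legal analysis, not a lookup table:

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers (illustrative labels)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity requirements before market placement"
    TRANSPARENCY = "disclosure obligations"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of example use cases to tiers, for illustration.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case} -> {tier.name}: {tier.value}")
```

The design point the tiers encode is proportionality: obligations scale with potential harm, from an outright ban down to no specific obligations at all.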