The OECD AI Principles promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values. Initially adopted in 2019 and updated in May 2024, the principles guide AI actors in developing trustworthy AI and provide policymakers with recommendations for effective AI policies.
AI should contribute to economic growth, social development, and environmental sustainability, benefiting humanity as a whole.
AI systems should respect human rights, democratic values, fairness, and privacy, ensuring that the development and use of AI align with fundamental societal values.
AI systems should be transparent and explainable: stakeholders should be able to understand how they function and how they reach decisions, which promotes accountability and trust.
AI systems should be designed to operate reliably, securely, and safely throughout their lifecycle, minimizing risks and ensuring resilience against attacks.
Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Governments should invest in fundamental research and encourage private sector investment to foster innovation in trustworthy AI.
Policies should support a digital ecosystem where AI technologies can flourish, including digital infrastructure, mechanisms for sharing data and knowledge, and support for small and medium enterprises.
Governments should create a policy environment that supports the development and use of trustworthy AI, including standards, regulatory frameworks, and international cooperation.
Policies should help people develop skills for AI-related jobs, support workers through transitions, and ensure that the benefits of AI are broadly shared.
Governments should work together to share information, develop standards, and help ensure that AI is used to address global challenges such as climate change, health, and sustainable development.
An AI system is a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. AI systems vary in their levels of autonomy and adaptiveness after deployment.
The OECD Recommendation on AI is the first intergovernmental standard on AI, with 47 adherents to the Principles. Countries use the OECD AI Principles and related tools to shape policies and create AI risk frameworks, building a foundation for global interoperability between jurisdictions. Bodies such as the European Union, the Council of Europe, the United States, and the United Nations use the OECD's definition of an AI system and its lifecycle in their legislative and regulatory frameworks and guidance.