
AI Management, Risk Management, and Governance

Glossary of Key Terms

Artificial Intelligence (AI): Technology that enables computers and machines to simulate human learning, comprehension, problem-solving, decision-making, creativity, and autonomy.

Machine Learning (ML): A subset of AI that involves creating models by training an algorithm to make predictions or decisions based on data without being explicitly programmed for specific tasks.
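To make "without being explicitly programmed" concrete, here is a minimal, purely illustrative sketch: instead of hard-coding the rule y = 2x + 1, the parameters are learned from noisy data by gradient descent. All names and values are assumptions for the example, not part of any standard API.

```python
import numpy as np

# Generate noisy observations of the hidden rule y = 2x + 1.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2 * X + 1 + rng.normal(0, 0.05, size=100)

w, b = 0.0, 0.0          # model parameters, learned from data
lr = 0.1                 # learning rate
for _ in range(500):     # gradient descent on mean squared error
    pred = w * X + b
    w -= lr * 2 * np.mean((pred - y) * X)
    b -= lr * 2 * np.mean(pred - y)

# w and b end up close to the true values 2 and 1 without the
# rule ever being written into the program.
```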

Deep Learning: A subset of machine learning that uses multilayered neural networks to more closely simulate the complex decision-making power of the human brain.

Neural Network: Computing systems inspired by the human brain’s structure and function, consisting of interconnected layers of nodes that work together to process and analyze complex data.
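The "interconnected layers of nodes" can be sketched as a single forward pass: each layer computes a weighted sum of its inputs and applies a nonlinearity. The weights below are fixed illustrative numbers, not a trained model.

```python
import numpy as np

def relu(x):
    # Common nonlinearity: pass positive values, zero out negatives.
    return np.maximum(0, x)

x = np.array([0.5, -0.2])                 # input features
W1 = np.array([[0.1, 0.4], [-0.3, 0.8]])  # layer 1 weights (illustrative)
b1 = np.array([0.0, 0.1])
W2 = np.array([[0.7], [-0.5]])            # layer 2 weights (illustrative)
b2 = np.array([0.2])

h = relu(x @ W1 + b1)   # hidden layer: weighted sum, then nonlinearity
out = h @ W2 + b2       # output layer combines hidden activations
```

Deep learning stacks many such layers, which is what lets the network model progressively more abstract features of the data.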

Generative AI: Deep learning models that can create complex original content such as text, images, video, or audio in response to prompts or instructions.

AI Governance: The framework of policies, principles, and practices that guide the ethical development, deployment, and use of artificial intelligence technologies.

AI Risk Management: The systematic application of policies, procedures, and practices to identify, analyze, evaluate, and address risks associated with the development and use of AI systems.

AI Ethics: The branch of ethics that focuses on the moral issues related to the creation and use of artificial intelligence, addressing questions about fairness, transparency, privacy, and accountability.

Explainability: The ability to explain the processes and decisions of AI systems in terms understandable to humans, making clear how and why a particular decision or prediction was made.
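One common model-agnostic way to approach explainability is permutation importance: shuffle one input feature and measure how much the model's error grows. The model and data below are synthetic stand-ins chosen so that feature 0 clearly matters more than feature 1.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] + 0.1 * X[:, 1]      # feature 0 drives the outcome

def model(X):
    # Stand-in for any trained model being explained.
    return 3 * X[:, 0] + 0.1 * X[:, 1]

def importance(model, X, y, col):
    base = np.mean((model(X) - y) ** 2)       # baseline error
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])  # destroy the feature's signal
    return np.mean((model(Xp) - y) ** 2) - base  # error increase

# importance(model, X, y, 0) dwarfs importance(model, X, y, 1),
# telling a human which input the model actually relies on.
```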

Transparency: Making the workings of an AI system open and accessible, clearly sharing insights into how an AI model is developed, deployed, and used.

Accountability: Clear attribution of responsibility for the actions taken by AI systems, with processes in place to address issues and to mitigate biases or unintended consequences.

Fairness: The quality of AI systems being designed and operated to avoid bias and provide impartial, just, and equitable decisions.

Bias: Systematic errors in AI systems that can lead to unfair outcomes, often reflecting historical or societal prejudices present in training data.
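One simple, widely used bias check is the demographic parity difference: compare the rate of positive outcomes across groups defined by a protected attribute. The decisions below are synthetic data for illustration only.

```python
import numpy as np

group = np.array([0, 0, 0, 0, 1, 1, 1, 1])     # protected attribute
approved = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # model decisions

rate_a = approved[group == 0].mean()  # approval rate for group 0
rate_b = approved[group == 1].mean()  # approval rate for group 1
disparity = abs(rate_a - rate_b)      # a large gap flags possible bias
```

A single metric like this is only a starting point; fairness audits typically combine several such measures with qualitative review.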

Data Governance: The overall management of the availability, usability, integrity, and security of data used in AI systems, ensuring sensitive information is protected.

Model Drift: The degradation of model performance due to changes in the data distribution or relationships between input and output variables over time.
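A minimal sketch of drift monitoring: compare a live feature's distribution against its training-time reference. The statistic and threshold here are deliberately crude illustrations; production systems typically use richer tests (e.g. population stability index or Kolmogorov–Smirnov).

```python
import numpy as np

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, size=1000)  # distribution at training
live_feature = rng.normal(0.8, 1.0, size=1000)   # distribution in production

def drifted(reference, live, threshold=0.5):
    # Flag drift when the mean has shifted by more than `threshold`
    # reference standard deviations.
    shift = abs(live.mean() - reference.mean()) / reference.std()
    return shift > threshold

# drifted(train_feature, live_feature) flags the shifted production data,
# while comparing the training data against itself does not.
```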

High-Risk AI Systems: AI applications that can pose serious risks to health, safety, or fundamental rights, requiring strict obligations under frameworks like the EU AI Act.

AI Impact Assessment: A structured process to evaluate the potential effects of an AI system on individuals, communities, and society before deployment.

Red Teaming: The practice of deliberately challenging an AI system to identify vulnerabilities, biases, or other issues before deployment.

AI Lifecycle: The entire process of AI development and deployment, including planning, data collection, model development, testing, deployment, monitoring, and retirement.

Robustness: The ability of an AI system to maintain reliable and effective operation even under unexpected and difficult conditions.
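A rough way to probe robustness is to perturb an input with small random noise and count how often the model's prediction flips. The toy classifier, noise level, and input below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Stand-in classifier: thresholded linear score.
    return 1 if x @ np.array([0.6, -0.4]) > 0 else 0

x = np.array([1.0, 0.5])   # a clean input, predicted class 1
flips = sum(
    model(x + rng.normal(0, 0.05, size=2)) != model(x)
    for _ in range(100)    # 100 small random perturbations
)
# A robust model keeps its prediction across nearly all perturbations,
# so flips stays at or near zero here.
```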