Module 1: Foundations of AI Ethics
This module introduces the ethical foundations of artificial intelligence (AI) within European policy frameworks. It covers the definition of an AI system according to the EU AI Act, along with core AI concepts and techniques.
It further explores the ethics-by-design approach, showing how ethical principles can be embedded throughout the AI lifecycle. The module also presents the requirements for trustworthy AI as defined by the High-Level Expert Group on AI and introduces the ALTAI framework as a practical self-assessment tool.
Overall, this module equips learners with the prerequisite knowledge of AI fundamentals and serves as the introductory module of the course.
Learning Goals
- Understand the definition of an AI system under the EU AI Act
- Become familiar with core AI concepts and terminology
- Explain the ethics-by-design approach
- Identify the key requirements for trustworthy AI
- Understand the purpose and basic use of the ALTAI framework
What is this about?
This module introduces the basic concepts and ethical foundations of artificial intelligence (AI) within the European context. It explains what AI systems are according to the EU AI Act, presents key AI concepts, and explores how ethical requirements can be integrated into AI development through an ethics-by-design approach.
Learners will also become familiar with the main requirements for trustworthy AI, as outlined by the High-Level Expert Group on AI, and gain an introductory understanding of the ALTAI self-assessment framework.
Why is this important?
Artificial intelligence is widely used across research, industry, and public services, making it essential to ensure that its development aligns with ethical principles and societal values. Understanding AI in the context of the EU AI Act helps learners navigate emerging regulatory requirements.
This module emphasizes the importance of trustworthy, transparent, and accountable AI, in line with the High-Level Expert Group on AI guidelines, and introduces tools such as ALTAI to support responsible development and evaluation.
AI System Definition
The definition of an AI system in the EU AI Act provides the foundational reference point for determining the scope of AI governance and related ethical obligations. This definition is crucial because it establishes clear criteria for distinguishing AI-based technologies from other software systems, ensuring consistent interpretation across research, development, and policy contexts. A precise understanding of the definition helps stakeholders assess whether a system is subject to specific legal, ethical, and compliance requirements under EU rules. Article 3(1) of the EU AI Act defines an AI system as follows:
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Source: European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence (EU AI Act). Official Journal of the European Union, Article 3(1). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
Core AI Concepts
Ethics-by-design Concept
HLEG Guidelines for Trustworthy AI
