AI Ethics and Governance in Practice: AI Fairness in Practice

From The Embassy of Good Science


What is this about?

In 2021, the UK's National AI Strategy recommended that the UK Government's official Public Sector Guidance on AI Ethics and Safety be transformed into a series of practice-based workbooks. The result is the AI Ethics and Governance in Practice Programme. This series of eight workbooks provides end-to-end guidance on how to apply principles of AI ethics and safety to the design, development, deployment, and maintenance of AI systems. It provides public sector organisations with a Process Based Governance (PBG) Framework designed to assist AI project teams in ensuring that the AI technologies they build, procure, or use are ethical, safe, and responsible.

This workbook explores how a context-based and society-centred approach to understanding AI Fairness can help project teams better identify, mitigate, and manage the many ways that unfair bias and discrimination can crop up across the AI project workflow.

Why is this important?

Reaching consensus on a commonly accepted definition of AI Fairness has long been a central challenge in AI ethics and governance. There is a broad spectrum of views across society on what the concept of fairness means and how it should best be put into practice.

We begin by exploring how, despite the plurality of understandings about the meaning of fairness, priorities of equality and non-discrimination have come to constitute the broadly accepted core of its application as a practical principle. We focus on how these priorities manifest in the form of equal protection from direct and indirect discrimination and from discriminatory harassment. These elements form ethical and legal criteria based upon which instances of unfair bias and discrimination can be identified and mitigated across the AI project workflow.  

We then take a deeper dive into how the different contexts of the AI project lifecycle give rise to different fairness concerns. This allows us to identify several types of AI Fairness (Data Fairness, Application Fairness, Model Design and Development Fairness, Metric-Based Fairness, System Implementation Fairness, and Ecosystem Fairness) that form the basis of a multi-lens approach to bias identification, mitigation, and management.

For whom is this important?

What are the best practices?

This workbook discusses how to put the principle of AI Fairness into practice across the AI project workflow through Bias Self-Assessment and Bias Risk Management as well as through the documentation of metric-based fairness criteria in a Fairness Position Statement.
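One family of metric-based fairness criteria that a Fairness Position Statement might document is demographic parity, which compares positive-prediction rates across demographic groups. As an illustration only (the function name, group labels, and data below are hypothetical examples, not drawn from the workbook), a minimal check of this criterion could look like:

```python
# Illustrative sketch of one metric-based fairness criterion: demographic
# parity. The function, group labels, and predictions are hypothetical
# examples for exposition, not part of the workbook's PBG Framework.

def demographic_parity_gap(predictions, groups):
    """Return the absolute gap in positive-prediction rates between groups.

    `predictions` is a list of 0/1 model outputs; `groups` holds the
    demographic group label for each corresponding prediction.
    """
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Example: binary predictions for two hypothetical groups, A and B.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A's positive rate is 0.75 and group B's is 0.25, so the gap is 0.50.
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
```

A project team documenting a Fairness Position Statement would state which such metric was chosen, the threshold considered acceptable, and the justification for both, since different metrics encode different and sometimes incompatible notions of fairness.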