I.

Trustworthy AI requirements in practice

If you’ve studied our Trustworthy AI course, you know that AI systems can have a societal impact – and you’ve read about the challenges of bias in data and the complications it creates for developing fair models. Complex, opaque AI designs make it difficult to explain the reasoning behind AI decisions, which diminishes trust in AI systems. Fairness, explainable AI (XAI), and resilience were presented as important dimensions that contribute to trustworthiness. The goal of this course is to go beyond a basic understanding of what’s at stake and provide practical technical solutions for designing trustworthy AI.

In the course Basics of Trustworthy AI, we provided you with many reasons for making AI more trustworthy. The examples you’ve read touch upon issues like:

  • Bias in datasets used by governments for decision-making

  • Questions around who’s responsible when automated vehicles cause accidents

  • Validity concerns that arise when you consider whether a healthcare algorithm does what it’s supposed to do

  • Privacy infringements caused by the cameras of self-driving cars


To deal with such complicated challenges, there’s a push for AI owners and developers to build, train, deploy, and monitor algorithms in a more trustworthy manner. Trustworthy AI has many dimensions. The European Commission has identified seven requirements for AI to be considered trustworthy:

  1. Human agency and oversight

  2. Technical robustness and safety

  3. Privacy and data governance

  4. Transparency

  5. Diversity, non-discrimination, and fairness

  6. Societal and environmental well-being

  7. Accountability

These requirements are interconnected and support each other. In this course, we present and explain technical methods that can help you work towards these trustworthy AI requirements.

[Figure] The seven requirements for trustworthy AI, shown in a diagram. Text in the middle notes that these requirements need to be continuously evaluated and addressed throughout the AI system’s life cycle. Image source (p.17).

How to evaluate dataset balance, bias, and fairness

The development of trustworthy AI begins with reflecting on the datasets we use to develop and train machine learning models. Data, whether collected from the real world or synthetically generated, can always contain bias introduced by human behavior. It’s therefore crucial to evaluate the balance and fairness of the data. In Chapter 2, we illustrate how bias can occur at different levels and how developers can mitigate it.
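To make this concrete, here’s a minimal sketch of what a first balance check might look like in Python. The tiny dataset, the sensitive attribute "gender", and the label "hired" are hypothetical, chosen purely for illustration:

```python
# A minimal sketch of a dataset balance check, using a hypothetical
# DataFrame with a sensitive attribute "gender" and a binary label "hired".
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "M", "F", "M"],
    "hired":  [0,   1,   1,   0,   1,   0,   1,   1],
})

# Absolute counts per group: a strongly skewed distribution is a first
# warning sign that a model may underperform for the minority group.
print(df["gender"].value_counts())

# Positive-outcome rate per group: large gaps can hint at label bias.
print(df.groupby("gender")["hired"].mean())
```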

To handle bias in data used for training AI models, you need to make a diagnosis: you first detect the bias, and only then take mitigation measures or intervene in the process. This is an iterative process, because after mitigation – which can happen during pre-processing, in-processing, or post-processing – another round of detection should take place, until you achieve satisfactory results. This chapter discusses tools that foster diversity, non-discrimination, and fairness, and that advance human agency and oversight.
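As an illustration of the detection step, the sketch below computes disparate impact: the ratio of positive-outcome rates between an unprivileged and a privileged group. The arrays, the group encoding, and the 0.8 threshold (the widely cited "80% rule") are illustrative assumptions, not the only valid choices:

```python
# A minimal sketch of bias detection on binary model predictions, assuming
# a binary sensitive attribute where 1 marks the privileged group.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])      # model predictions
privileged = np.array([1, 1, 1, 0, 0, 0, 0, 1])  # group membership

rate_priv = y_pred[privileged == 1].mean()
rate_unpriv = y_pred[privileged == 0].mean()

disparate_impact = rate_unpriv / rate_priv
print(f"Disparate impact: {disparate_impact:.2f}")

# If the ratio falls below ~0.8, apply a mitigation step (for example,
# reweighing during pre-processing) and measure again: detection and
# mitigation iterate until the result is satisfactory.
if disparate_impact < 0.8:
    print("Potential bias detected; mitigate and re-evaluate.")
```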

How to develop explainable and interpretable AI

Explainability has been one of the hot topics of AI development in recent years. Many machine learning model types, such as linear models and decision trees, are interpretable by design. However, some solutions, such as various types of deep learning models, are called “black boxes” because it’s difficult to understand the model’s inner workings and which features of the data lead to a given outcome. A lack of transparency about how these black-box models work makes it difficult to trust their outcomes, as we can’t reason about why they behave in a given way. Explainable AI (XAI) was introduced as a means to provide insights into how algorithms reach decisions and generate specific outcomes. Chapter 3 focuses on how we can use these methods to increase the transparency of the AI systems we build. Together, we’ll look into the technical aspects of XAI and different approaches aimed at explaining different phases of AI development and deployment. To do this, we present various types of XAI analysis, as well as ways to interpret the outcomes and how to design multiservice XAI tools.
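As a small preview, here’s a sketch of one model-agnostic XAI technique, permutation importance, using scikit-learn. The dataset and model are placeholders; the same call works for any fitted estimator:

```python
# A minimal sketch of permutation importance: shuffle each feature in turn
# and measure how much test accuracy drops. Features whose shuffling hurts
# most contribute most to the model's predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Print the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Because the technique only needs the model’s predictions, it also applies to black-box models where inspecting internal weights isn’t an option.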

How to build resilient AI solutions

AI systems are challenged by security and privacy attacks, which need to be mitigated in order to provide technical robustness and safety. In Chapter 4, we give you an overview of the different types of attacks. To make machine learning systems resilient, we’ll also equip you with insights into how you can mitigate security attacks, for instance through resilience by design, protection and detection mechanisms, defense strategies, and data sanitization. Moreover, we present differential privacy as a strategy for mitigating privacy attacks.
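To give a flavor of differential privacy, the sketch below implements the Laplace mechanism, a basic building block that adds noise calibrated to a query’s sensitivity and a privacy budget epsilon, so that no individual record can be singled out. The query, data, and epsilon value are illustrative assumptions:

```python
# A minimal sketch of the Laplace mechanism for differential privacy.
import numpy as np

rng = np.random.default_rng(seed=0)

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float) -> float:
    """Return a differentially private answer to a numeric query."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Counting query: adding or removing one person changes the count by at
# most 1, so the query's sensitivity is 1.
ages = np.array([34, 45, 29, 61, 52])
true_count = float((ages > 40).sum())

private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"True count: {true_count}, private count: {private_count:.2f}")
```

A smaller epsilon means more noise and stronger privacy; choosing it is a trade-off between the utility of the answer and the protection of the individuals in the data.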



Next section
II. Developing trustworthy AI in an organization