III. The dimensions of trustworthy AI

In the previous sections, we’ve discussed the promises and perils of AI. What we hope became clear is that as AI solutions become part of more and more aspects of society, the need for societally sound AI grows. In response to this need, trustworthy AI is often put forward as an ethical, explainable, reliable, and transparent form of artificial intelligence. But what in the world does trustworthy AI even mean? That’s what we’re going to look at in this section.

According to European guidelines (18), trustworthy AI requires a big-picture approach that takes legal, ethical, and robustness considerations into account. Incorporating these principles can help make AI systems more trustworthy, secure, reliable, and supportive of human values, and ensure that they contribute to the common good. The European Commission indicates that AI should be:

  • Lawful: AI systems must comply with legal and regulatory requirements. This includes ensuring that AI systems respect individual privacy, data protection laws, and anti-discrimination laws. AI systems should also be transparent and explainable so that users can understand how decisions are made and what data is being used. In the following section, we’ll show how this goes beyond technical regulations to also cover regional privacy and market regulations as well as AI directives.

  • Ethical: AI systems must be developed and deployed with ethical considerations in mind. This includes ensuring that AI systems are fair and unbiased, respect human rights, and promote human well-being. Ethical AI should also be transparent and accountable so that users can understand how decisions are made and who’s responsible for those decisions.

  • Robust: AI systems must be designed to be robust and resilient to various types of attacks and failures. This includes ensuring that AI systems keep functioning even when faced with malicious attacks, system failures, or unexpected inputs. Robustness requires that AI systems be tested thoroughly and can operate in different environments and scenarios. In other words, AI systems have to be reliable, safe, and able to handle unexpected situations.

The complexity of trust and trustworthiness

Trustworthy AI is a complex term that blends notions of trust and trustworthiness. That makes it a somewhat fuzzy concept, difficult to measure or make visible, because it builds on our intuitive sense of trust. Charlotte Stix argues that the term personifies technologies, and that this leads us to trust technologies the same way we trust people:

“Just as we may have to resort to trusting a person when we cannot know their intentions, trusting an AI system seems to imply that it has inaccessible intentions that we have no control over, and that we are fine with that status quo” (Charlotte Stix, former coordinator of the European Commission's High-Level Expert Group on AI who currently works with OpenAI) (19).

Because trustworthy AI is meant as an evaluative framework, a way to assess whether technologies are worthy of our trust, it’s important to identify clear factors that lead to trustworthiness. Trustworthy AI can refer to many different aspects, including:

  • trust in the proper functioning and safety of the technology

  • trust in the developers of the technology

  • trust in the users of the technology

  • trust in institutions and organizations that deploy the technology

You can see how these ideas about trustworthiness only partly refer to the technical aspects of trustworthy AI. This already hints at the different understandings people might have when we talk about trust. These understandings of trust and trustworthiness may also depend on cultural and national contexts; how people think of trust and trustworthiness might differ, if sometimes only slightly, between the USA, China, and the EU. In this course, we stick to understandings in a European context, so we use the notions of trustworthy AI conceptualized by the European Commission. It remains important to understand that trustworthiness can mean different things to different actors in different contexts, and that a shared understanding shouldn’t be blindly assumed. As you might predict, this makes it even more difficult to establish trustworthy AI, because what exactly do we mean by trustworthiness?

For example, predictive algorithms can be useful for estimating future probabilities from past data and can provide a lot of value in certain business sectors. Yet these algorithms are quite opaque: they’re very powerful from an accuracy standpoint, but not very interpretable. What happens when heavily regulated sectors, such as banking or insurance, adopt automated decision-making tools that directly affect people’s lives? Ideally, the user is informed about the reason why a decision was made, but with opaque tools this becomes almost impossible. Citizens affected by such decisions can often only evaluate such AI tools on the basis of trusting the organization that deploys them. However, they have the right to receive an understandable explanation.
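To make that accuracy-versus-interpretability tension concrete, here’s a minimal sketch. It assumes Python with scikit-learn and its bundled breast-cancer dataset purely as illustrative choices; the course doesn’t prescribe any particular library, dataset, or model.

```python
# Illustrative sketch: an opaque ensemble vs. a small, readable tree.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Opaque: 200 trees voting together, with no single readable rule set.
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

# Interpretable: a depth-2 tree whose entire logic a person can read.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X_train, y_train)

print("forest accuracy:", forest.score(X_test, y_test))
print("tree accuracy:  ", tree.score(X_test, y_test))
print(export_text(tree))  # the shallow tree's full decision logic
```

Typically the forest scores a little higher, but only the shallow tree could be handed to an affected citizen as an understandable explanation of the decision logic.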

Transparency is a key aspect of trustworthiness for precisely this reason. Transparency makes it possible for people to understand how an algorithm makes decisions, how machine learning works, and what AI output means, by making these various aspects of AI visible. For example, transparency can range from clarifying how data was collected (so that potential biases can be investigated) to information about how machine learning models are built and trained. Moreover, transparency about auditing and monitoring processes puts checks and balances in place and helps us reflect on how machine learning and AI are implemented.

Transparency is therefore a crucial component of trustworthy AI and increases accountability. In connection with transparency, this introductory module on trustworthy AI covers four key dimensions:

  • Fairness: decisions made by computers after a machine-learning process may be considered unfair if their outcomes, or the accuracy of those outcomes, are unequal across sensitive features (such as gender, ethnicity, sexual orientation, or disability). Therefore, algorithmic bias in automated decision processes based on machine learning needs to be corrected (a minimal fairness check is sketched after this list). Fairness forms the basis of Chapter 2, where transparency and lawfulness are discussed.

  • Explainability: it’s often unclear how machine learning results come about due to opaque (black box) decision processes. Explainability requires that machine learning output can be explained and interpreted in a human-intelligible manner, so that people can understand how decisions are made by AI models (see the second sketch after this list). Explainability will be addressed in Chapter 3.

  • Resilience: information systems can face disruptive events (including deliberate attacks, accidents, and naturally occurring threats or incidents). Through risk management, contingency planning, and continuity planning, AI owners should anticipate threats and prepare their systems. Resilient systems are able to withstand and adapt to attacks, adverse conditions, or other potential disruptions (a simple robustness smoke test is sketched after this list). Chapter 4 dives into resilient AI.

  • Privacy preservation: AI owners need to preserve the privacy of the users of their information systems. When users provide personally identifiable information, it should be stored, processed, and shared in a secure and transparent manner (the final sketch after this list shows one classic privacy-preserving building block). Privacy-preserving technologies will also be discussed in Chapter 4.
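To give the fairness dimension a concrete shape, here’s a minimal sketch of one common fairness check, the demographic parity difference: the gap in positive-decision rates between two groups defined by a sensitive feature. The toy loan decisions and the function name are hypothetical illustrations, not part of the European guidelines.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap in positive-decision rates between two groups (0 = parity)."""
    rate_group_0 = y_pred[sensitive == 0].mean()
    rate_group_1 = y_pred[sensitive == 1].mean()
    return rate_group_0 - rate_group_1

# Hypothetical loan decisions (1 = approved) and a binary sensitive feature.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Group 0 is approved 60% of the time, group 1 only 40%: a 0.2 gap.
print(round(demographic_parity_difference(decisions, group), 2))  # 0.2
```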
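For explainability, one widely used model-agnostic technique is permutation importance: shuffle one feature at a time and measure how much the model’s test score drops, which reveals which inputs the model actually relies on. Again, scikit-learn and its bundled dataset are assumed only for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the average drop in accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# The five features the model leans on most, in human-readable terms.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```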
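Resilience is largely an organizational practice (risk management, contingency and continuity planning), but one small technical slice of it can be sketched: a smoke test that checks how gracefully a model degrades on perturbed inputs. The noise level below is an arbitrary assumption chosen for illustration; real robustness testing would also cover adversarial attacks and many more failure scenarios.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Perturb each feature with Gaussian noise at 10% of its spread
# (an arbitrary level, chosen only to illustrate the idea).
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.1 * X_test.std(axis=0), size=X_test.shape)

# A resilient model should degrade gracefully rather than collapse.
print("accuracy on clean inputs:", model.score(X_test, y_test))
print("accuracy on noisy inputs:", model.score(X_test + noise, y_test))
```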
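Finally, for privacy preservation, a classic building block is the Laplace mechanism from differential privacy: release an aggregate statistic with calibrated random noise, so that no single individual’s record can be reliably inferred from the output. The function name and the epsilon value below are illustrative assumptions.

```python
import numpy as np

def dp_count(records, epsilon=1.0):
    """Noisy count of records. A counting query changes by at most 1 when
    one record is added or removed (sensitivity 1), so Laplace noise with
    scale 1/epsilon suffices; smaller epsilon = more noise = more privacy."""
    return len(records) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# A hypothetical database of 100 records: the released count is close to
# 100 but never exact, masking any individual's presence or absence.
print(dp_count(range(100), epsilon=0.5))
```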

Next section
IV. How can (trustworthy) AI transform businesses?