II. Developing trustworthy AI in an organization

Let’s do a recap of the Trustworthy AI course. We discussed how to work together as a team or organization to develop trustworthy AI solutions. Firstly, we need to align within the team or organization on trustworthiness as a shared goal. We found several good reasons to start this work, including the stronger ethical grounding of the projects we build. We also recognized that many aspects of a project’s trustworthiness can help us gain the trust, or “buy-in”, of the different stakeholders around it.

We presented a six-point checklist for getting started with building more trustworthy AI.

Firstly, the organization should make sure it’s complying with existing laws and regulations around both privacy and AI. This effort is shared by many different roles in the organization, for example IT, legal, HR, and purchasing.

Secondly, the system should be built with thorough testing in mind – good testing practices make the end result more robust and better understood. In parallel with testing, there should also be ways of inspecting and mitigating possible biases in the data or in the model outcomes, as sketched below.
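
To make the bias inspection step concrete, here is a minimal sketch of one common check, the demographic parity difference: the gap in positive-outcome rates between groups. The column names and toy data here are hypothetical stand-ins; a real project would run such a check on its own data, likely with a dedicated fairness library.

  import pandas as pd

  # Hypothetical model outputs: a group label and a binary prediction.
  df = pd.DataFrame({
      "group": ["a", "a", "a", "b", "b", "b"],
      "prediction": [1, 1, 0, 1, 0, 0],
  })

  # Positive-prediction rate per group.
  rates = df.groupby("group")["prediction"].mean()
  print(rates)

  # Demographic parity difference: the gap between the highest and lowest
  # rates. Values far from 0 suggest a disparity worth investigating.
  print("parity gap:", rates.max() - rates.min())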

Thirdly, the societal or “real world” context in which the AI solution will be taken into use needs to be considered. What complications will our project face outside the boundaries of its development and testing environments? How do we account for those challenges?

Fourthly, we need to keep in mind that the AI tools we build should take a human-centric approach. Not only is this crucial for the end product’s success and usability, but it’s also an ethical concern. How can we build products that are easy to interact with, that people like to use, and that they can understand well enough? Also, does the product or solution take into account users from diverse backgrounds?

The fifth aspect is explainability. What technical explainability solutions would this project require, and what role or importance does explainability have for the different stakeholders and users? One common model-agnostic technique is sketched below.
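
As an illustration of a technical explainability solution, the sketch below uses permutation importance from scikit-learn: it measures how much a model’s accuracy drops when a feature’s values are shuffled, hinting at which inputs the model relies on. The dataset and model are illustrative stand-ins, not a prescription; the right technique depends on the model, the task, and the audience.

  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.inspection import permutation_importance
  from sklearn.model_selection import train_test_split

  # An illustrative dataset and model; any fitted estimator would do.
  X, y = load_breast_cancer(return_X_y=True, as_frame=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
  model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

  # Shuffle each feature several times and record the drop in accuracy.
  result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                  random_state=0)

  # Report the five most influential features.
  for i in result.importances_mean.argsort()[::-1][:5]:
      print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")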

Lastly, the sixth checkpoint is agreeing on and documenting responsibility and ownership for possible issues or malfunctions of the solution.

In the Trustworthy AI course, the importance of cross-team and cross-discipline collaboration was also highlighted. When developing AI solutions, we rely on the diversity of views and skills within the team to understand the different aspects and impact areas of the solutions we build. This means both having a diverse set of people building the technical product and not hesitating to reach across our organizations for different perspectives or expert views. Having diverse teams also makes it easier to account for and mitigate biases in the data and model, as different people pay attention to different parts of them. While diversity and collaboration are valuable, they can also be hard to achieve. We listed some ways to ease the path, including good team communication practices, making explicit agreements ahead of time on how often the team will check in together, developing a shared vocabulary, and encouraging and practicing open communication.

So in short, there are a few practical steps that your organization can follow to implement trustworthy AI:

  • Ensure compliance

  • Work together with experts on regulation, ethics, and data processing

  • Take the societal context of AI deployment into account and involve societal actors from this context in the design stage

  • Put human priorities at the forefront of your machine learning implementation

  • Document and explain the development process

  • Test for robustness

  • Define who’s responsible in your organization if something goes wrong

  • Work in diverse and interdisciplinary teams

In the following chapters, we’re going to look deeper into what it actually takes to implement trustworthy AI from a technical point of view. Welcome to the course!

