As you’ve seen in this course, there are many reasons why organizations might adopt AI systems. However, we’ve also discussed how adopting trustworthy AI puts additional requirements on businesses to review their internal processes. As covered before, trustworthy AI tools have technical requirements related to reliability, transparency, fairness, and resilience – and their development can’t be done in isolation from business practices. That’s why we’ve dedicated this section to the value of collaboration for AI development. As we’ll see, trustworthy AI can lower the threshold for adopting AI-based systems because businesses can do so in a more ethically sound manner. Adopting trustworthy AI can also help win stakeholder buy-in: explainability and transparency raise people’s confidence in using AI-based systems.
The purpose and role of AI systems and who benefits from them
When it comes down to it, AI is an imperfect tool that reflects the data we provide it, not a neutral and perfect solution. You can think of AI as the bucket of paint a painter needs to make their art: the paint doesn’t do the painting by itself. But how can we help increase the trust people have in this tool? That’s where trustworthy AI comes into play. The concept of trustworthy AI aims to create a norm of asking:
1) Why are we using AI for something?
2) For whom is the AI-powered solution designed?
3) What kind of data does the solution build on?
4) Who does the solution affect?
Before going into the details of how to make the steps of model building trustworthy, one should always first be clear about the why behind it. AI shouldn’t be introduced into an organization’s processes just for the sake of having AI. From there, trustworthy AI can help illuminate the reasons why an AI system makes a decision, increasing users’ trust – and making AI a more useful tool for us all.
An organizational perspective on trustworthy AI development
“This all sounds great,” you might be thinking. “I want to implement trustworthy AI in our next project. How do I do it?” Unfortunately, there’s no single straightforward checklist for organizations that want to make AI-driven tools more trustworthy. However, there are a few things you should take into account for the successful implementation of trustworthy AI.
First, organizations need to ensure compliance with regulations around data processing and privacy, automated decision-making, human rights, proprietary information, and intellectual property. To ensure such compliance, it’s useful to work together with experts on regulation, ethics, and data processing. This can mean that people from different departments such as Legal, IT and Security, Purchasing, and maybe even Human Resources (HR) should get together to discuss these topics.
Second, to ensure reliable and stable operations, systems need to be tested for robustness. They should be able to handle adverse and often difficult inputs, whether posed to them intentionally or unintentionally – think cybersecurity threats or accidental data breaches. This requires extensive testing, preferably with data or people that haven’t previously been engaged in the AI training process. When testing for robustness, it’s important to also consider possibilities of discrimination and bias, to explore how the algorithm could affect groups that are disadvantaged or vulnerable in society, and – whenever possible – involve these groups directly in the testing and development process. Even when the possibilities of discrimination seem remote, almost every machine learning application has the potential to impact groups of people in different ways, often reinforcing systemic biases. No system is perfect, but thorough testing can go a long way toward making it better.
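To make this concrete, here is a minimal sketch of what part of such a bias check could look like in Python. It assumes a fitted scikit-learn classifier, a held-out test set that wasn’t used in training, and a hypothetical sensitive-attribute column named "group"; a real audit would choose its metrics and subgroup definitions together with domain experts and the affected communities.

```python
# A minimal sketch of a group-wise bias check (assumptions: a fitted binary
# scikit-learn classifier `model`, a pandas test set held out from training,
# and a hypothetical sensitive-attribute column named "group").
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

def audit_by_group(model, X_test: pd.DataFrame, y_test: pd.Series,
                   group_col: str = "group") -> pd.DataFrame:
    """Report accuracy and recall per subgroup so large gaps stand out."""
    rows = []
    for group, idx in X_test.groupby(group_col).groups.items():
        features = X_test.loc[idx].drop(columns=[group_col])
        labels = y_test.loc[idx]
        preds = model.predict(features)
        rows.append({"group": group, "n": len(idx),
                     "accuracy": accuracy_score(labels, preds),
                     "recall": recall_score(labels, preds)})
    report = pd.DataFrame(rows)
    # A single summary number makes the worst-case disparity easy to track.
    print("largest accuracy gap between groups:",
          round(report["accuracy"].max() - report["accuracy"].min(), 3))
    return report
```

A disparity report like this is a starting point for discussion, not a verdict: which gaps matter, and how large is too large, are questions for the whole team – including the groups affected.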
Third, the societal context of AI deployment needs to be taken into account, especially when it comes to use in public environments. AI development often happens in simulated and controlled environments, far removed from the contexts in which AI will be deployed. This can lead to complications for actual deployment and implementation. For instance, testing and deploying (semi-)autonomous vehicles such as drones, cars, and delivery carts is complicated due to technical and environmental uncertainties. As a result, there is little to no real-life data that can be used to adjust the models. These limitations occur because licenses are rarely provided for the deployment of autonomous AI-driven vehicles – it wasn’t until 2021 that a driverless delivery vehicle company became the first in Europe to receive a pilot license. The infrastructure for actual technical deployment therefore still needs to be developed.
The fourth point to consider is the human who interacts with the AI product or solution. The product should be easy to use and understand, which also feeds into its success. This approach raises the trustworthiness of AI beyond the technical aspects, which can be an additional advantage to organizations. When it’s feasible (meaning when it improves the solution’s trustworthiness), it’s important to involve humans in monitoring the results, for example through human-in-the-loop approaches.
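As one illustration, the sketch below shows a common human-in-the-loop pattern: the model decides automatically only when it is confident, and defers to a human reviewer otherwise. The threshold value and the `send_to_human_review` callback are hypothetical placeholders that a real system would need to design and tune.

```python
# A minimal human-in-the-loop sketch: confident predictions are automated,
# uncertain ones are escalated to a person. The threshold and the
# `send_to_human_review` callback are hypothetical, application-specific choices.
CONFIDENCE_THRESHOLD = 0.85  # assumption: tuned on validation data

def decide(model, x, send_to_human_review):
    """Return the model's label if confident, else escalate to a human.

    Assumes `model` is a fitted classifier with predict_proba and `x` is a
    single sample shaped the way the model expects (e.g. a 2-D row).
    """
    proba = model.predict_proba(x)[0]
    confidence = float(proba.max())
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"label": int(proba.argmax()), "source": "model",
                "confidence": confidence}
    # Low-confidence cases go to a person; their decisions can later be fed
    # back into the training data to improve the model.
    return {"label": send_to_human_review(x, proba), "source": "human",
            "confidence": confidence}
```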
Fifth, technical methods that provide explanations for machine learning decisions should be implemented. However, they shouldn’t be a standalone solution. It’s important to document organizational decisions that can explain why a machine learning model was developed in a certain way. Explanations about the development process are as important as technical AI explainability, if not more so. This also ties into transparency and open data, which should be ensured whenever possible, provided they don’t compromise robustness.
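As an illustration of one widely used, model-agnostic explanation method, the sketch below computes permutation importance with scikit-learn on synthetic data. It shows which features the model leans on most; as noted above, such technical output should sit alongside documentation of the development decisions behind the model.

```python
# A minimal sketch of permutation importance: shuffle one feature at a time
# and measure how much the test score drops. Synthetic data stands in for a
# real dataset here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print features from most to least influential on the model's score.
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"± {result.importances_std[i]:.3f}")
```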
Sixth, and finally, it’s important to define who in the organization is responsible for particular processes during regular operation, not only when something goes wrong. Assigning responsibility can be difficult because the repercussions can be severe. However, this shouldn’t deter organizations from thinking carefully about it.
Making effective use of trustworthy AI isn’t an easy feat, especially because it has implications beyond the technology itself and its output.
Collaborating on trustworthy AI development
We just discussed how intra-organizational collaboration can be important for the development of trustworthy AI. But collaboration can also be beneficial or even necessary for data scientists, engineers, or social scientists to make their AI tools even better.
AI research and development can benefit from the knowledge created in fields of science beyond data science and engineering. An interdisciplinary approach has proven key to solving complex problems in other areas. Even the Industrial Revolution meant more than a change in technology – it changed the fabric of society on a much deeper level. Teams working on AI tools should therefore strive for diversity in their makeup. Data science specialists can team up with experts from different technical specialties, but also with experts from application domains, such as healthcare, education, robotics, or transportation. Such experts will be familiar with the specific challenges and opportunities of the area they specialize in. In many projects on digital innovations, a multi-stakeholder approach is considered best practice.
Recently, businesses and funding agencies have realized the benefits of involving social scientists in the process of AI development. Social science perspectives add value because social scientists are extensively trained in critical reflection on ethical and societal values – especially on how these values may unintentionally creep into end products. Given the range of application areas for AI and their potentially far-reaching implications, this reflection is especially salient. AI development teams could therefore benefit from involving experts in ethics, communication, or sociology early on, to reflect on processes and outcomes in new ways, rather than trying to add ethical frameworks in the final stages.
The benefit of diversity
Another dimension of diversity is having team members with different cultural backgrounds and genders. Data scientists and engineers have long been a mobile workforce, and with the increased popularity of AI, this mobility has grown manifold. Businesses aim to attract talent from all areas of the world to build the best teams for their organization. Moreover, international collaboration among businesses and development teams based in different countries is common in the field. The benefit of this form of diversity is that teams have access to different perspectives, values, and practices, which can help them in their approach to AI development. At the same time, however, these differences can become a complicating factor if they aren’t acknowledged or resolved. In the end, the largest benefit lies in resolving differences and finding ways of working together. Working in diverse teams can also help prevent (unconscious) bias against specific groups.
It’s easy to say that diversity in teams is important and valuable, but it’s another matter to implement it effectively. Interdisciplinary team members can have different backgrounds, for example in terms of experience or culture, and different training in terms of discipline. Therefore, teams should put in additional effort in communication to overcome such challenges.
Teams can implement several strategies to prevent or solve potential issues with language and communication processes. As you’ll probably notice, the success of these strategies lies in part in the level of trust teams have in each other and the project. While these strategies may be quite general, they certainly apply to AI projects as well.
The first aspect that team leads should focus on is the development of good relationships within the teams, or with teams from other organizations. While much of the work in AI development projects can be done remotely and through digital means, it’s important to make time for social interactions. In-person meetings, while costly in terms of time and funding, tend to foster good connections and smoother interactions among team members.
The second strategy is to discuss and make clear and explicit agreements on communication processes before the project starts, and to review these while the collaboration is ongoing. For example, it should be clear how often a team should meet, who should be present, and the focus area or goal of meetings. It’s fine to involve only those who need to be involved: high-level discussions don’t require all members of the team to be present, and meetings on a particular topic don't need to involve those members who work on other parts of the project. However, a regular checkup to reflect on the alignment of goals and processes is extremely important.
The third approach is to develop a shared vocabulary as a team. With such a vocabulary, team members can share their views on key concepts relevant to the projected final outcomes and discuss perceived overlap or disagreements. The vocabulary can serve as a repository that team members can check during the lifetime of the project. Teams can agree to make the vocabulary a living document or to update it during a later stage of the project.
Finally, open communication plays a significant role in the success of collaborative projects. It invites proactivity: team members get the opportunity to speak up about potential misunderstandings or ask clarifying questions before issues become too entrenched to resolve – or turn into conflict. In diverse teams, members need to be receptive to others who raise such points, as these may stem from differences in language proficiency or different expert backgrounds.
If communication problems arise or persist, it can be helpful to assign someone a broker role. This person can bridge differences in language and pinpoint where team members align or diverge. A broker can be useful even before problems surface, acting as an additional checkpoint to verify that all team members share the same view of the project.