II. Epilogue

Throughout this course, we’ve looked at the trustworthiness of a project from several angles: the role of data, the risks of bias, how we can work towards improving dataset balance and fairness, and what kind of impact that has on model outcomes. We dug into model interpretability and different approaches to explainability, and why being able to explain and understand the inner workings of models through these techniques matters to developers, companies, and individual users. We also looked at privacy and security: what makes a machine learning application secure and robust, and what the impact of possible attacks could be for both developers and end users.

However, while understanding the technical foundations of trustworthy AI is important, perhaps the most important challenge going forward lies in communicating about these topics. For an AI project to succeed, we need to involve a diverse range of experts who are familiar with the area of application and the challenges and opportunities it presents. How can developers, designers, product managers, and all the other people involved in building these solutions collaborate and understand each other? How can we bring everyone to the same table, with a shared language and a shared understanding of the problems at hand?

For a developer, data scientist, or machine learning engineer, this is a key takeaway from the course: when building trustworthy AI solutions, the starting point should always be to align the vocabulary you use. That may mean taking the time to write documentation pages that can be shared within your organization, or scheduling a chat about these topics with your manager over coffee. Once you feel you share the same language for talking about topics such as “fairness,” “explainability,” and “AI system security,” prioritizing and communicating the work done around them becomes much simpler.

It’s also important for the different stakeholders to have good visibility into the technical work. A basic understanding of the challenges, and of the alternative ways a product could be built and why, also helps justify the time and resources put into a project. For example, explaining the different approaches to explainability and how they affect both the expected results and the time it takes to develop the solution lets stakeholders across the organization understand and adjust their expectations accordingly.

When working together, a good rule of thumb is to communicate early. What challenges do you see ahead, and who would be the right people to engage in discussions on how to tackle them? Making modifications and seeking actionable advice is much easier before something has been built. Bring up the topics from this course as early as the first design meetings for new projects.

Lastly, when you do bring up these topics, don’t expect the same level of technical knowledge from every role in the project or company. Instead, bring plenty of easy-to-understand examples and pause to ask whether what you described made sense. Keep in mind that the goal is not to throw technical terms at people to seem knowledgeable, but to work as a team towards a shared basic understanding and an aligned vision of the outcomes.

Note

Effective communication

  • Build a shared knowledge base and vocabulary.

  • Give the different stakeholders visibility into the alternatives and how they impact the project.

  • Ask early: what are the requirements for the different areas of trustworthiness? What challenges do you see ahead?

  • Give plenty of practical examples. Instead of trying to sound knowledgeable, keep communication as accessible and easy to understand as possible.

Congratulations, you've completed the course!

Thank you for joining us on this journey into trustworthy AI and what it means for developers. We hope you gained both theoretical and practical know-how that will help you as you advance in your career, and that you take these learnings with you to build AI tools that can be trusted and that stand the test of time.

We want this course to be as accurate and accessible as possible. If you have feedback or ideas on how to improve it, please drop us a line at hello@minnalearn.com.

Best regards, the writers of the course:

Anouk Mols, Tessa Oomen & João Gonçalves
Erasmus University Rotterdam

Marcus Westberg & Prachi Pagave
Delft University of Technology

Abdul-Rasheed Olatunji Ottun, Mehrdad Asadi, Farooq Ayoub Dar, Mayowa Olapade & Huber Flores
University of Tartu

Bartlomej Siniarski, Shen Wang, Thulita Seneviratha & Chamara Sandeepa
University College Dublin

Vinh La & Ana Cavalli
Montimage

Souneil Park
Telefónica

Magdalena Stenius

Travis Larson

Miikka Holkeri & Laura Bruun
MinnaLearn

