III.

Looking to the future of trustworthy AI

After everything you’ve studied so far, you might be wondering: “What comes next?” The media is certainly full of scare stories about AI – such as who among us will be the first to lose their jobs.

In this section, we’ll look at a few possible scenarios and discuss them with the goal of keeping our predictions firmly grounded in reality. The fact that we can have an interactive chat with a bot that produces code, videos, or poetry feels revolutionary, but it’s not something that fundamentally threatens our control of civilization or sense of our humanity.

Still, it’s important to remember that these tools can produce things their creators don’t want them to. Plenty of prognosticators warn that it would be complacent not to consider AI development carefully. According to a survey by AI Impacts, an American research group, machine learning researchers put a median 5% chance on advanced AI causing “extremely bad” outcomes. So it’s wise to prepare for AI becoming an increasing part of our professions and ways of getting information – while also thinking about it critically.

Prediction 1: Regulation will keep increasing

If you’ve made it this far in the course, you’ve read about the different kinds of regulation that are already being put into place, specifically in the EU. The AI Act introduced in section 4.4 is currently on the stricter end of the spectrum. Different uses of AI are placed in different legal categories based on their risk level (self-driving vehicles require much more disclosure and monitoring than music recommendations). At the time of writing, Britain is taking a lighter approach, trying to apply existing regulations to AI systems. America wants to build and retain its position as an AI superpower and is seeking to create a suitably light-touch rulebook (1).

One of the greatest needs for regulation lies in the interpretability of AI models. As we’ve learned, there are numerous methods to help us understand black-box models, but without the push that comes from proper regulation, the incentive to build explainability into the models themselves remains too low.
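As a reminder of what such a post-hoc method looks like in practice, here is a minimal sketch of permutation importance – one illustrative way to probe a black-box model. The dataset, model choice, and library (scikit-learn) are assumptions for illustration, not something prescribed by the course or by any regulation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a "black-box" model on an illustrative tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# test accuracy drops. A large drop means the model leans heavily on that
# feature - a rough, model-agnostic explanation of its behavior.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Explanations like this one are bolted on after the fact; regulation is one of the few forces that could push developers to design for explainability from the start.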

One track in regulation has to do with generative AI and, especially, copyright. Questions around data licensing are also gaining visibility – there are several ongoing court cases around the training data used for various generative AI models. However, it’s likely that the large companies currently being targeted in these cases will be able to adapt their practices and shift to using, for example, artificially generated training data if legislation catches up with the issue.

Overall, plenty of people have concerns about AI’s impact on bias, privacy, and intellectual property rights, which means the need for further development of regulatory frameworks will likely grow as more services and interfaces become AI-powered. If AI becomes as ubiquitous and important an area as cars, planes, or medicine, its regulation should mirror the structures built around those industries. However, we know that laws and regulations rarely keep up with the speed of technological development. That’s why it’s important to take responsibility early on – a large part of ensuring the trustworthiness of AI applications falls on the developing teams and organizations. The challenge here is that the logic of commerce tends to pull in a different direction from societal aims. A telling example is Microsoft, which laid off its AI ethics team in 2023 while increasing its investment in its partnership with OpenAI (2). At the same time, industry is getting ahead of academia in coming up with new solutions: in 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia (3).

Prediction 2: Large language models will continue to grow – moderately

When you think about AI as a non-expert, it’s hard not to start from large language models (LLMs). A model of how language works also captures a great deal about how our world works. This means that the way we reason, create, plan, and execute things in our life and work will most likely continue to be aided by LLM-powered tools. When training data made of words is combined with sensor data from the environment, we’re not far from robots that can carry out the physical tasks you ask of them (4).

Learn more

LLMs’ emergent abilities

Many of an LLM’s abilities are not deliberately designed in; they appear as a side effect of growing the model in size.

For example, researchers had GPT-4 take the Uniform Bar Examination (UBE), the test American lawyers must pass to get licensed (5). It features both multiple-choice and open-ended questions. GPT-4 passed the test easily, scoring better than about 90% of the humans taking it. GPT-3.5 (the previous and slightly smaller version of the model) failed the same exam and ranked in the bottom 10% of test-takers.

Did GPT-4 attend an expensive law school to learn these abilities? No – the difference is explained by the sheer size of the language model. The larger model was better able to learn complex concepts, which translated into convincing reasoning about the law.

While it might feel like this is only the beginning, LLMs can’t grow indefinitely. As the amount of input data grows, computing power requirements (hardware and electricity) grow disproportionately. That means costs ramp up faster than input data and performance. Training GPT-3 used around 1.3 gigawatt-hours of electricity (the yearly consumption of 121 American homes) and cost an estimated $4.6 million. GPT-4’s training costs were in the range of $100 million (6). Even the CEO of OpenAI, the company that built ChatGPT, has anticipated that the era of giant language models has come to an end and that they’ll be improved “in other ways” (7).
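As a rough sanity check on the household comparison above, here is a hypothetical back-of-the-envelope calculation. Only the 1.3 gigawatt-hour figure comes from the text; the average US household consumption (roughly 10,700 kWh per year) is an assumption for illustration.

```python
# Back-of-the-envelope check of the training-energy comparison above.
gpt3_training_kwh = 1.3e6        # ~1.3 gigawatt-hours, as quoted in the text
us_home_kwh_per_year = 10_700    # assumed average annual consumption of a US home

homes = gpt3_training_kwh / us_home_kwh_per_year
print(f"Roughly {homes:.0f} US homes' worth of annual electricity")  # ~121 homes
```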

This is why discussions about problem-specific LLMs have arisen: what if future LLMs were trained to solve specific problems in pre-determined fields like medicine, law, or the car industry? This way, the model wouldn’t need to be trained on – or carry around – such a massive body of data.

Learn more

Will AI accelerate climate change?

Should we be alarmed about the environmental impact of these huge AI models? It’s good to acknowledge that while digital services “in the cloud” might seem immaterial to us, they all run on physical hardware and consume electricity somewhere on the planet. Perhaps someday that electricity will be fully renewable, but that’s not the case today. And building data centers also requires considerable natural resources.

Training a model is typically a computationally expensive and energy-intensive phase. After that, using it (also called “inference”) requires relatively little power. Yet, with a lot of inferences performed on a model, the energy adds up. Businesses using Amazon Web Services say up to 90% of their machine learning costs come from inference (8). In research conducted with BLOOM (9), a 176-billion-parameter language model (similar in size to GPT-3), a single inference query took about 0.004 kilowatt-hours – about a tenth of the energy needed to boil water for a cup of tea.

If, for example, Google or Microsoft Bing started using LLM-based chatbots similar to ChatGPT for all of their billions of daily web searches (10), their energy usage would increase significantly. But they might not do that right away, not least because the additional processing power requirements would eat into their profit margins (11). The adoption of these large models could be limited by their cost.
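To get a feel for the scale, here is a hypothetical back-of-the-envelope calculation combining the per-query figure from the BLOOM study with an assumed daily search volume. Both the query count and the idea that every search would run through a BLOOM-sized model are illustrative simplifications, not measured numbers.

```python
# A rough, hypothetical scaling exercise using the BLOOM figure above.
kwh_per_query = 0.004    # per-query inference energy, from the BLOOM study
queries_per_day = 8.5e9  # assumed daily web searches, for illustration only

daily_kwh = kwh_per_query * queries_per_day
print(f"{daily_kwh / 1e6:.0f} GWh per day")           # tens of gigawatt-hours per day
print(f"{daily_kwh * 365 / 1e9:.1f} TWh per year")    # on the order of terawatt-hours per year
```

Even with generous rounding, the result lands in the tens of gigawatt-hours per day, which helps explain why cost and energy could slow adoption.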

There are continuous efforts to make the algorithms more power- and cost-efficient. However, a lower price can boost adoption through increased demand, sometimes resulting in even higher total energy consumption (a phenomenon known as the Jevons paradox). For example, when the cost of computing and networking decreased, we humans concluded that toasters and car license plates should be connected to the web, contributing to the 14.3 billion active internet-connected devices in 2023 (12).

The potential for positive change

Despite its potentially large environmental footprint, AI holds immense potential for positive change. It has already made many jobs more productive and will continue to do so. In addition, it will be indispensable in numerous applications tackling climate change. Here are just a few examples (13):

  • Increasing the share of renewable electricity in the grid by forecasting and managing supply and demand

  • Improving energy storage by accelerating materials science

  • Reducing transportation emissions with planning based on modeled demand

  • Reducing food waste by optimizing delivery routes and improving demand forecasting

  • Improving agriculture and forestry practices by quantifying emissions with real-time maps

  • Helping people adapt to climate change by predicting local risk scenarios and forecasting extreme weather events

AI’s net impact on the climate will be determined by how we end up using it and how quickly we can power it with carbon-free energy.

Prediction 3: AI will transform jobs but not take over them

Even if AI supports rather than replaces humans, it will still affect the content of our work – as professionals, we will have to reskill to take on more intellectual tasks while AI does the heavy lifting on routine work. This is already visible in the tools used by artists and designers, which contain ever more AI elements. Photoshop, for example, is making more and more use of generative AI. One could argue that AI-based tools are becoming part of most professions’ toolkits – and being able to use them will become a job requirement.

Now, let’s talk about the fear of AI “taking over” our jobs and eventually making some of us unemployed or useless. Although scientists do see a slight chance of an apocalyptic outcome, AI is primarily a way to make our jobs faster and easier. Did the Internet make us useless? Not really – it just made disseminating and searching for information a lot easier, and many professions adjusted once they realized we could do more in less time. Similarly, AI mostly speeds up our work. That’s why many new AI tools are released under the label “co-pilot”: they help you get started, but won’t do the whole job for you.

Even though trustworthiness is needed to strengthen the relationship between people and AI tools on many levels, it doesn’t look like the workforce is too worried. According to a survey by Fishbowl (14), nearly 30% of the 4,500 white-collar workers surveyed had already used an artificial intelligence tool in their work – without telling their bosses. The challenge might lie not in fighting the change, but in opening organizations to the opportunities AI can bring while educating people on how to use it responsibly.

Prediction 4: Both open and closed models will keep on developing

Large language models have been developed behind closed doors, with little visibility into the kind of data they’ve been trained on. These proprietary AI solutions, such as ChatGPT and GitHub Copilot, are already widely used by many professionals in their daily tasks. But if, in our work and lives, we become reliant on commercial solutions, does that pose a threat?

To counter the dominance of commercial actors in the AI space, a wave of open-source AI has arisen. Following in the footsteps of open-source software like Mozilla’s Firefox browser, the Apache server software, and the Linux operating system, collectives, academics, and nonprofits are now pushing back against LLM development becoming proprietary and closed. And the development is fast, too: Dolly, an open-source LLM that any enterprise can use, is already thought to beat GPT-3 even though it was trained on less data – and at significantly lower cost (15). Some open-source models can be run on lower-power devices like a MacBook Pro or an old iPhone (16).

All of this sounds great and suggests a larger role for society in AI development. When the power of LLMs lies in the hands of the many, many more minds can come up with innovations to improve everything from law to medicine. On the other hand, as the American technology publication VentureBeat writes (16), open-source AI raises questions: “Should AI models be freely available so anyone can modify, personalize and distribute them without restrictions? Or should they be protected by copyright and require the purchase of a license? And what are the ethical and security implications of using these open-source LLMs — or, on the other hand, their closed, costly counterparts?” To name a few of the risks: disinformation and deepfakes are likely to increase as the technology becomes more accessible. Another aspect is regulation, which becomes a lot harder when everyone can run an open-source AI model on their personal laptop.

There’s a lot happening in AI and in efforts to make building and applying it more trustworthy. That’s why it’s good to follow the news and stay up to date on the newest regulations. People have always been keen to predict the future and create scenarios. We hope you can approach these scenarios critically, based on the theory you’ve learned in this course.

Congratulations, you've completed the course!

Thank you for joining us in learning about the basics of trustworthy AI. We hope you’ve found it worthwhile! As we’ve said many times already, AI-powered applications are here to stay, and more and more of us need to understand how they work so that we can discuss them with people from diverse backgrounds.

We’re interested in knowing how the course could be improved. If you have any suggestions, please feel free to give us feedback: hello@minnalearn.com.

Thinking already about the next step in your learning journey? Then give our follow-up course, Advanced Trustworthy AI, a go!

Best regards, the writers of the course:

Anouk Mols, Tessa Oomen & João Gonçalves
Erasmus University Rotterdam

Marcus Westberg & Prachi Pagave
Delft University of Technology

Abdul-Rasheed Olatunji Ottun, Mehrdad Asadi, Farooq Ayoub Dar, Mayowa Olapade & Huber Flores
University of Tartu

Bartlomej Siniarski, Shen Wang, Thulita Seneviratha & Chamara Sandeepa
University College Dublin

Vinh Hoa La, Manh Dung Nguyen, Ana Cavalli & Wissam Mallouli
Montimage

Souneil Park
Telefónica

Magdalena Stenius

Travis Larson

Miikka Holkeri & Laura Bruun
MinnaLearn

