The previous discussion of AI in business, entertainment, government, and infrastructure shows us that many ethical implications play a role in AI development. Ethics have to do with what we think is morally good and bad, or right and wrong. In other words, ethics deals with value judgments. As you can imagine, what we think is ethical (good/bad or right/wrong) varies with currently accepted societal norms. And you can see how these societal norms or values differ if we take a historical perspective or if we look at different countries. Dealing with norms and values is already quite difficult, as you might imagine, but these variations make the discussions even more complex.
The ethical implications of AI development have to do with, among other things, data collection and curation, where questions of ownership, privacy, and bias come up. When it comes to decision-making processes and governance, bias and unfairness are crucial issues. And across the board of AI use, questions of accountability arise: who’s responsible for algorithmic decisions and selections, and who’s to blame when errors or incidents occur? But before we dive deeper into such issues and potential solutions, we’ll discuss what exactly is ethical about these concerns.
Why AI is complicated
When we're talking about how AI has an impact on society, it’s inevitable that we start discussing what’s good or bad, right or wrong. This is the domain of ethics, and ethical AI is one of the main concerns of politicians, software developers, and users alike. While many other disruptive and advanced technologies, such as nuclear energy or gene editing, have ethical debates around them, debates on ethical AI are often more complicated because we trust it with tasks involving human-like information processing and autonomous actions. You may hear someone at a bar (who hasn’t taken this course) debating if AI was right to do this or that, but you never overhear someone asking if a nuclear power plant did the right thing.
While AI can have an impact on many contexts, such as the environment and animals, in this section we focus on the ethical implications for humans. However, as explained moments ago, we aren’t treating AI as a human in this section. We do, however, follow a human rights perspective to understand AI and ethics (14), which implies that when AI tools are adopted, basic human rights are respected and risks are mitigated. By following the human rights framework, which is generally accepted by many different governments and institutions, we can avoid some of the variations that complicate ethics discussions, as we mentioned earlier. For instance, privacy needs to be considered to respect human dignity, autonomy to respect the right to freedom, and non-discrimination to respect the right to equality.
As Meta’s chief AI scientist and all-around machine learning celebrity Yann LeCun writes, “before we reach Human-Level AI (HLAI), we will have to reach Cat-Level & Dog-Level AI. We are nowhere near that (15).” So, if AI isn’t a human but a technology, what specific aspects should we look at when trying to do the right thing with AI? The following section will give you some ideas.
Validity
In talking about AI, we often discuss the data that is used to train it or the sophisticated and complex models with billions of parameters that learn from the data. But we often skip the aspect that is more closely related to ethical decisions: the loss function. A loss function is a mathematical operation that examines the outputs of an AI to conclude whether it performed its task successfully or not. If the answer is “no”, a model like a neural network is adjusted so it can perform better the next time it attempts the same task, meaning the machine learns! While we won’t dive into introductory AI concepts such as loss functions, neural networks, or backpropagation (for that, check out the Elements of AI course), it’s important to understand that loss functions are entirely defined by humans and determine whether an algorithm is performing the task it was assigned.
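To make this concrete, here’s a minimal sketch in Python of a human-defined loss function and a tiny model being adjusted to reduce it. The data, the single-weight model, and the learning rate are all invented for illustration and aren’t part of any real system:

```python
# A minimal, hypothetical sketch of a human-defined loss function.
# The data and the tiny linear model below are made up for illustration.

def mean_squared_error(predictions, targets):
    """Loss function: how far off were the model's outputs?"""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# A tiny "model": predict y from x with a single weight.
weight = 0.0
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # the task we *chose* to reward: learn y = 2x

learning_rate = 0.05
for step in range(100):
    predictions = [weight * x for x in xs]
    loss = mean_squared_error(predictions, ys)
    # Adjust the weight in the direction that reduces the loss
    # (a hand-written gradient step instead of full backpropagation).
    gradient = sum(2 * (weight * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    weight -= learning_rate * gradient

print(round(weight, 2))  # close to 2.0: the model "learned" what the loss rewarded
```

Notice that nothing in the code decides what “success” means on its own: the choice of targets and the loss function are human decisions, and that’s exactly where the ethical questions start.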
So, when we talk about validity, we're really asking: is the algorithm doing the thing that it was supposed to do? This may seem like a silly question at first. We don’t ask if a hammer is doing the task it was supposed to do, so why should we ask this about AI? Let’s look at a few examples.
A widely used AI healthcare algorithm in the US was making decisions on how to manage the health of populations. When looking at this algorithm more carefully, scientists found that the outputs of the algorithm were racially biased. Why was this the case? When looking at the loss function of the algorithm, they found that it was actually using the money spent on patients as an indication of how sick they were (if you were sicker, they’d spend more money on you). However, in the US healthcare system, less money is spent on Black patients than on others, so the algorithm wrongly concluded that they weren’t as sick, and therefore didn’t provide these patients with the care they needed. This is a validity problem – the algorithm wasn’t doing what it was supposed to do. It should have been telling us how sick a patient is, but it was instead telling us how much money was spent on them.
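The core issue is which quantity the system rewards itself for predicting. The following hypothetical sketch (with invented numbers, not the actual healthcare algorithm) shows how a proxy like money spent can diverge from the quantity that actually matters:

```python
# Hypothetical illustration of choosing the wrong prediction target.
# All numbers are invented; this is not the real healthcare algorithm.

patients = [
    {"group": "A", "illness_severity": 8, "dollars_spent": 9000},
    {"group": "B", "illness_severity": 8, "dollars_spent": 4000},  # equally sick, less spent
]

def needs_extra_care_by_spending(patient, threshold=5000):
    # Invalid proxy: "how much was spent on you" stands in for "how sick are you".
    return patient["dollars_spent"] >= threshold

def needs_extra_care_by_severity(patient, threshold=7):
    # What the system was *supposed* to measure.
    return patient["illness_severity"] >= threshold

for p in patients:
    print(p["group"], needs_extra_care_by_spending(p), needs_extra_care_by_severity(p))
# Group A is flagged by both rules; group B is missed by the spending proxy
# even though both patients are equally sick.
```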
Let’s consider another example. We want to train a self-driving car to find the best route, so we configure a loss function that makes the algorithm output the path that takes the least amount of time. However, that causes our car, and others like it, to frequently pass by a school zone where children play on the street – increasing the risk that someone will be harmed in a traffic accident. Again, is the algorithm valid? Is it doing what it’s supposed to do? No, because the fastest route isn’t necessarily the best.
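As a hypothetical sketch of how the choice of loss function changes the outcome, consider two ways of scoring candidate routes. The route data and the size of the safety penalty are invented, and a real routing system would be far more complex:

```python
# Hypothetical sketch: two ways to score candidate routes.
# Route data and penalty size are invented choices made by humans.

routes = [
    {"name": "via school zone", "minutes": 12, "passes_school_zone": True},
    {"name": "via main road", "minutes": 14, "passes_school_zone": False},
]

def loss_time_only(route):
    # "Fastest route wins" – the loss function described above.
    return route["minutes"]

def loss_time_and_safety(route, school_zone_penalty=10):
    # One possible fix: make the loss reflect what we actually value,
    # e.g. by penalizing routes through school zones.
    return route["minutes"] + (school_zone_penalty if route["passes_school_zone"] else 0)

print(min(routes, key=loss_time_only)["name"])        # picks the school-zone route
print(min(routes, key=loss_time_and_safety)["name"])  # picks the main road
```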
What can we conclude from this? When examining an algorithm, one of the key questions you should ask is: is the algorithm doing what it’s supposed to do, or is it doing something different? How is it calculating loss? And, if you find a validity issue, carefully consider who may be negatively impacted by it: be it Black US healthcare patients or school children, anyone can be impacted by invalid AI.
Now, validity is clearly an important aspect of AI. As long as AI tools don’t meet validity requirements, they shouldn’t be implemented. However, once the validity requirements are met, we can look at other human rights that AI tools might impact. For example, does the AI tool affect a person’s autonomy? Meaning, the implementation and use of an AI tool should still allow people to make their own decisions and choices, without removing or reducing options.
Non-discrimination and privacy
Additionally, another important question to ask is: does the AI tool comply with non-discrimination requirements? In other words, is everyone treated equally? The datasets used to train AI systems, or used to feed machine learning algorithms, can really affect this aspect – as we discussed previously. One real-life example comes from Amazon, which used an AI recruitment tool that was shown to be biased against women (16). This is a clearly discriminatory outcome: women were less likely to be hired than men, which excluded women and reduced their chances of getting a certain position. However, recruitment is a particularly interesting domain. Why? Because recruiters sometimes have to deal with quotas or considerations for the rights of disabled people, which means that the likelihood of being hired needs to be increased for certain groups. This adds another interesting ethical layer to the adoption of AI tools (17). You’ll get to read more about the Amazon case from the bias and fairness point of view in Chapter 2.
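One simple way to start checking for this kind of discriminatory outcome is to compare how often different groups are selected by a tool. The sketch below uses invented decisions and a single made-up attribute; a real audit would look at far more data and many more fairness metrics:

```python
# Hypothetical sketch of a basic check on a screening tool's decisions.
# The decisions below are invented for illustration only.

decisions = [
    {"gender": "female", "shortlisted": False},
    {"gender": "female", "shortlisted": True},
    {"gender": "female", "shortlisted": False},
    {"gender": "male", "shortlisted": True},
    {"gender": "male", "shortlisted": True},
    {"gender": "male", "shortlisted": False},
]

def shortlist_rate(group):
    rows = [d for d in decisions if d["gender"] == group]
    return sum(d["shortlisted"] for d in rows) / len(rows)

female_rate = shortlist_rate("female")
male_rate = shortlist_rate("male")
print(f"female: {female_rate:.2f}, male: {male_rate:.2f}")
# A large gap between the rates is a warning sign that the tool may be
# treating groups unequally and deserves closer inspection.
```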
Finally, privacy and transparency come into play when considering the human rights framework. Data use for training and operating AI systems shouldn’t affect a person’s right to privacy. On the other hand, we see increasing requirements to be transparent about AI systems, for example, about how data is used in them. How can businesses balance privacy and transparency requirements? This question doesn’t always have an obvious answer! So, while AI systems aren’t inherently unethical, we need to realize that they’re also not automatically ethical. Businesses will have to actively make sure that they think about validity especially, and using the human rights framework can support such analyses.