If you have a car, you probably want to be able to drive it on dirt roads sometimes, even though it was built for tarmac roads. You probably also want to drive it in different countries, even if they drive on the opposite side of the road. You also want the car to keep working even if a naughty kid throws a water balloon at it. And if you get a flat tire, you should be able to replace it rather quickly (or ask someone else to do it for you). In short, you want the car to be resilient, regardless of environmental factors (dirt roads), context (different countries), potential attacks (water balloon), and failures (a flat tire).
In the same way, people have expectations regarding AI’s resilience: its ability to face adversity, adapt to change, and recover from failure. Many humans possess, or work to achieve, these characteristics; in fact, many job interview questions focus precisely on these aspects. Therefore, if AI is to perform tasks or assist humans with them, we also want it to be resilient – otherwise, just as when we drive a car, a malfunction risks (potentially fatal) consequences.
Why might AI not function as expected?
The accuracy of an AI model depends largely on the quality of its training data. If the training data is flawed, the model is likely to make mistakes. Furthermore, since it’s impossible to enumerate all realistic data for a given task, it’s inevitable that AI will sometimes behave unexpectedly. Geographical bias and environmental factors are common causes. AI models trained on data from a specific region may not generalize accurately to other geographic contexts: the training data may not properly represent the diversity of geographical features, cultural norms, or local variations, which results in limited or biased performance when the model is applied to a different region. Other environmental factors work the same way: a computer vision model may struggle to recognize objects or features in environments with lighting conditions, terrain, or vegetation that differ from those in its training data.
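To make this concrete, here’s a minimal, purely illustrative sketch (synthetic data and scikit-learn’s LogisticRegression, not any particular real-world system) of how a model trained on data from one “region” can lose most of its accuracy once the feature distribution shifts:

```python
# Illustrative sketch of geographic/environmental distribution shift:
# a model trained on data from "region A" is evaluated on data whose
# feature distribution has moved ("region B"). Purely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_region(n, shift):
    # Two classes separated along the first feature; `shift` moves the whole
    # distribution, standing in for a different region or environment.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] > shift).astype(int)
    return X, y

X_train, y_train = make_region(2000, shift=0.0)    # region A: training data
X_test_a, y_test_a = make_region(500, shift=0.0)   # region A: held-out test data
X_test_b, y_test_b = make_region(500, shift=3.0)   # region B: shifted distribution

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy in the training region:", accuracy_score(y_test_a, model.predict(X_test_a)))
print("accuracy in the shifted region: ", accuracy_score(y_test_b, model.predict(X_test_b)))
```

Nothing about the model changed – only the environment did – which is exactly the kind of silent degradation that makes resilience hard to guarantee.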
Why aren't there more autonomous drones in cities already?
To illustrate some of the complexities that go along with the development and implementation of safe and reliable AI, consider autonomous drones. They’re an interesting opportunity for businesses and cities to efficiently support a clean and safe public environment. Think, for example, about medicine or grocery deliveries, litter detection and separation, water pollution monitoring, and air quality assessment.
Autonomous drones require AI for localization, trajectory estimation, navigation, and collision avoidance. They need to operate in a way that doesn’t harm citizens, infrastructure, or the environment. This is complicated by their operating logic, which is complex and often a black box, based on deep learning models and federated learning.
Many things can go wrong – drones can bump into objects, crash into buildings, or collide with people. Humans are present in the environment, but keep in mind that they also play an indirect role: they make choices about the AI model and the drone design, they determine what counts as appropriate and desired drone behavior, and they interpret the output.
If something happens with a drone in an urban area, for instance, when it’s damaged by a vehicle or a person or when it creates a risk for people, the question arises: who’s responsible for the consequences – the manufacturer, the AI developer, or the city?
Although drones are already operational, trade-offs between functionality and technical and environmental uncertainties make wider use in public spaces difficult at this stage. This involves business considerations that go beyond AI and the purely technical, touching on sociotechnical and environmental factors. However, research is advancing, and collaboration with experts on ethics, privacy, and AI regulation will make the deployment of drones more widespread in the future.
Who are the relevant AI stakeholders?
AI isn’t just technology in isolation. People do things with AI, do things to AI, and are affected by AI. AI methods are tools for solving many problems, and we need to make sure those tools are secure and well understood by taking the required security measures when developing them. By building safe tools, we also ensure the safety of the people who use them.
Let’s think about civil security using the example of buildings. Why would you want to develop methods to protect buildings? Sure, a building itself might be architecturally pleasing and historically important. But more importantly, you’d be protecting private individuals (residents), business users (shopkeepers, office workers), and service providers (property owners, building managers). Crucially, different security methods may serve each of these stakeholders differently. Therefore, to properly understand how to protect a building from fire or an AI system from attack, you need to understand who you’re protecting, what the conceivable risks are, and which methods are most suitable to address them.
1. Private individuals
This group is highly diverse and comprises many actors. In essence, we’re talking about the average consumer (or end user) of AI-based products, systems, and networks. In general, end users expect these systems to function correctly according to the promised specifications and to offer a clear, easy-to-understand interface – including simple explanations (for example, through visualizations). Furthermore, they want the systems to be secure and resilient to the highest possible level, as they may rely on them in their day-to-day lives. Any malfunction or availability issue can quickly cause harm, for example, in applications used in the medical field.
2. Business users
Business users are individuals who use AI technology to support business operations, decision-making, and strategy. Typically employed by organizations, they use AI or AI-driven software applications to perform tasks such as data analysis, project management, communication, and collaboration. They’re often responsible for managing data and information within an organization and may work closely with developers to ensure that software applications meet their specific business needs. Because they make critical decisions within the organization, they may use AI-generated, data-driven insights to make those decisions more informed and accurate, and they depend on the reliability and consistency of these tools for their work.
3. Service providers
Service providers, including developers, data scientists, and others working to deliver AI-based solutions, are highly relevant stakeholders of any project. They need to make sure that the systems they design are secure, effective, efficient, and user-friendly. Service providers also need to integrate AI systems with other technologies within an organization and deploy them in a way that’s scalable and secure. They’re responsible for maintaining and updating AI systems to ensure that they continue to perform effectively over time. This involves monitoring performance, identifying areas for improvement, and implementing updates or changes as needed.
What are the risks for different stakeholders?
AI security risks
A particular component of AI resilience relates to security risks. Threats to our security are increasingly moving online, in the form of cybercrime and cyberwarfare. Likewise, activities that create security risks in pursuit of arguably more acceptable ends, such as civil disobedience and hacktivism, have also moved online. AI algorithms, because of their high-stakes applications, are prominent targets for attack. Furthermore, because AI depends so heavily on its training data, its security risks differ somewhat from those in other technology domains.
The biggest burden of providing security for AI-based applications falls on service providers. The landscape of AI-specific vulnerabilities is still being explored, and new vulnerabilities are discovered rapidly. Testing and patching them is a challenge for developers, because fixes have to be immediate to keep up with the pace at which threats appear. At a certain point, adversaries automate the attack process, targeting deployed applications and causing disruption to businesses. For instance, in a Denial of Service (DoS) attack against an AI-based application that uses natural language processing (NLP) to provide real-time translation, an attacker overwhelms the system by flooding it with a large number of requests containing arbitrary or even nonsensical text. This can exhaust CPU, memory, and network resources and render the service unavailable to legitimate users. It’s worth noting that any degradation of availability can create a ripple effect in the client base and incur losses for business users. The service provider is responsible for the users’ convenience and security, and in some cases the business is legally bound to compensate users for harm caused by security vulnerabilities. Ideally, users will be at lower risk from AI-based attacks than businesses.
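One common way to blunt this kind of flood – sketched below in Python, with a hypothetical translate() function standing in for the real NLP model and purely illustrative limits – is to combine a per-client rate limit with an input-size cap, so oversized or excessive requests are rejected before they ever reach the model:

```python
# Sketch of two basic DoS mitigations for an AI inference endpoint
# (e.g., a translation service): a per-client token-bucket rate limiter
# and an input-size cap. Illustrative only; translate() is a stand-in.
import time
from collections import defaultdict

MAX_CHARS = 2000   # reject oversized payloads before they reach the model
RATE = 5           # tokens refilled per second, per client
BURST = 10         # maximum bucket size (allowed burst of requests)

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_id: str) -> bool:
    """Refill the client's bucket based on elapsed time, then spend one token."""
    bucket = _buckets[client_id]
    now = time.monotonic()
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False

def translate(text: str) -> str:
    # Placeholder for the expensive NLP model call.
    return f"<translated {len(text)} characters>"

def handle_translation_request(client_id: str, text: str) -> str:
    if len(text) > MAX_CHARS:
        return "rejected: payload too large"
    if not allow_request(client_id):
        return "rejected: rate limit exceeded"
    return translate(text)

# A client flooding the endpoint is throttled after its burst allowance:
for i in range(15):
    print(i, handle_translation_request("attacker", "hola mundo"))
```

A real deployment would typically push such checks into an API gateway or load balancer rather than application code, but the principle is the same: protect the expensive AI inference step behind cheap checks.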
AI privacy risks
When considering the privacy risks of AI, an adversary may primarily attempt to extract private and sensitive information about an AI-based service’s end users. For example, the adversary may take advantage of machine learning model parameters trained on end users’ private data by exploiting those parameters through an inference attack. From the attacker’s perspective, a business user is just another type of user, one who relies on AI models to support business processes such as decision-making, so an attacker may also attempt to obtain business-related sensitive information by launching these attacks. Either way, it’s the responsibility of the service providers to make AI models more resilient against privacy attacks.
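To give a rough idea of what such an inference attack can look like, here’s a minimal membership inference sketch on synthetic data: the attacker exploits the fact that an overfitted model tends to be far more confident about the samples it was trained on than about samples it has never seen. The model, data, and threshold are hypothetical stand-ins, not a recipe tied to any particular service:

```python
# Sketch of a confidence-based membership inference attack: an attacker
# guesses whether a sample was in the (private) training set by looking
# at how confident the model is about it. Synthetic data, illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
y[rng.random(600) < 0.15] ^= 1  # label noise encourages memorization

X_member, y_member = X[:300], y[:300]     # "private" training data
X_outside, y_outside = X[300:], y[300:]   # data the model never saw

# Deliberately overfitted model: every fully grown tree sees all training samples.
model = RandomForestClassifier(n_estimators=100, bootstrap=False,
                               random_state=0).fit(X_member, y_member)

def true_label_confidence(model, X, y):
    # Probability the model assigns to each sample's true label.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_member = true_label_confidence(model, X_member, y_member)
conf_outside = true_label_confidence(model, X_outside, y_outside)

print("mean confidence on members:    ", conf_member.mean())
print("mean confidence on non-members:", conf_outside.mean())

# Attacker's rule: guess "member" whenever confidence exceeds a threshold.
threshold = 0.95
print("flagged as members (true members):    ", (conf_member > threshold).mean())
print("flagged as members (true non-members):", (conf_outside > threshold).mean())
```

Mitigations such as regularization, limiting how much confidence information the service exposes, or differentially private training shrink this gap – which is part of what it means to make a model resilient against privacy attacks.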