There are a few things that are always true in life: the Earth revolves around the sun, avocado toast is only as good as the avocado itself, and no system is truly secure. All of these are extremely relevant for your survival, but let’s focus on the last one. If no system is secure, then aren’t we all doomed? Well…we aren’t doomed, but we should never be fully carefree either. Because no system is secure, we need to minimize any problems that may happen to it or, in other words, mitigate the risks.
At first, you may think this is the role of the developers. After all, they’re the ones primarily in charge of developing the system. However, business owners who manage the system should also invest the proper resources to protect it, even if that means hiring more developers to do it. Finally, you may have the best password protection system in the world, but if one of your users posts their password on Facebook, there isn’t much to do other than a massive facepalm. Each of the stakeholders in the AI chain plays a role in risk mitigation. Below, we’ll discuss what each of them can do to address this.
Approaches for private individuals
The mitigation of security risks for end users relies heavily on the ability of AI service providers to be transparent and to document AI systems. This doesn’t mean disclosing every single business secret to users, but revealing enough information so that they can make informed choices and understand the impact of these choices. They don’t need to know Coca-Cola’s secret recipe, just that drinking too much of it could be bad because it contains sugar, caffeine, and other ingredients that may be harmful in excess.
End users must be informed when AI components are used in any system and must have documentation available about the AI system and how it affects the application it’s embedded in. Some service providers choose to be transparent about their AI systems and openly communicate how they’re used. Others, however, opt for a more closed or proprietary approach, particularly when their AI systems support sensitive applications or involve proprietary algorithms or intellectual property. In such cases, service providers may prioritize protecting their technology and competitive advantage, which can result in limited disclosure about how their AI systems are actually used.
The documentation should provide an understanding of which data is used, how it’s processed, and how it influences the decisions of the AI system. It should spell out the concrete ethical and privacy implications if the AI system is compromised. It should also report the likelihood of a security compromise as well as its impact on the confidentiality, integrity, and availability of the AI system and the data it uses. This likelihood should be quantifiable and computed based on the security testing of the AI system.
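To give a rough idea of what “quantifiable” can mean in practice, here is a minimal sketch that combines a likelihood estimate from security testing with impact scores on confidentiality, integrity, and availability into a single risk score. The scales, weights, and example findings are illustrative assumptions, not values taken from any standard.

```python
# Minimal sketch of quantifying AI security risk from test results.
# The scales and example numbers below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SecurityFinding:
    name: str
    likelihood: float     # 0.0 (unlikely) to 1.0 (almost certain), from security testing
    confidentiality: int  # impact scores, 0 (none) to 5 (severe)
    integrity: int
    availability: int

def risk_score(finding: SecurityFinding) -> float:
    """Classic risk = likelihood x impact, using the worst CIA impact."""
    worst_impact = max(finding.confidentiality, finding.integrity, finding.availability)
    return finding.likelihood * worst_impact

findings = [
    SecurityFinding("training-data leakage via model inversion", 0.3, 5, 1, 0),
    SecurityFinding("evasion attack on input classifier", 0.6, 0, 4, 0),
]

# Rank findings so the documentation can report the most severe risks first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.name}: risk {risk_score(f):.2f}")
```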
Can an end user opt out of AI-based decision-making?
In the context of emerging regulations such as the AI Act, end users can make an informed decision on whether or not to use the AI systems and provide their potentially sensitive information. End users should have options to opt out of AI-based decision-making in any service they use, much like they have the choice to accept or refuse cookies when visiting websites.
However, it’s important to realize that in practice, the availability of these options may vary. In some cases, such as a recruitment process that relies heavily on AI algorithms and is the only available option, withdrawing your application may not be a viable choice. Not all services give users full control over disabling the AI system, either. In such situations, it’s essential to consider the overall context and weigh the potential benefits against the potential risks associated with using the AI system.
Users can always take steps to protect their data and privacy, such as being careful with the information they provide, understanding the system's privacy policy, and asking service providers to clearly explain how their data will be handled.
Approaches for business users
Similarly to end users, the mitigation of security risks for business users relies heavily on the availability of detailed AI system documentation. Business users must be able to translate the generic risks and implications described in the documentation to their own business: how would a compromise of the AI system affect customers, profit, reputation, and business advantage? A compromised AI system that decides how bright the lights should be in a room is, in principle, less serious than one that decides how much medication to give to a patient. However, if you’re the owner of the lightbulb company, you may be extremely concerned about the former as well, because it could reduce the lifespan of your product. The security risk generated by the AI system must be integrated into the overall risk assessment and risk management process of the company using it, risk management policies may need to be updated accordingly, and adequate investment is needed to mitigate the risks.
On a more practical note, the AI system must be integrated into its final applications, and its link to specific business outcomes and key performance indicators (KPIs) must be defined. Comprehensive threat modeling and security testing must be performed to observe and anticipate the potential impact of a compromise on those KPIs. Someone in the company (or, preferably, a team of someones) needs to properly assess, plan, and act on security issues related to AI.
Approaches for service providers
Service providers have a duty to mitigate the security risks of the AI systems they design, develop, and deploy. The first way to mitigate these risks is to acquire the necessary knowledge about the AI security threats, vulnerabilities, and attacks that we talked about in Section 4.2. This knowledge enables developers to integrate security aspects into the design of the AI systems they develop, making them more resilient against potential attacks.
Beyond security by design (planned and built into the technical solution from the very beginning), threat modeling, security testing, and vulnerability assessment exercises must be performed on developed AI systems.
Threat modeling systematically identifies potential threats and adversaries that could exploit vulnerabilities in AI systems. For example, in the context of AI systems in healthcare, unauthorized access to patient records by malicious attackers could be a potential threat.
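As a rough illustration, the output of a threat modeling exercise can be captured as a structured enumeration of assets, adversaries, and threats for each component of the AI pipeline. The components and threats below describe a hypothetical healthcare system and are illustrative only; real threat modeling (for example, STRIDE-style workshops) goes much deeper.

```python
# Illustrative, simplified threat model for a hypothetical healthcare AI system.
# This only shows how the output of a threat modeling exercise might be
# captured in a reviewable form; it is not an exhaustive catalog.

threat_model = {
    "training data (patient records)": [
        {"adversary": "external attacker", "threat": "unauthorized access / data theft"},
        {"adversary": "malicious insider", "threat": "data poisoning to skew diagnoses"},
    ],
    "model artifact": [
        {"adversary": "external attacker", "threat": "model theft via exposed storage"},
    ],
    "prediction API": [
        {"adversary": "external attacker", "threat": "evasion attacks with crafted inputs"},
        {"adversary": "curious user", "threat": "membership inference on patient data"},
    ],
}

for asset, threats in threat_model.items():
    print(f"Asset: {asset}")
    for t in threats:
        print(f"  - {t['adversary']}: {t['threat']}")
```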
Security testing is another important step in evaluating the resilience of AI systems to security risks. Various tests, such as penetration tests and fuzz tests, are performed to identify the strengths and weaknesses of the system. These tests, both manual and automated, scan the system for common security flaws and assess whether, and to what extent, a user could damage the system by probing it and passing various types of input into it.
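The sketch below shows what a very small fuzz-testing harness for a prediction function might look like. The predict function, its input checks, and the malformed inputs are hypothetical placeholders; in practice, this kind of harness would target the real inference endpoint with a much larger set of generated inputs.

```python
# Minimal fuzz-testing sketch for a hypothetical predict() function.
# The goal is simply to check that malformed or extreme inputs produce
# controlled errors rather than crashes or undefined behavior.

import math
import random

def predict(features):
    """Stand-in for the real model inference call (hypothetical)."""
    if not isinstance(features, list) or len(features) != 4:
        raise ValueError("expected a list of 4 numeric features")
    if any(not isinstance(x, (int, float)) or math.isnan(x) for x in features):
        raise ValueError("features must be finite numbers")
    return sum(features) > 0  # placeholder decision logic

def random_garbage():
    """Generate malformed or extreme inputs."""
    return random.choice([
        None,
        "not a list",
        [],
        [float("nan")] * 4,
        [1e308] * 4,
        [random.uniform(-1e6, 1e6) for _ in range(random.randint(0, 10))],
    ])

unexpected = 0
for _ in range(1000):
    try:
        predict(random_garbage())
    except ValueError:
        pass  # controlled rejection is the expected behavior
    except Exception as exc:  # anything else is a finding worth investigating
        unexpected += 1
        print(f"Unexpected failure: {type(exc).__name__}: {exc}")

print(f"Unexpected failures: {unexpected} / 1000")
```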
Vulnerability assessment exercises analyze the system's code, configuration, and infrastructure to identify known vulnerabilities and misconfigurations that could be exploited by attackers.
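Part of this can be automated, for instance by comparing the packages installed in the model-serving environment against an advisory list. The sketch below does exactly that; the advisories it checks are made up for illustration, since real assessments rely on maintained vulnerability databases and dedicated scanners.

```python
# Sketch of a dependency check for the model-serving environment.
# The "advisories" dict is entirely made up for illustration; real
# vulnerability assessment uses maintained databases and scanners.

from importlib import metadata

# Hypothetical advisories: package -> (affected version, note)
advisories = {
    "examplelib": ("1.2.0", "deserialization flaw in model loading (hypothetical)"),
    "otherlib": ("0.9.1", "path traversal in artifact download (hypothetical)"),
}

for package, (bad_version, note) in advisories.items():
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        continue  # package not installed, nothing to flag
    if installed == bad_version:
        print(f"WARNING: {package} {installed} is affected: {note}")
```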
Mitigation strategies
Combined with knowledge of how to mitigate attacks against AI systems, these exercises allow effective mitigation strategies to be deployed to better protect the system. For instance, adversarial training trains AI models on both clean and adversarial data to improve their robustness against adversarial attacks. Model interpretability and explainability techniques help us understand the decision-making process of AI models and detect potential biases and weaknesses. These exercises can be repeated in an iterative cycle of assessment and improvement until the AI system reaches the required level of security. The final results of these exercises must be documented and made available to the final users of the AI system to make them aware of the security risks. Depending on the system requirements, standards such as PCI, HIPAA, ISO, and SOC might also be used as frameworks for assessing and certifying system security. In short, many of these methods ask developers to place themselves in the shoes of the attacker. Because AI is sometimes unpredictable, this also means finding and exploring its weak points so that actual attackers can’t take advantage of them.
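As an illustration of adversarial training, the sketch below generates FGSM (fast gradient sign method) perturbations for each batch and trains on both the clean and the perturbed examples. It assumes PyTorch; the toy model, random data, and epsilon value are arbitrary choices made for the example, not a recommended setup.

```python
# Minimal adversarial-training sketch using FGSM perturbations (PyTorch).
# The model, data, and epsilon are toy choices for illustration only.

import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(x, y, epsilon=0.1):
    """Craft adversarial examples by stepping along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

for step in range(200):
    # Toy batch: random features and labels stand in for real training data.
    x = torch.randn(64, 20)
    y = torch.randint(0, 2, (64,))

    x_adv = fgsm(x, y)  # adversarial counterpart of the clean batch

    # Train on both clean and adversarial examples to improve robustness.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()

print(f"final combined loss: {loss.item():.3f}")
```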
Risk management and testing
Finally, security risk management and security testing must be integrated as continuous processes in machine learning operations (MLOps). AI systems must be continuously tested to ensure they preserve their required level of security. Security testing is usually an iterative process that verifies that the AI system remains secure against potential attacks. Some of the steps involved are:
threat modeling (identifying and mitigating potential threats to the system)
vulnerability scanning (using automated tools to search for common security issues in the system)
expert reviews of system architecture and code
compliance testing (auditing the system to confirm that it follows agreed-upon policies and procedures)
user acceptance testing (making sure the system fulfills the user requirements set for it)
monitoring the system to notice any anomalies in its usage and performance (a minimal monitoring sketch follows this list)
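To make the last point more concrete, monitoring can start as simply as comparing the distribution of the model’s recent confidence scores against a baseline recorded at deployment time. The threshold and the baseline and recent scores below are illustrative assumptions, not recommended values.

```python
# Minimal monitoring sketch: flag a shift in the model's confidence scores
# compared to a baseline window. Threshold and data are illustrative only.

import statistics

def drifted(baseline, recent, z_threshold=3.0):
    """Flag if the recent mean deviates from the baseline mean by more than z_threshold sigmas."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1e-9
    z = abs(statistics.mean(recent) - mean) / stdev
    return z > z_threshold, z

# Hypothetical confidence scores logged at deployment time vs. the last hour.
baseline_scores = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.87, 0.94]
recent_scores = [0.55, 0.61, 0.58, 0.52, 0.60]

alert, z = drifted(baseline_scores, recent_scores)
if alert:
    print(f"ALERT: confidence distribution shifted (z = {z:.1f}); investigate possible attack or drift")
```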
By integrating these steps into the MLOps process, organizations can proactively identify and address security risks, reducing the likelihood and impact of potential attacks on AI systems. This is particularly important when the system changes, for example when models are retrained. The results of these tests must be documented and made available to the final users of the AI system to make them aware of the security risks.
For example, leading technology companies and research groups have developed adversarial machine learning resources, such as the CleverHans library and the Adversarial ML Threat Matrix. These resources provide documented information on potential adversary threats, attack vectors, and countermeasures, serving as a valuable reference for developers and end users who want to understand the security risks related to AI systems. The bottom line is that AI systems often evolve, either through retraining or online learning, and AI-based attacks also change. Therefore, developers can’t simply stop being vigilant because things were quiet for a few months.