IV.

AI laws and regulations in the EU

When considering AI security and privacy, laws and regulations are a pretty big deal. After all, besides profit-seeking and market needs, they’re one of the key factors that determine what technology will be developed and how. Of particular interest are the European frameworks and legislation, not only because Europe aims to be a key player in AI development, but because legal frameworks like the GDPR end up inspiring the paths that other governments around the world take. Europe may not always have state-of-the-art AI technology, but it’s a key influencer of AI laws. So that you don’t have to look up if the European Union has an Instagram account, this part summarizes the key upcoming regulations around AI: the AI Act, the Digital Services Act, the Assessment List for Trustworthy AI, and the AI Liability Directive.

Current laws and regulations

GDPR – protecting personal data

The General Data Protection Regulation (GDPR) came into effect on May 25th, 2018, to protect the personal data of European Union (EU) citizens. It covers a set of rules and principles that apply to data protection across the entire EU/EEA area. The GDPR enshrines the rights of what it calls ‘data subjects’. A data subject is any individual whose data is collected and processed for any reason; in other words, they’re the subject to which a piece of data refers. The GDPR aims to make sure that data subjects can stay informed about the processing of their data and allows them to intervene if they don’t want their data to be stored or processed at all. It also strives to make sure that personal data, independently of the data subject, is protected from outside threats such as hackers and leaks.

Learn more

Personal data and the European Union’s General Data Protection Regulation (GDPR)

“Any processing of personal data should be lawful and fair. It should be transparent to natural persons that personal data concerning them are collected, used, consulted or otherwise processed and to what extent the personal data are or will be processed.

The principle of transparency requires that any information and communication relating to the processing of those personal data be easily accessible and easy to understand, and that clear and plain language be used.

[…] Every reasonable step should be taken to ensure that personal data which are inaccurate are rectified or deleted. Personal data should be processed in a manner that ensures appropriate security and confidentiality of the personal data, including for preventing unauthorised access to or use of personal data and the equipment used for the processing.”

– GDPR, Recital 39

The rights of an individual ‘data subject’ under the GDPR can be summarized as follows:

  • Right of access. A person has the right to know and gain confirmation of whether or not personal data regarding them is being processed. If their data is being processed, then the person also has the right to access said data and to know what kind of data it is, why it’s being collected and processed, how long it will be stored, and with whom the data will be shared or disclosed. This is done by issuing a Data Subject Access Request (DSAR) for personal data held by an organization. Organizations are required to respond to DSARs within one month and provide the requested information, which can include information on any discriminatory practices that may have taken place. In addition to this, any communication regarding this processing of data must be transparent and explained in a language that’s easy to understand for the subject. This means that an organization can’t hide the usage of data behind confusing language or overly technical jargon (which we’re not using in this course either, are we?) to give a fake sense of transparency.

  • Right of rectification. If data about a person is inaccurate or incomplete, the person has the right to request that it be corrected.

  • Right of erasure. Also known as the ‘right to be forgotten’, this allows people to request that their personal data be erased from storage. For example, if a person objects to the way their data is being processed or withdraws previously given consent, they can exercise this right. Additionally, if a person believes that their data is being used unlawfully, is inaccurate, or is no longer needed for the purposes expressed by the organization that collected it, these can also serve as grounds for data removal.

  • Right to restriction of processing. Using the same grounds as explained above, a data subject can also request that their data processing becomes restricted. An example of this is that a data subject could request that their information is temporarily removed from a website but kept for possible later use.

  • Right to data portability. Since a person has a right to access their data, it must also be provided in a format that’s actually possible for them to read. That is, the data must be easily portable in a common machine-readable format that can be shared and understood (see the sketch after this list).

  • Right to object. A data subject has the right to object to the way their data is processed. A data controller can then respond to this objection by showing that the processing of the data is legitimate. Additionally, when consent is the legal basis for processing, the GDPR requires organizations to obtain individuals’ explicit and informed consent before processing their personal data. This includes informing individuals of the purpose of data processing and providing them with the option to withdraw their consent. By obtaining informed consent, organizations can ensure that they aren’t engaging in discriminatory practices (5).
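To make the portability requirement concrete, here’s a minimal, hypothetical sketch of what a machine-readable data export could look like. The record structure, field names, and function are invented for illustration; real exports follow whatever schema the organization defines.

```python
# A toy GDPR-style data export: bundling everything held about one data
# subject into a common machine-readable format (JSON).
import json
from datetime import date

def export_subject_data(subject_id: str, records: dict) -> str:
    """Return all stored records for one data subject as portable JSON."""
    export = {
        "subject_id": subject_id,
        "export_date": date.today().isoformat(),
        "data": records,
    }
    return json.dumps(export, indent=2, ensure_ascii=False)

# Example: the (made-up) data an organization might hold about one person.
records = {
    "profile": {"name": "Jane Doe", "email": "jane@example.com"},
    "purchases": [{"item": "book", "date": "2022-03-01"}],
}
print(export_subject_data("subject-123", records))
```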

If a data breach occurs and is likely to put data subjects’ rights and freedoms at high risk, the GDPR requires organizations to notify those data subjects to keep them informed of the risks to their personal data. This includes breaches that may lead to discrimination. With timely notifications, individuals can take steps to protect themselves from the potential harm caused by the breach.

The GDPR’s impact

The GDPR has had a significant impact on tackling unjust practices related to data processing. It has introduced several measures that require organizations to assess the potential impact of their processing activities on individuals' rights and freedoms, including the risk of discrimination. This can help organizations identify potential discriminatory practices and take appropriate steps to address them.

One of the most visible impacts of the GDPR for the average person is that most websites now require user consent before placing cookies. Most web users in the EU/EEA these days see a fair amount of “agree” or “reject” buttons pop up on a daily basis.

The impact of the GDPR can also be seen in the number of large companies that have already received considerable fines since its inception. Google, Amazon, and Facebook are industry titans that have all been fined millions because of their lack of transparency toward users and their processing of user data.

One of the biggest fines so far was levied against Amazon at €746 million by the Luxembourg National Commission for Data Protection, first announced in July 2021. This sparked a court case in which the fine was eventually put on hold pending changes in Amazon’s practices regarding the processing of personal data.

You can find more examples of GDPR fines online.

New European regulations around AI

AI Act

The Regulation Laying Down Harmonised Rules on Artificial Intelligence (thankfully known as the ‘AI Act’ for short) is a legal framework for regulating AI across Europe, on which the European Parliament adopted its negotiating position in 2023. It’s a first-of-its-kind framework with a focus on creating a trustworthy AI ecosystem. The AI Act recognizes the fast growth of AI technology and explicitly aims to strike a balance between keeping AI trustworthy and encouraging its adoption in high-impact areas where this technology is deemed beneficial (such as healthcare, climate change, and finance, to name a few).

Instead of regulating specific technologies, the framework categorizes AI into four risk levels according to its area of application (a toy illustration of this mapping follows the list).

  1. Unacceptable risk. This category includes AI technology and applications that contravene EU values, for example applications that violate fundamental rights, such as social scoring by public authorities.

  2. High risk. Most of the current obligations in the law proposal focus on this category of AI system, which covers systems related to:

    1. Biometric identification

    2. Management of critical infrastructure (energy and water supply)

    3. Education, including student assessment

    4. Employment, including recruitment and promotion systems

    5. Essential private services, public services, and social benefits

    6. Law enforcement

    7. Migration, asylum, and border control

    8. Administration of justice and democratic procedures

  3. Limited risk. AI systems subject to certain specific transparency requirements so that users aren’t misled or manipulated by the system. This would include chatbots, where it’s important to inform the user that they’re talking to a machine.

  4. Low or minimal risk. Includes for example AI applications in video games.
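As a rough illustration (and emphatically not legal advice), the risk categorization above can be thought of as a mapping from application domain to obligations. The domain keywords and the function below are our own paraphrase of the categories, invented for this sketch.

```python
# A toy mapping from an AI application's domain to the AI Act's four
# risk levels, roughly paraphrasing the categories listed above.
HIGH_RISK_DOMAINS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration and border control", "administration of justice",
}

def risk_level(domain: str, violates_fundamental_rights: bool = False) -> str:
    """Return a rough AI Act risk category for an application domain."""
    if violates_fundamental_rights:
        return "unacceptable risk"   # banned outright
    if domain in HIGH_RISK_DOMAINS:
        return "high risk"           # most obligations apply here
    if domain == "chatbot":
        return "limited risk"        # transparency duties apply
    return "low or minimal risk"     # e.g. AI in video games

print(risk_level("employment"))   # -> high risk
print(risk_level("video game"))   # -> low or minimal risk
```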

The objectives of the AI Act are:

  • Ensure AI systems are safe and in line with the EU’s fundamental rights and values

  • Facilitate investment and innovation with legal certainty

  • Prevent market fragmentation within the EU

The AI Act will very likely end up having a lot of interplay with the existing framework of the GDPR. Due to the nature of data processing in AI applications, both the AI Act and the GDPR draw from a similar legal basis in Article 16 of the Treaty on the Functioning of the European Union (TFEU), which mandates that the European Parliament and the Council should create rules for protecting individuals with regard to the processing of their personal data.

Digital Services Act

The Digital Services Act (DSA), approved by the European Commission in 2022, focuses on citizens’ rights regarding their access and use of digital services. While the act itself isn’t focused on AI, it does have some implications for artificial intelligence. For instance, it requires companies to make transparent to users the main parameters in the recommender systems they use, and provide users with options to customize these. (Hmm…if only there were a way to know how AI recommender systems make decisions so that users would be aware. To give something like a reason for why you see a certain movie or product recommendation. Better yet, an explanation – let’s call it explainable AI. Someone should make a course about it!)

Jokes aside, the requirement for user transparency and control over recommender systems in the Digital Services Act directly relates to explainability methods being put at the disposal of consumers. Of course, as with the GDPR, companies may try to circumvent this and bury the options given to users, or the explanations, under a multitude of pop-ups or convoluted menus. However, as with the GDPR, some change is to be expected, and the need for XAI will rise because of this. Another provision of the DSA concerns avoiding bias and discrimination in algorithms. What could help with that? Well, having a method that provides an explanation if race, gender, or other information that can be used to infer race and gender is being used by an algorithm (see the sketch below for one way to probe this). The DSA is a big deal for XAI and, although the AI Act may sound more relevant, you shouldn’t disregard the implications of this act for digital services.
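To make the idea of “checking whether a protected attribute is being used” concrete, here’s a minimal sketch using permutation importance from scikit-learn. The data is synthetic and deliberately biased so the check has something to find; the column names and setup are made up for illustration.

```python
# A minimal check: does a model's accuracy depend on a protected attribute?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic features: 'income', 'age', and a protected attribute 'gender'.
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # income
    rng.integers(18, 70, n),         # age
    rng.integers(0, 2, n),           # gender (0/1), the protected attribute
])
# A deliberately biased target that leaks the protected attribute,
# so the check below has something to find.
y = ((X[:, 0] > 50_000) | (X[:, 2] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does accuracy drop when one column
# is shuffled? A clearly non-zero score for 'gender' is a red flag that
# the model relies on the protected attribute.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in zip(["income", "age", "gender"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```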

ALTAI

To ensure that regulations then translate into practice, there need to be design guidelines and assessment protocols for developers to follow. One example of this is the Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment (6). This publication contains a requirements overview for several aspects of trustworthy AI, including guidelines for technical robustness, safety, privacy, and data governance. In this context, technical robustness refers to systems that perform reliably and are developed with safety risks, and ways to prevent them, in mind. Some example self-assessment points related to robustness and safety are:

  • “Is the AI system certified for cybersecurity (e.g. the certification scheme created by the Cybersecurity Act in Europe) or is it compliant with specific security standards?”

  • “Did you inform end-users of the duration of security coverage and updates?”

  • “Did you put in place measures to ensure that the data (including training data) used to develop the AI system is up-to-date, of high quality, complete and representative of the environment the system will be deployed in?”

While this document is relatively simple, it nonetheless represents a first step in what a developer or deployer could use to self-assess their system. AI systems cover a wide array of domains and each AI system application has its own context and challenges to overcome. For this reason, developing more specific design patterns is of growing interest to AI researchers.
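As a starting point, here’s a toy sketch of how a development team might track ALTAI-style self-assessment items in code. The structure and field names are our own invention, not part of any official ALTAI tooling; the questions paraphrase the ones quoted above.

```python
# A toy structure for tracking ALTAI-style self-assessment items.
from dataclasses import dataclass

@dataclass
class AssessmentItem:
    question: str
    answered: bool = False
    evidence: str = ""  # e.g. a link to a certificate or a data audit report

checklist = [
    AssessmentItem("Is the AI system certified for cybersecurity, or "
                   "compliant with specific security standards?"),
    AssessmentItem("Did you inform end-users of the duration of security "
                   "coverage and updates?"),
    AssessmentItem("Is the training data up-to-date, of high quality, "
                   "complete, and representative of the deployment environment?"),
]

def open_items(items):
    """Return the questions that still lack an answer or supporting evidence."""
    return [i.question for i in items if not (i.answered and i.evidence)]

print(open_items(checklist))  # everything is still open in this example
```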

AI Liability Directive

The AI Liability Directive (AILD) was proposed in September 2022. Whereas the AI Act focuses on the prevention of AI harm, the AI Liability Directive aims to provide a framework for redress for those harmed by AI. The liability rules that the AILD would introduce are ‘non-contractual’ in nature, meaning that they apply even if there is no concrete contract between the harmed individual and the deploying entity; the latter can still be held liable for harms caused by their AI systems.

The motivation for this directive is that existing liability rules pertaining to damage caused by human activities have proven extremely complex to apply in cases of AI activities, as it’s difficult to pin the blame directly on the entities deploying or developing the AI system. The liability directive would thus introduce rules to clarify these cases specifically for AI contexts.

The current proposal rests on the recommendations that strict liability rules should be implemented specifically for high-risk autonomous AI systems, following the same definition of risk levels stated above in the context of the AI Act.

Preparing for regulation

Obviously, with the passing of laws restricting the applications of AI, it’s important for businesses to know how to prepare for such regulatory shifts. As there’s no precedent for the consequences of such regulation, it’s hard to predict exactly how the AI market will be affected in the coming years. However, a company developing or investing in AI can expect both its ethical and its financial obligations to grow. For example, more resources are likely to be spent on making sure AI systems are compliant with the new legal framework. As such, businesses can expect an increase in compliance costs. This applies not only to the initial rework of existing systems, but also to continuous assessment throughout the various stages of development, for existing and new businesses alike.

On the ethical side, just as the GDPR already enforces debiasing requirements in data processing, AI systems using sensitive data will also have to take these into consideration. As the GDPR is already in effect, this is technically not something new – but the requirements for AI systems to have increased fairness, transparency, and explainability can be expected to go from indirect to direct ones. Once the AILD comes into effect, there will also be further liability concerns to consider. Making sure that your AI system’s discrimination risks are mitigated and that proper debiasing procedures are in place is a good first step in preventing your AI system from causing harm, thus avoiding liability concerns in the first place.

It will also be important, whether you develop a system within the EU or deploy systems in the EU market from a third country, to consider the risk level of the AI system and be aware of how the risk levels are defined – especially what’s included in ‘high risk’ systems, as these are the ones primarily targeted by the current and upcoming regulations. More information on the risk levels can be found, for example, here: Regulatory framework proposal on artificial intelligence | Shaping Europe’s digital future

If you’d like to study more about these new (and current) regulations, here’s a list of useful resources for staying up to date:

European Commission's High-Level Expert Group on Artificial Intelligence

The OECD Principles on AI

EU General Data Protection Regulation (GDPR)

Data Act: Proposal for a Regulation on harmonised rules on fair access to and use of data

Part summary

In this chapter we learned that…

  • AI models may not perform as expected due to a mismatch between expectations and practice or flawed training data.

  • Attackers can compromise AI systems through security attacks, such as model evasion and data poisoning, and privacy attacks, such as membership inference or attribute inference.

  • The main stakeholders of an AI system are end users, business users, and developers.

  • The mitigation of security risks for AI systems requires transparency and documentation for end users, risk assessment and management for business users, and security-by-design, testing, and management for developers to ensure that the AI systems are resilient against potential attacks and maintain their required level of security.

  • The General Data Protection Regulation (GDPR) enshrines the rights of data subjects, including the right of access, right of rectification, right of erasure, right to restriction of processing, right to data portability, and right to object. The new AI Act aims to regulate AI across Europe, ensuring AI systems are safe, trustworthy, and respect fundamental rights.

