The impact of the GDPR on AI

The use of an artificial intelligence (AI) system is inextricably linked with the use of data. This data can include images, paintings, text and other content that might be protected by confidentiality or intellectual property rights, or it can contain personal data. The collection, use or creation of personal data by an AI system can pose challenges under the European General Data Protection Regulation (“GDPR”), which seeks to protect natural persons with regard to the processing of their personal data.

Application of the GDPR

While the GDPR does not explicitly reference artificial intelligence, it was drafted in a technology-neutral way, so that it applies to all emerging technologies, including AI.

The GDPR applies to the processing of personal data, i.e. any information relating to an identified or identifiable natural person, the data subject. This can include identification data and contact details, but also (sensitive) personal data such as data relating to a person’s health, trade union membership, sexual orientation or political affiliation.

The GDPR imposes responsibilities on data controllers, i.e. the parties that determine the purposes and means of the processing, and on data processors, i.e. the parties that process personal data on behalf of a data controller.

Challenges in relation to GDPR

GDPR challenges relating to AI systems arise on several levels. While it is not possible within the context of this blogpost to list them all, a few notable ones are worth highlighting.

Training of an AI system

An early challenge arises when it comes to the training of an AI system. In order for an AI system to learn and be able to achieve the task it was created for, it requires huge amounts of data. A simple example: if an AI system were tasked with distinguishing between e-mail addresses and telephone numbers, it would first be fed a large number of examples of both, each tagged as either a telephone number or an e-mail address, so that it can learn the patterns and how to distinguish between the two.
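
By way of purely hypothetical illustration, the snippet below sketches what such training can look like in practice: a small set of examples, each tagged as either an e-mail address or a telephone number, is used to fit a simple classifier. The use of the scikit-learn library, the example data and the choice of model are assumptions made for demonstration only and do not describe any particular AI system.

```python
# Purely illustrative sketch: a toy classifier that learns to distinguish
# e-mail addresses from telephone numbers based on labelled (tagged) examples.
# The training data below is invented for demonstration purposes only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each training example is tagged as either "email" or "phone",
# so the model can learn the character patterns that distinguish the two.
examples = ["alice@example.com", "bob.smith@example.org",
            "+32 2 123 45 67", "0476 12 34 56"]
labels = ["email", "email", "phone", "phone"]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 2)),  # character-level features
    MultinomialNB(),
)
model.fit(examples, labels)

# The trained model can then classify new, unseen inputs.
print(model.predict(["carol@example.net", "+32 478 98 76 54"]))
# Likely output: ['email' 'phone']
```

In a real AI system the training set would of course be vastly larger, which is precisely why the GDPR questions discussed below arise when that data relates to natural persons.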

If personal data is used for the training of an AI system, it needs to be ensured that the collection and use of the personal data are in line with the requirements of the GDPR, e.g. identifying a correct legal basis, informing the data subject about the purpose of the processing and ensuring the data subject is aware of his or her rights.

Automated decision-making and profiling

AI systems can be used to make decisions about persons. For example, AI systems can assist in determining whether or not a person should receive a loan, on the basis of information provided about that person’s income, costs and other personal factors. AI systems are also often used for targeted advertising and marketing. In such cases, an AI system can learn from the interests or purchases of a particular person and suggest other items that person might want to buy.

Ultimately, all of this data collection can lead to the building of one or more profiles of a person, which companies can then use to make decisions about that person. These can range from less harmful decisions (e.g. which pair of shoes an algorithm thinks a person will like based on past purchases) to potentially more harmful decisions (e.g. whether or not a person will receive a loan).

For this reason, under the GDPR, a data subject has the right not to be subject to a decision based solely on automated processing (including profiling) which produces legal effects concerning the data subject or similarly significantly affects him or her.
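
To make the notion of a decision based solely on automated processing more concrete, the following minimal sketch shows a fictitious scoring function that accepts or refuses a loan application without any human involvement. The applicant data, weights and threshold are invented purely for illustration and do not reflect any real credit-scoring model.

```python
# Purely illustrative sketch of a solely automated decision: a fictitious
# scoring function decides on a loan application without human involvement.
from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_income: float
    monthly_costs: float
    existing_loans: int

def automated_loan_decision(applicant: Applicant) -> bool:
    # A simple (hypothetical) score based on the applicant's personal data.
    disposable = applicant.monthly_income - applicant.monthly_costs
    score = disposable - 300 * applicant.existing_loans
    return score > 500  # the decision is taken entirely by the algorithm

print(automated_loan_decision(Applicant(monthly_income=2500,
                                        monthly_costs=1800,
                                        existing_loans=1)))
# -> False: the application is refused without any human review, which is
#    the kind of decision Article 22 GDPR gives data subjects a right to avoid.
```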

Inferring of personal data

Building on the foregoing, AI systems can learn and create new pieces of personal data by combining the (personal) data they have collected, meaning that companies can hold personal data about a data subject that they never intended to hold. In this case, it is important that data controllers inform data subjects about how data will be collected or generated and used. Insight into how personal data can be inferred or created is therefore necessary, but this can be challenging given the ‘black box effect’, i.e. the situation where it is not always clear how an AI system has arrived at a certain output on the basis of the input provided.

Conclusion

The fact that AI systems are capable of learning, processing and generating (personal) data can thus lead to issues in complying with the GDPR, e.g. a data subject may be insufficiently informed about the processing because he or she is unaware that certain personal data is being inferred about him or her. It is thus important to keep the obligations of the GDPR in mind when developing and using AI systems.

Curious to learn more about AI and the law? Read our previous blogposts:

Artificial Intelligence: an introduction to our series of blogposts

AI and ethics: ethical challenges connected to AI

AI and ethics – is the EU fulfilling its own ambitions? 

The current proposal of the AI act summarized

AI and (product) liability

Can Artificial Intelligence systems claim authorship under copyright law? – EY Law Belgium

In case you have any questions, relating to the aforementioned or other topics, do not hesitate to reach out to us.