Now that ChatGPT has risen in popularity, other companies are starting to develop their own chatbots built on artificial intelligence. There are already some scale-up initiatives like ChatSonic, Jasper AI, Open Assistant and Wordtune, but Google is currently offering the strongest competition with its new AI chatbot, Bard. As these AI chatbots are used more often and for more purposes, it is important to consider their legal implications.
What makes these ‘new’ AI chatbots so different from other chatbots (for example, the chatbots found on many commercial websites)? Because the underlying artificial intelligence combines a very large training dataset with machine learning, these new AI chatbots can give far more human-like responses. Some of the new bots (like Bard) will even have direct access to the Internet, allowing them to give answers that may seem, at first sight, more accurate. What chatbots like ChatGPT and Bard have in common is that their responses are the result of an instantaneous calculated prediction: a prediction of which words are most likely to follow each other.
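To make the idea of "predicting which words are most likely to follow each other" concrete, here is a deliberately tiny sketch in Python. It uses simple bigram counts over a toy corpus; real chatbots use neural networks trained on vast datasets, so this is an illustration of the principle only, not of the actual technology.

```python
from collections import Counter, defaultdict

# Toy corpus; a real system is trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the toy corpus."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

Even this crude model shows why such systems can sound fluent while being wrong: the prediction is purely statistical and has no notion of whether the resulting sentence is true.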
What hasn’t changed is that it is still possible to have a dialogue with these AI chatbots. Moreover, chatbots like ChatGPT are able to maintain the state of the conversation, meaning that the system will build on your previous questions. The answers you receive are, for the time being, still in the form of written language, including programming languages like Python. Unlike simpler chatbots, these systems also analyze the context of what you are writing. After all, one word can mean several things.
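"Maintaining the state of the conversation" can be sketched very simply: the full history of the exchange is kept and handed to the model with every new question, so each answer can build on earlier turns. The sketch below assumes a hypothetical `fake_model` stand-in; it is not a real chatbot API.

```python
def fake_model(history):
    # Hypothetical stand-in: a real model would generate a reply
    # from the whole history; here we only report what it has seen.
    return f"(reply based on {len(history)} message(s) of history)"

class Conversation:
    """Keeps the running history so every answer can build on it."""

    def __init__(self):
        self.history = []  # list of (role, text) tuples

    def ask(self, question):
        self.history.append(("user", question))
        answer = fake_model(self.history)  # model sees all prior turns
        self.history.append(("assistant", answer))
        return answer

chat = Conversation()
chat.ask("What is Python?")
chat.ask("Show me an example.")  # the model also sees the first turn
```

This is also why, from a data-protection perspective, everything a user types during a conversation may be retained and reprocessed, a point we return to below.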
When using these AI chatbots, there are some limitations that need to be taken into account. For example, the dataset on which ChatGPT was trained only contains information up to the year 2021, so the system cannot answer questions about more recent events. Furthermore, the underlying prediction mechanism of these chatbots can produce reasonable-sounding answers even when the information is inaccurate. Another issue is that the training dataset itself can be biased, resulting in discriminatory output on the basis of, for example, race, gender or ethnicity.
With regard to the legal implications, both developers and users often face uncertainty: current legislation was not drafted with AI in mind, and it can be challenging to apply existing principles. For the data on which AI systems are trained, it is important to consider existing intellectual property rights, such as the copyrights in that data (e.g. pictures), which reside with its legal owner. Reproduction of such data is not permitted without the legal owner’s permission, and at this moment it is unclear whether the data used for certain chatbots or applications is used lawfully. This applies all the more to an AI chatbot with direct access to the Internet, which can consult and gather information itself. Issues relating to the ownership of copyright in the output of these chatbots are even more unclear: who or what can be an owner? In principle, authorship must reside with a person, and a chatbot is still not a person. Moreover, the processing of personal data, whether incorporated in the training data or provided by the user of the chatbot, will have to comply with the GDPR. For example, people using these chatbots will need to be made aware of what will be done with any personal data they provide during their conversations.
The AI legislation currently proposed by the European Union (for example, the European Commission’s new proposal on product liability) will have an impact on the legal implications of these chatbots. These legislative initiatives will be analyzed further in our upcoming AI blog posts.
If you would like further information on this topic or need our assistance, please do not hesitate to reach out to us.