Businesses that develop, use or market artificial intelligence (“AI”) should bear in mind that ethics are a major challenge for AI systems. These businesses should be aware of what can go wrong and make sure that the AI system works responsibly. A clash with the ethical principles of the region where the AI operates may cause severe damage to a business’s reputation. Ethics should therefore be a major consideration for any business involved with AI.
A recurring cause of unwanted outputs from AI systems is unconscious bias embedded in the system. Such bias can creep in through biased data fed into the system, which the system then learns from and subsequently reflects in its output.
This is a particular challenge for conversational AI systems that learn from their users or from comments on social media. As a result, they are prone to developing offensive language towards their users, which reflects poorly on the business.
It is better to prevent unconscious bias from entering the AI system than to cure it afterwards. Prevention starts with carefully scoping which data the AI system uses. The equation is simple: if the input is bad, the output will most likely be too. It is important to be aware, however, that filtering large data sets can be a challenge. Even with automated checks and a degree of human intervention, it may be difficult to bring the quality and completeness of data sets up to standard and to maintain that level over the years.
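As a rough illustration of what such an automated check might look like in practice, the sketch below screens incoming records before they enter a training set, flagging any record that is incomplete or contains a blocked term for human review. The field names, the blocklist and the helper names are purely illustrative assumptions, not a standard or a recommended implementation.

```python
# Minimal sketch of an automated pre-ingestion check for training data.
# BLOCKED_TERMS, REQUIRED_FIELDS and the field names are illustrative
# assumptions; a real deployment would use curated, regularly reviewed lists.

BLOCKED_TERMS = {"offensive_term_example"}   # placeholder, maintained by reviewers
REQUIRED_FIELDS = {"text", "source", "date"}  # assumed record schema

def screen_record(record: dict) -> list:
    """Return a list of issues found; an empty list means the record passes."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    text = record.get("text", "").lower()
    if any(term in text for term in BLOCKED_TERMS):
        issues.append("contains blocked term; route to human review")
    return issues

def screen_dataset(records):
    """Split records into those accepted and those flagged for human review."""
    accepted, flagged = [], []
    for record in records:
        (flagged if screen_record(record) else accepted).append(record)
    return accepted, flagged
```

Even a simple filter like this illustrates the limits mentioned above: a keyword blocklist catches only known terms, so the flagged queue still needs human review, and both the blocklist and the schema must be maintained as the data evolves.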
Unwanted violations of privacy, confidentiality or intellectual property rights
An AI system is, in itself, unaware of any privacy or confidentiality rights and obligations, or of any intellectual property rights, vested in the data it uses to make its decisions and produce its results.
Businesses that use an AI system that processes personal data already need to comply with the relevant obligations under the applicable data protection laws (most notably, for the EU, the obligations under the GDPR). Among other things, data subjects need to be informed about the processing of their personal data and about their rights, and businesses wishing to use an AI system may need to perform a data protection impact assessment.
The data used by the AI may include confidential information, for example the names and details of certain customers. Such data may be confidential by virtue of a contractual provision or because of sectoral or professional requirements (for example, confidentiality requirements for lawyers or medical professionals). Users are not always cautious when entering data into the system, which can lead to such confidential information ending up in the data sets used by the AI system. This can constitute a breach of the user’s confidentiality obligation and is one of the reasons why some businesses are reluctant to use AI in their internal operations.
Another hot topic relating to AI is AI systems’ disregard for intellectual property rights. The rise of AI that generates text or images is accompanied by debate about the originality of its output. Businesses that bring such AI systems to market are facing plagiarism claims, which are yet to be settled in or outside of court.
The desirability of such generative AI systems has been widely debated in the news, for example:
- In our educational system
Will the arrival of AI that is able to do students’ homework harm their learning process and development? Will students who make only a few references to sources in their homework or thesis be presumed to have used AI, even when that is not the case?
- In the creative industries sector
Artists put days, sometimes even months, of their time into their artworks, whereas an AI system can produce an artwork in mere seconds. This has the potential to significantly reduce the value of artworks made by hand and can discourage artists from putting their time and creativity into making something new.
Making ethical choices
When presented with a choice, the AI system may be unaware of which solution is preferable. This is often illustrated by the ethical dilemma that arises with self-driving cars. When faced with the unavoidable choice between hitting a wall or a child, the AI system in the self-driving car might simply choose to hit the smallest obstacle. While plenty of thought and effort has gone into resolving this dilemma, it teaches us to keep an eye out for ethical choices that do not come naturally to AI systems. Unethical choices should be avoided by paying attention to them when developing and testing AI.
While there are many more ethical considerations to make in the context of AI, being aware of the above risks is a good start towards the ethical development and use of AI. Mitigating them allows businesses to innovate while reducing the risk of adverse effects. As more and more individuals and businesses recognize the importance of ethical issues, values, and diversity and inclusion, properly managing AI is crucial to safeguard the progress that has been made.
Stay tuned for our upcoming blogpost, where we will shed light onto the ethical requirements for AI systems set forth by the EU.
Previous blogposts relating to AI:
Artificial intelligence: an introduction to our series of blogposts – EY Law Belgium
If you have any questions related to the aforementioned topics or any other matters, please feel free to reach out to us at any time.