The EU has on numerous occasions expressed its ambition to become a world leader in artificial intelligence (“AI”). To do this, it has to align AI with the fundamental values of the EU, and it acknowledges the potential risks, including ethical risks, that AI poses to individuals. In our previous post, we highlighted some of the most important ethical risks and challenges that AI is currently facing. To mitigate these ethical risks and to ensure trust, the EU has identified seven ethical requirements for AI systems. The overview below summarizes these ethical requirements and highlights current difficulties in translating this vision into practical guidance.
As part of the EU’s AI strategy, the High-Level Expert Group on AI (“AI HLEG”) was entrusted by the European Commission in 2018 with giving recommendations on the ethical, legal and societal issues related to AI, which led to the final version of the “Ethics Guidelines for Trustworthy AI” (“Guidelines”) on 8 April 2019. These non-binding Guidelines are a key instrument in the EU’s strategy to regulate AI. They are referenced in later initiatives of the European Commission, such as the White Paper on AI and its regulatory proposal for an AI Act.
In the Guidelines, the AI HLEG formulated the following seven non-binding key requirements that need to be fulfilled in order for an AI system to be considered trustworthy:
- Human agency and oversight: AI systems should allow human beings to make informed and autonomous decisions and protect their fundamental rights. Proper human oversight needs to be ensured.
- Technical Robustness and safety: AI systems need to be resilient and secure to minimize and prevent unintentional harm. They need to be safe, accurate, reliable and their results should be reproducible.
- Privacy and data governance: The protection of people’s privacy and data and the use of adequate data governance mechanisms needs to be safeguarded. The quality and integrity of the data and legitimized access thereto need to be ensured.
- Transparency: the data, system and AI business model should be transparent. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be made aware that they are interacting with an AI system and must be informed of the system’s capabilities and limitations.
- Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could lead to unintended direct or indirect prejudice and discrimination against certain groups or people, especially vulnerable groups. Moreover, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
- Societal and environmental well-being: AI systems should be sustainable and environmentally friendly. Their social and societal impact should be carefully monitored and considered.
- Accountability: Responsibility, auditability and accountability for AI systems and their results should be ensured, and adequate and accessible redress should be available.
These are non-binding requirements, to be continuously evaluated and addressed during the entire life cycle of any AI system (development – data collection – learning – use & continuous improvement).
Currently, binding regulations specific to AI are circulating between the Council of the European Union and the European Parliament. For example, the regulatory proposal for an AI Act was drafted with the Guidelines in mind.
However, there are certain difficulties that any regulatory proposal specific to AI systems should take into account and overcome:
- A need for practical guidance
Requirements for AI systems are often vague, leaving businesses in the dark about what they really need to do to comply. The EU’s seven key requirements are themselves very broadly formulated. Setting specific requirements is a challenge, as there are many types of AI systems, both those already on the market and those still to come. Legislation needs to cover all of these while also accommodating future developments.
- Unrealistic requirements for data sets
The key requirement of privacy and data governance aims to ensure that an AI system’s data sets are complete and free of error. However, requiring data sets to be perfect might be unrealistic in most cases. The more data a system processes, the more difficult it will be to filter it. While repetitive checks can be automated, human intervention may also be necessary to verify that all data are correct and up to date. For large data sets (e.g. in the case of web scraping), it is almost impossible to filter out all unreliable and undesirable data.
- Black box principle hinders transparency
The key requirement of transparency requires businesses to explain the workings of their AI system. But for many AI systems (so-called “black box AI”), only the inputs and outputs are visible, while sufficient knowledge of how the system arrives at a given result is lacking. If transparency requirements are too extensive, it may prove difficult for businesses to comply.
- Difficult distribution of responsibilities between different actors
Many actors may be involved in the life cycle of an AI system. More often than not, AI systems are not developed, owned and offered to users by the same entity. The question arises which of these actors will be responsible for non-compliance with the EU’s standards for AI systems. This should be clearly outlined in any legislative proposal. It is also important for businesses to pay attention to the distribution of responsibilities when licensing, buying or selling an AI system.
- Administrative burden for businesses, especially SMEs
The means to ensure the fairness and trustworthiness of AI systems may include risk assessments and/or information and documentation requirements. Drafting these instruments can prove costly for businesses, especially SMEs. Any new regulation that imposes specific obligations on businesses working with AI systems needs to carefully consider the administrative burden of compliance. At the same time, businesses will need to keep an eye out for such new regulations, to prepare themselves for what is coming their way.
- Concerns for conflicting legislation
Lastly, the relation between any new AI-specific regulations and existing regulations needs to be carefully considered. Conflicting provisions should be avoided, notably in terms of privacy and data protection, product safety, consumer protection and sectoral legislation, so that it is clear to businesses which rules apply to them.
In light of the above, it is advisable for businesses to keep track of the regulatory proposals circulating between the Council of the European Union and the European Parliament, and to prepare for what is to come.
In the upcoming weeks, we will shed more light on the current developments and proposals in subsequent blogposts on our website.
Previous blogposts relating to AI:
Artificial intelligence: an introduction to our series of blogposts – EY Law Belgium
AI and ethics – ethical challenges connected to AI – EY Law Belgium
In case you have any questions in the meantime, relating to the aforementioned or other topics, do not hesitate to reach out to us.