This Time, Humans Learn About Machines: AI Literacy in the AI Act

BY Elora Fernandes and Abdullah Elbi – 01 October 2024

Artificial Intelligence (AI) systems have become integral to our daily lives, influencing everything from shopping to public services. While AI promises to free us from bureaucratic tasks and mobilize knowledge in ways previously impossible, it also presents challenges that require public understanding if we are to maintain collective control over the technology. One crucial element in this regard is the development of AI literacy, which this blog post series discusses on the basis of the provisions of the AI Act.

Artificial Intelligence (AI) systems have woven themselves into the fabric of our daily lives, influencing how we interact in various contexts—from grocery shopping and public services to our work and educational experiences. This interaction directly impacts human relations, influencing the realization of human rights, shaping what we choose to remember or forget, and challenging our perceptions of what makes us uniquely human. On the one hand, AI systems bring an automation element that promises to free us from bureaucratic tasks and help us mobilize existing knowledge in ways never seen before. On the other hand, AI poses challenges that all citizens must understand to an appropriate level and with varying granularity, so that we can maintain (collective) control over the technology rather than being controlled by it. Being ‘AI literate’ therefore becomes as essential as traditional literacy skills like reading, writing, and arithmetic. In this two-part blog post series, we’ll explore the provisions addressing AI literacy in the Artificial Intelligence Act. In this first blog post, we discuss what AI literacy is and how the regulation defines it. In the second post, we’ll examine Article 4 of the regulation and the ensuing obligations for providers and deployers of AI systems.

AI Literacy in the AI Act

The Artificial Intelligence Act (AIA – Regulation 2024/1689) has recently come into force and is the first law in the world to horizontally regulate AI. The provisions on AI literacy within the AIA result from the European Parliament’s amendments, and the original wording of both the definition (Article 3(56)) and the obligation to implement measures related to it (Article 4) was considerably bolder and wider in scope. The definition of AI literacy previously referred to its importance for the democratic control of AI systems and to the need for Member States and the Commission to promote the development of a sufficient level of AI literacy in all sectors of society and for people of all ages, as part of their internal efforts to promote digital literacy in general.

The Parliament’s text of Article 4 also directly introduced obligations for Member States regarding the promotion of measures to develop a sufficient level of AI literacy across sectors, as well as for providers and deployers of AI. What remained in the final text, however, is much more concise and specifically targets the obligations of providers and deployers of AI systems in relation to “their staff and other persons dealing with the operation and use of AI systems on their behalf”. Nevertheless, this provision is still commendable in the sense that it is applicable to all types of AI systems, regardless of the highly debated risk categorization of the AI Act. In the same vein, Recital 20 and the definition of AI Literacy in Article 3(56) still acknowledge, in abstract, the need for individuals affected by AI systems more broadly to be equipped with AI literacy.

What is AI Literacy?

According to Recital 20, AI literacy is important for three main reasons: (i) to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety, and to enable democratic control; (ii) to ensure the appropriate compliance and enforcement of the AIA; and (iii) to improve working conditions and ultimately sustain the consolidation and innovation path of trustworthy AI in the Union. But what is AI literacy, and how can it help reach these goals?

Article 3(56), AIA, defines AI Literacy as “skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.”

First, it is important to note that the “skills, knowledge and understanding” of AI systems referred to in the provision should not be understood as covering only the specificities of the technology and its applications (the technological dimension). They should also encompass the broader implications of AI systems, and of the decisions they (help) make, for individuals and societies, particularly in terms of lawfulness, ethics, and fairness (the human dimension). Since AI is a complex sociotechnical artifact, these dimensions should be considered in tandem. Similar to education more broadly, AI literacy should not be viewed as merely an instrumental skill, but as a key means for people to engage with and become part of the social, cultural and political ecosystems of the 21st century, empowering them to be more autonomous and independent.

What exactly providers, deployers and affected persons must know or be able to do to make an informed deployment and be aware of the risks and opportunities of AI, however, will largely depend on the context and on factors such as the system’s risk level, the sector of application (e.g., healthcare, finance, transportation) and the stakeholders involved (e.g., regulators, consumers, employees, vulnerable people). This is directly related to understanding, in the concrete case, what it means to have a “sufficient level of AI literacy”, as required by Article 4 AIA, which will be discussed in Part 2 of this blog post series.

Here it is interesting to draw a parallel with the rights to transparency and information within the General Data Protection Regulation (GDPR). Individuals must be informed that their data are being processed (especially through automated means, as per Article 13(2)(f)) and provided with meaningful details about that processing to mitigate information and power asymmetries. This helps them exercise their rights and ensure data quality. Within the AIA, particularly considering Article 4 alongside the right to explanation outlined in Article 86, this understanding should ideally cover not only how AI systems collect data and for which purposes, but also how algorithms infer new data and make decisions that affect people and society, as well as the broader implications of these processes and decisions.

Conclusion

It is commendable that the AIA places emphasis on AI literacy, recognizing both its technical and human dimensions as crucial for maintaining (collective) control over AI technologies. As we will explore in Part 2 of this series, the obligations in Article 4 of the AIA apply only to providers and deployers of AI systems. However, the definition in Article 3(56) also acknowledges that AI literacy is a critical skill that must be mastered by all affected persons. This should not only inform how providers and deployers implement other provisions of the AIA but also guide the future actions of the Commission and the European Artificial Intelligence Board. Moreover, it should shape the AI strategies of EU Member States, ensuring that all citizens, according to their specific needs and contexts, are equipped to critically engage with the disruptions AI systems are already introducing into our societies.
