Cronos AI

Responsible AI Use begins with understanding AI

The EU AI Act introduced an AI literacy obligation which took effect in February 2025. It aims to enable responsible AI use in practice, and it applies to any company or organization professionally using AI solutions in their activities. This blog will provide you with the who, why, and what of this obligation, alongside some practical guidance.

Key takeaways

1. The EU AI Act mandates AI literacy for all organizations using AI.

2. AI literacy training needs to be tailored to specific roles and risks.

3. A tiered approach to AI literacy training is recommended for effective implementation.


Introduction

There is much enthusiasm and excitement around the potential of AI, but there is insufficient awareness of its complex nature and its impact on the way we work and shape our society. This literacy gap between IT professionals and end-users is also highlighted by the EU AI Act, which requires organizations to ensure their staff’s AI literacy (art. 4). The AI literacy obligation took effect in February 2025. In this blog we will dig a little deeper into the who, why, and what of this obligation, alongside some practical guidance. We begin by examining the legal text itself, complement it with a look at the AI Office Repository, and finally derive practical takeaways from both sources into a concrete AI literacy offering.

Art. 4 of the EU AI Act

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”

Scope. The obligation applies to both providers and deployers. Whether you develop AI tools for someone else or use them yourself, you will have to ensure that you understand the technology. This extends to people working on your behalf, such as outsourced talent and freelancers.

Best effort. The Act puts forward a ‘best effort’ commitment. It does not require any kind of certification, but you must be able to demonstrate, when asked, if and how your organization approaches AI literacy training. Whether this proof suffices depends on three factors:

- Literacy factor. Training should be adapted to the current literacy levels of the people involved. A one-size-fits-all approach won’t be the answer for companies with diverse talent. The intent is to save your people’s time and energy if their literacy level can be proven via other means, so that efforts can focus on helping the people who need to catch up.

- Context factor. The key objective of the AI Act is to prevent AI from causing harm to society. AI systems applied in critical or sensitive industries will therefore be subject to more scrutiny than others. When applying AI systems in these sectors, AI literacy education should clearly indicate the specific risks and potential harms in this context.

- Subject factor. The above also applies to affected (groups of) people. The more a third party is subjected to the AI’s proper functioning, the more scrutiny that system should be under. Not all AI will directly affect people, but the systems that do will need to be handled and supervised by people who are aware of the potential harm they can do.

In summary, people involved with handling or developing AI should at least have a basic understanding of what AI is, to better understand the tool’s impact and risks. People in key positions of oversight or people working in sensitive or critical sectors should have a more in-depth understanding of how AI will affect their operations and the people that rely on them.

EU AI Office Guidance: the AI Literacy Repository

The AI Office recently published its Living Repository of AI Literacy efforts by other organisations and companies. These are our takeaways from the current version:

- Much of the AI Act is ingrained with the rule of three: from its risk levels to its compliance angles (governance, standards, and literacy) down to the literacy levels themselves. Combining basic AI literacy for all employees with domain-specific and system-specific trainings, the repository indicates that most companies aim to ensure not only an organization-wide understanding of AI basics, but also its practical applications in a specific industry or role.

- Interestingly, organisation size does not seem to much impact the content or target groups of training programmes. The size factor mostly affects which education formats are best suited for increasingly large audiences. While interactive in-person training is well suited to SMEs, large companies may need to supplement it with (for example) an accessible e-learning platform. Nevertheless, even large organisations will need to ensure that intensive training happens where necessary.

- This holds especially true for organisations operating in more sensitive or critical sectors (e.g., healthcare, finance, public services). The repository shows a clear ambition of such organizations to offer AI literacy training to their entire staff - not just those in critical or technical roles. This approach ensures that AI awareness and responsible deployment principles are ingrained across the organization, aligned with its values and deontology.

- Most companies focus on understanding AI as both providers and deployers. Having specific AI systems developed to meet your specific needs may fully enable their potential, but it comes with the responsibility to organise human oversight and to ensure everybody knows how the system works. This dual perspective is critical for ensuring that AI is not only developed responsibly but also deployed safely and responsibly.

Custom levels of training

Based on the legal text of the AI Act and the AI Office Repository, we see the following three relevant levels of AI literacy training:

1. ESSENTIALS. Understanding the essentials of AI technology.

Audience: all people involved with AI tools

Purpose: provide critical reflections on AI technology in general

Content: core concepts of AI technology, its potential and capacity, and the unique risks and potential harms that come with misunderstanding or misusing the technology

Result: people understand AI’s inner workings and how to use it responsibly

 

2. SPECIALISED. Domain-specific AI literacy & awareness.

Audience: staff within specific domains of expertise, e.g. HR, healthcare, or finance

Purpose: provide critical reflections on AI technology for a specific domain

Content: core concepts with domain-specific (technology-neutral) applications of AI technology, its practical opportunities, and domain-specific risks and potential harms

Result: people understand AI’s inner workings in their processes, stay critical of its risks, and recognize the potential harm of irresponsible use

 

3. APPLIED. Responsibly deploying specific AI systems.

Audience: specialized people working with or supervising specific AI systems

Purpose: provide a concrete understanding of a specific AI system, with instructions on how to responsibly deploy and/or carefully supervise it

Content: translating a specific AI system’s capacities to its users and/or supervisors through, e.g., instructions for responsible use and human oversight methods

Result: people understand a specific AI system’s inner workings, how to use it responsibly in practice, and how to provide effective human oversight.

Conclusion

AI literacy training informs selected people about the nature and impact of AI, highlighting specific functionalities, opportunities, risks, and potential, and increasing their ability to recognize valuable opportunities for use and to scrutinize AI’s input and output. Ideally, however, AI literacy doesn’t stop at trainings. It is best supported through continuous learning strategies, amended organizational policies, and codes of conduct, helping you build a comprehensive, future-proof framework to fulfil your AI literacy obligations.

Prepare your people and get ready to take your first step towards responsible AI implementations by equipping them with the knowledge and skills needed to stay ahead in an AI-driven world!

A blog by Ingrid Lambrecht and Hannah Bosman from Legile

Looking for a sparring partner for your AI journey?

Contact us to discover how Cronos.AI can help your business.
