The AI Act has introduced an AI literacy requirement for providers and deployers of Artificial Intelligence (AI) systems: what is AI, how should it be used, and what are its ethical and legal implications? Deep Blue’s training course, designed for aviation but adaptable to other sectors, offers a comprehensive approach to AI literacy.
Last year, the European Union adopted the AI Act, a regulation on Artificial Intelligence aimed at harmonising the legal framework for AI systems. Under this regulation, AI applications are classified by risk level: different levels of risk correspond to different obligations and responsibilities for providers, deployers, manufacturers, importers, and distributors – even those based outside the EU but operating within its market.
Articles 4 and 5 of the AI Act
The regulation will be implemented gradually, but some provisions have already come into force as of 2 February 2025, including Articles 4 and 5. Article 5 defines prohibited AI practices. “The regulation text mentions some very high-risk applications, like user profiling for potentially discriminatory purposes, but it doesn’t fully spell out the various risk categories,” explains Paola Lanzi, expert in Human-Centred Automation & Artificial Intelligence at Deep Blue. When Articles 4 and 5 took effect, the Commission also published guidelines on prohibited practices, clarifying what is not allowed when using AI. For instance: manipulating or deceiving individuals to influence their behavior; using remote biometric identification systems “in real time” in public spaces (unless public safety is at stake); or monitoring emotions in workplaces and educational environments (unless health-related reasons apply).
Article 4, instead, introduces the obligation of AI literacy for providers and deployers – the latter defined in the regulation as any natural or legal person, including public authorities, agencies, or other bodies, using an AI system under their authority, except for personal, non-professional use. Specifically, Article 4 states: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used”.
AI literacy: obligations and risks
“The training requirement applies to all organisations using AI tools – even the most basic ones used daily,” continues Lanzi. While AI literacy itself is defined in Article 3, point 56 – “the skills, knowledge, and understanding enabling providers, deployers and impacted individuals, considering their respective rights and obligations, to make informed use of AI systems, and to understand the opportunities, risks, and potential harm they may cause” – the expressions to their best extent and sufficient level remain vague.
“We have a short window to clarify these points,” says Lanzi. “By early August, the Commission will provide further guidance – also because inspections will start from that date.” Currently, there are no sanctions for non-compliance with the literacy requirement, but fines may be introduced in the future. (Violations of Article 5, for example, can lead to penalties of up to €35 million or 7% of global annual turnover.)
Moreover, in the event of malfunctions or damage caused by AI systems, non-compliance with Article 4 could weigh heavily in determining legal liability.
“Potential sanctions are certainly a major concern for anyone working with AI,” Lanzi stresses, “but training is essential regardless. Operators must understand that they’re taking unknown risks if they use AI tools without knowing their limitations. Take one of the simplest and most likely scenarios: an employee inserts a piece of information from ChatGPT into a report, but it turns out to be incorrect. What are the consequences? A reprimand from their manager, reputational damage to the company, or something worse? And again: when is the individual liable, and when does responsibility fall on the company? How can companies anticipate and mitigate these risks?”
Deep Blue’s AI Literacy Course
“While we wait for the Commission’s guidance on AI training,” Lanzi explains, “many companies are already providing courses to their employees – including us at Deep Blue, both internally and externally”.
For years, Deep Blue has been offering AI literacy training to aviation professionals. Thanks to its modular design, the course can easily be adapted to other domains. “Unlike most existing training packages, which are heavily technical,” says Lanzi, “our three-day course (also available online) is comprehensive: it includes a technical introduction, real-world case studies, an in-depth look at the organizational impacts of adopting AI systems, and a section on legal and ethical implications – all aimed at understanding and mitigating risks”.
The technical section (what AI is, the difference between symbolic and sub-symbolic AI, supervised vs unsupervised machine learning, what reinforcement learning, generative AI, and predictive AI mean, etc.) is essential to grasp the basics and to understand the organizational, legal, and ethical implications of using these systems.
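To make the supervised/unsupervised distinction concrete, here is a minimal illustrative sketch in Python – an invented example using scikit-learn, not material from Deep Blue’s course. A classifier learns from labelled examples, while a clustering algorithm looks for structure in data with no labels at all:

```python
# Minimal sketch contrasting supervised and unsupervised learning
# (illustrative only; not taken from Deep Blue's course material).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: the model is trained on labelled examples (X, y)
# and evaluated on how well it predicts labels for unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: the model sees only X, with no labels,
# and finds structure on its own (here, three clusters).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments:", kmeans.labels_[:10])
```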
“We then move to the application part, which varies depending on the domain: we present real and forward-looking use cases. Next, we focus on the user perspective: we cover issues such as transparency and explainability, emphasizing the importance of user-centred design to foster effective human-AI teaming. The operator must always play a key role in the interaction – maximum automation is not always optimal. Automation levels must be ‘tuned’ to the desired interaction structure,” explains Lanzi.
This is closely tied to organizational structure: AI systems aren’t plug-and-play. They must be integrated within a working environment, which means adapting procedures, workflows, and training – essentially, the entire work system around the AI.
“We also address the ethical impacts of AI systems,” Lanzi adds. “Depending on how the system is designed or used, discriminatory biases may arise. For example, some translation tools automatically assigned gendered pronouns to professions based on biased datasets – doctors being translated as male and nurses as female – simply because the algorithm was trained on data reflecting those societal assumptions.”
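As a toy illustration of how such bias emerges (all counts below are invented for the example, not drawn from any real translation system), a model that simply learns the most frequent pronoun per profession will faithfully reproduce whatever skew exists in its training data:

```python
# Toy illustration of dataset bias (all counts are invented for the example).
# A naive model that picks the most frequent pronoun seen with each
# profession in its training data will simply reproduce the skew.
from collections import Counter

# Imbalanced "training data": (profession, observed pronoun) pairs
training_examples = (
    [("doctor", "he")] * 90 + [("doctor", "she")] * 10 +
    [("nurse", "she")] * 85 + [("nurse", "he")] * 15
)

counts = {}
for profession, pronoun in training_examples:
    counts.setdefault(profession, Counter())[pronoun] += 1

def predict_pronoun(profession):
    # Returns the majority-class pronoun: a faithful mirror of the bias.
    return counts[profession].most_common(1)[0][0]

print(predict_pronoun("doctor"))  # -> "he"
print(predict_pronoun("nurse"))   # -> "she"
```

The point the example makes is the one Lanzi raises: the algorithm is not malicious, it is merely consistent with the data it was given.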
The course concludes with an overview of the regulatory framework, both cross-sectoral and domain-specific.
“In aviation, for instance, we refer not only to the AI Act but also to the guidelines laid out in EASA’s AI roadmap,” explains Lanzi, “including the distribution of responsibilities across developers, adopting organizations, and end users”.
A practical and tailored course
“One of the most engaging aspects of our courses is the hands-on approach,” says Lanzi. “We involve participants in conceptual design exercises, encouraging them to look at a technology from multiple angles: first from a purely technical point of view – which algorithms or datasets to use – then from a usage and impact perspective. It’s fascinating to see how the proposed solutions shift completely depending on the perspective.
“This surprises many participants, as it’s not how they usually work: the tech specialist focuses on technical feasibility, while the manager considers business integration but may never question whether the dataset is complete. In this sense, our course provides a full 360° view”.
The course is designed for everyone, regardless of their role in the company.
“But it can be shortened or customized for more specific needs,” Lanzi points out. “We also organize deep-dive focus sessions for different modules. For instance, we have a more technical version of the introductory course, where participants develop algorithms themselves, while other modules focus on organizational or legal aspects”.
Aviation and beyond
While Deep Blue has over a decade of research and consultancy experience in aviation, in recent years it has expanded into the manufacturing sector – one of the most active domains in the digital transition and AI adoption.
“We’ve tackled legal and ethical challenges in AI integration – like liability in the event of accidents, and the need for AI to be explainable, ethical, and responsible,” Lanzi explains. “In the XMANAI project, for example, our legal and Human Factors experts worked together to build a risk assessment and mitigation framework for AI in industry”.
It was within this context that Deep Blue’s AI Literacy courses were developed – even before the AI Act made them mandatory.
“We’ve been working for years to offer robust, responsible training aligned with European directives,” Lanzi concludes. “Companies can’t afford to be unprepared – and our courses provide the tools needed to navigate the AI landscape from multiple perspectives: technical, human, regulatory, and legal”.