There is much talk about how Artificial Intelligence (AI) will replace humans not only in repetitive tasks, but also in more complex ones that require logic, flexible reasoning, and decision-making skills. Although the latter is not an entirely unrealistic scenario, research points in another direction, favouring a cooperative approach between humans and AI. How can this be achieved? The first guidelines are being set out in the aviation sector, and Deep Blue is among the first to put them in writing.
Collaborating with AI: even Kasparov knew how
In 1997, World Chess Champion Garry Kasparov was beaten by IBM’s supercomputer Deep Blue. Much more than an electronic calculator, Deep Blue was capable of evaluating 200 million positions per second and stored thousands of past games. Kasparov planned to outsmart the computer by surprising it, playing extravagantly, but the strategy failed, and in the final game the world champion resigned after only 19 moves. The defeat was broadcast worldwide. Kasparov did not take it well, but in the following years he began to cultivate an interest in Artificial Intelligence, going so far as to formulate the idea of advanced chess: a human and a computer playing against another human and another computer. The idea was that mixed teams would benefit from both the computer’s computational capabilities and human intuition.
According to Kasparov, who pioneered the idea, the alliance between humans and AI would be possible in all fields, from industry to medicine. Indeed, in both civilian and military applications, Human-AI Teaming, the collaboration between humans and AI, is today regarded as the ideal partnership: one or more people and one or more AI systems cooperate and coordinate to accomplish a given task. And the model works across the board, because full delegation has a cost: studies have shown that handing over even mundane, repetitive, and easily automated tasks entirely to AI diminishes attention, situational awareness, and human decision-making capabilities in general, with potentially disastrous consequences.
Human-AI teaming in aviation
“Human-AI Teaming is not just a simple interaction in which a person receives outputs from algorithms, but a truly dynamic and flexible collaboration,” says Matteo Cocchioni, Human Factors consultant at Deep Blue (yes, our name is a tribute to the supercomputer), who holds a Master’s degree in AI from the National Research Council (CNR). “Deep Blue has been working on several European projects addressing the integration of Artificial Intelligence, including its partnership with humans, especially in aviation.” Some of these projects have recently been completed; others are in their final phases. MAHALO and ARTIMATION aim to create digital assistants that support air traffic controllers in conflict resolution and air traffic management; HARVIS provides a digital second pilot to assist pilots’ decision-making in critical situations (we have discussed these projects here, here and here).
“Currently we are working on human-AI collaboration from a technical point of view, trying to define the ‘profile’ of this partnership,” Cocchioni states. “But in the future it will also be crucial to ‘train’ the human-AI pair to cooperate, starting with the basics: at Deep Blue we do a lot of training for managerial staff in the aviation sector, for example staff working in national regulatory agencies, on Artificial Intelligence and the changes it will introduce in the industry: changes in tasks, new opportunities and critical issues, new ethical and legal aspects.”
Humans and AI: a complex partnership
Achieving a functioning human-AI collaboration is not a trivial matter. Several problems risk compromising the efficiency of the partnership, such as a poor understanding of algorithms, the excessive effort required to interact with an AI system, reduced situational awareness, and the loss of manual skills in the medium to long term. These are some of the critical issues that must be resolved to fully benefit from teaming, that is, to combine the abilities of both members and achieve a performance that exceeds what either could deliver alone. To get there, research is focusing on several fronts: the creation of a new language for people and AI to communicate; the flexible assignment of tasks within the team; identifying who decides the handover between users and AI, and the best way to accomplish it; and training designed both for the team, to optimize its performance, and for the AI system, to calibrate its outputs to human expectations and increase trust within the pair.
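To make ideas like flexible task assignment and handover more concrete, here is a minimal, purely illustrative sketch in Python. Every name, class, and threshold in it is our own invention for the example; nothing is drawn from MAHALO, ARTIMATION, HARVIS, or any regulatory material.

```python
# Purely illustrative sketch: a toy policy for deciding who handles a task
# in a human-AI team. All names, classes, and thresholds are hypothetical,
# not taken from any of the projects mentioned in this article.

from dataclasses import dataclass
from enum import Enum


class Handler(Enum):
    AI = "ai"        # the AI resolves the task autonomously
    HUMAN = "human"  # the human operator keeps the task
    JOINT = "joint"  # the AI proposes, the human confirms or adjusts


@dataclass
class TaskContext:
    safety_critical: bool     # does the task directly affect safety?
    operator_workload: float  # 0.0 (idle) .. 1.0 (saturated)
    ai_confidence: float      # 0.0 .. 1.0, self-reported by the system


def assign_handler(ctx: TaskContext) -> Handler:
    """Decide who handles a task; the thresholds are invented for the example."""
    if ctx.safety_critical:
        # Safety-critical tasks stay with the human; at most the AI proposes.
        return Handler.JOINT if ctx.ai_confidence >= 0.9 else Handler.HUMAN
    if ctx.operator_workload > 0.8 and ctx.ai_confidence >= 0.7:
        # Offload routine work when the operator is close to saturation.
        return Handler.AI
    return Handler.JOINT


if __name__ == "__main__":
    routine = TaskContext(safety_critical=False, operator_workload=0.9,
                          ai_confidence=0.8)
    critical = TaskContext(safety_critical=True, operator_workload=0.5,
                           ai_confidence=0.95)
    print(assign_handler(routine))   # Handler.AI
    print(assign_handler(critical))  # Handler.JOINT
```

The point of the sketch is the shape of the question (who decides, and based on what) rather than the numbers, which in a real system would have to come from experiments and regulation.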
Trust in human-AI teaming requires transparency and an understanding of algorithms. These elements have always been considered indispensable for achieving full, effective cooperation between humans and Artificial Intelligence. If an algorithm is transparent, it is understandable and its behaviour predictable; finding ways to deliver this, without sacrificing too much of AI’s potential but also without overburdening the human operator with information, is one of the most active strands of research. However, new and unexpected indications are emerging in aviation.
Explainability? Not always
Among the various European projects on Artificial Intelligence and aviation, MAHALO has certainly provided some of the most interesting results. “In the development phase of the digital assistant’s algorithm, what we wanted to understand was how personalized, that is, how tailored to the operator, it should be: whether it should offer solutions compliant with those the controllers themselves would choose, or solutions that are non-compliant but transparent, so that controllers could understand its outputs,” Matteo Cocchioni states. “We found that for making decisions in a safety-critical context, transparency is not crucial. Controllers don’t have the time to understand how the algorithm ‘thinks’; they are more interested in whether its solution is safe and effective, or at least whether it suggests an action that can be partially modified and ‘adjusted’.”
“This is a counterintuitive but certainly interesting result, and not only on a technical and operational level. Until now, ethical and regulatory AI guidelines, however preliminary, have pointed to transparency as one of the fundamental elements: the operator must always be able to understand what the algorithm is doing. This assumption is partly contradicted by the results produced by MAHALO and other similar projects.” In critical contexts, in fact, the operator does not necessarily want to know how an algorithm works. This is not to say that trust in AI is unimportant, but rather that it needs to be ‘built’ at a different stage than the operational one. During the operational traffic management phase, for example, the pressure and workload, at least as this activity is organized today, leave neither the time nor the mental resources needed to analyse the solutions proposed by the algorithm.
MAHALO’s Guidelines
MAHALO researchers do not recommend abandoning transparency, but rather ‘modulating’ it according to the context (safety-critical or not): it is useful to let air traffic controllers choose how much information to see, which information, and when. This suggestion is put forward and argued for in the Guidelines developed by the project. “We are among the first to draft a proposal for Guidelines to improve AI applications in aviation and human-AI teaming in general,” says Stefano Bonelli, Research & Development Manager at Deep Blue, “and it is important to know that EASA, the European Union Aviation Safety Agency, which has long been active on the topic of AI in aviation (this is its roadmap for a human-centric approach to AI), is collecting this kind of input from the different European projects dealing with the topic to prepare its own Guidelines.”
Modulating the level of transparency according to context means personalizing the algorithm, and personalization is indeed another recommendation contained in the MAHALO Guidelines. Personalization means building an algorithm that ‘knows’ what is important to the controller and in which context. Accomplishing this, however, requires a huge amount of data, and it should not be forgotten that there is a point beyond which further customization would make the algorithm a copy of the controller, and therefore of little use to the partnership. Another point to consider is flexibility in the interaction: the controller can decide whether to accept or modify a suggestion, but also whether to reject it. “Knowing that he is in charge gives the operator confidence, especially if he bears the full burden of responsibility,” Bonelli explains. At this stage, we must wait and see what indications other European projects will deliver and, more importantly, what EASA’s liaison work will bring and what concluding Guidelines will result from it (see, as an example, the first set of guidelines released by the agency).
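To illustrate what ‘modulated’ transparency and a controller who can always accept, modify, or reject could look like in practice, here is a second minimal sketch, again with invented names, detail levels, and thresholds; nothing in it is taken from the MAHALO Guidelines or from any EASA document.

```python
# Illustrative sketch only: explanation detail varies with context, and the
# controller always has the final word on the advisory. All names, levels,
# and thresholds are hypothetical.

from dataclasses import dataclass
from enum import Enum


class Detail(Enum):
    NONE = 0     # bare advisory, no rationale shown
    SUMMARY = 1  # one-line rationale attached
    FULL = 2     # complete reasoning, e.g. for debriefing or training


class Response(Enum):
    ACCEPT = "accept"
    MODIFY = "modify"  # e.g. amend the proposed flight level before issuing
    REJECT = "reject"


@dataclass
class Advisory:
    action: str
    rationale: str


def explanation_level(safety_critical: bool, workload: float,
                      preferred: Detail) -> Detail:
    """Terse under pressure; as verbose as the controller wants otherwise."""
    if safety_critical or workload > 0.8:
        return Detail.NONE  # no time to read explanations in the moment
    return preferred


def present(adv: Advisory, level: Detail) -> str:
    if level is Detail.NONE:
        return adv.action
    return f"{adv.action} ({adv.rationale})"


if __name__ == "__main__":
    adv = Advisory(action="climb AZA123 to FL360",
                   rationale="resolves crossing conflict with DLH456 in 6 minutes")
    # Operational phase, high workload: the advisory is shown bare.
    print(present(adv, explanation_level(True, 0.9, Detail.FULL)))
    # Debriefing: the same advisory, now with its full rationale.
    print(present(adv, explanation_level(False, 0.3, Detail.FULL)))
    # Whatever is shown, the controller decides: accept, modify, or reject.
    controller_choice = Response.MODIFY
```

The design choice mirrors the two recommendations discussed above: the algorithm adapts how much it explains to the situation, while authority over the final action never leaves the human.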