Human–AI Teaming: defining a necessary partnership

The future of AI depends not only on the power of its algorithms, but also on our ability to build effective hybrid teams. Project after project, Deep Blue is working to shape an operational vision of Human–AI Teaming tailored to each context and type of activity.

 

Anyone working in the field of Artificial Intelligence (AI) knows that the evolution of this technology – in terms of both potential and applications – is closely tied to the effectiveness of its partnership with humans. A user-centred approach to technology design is essential for a successful human-AI collaboration, but it requires us to define in advance the nature and the requirements of that collaboration. The key question is: what is Human-AI Teaming, and what makes it work well?

The answer is not straightforward. “In human teams, a ‘successful’ group is one that shares common goals, follows established rules, processes and procedures, communicates effectively, and where each person trusts — and pays attention to — the physical and psychological condition of others. We cannot simply export this definition into the context of a partnership between humans and AI”, explains Simone Pozzi, CEO of Deep Blue. “There are design constraints that currently prevent us from replicating all the qualities of human collaboration, starting with the absence of emotional components and non-verbal communication on the part of the digital partner.”

Above all, the features of a hybrid team depend on the context — just as they do in a team of humans. Consider, for instance, the differences between a medical team and a group of factory workers on an assembly line. “When you put a team together, you choose the people and the type of interaction between them according to what is needed in that specific context and moment”, Pozzi continues.

In other words, there is no single, universal definition of Human-AI Teaming. “What kind of team do I need? This is the key question to ask, because it drives the entire phase of technology design,” adds Stefano Bonelli, Head of the Innovative Human Factors area at Deep Blue. “Otherwise, you risk ending up with a kitchen robot that is incredibly fast at chopping, when what you actually need is one that can chop with precision”.

 

Deep Blue’s effort to develop an operational definition of Human–AI Teaming

Thanks to its expertise and long-standing experience in Human Factors and human-centred design, Deep Blue is involved in several European projects — mainly in the aviation sector, but also in manufacturing — that focus on developing AI-based tools through a multidisciplinary approach. This spans from technological development to design, while also addressing the issue of social acceptance of new technological solutions. Among these projects are CODA, TRUSTY, DIALOG, TADA and HAIKU, all of which are working to define, in practical and operative terms, what Human-AI Teaming actually means — that is, what characteristics an AI-based assistant must have in order to work effectively as part of a team.

 

Does AI always need to be understandable?

The HAIKU project — which focuses on developing digital assistants to support pilots, air traffic controllers and airport operators — has offered valuable insights into a key issue: the explainability of algorithms. “HAIKU has made it clear that the features of an effective Human-AI partnership vary not only according to the application context, but also depending on the specific task and situation,” explains Vanessa Arrigoni, Human Factors consultant at Deep Blue. One of HAIKU’s case studies involved a digital assistant designed to support pilots in unexpected situations that may trigger a “freeze” reaction, delays in response time or incorrect commands, thereby increasing the risk of an accident. “In such cases, AI is truly useful if it can provide real-time support — detecting the critical event and guiding the pilot towards the most relevant information so they can quickly regain control of the situation”, says Arrigoni. Here, the explainability of the algorithm becomes secondary: what really matters is the system’s ability to act promptly, even if the pilot does not fully understand the AI’s reasoning process.

A very different case study within HAIKU focused on an “intelligent” assistant designed to support operators responsible for ensuring airport operational safety on a daily basis. “In this scenario, we are dealing with a system that continuously analyses data offline to identify risk areas that are not immediately visible”, Arrigoni continues. “In this case, it is essential that the AI’s data-processing methods are clear and understandable, so that operators can be properly guided in identifying possible solutions”.

The degree of “explainability” required from an algorithm — which must be modulated according to the specific application — is a central issue in the human-centred design of interfaces. This was the focus of the TRUSTY project, which developed a digital assistant for air traffic controllers working in remote towers. In a context where transparency and comprehensibility of AI are crucial, the research consortium adopted advanced information-visualisation techniques — including visual analytics, data-driven storytelling and immersive analytics — to make the digital assistant’s decision-making processes more accessible and interpretable for human operators.

 

Does one equal one? Not really

The efficiency of a human team sometimes requires flexibility within the roles — even just to respond to unexpected situations — in order to increase the group’s overall resilience and ability to react. However, things become more complicated in a hybrid partnership, where humans still want to remain “in charge”. “What clearly emerged across the various HAIKU case studies is that everyone — from pilots to airport operators — appreciates the technology, but still wants to retain a sense of control”, Arrigoni explains. For this reason, AI is still mostly preferred for routine tasks, supporting the more repetitive and tedious activities — such as locating a specific procedure in a manual. “This doesn’t mean they don’t want help in critical situations; in fact, they fully recognise the potential of AI to guide, speed up and improve decision-making. However, at this stage, they still struggle to accept the idea of AI acting autonomously, making decisions without supervision, because they feel the need to stay in control and have the final say”, Arrigoni adds. This highlights the need for careful co-design of AI tools upfront, in order to define roles, scope of action, and responsibilities.

 

I see you, I stay in sync

In a hybrid team, the effectiveness of collaboration can also be compromised by the absence of awareness: AI, in fact, has no understanding of the human partner’s physical or psychological state. Without this awareness, a digital assistant cannot adapt its behaviour, provide adequate support in critical moments, or prevent errors caused by stress, fatigue or disorientation. “If I am paralysed by fear or under stress, the people I work with will notice immediately — perhaps because I start sweating, turn pale or stop responding. But how can an AI system detect that?” Bonelli explains. “A hybrid team risks losing adaptability precisely because the machine lacks key information about the context or the humans it interacts with. But new approaches and technologies are beginning to fill this gap”.

The European projects CODA and DIALOG worked in this direction. In their effort to develop an advanced air-traffic management system in which control activities are dynamically shared between humans and machines, researchers used wearable, wireless and non-invasive neurophysiological sensors to monitor brain activity, heart rate and skin conductance. The collected data were then processed by algorithms to extract recognisable patterns of psycho-physical states. “The goal was to monitor — and even predict — levels of workload, attention, stress, fatigue and alertness in real time, so that the system could detect them and anticipate potential issues by adapting the team structure, meaning the allocation of tasks, in order to avoid undesirable or dangerous situations (such as a spike in fatigue) that could affect decision-making”, Bonelli adds.
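The adaptive loop described above — collapsing monitored signals into a workload estimate, then reallocating tasks when it spikes — can be illustrated with a minimal sketch. The signal names, weights, thresholds and task labels below are illustrative assumptions for exposition, not the actual CODA/DIALOG pipeline, which relies on trained models over real neurophysiological data.

```python
# Toy sketch of workload-driven task allocation in a hybrid team.
# All weights, thresholds and field names are hypothetical.
from dataclasses import dataclass


@dataclass
class OperatorState:
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens
    alpha_power: float       # relative EEG alpha-band power, 0..1


def estimate_workload(state: OperatorState) -> float:
    """Collapse the monitored signals into a single 0..1 workload index.
    A real system would use a trained classifier; this is a weighted sum."""
    hr = min(max((state.heart_rate - 60.0) / 60.0, 0.0), 1.0)
    sc = min(state.skin_conductance / 20.0, 1.0)
    eeg = 1.0 - state.alpha_power  # lower alpha power ~ higher mental load
    return 0.4 * hr + 0.3 * sc + 0.3 * eeg


def allocate_tasks(tasks: list, workload: float, threshold: float = 0.7):
    """Shift routine tasks to the AI partner when workload exceeds the
    threshold, keeping safety-critical tasks with the human operator."""
    if workload > threshold:
        human = [t for t in tasks if t.startswith("critical:")]
        ai = [t for t in tasks if not t.startswith("critical:")]
    else:
        human, ai = list(tasks), []
    return human, ai
```

Under this sketch, a stressed operator (high heart rate and skin conductance, low alpha power) crosses the threshold and the routine tasks migrate to the digital partner, while critical decisions stay with the human — mirroring the "final say" requirement the operators expressed.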

Another desirable aspect of a hybrid team is alignment: the AI should not be too far ahead or too far behind in its processing, but positioned alongside the operator — at most, one small step ahead. “Ensuring this alignment is not trivial. You need to ‘hold back’ the AI by limiting the input and processing of data; otherwise, the technology may become distracting, disruptive or even over-helpful”, explains the CEO of Deep Blue.

“While developing the digital assistant for air traffic controllers in HAIKU” Arrigoni continues, “we realised the value of limiting AI support to a narrow time window — specifically, to the first three aircraft in the landing and departure sequence. Showing the entire sequence would have been unnecessary at best, and confusing or even dangerous in some situations”.
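The windowing idea above — deliberately holding the AI back so it stays alongside the operator — reduces, in its simplest form, to truncating the assistant's output. The function below is a hypothetical illustration of that design choice; the window size of three and the call-sign format are taken as assumptions from the example in the text.

```python
# Hypothetical sketch of HAIKU's "narrow time window": the assistant
# only surfaces the first few aircraft in the landing/departure sequence.
def visible_advisories(sequence: list, window: int = 3) -> list:
    """Return only the leading slice of the sequence, holding back the
    rest to avoid distracting or overloading the controller."""
    return sequence[:window]
```

The point of the design is what the function refuses to show: everything beyond the window exists in the system but is withheld from the interface until it becomes relevant.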

 

The dark side of Human-AI Teaming

As desirable as a solid human–AI partnership may be, this collaboration is not without risks. One of them is deskilling — the gradual loss of expertise over time. “If the operator no longer has the chance to gain ‘active’ experience, to learn and consolidate skills through practice, they risk not fully developing them, or even losing them altogether”, Bonelli explains. “Some abilities can only be acquired and maintained through daily practice and direct interaction”.

Pilots know this well: to avoid “unlearning” how to land — now largely an automated task — they commit to performing manual landings at least once a month, in addition to periodic training sessions.

A second risk is complacency, which occurs when operators place too much trust in the AI system and let go of their critical judgment. “From our experience, this risk appears to be higher for certain individuals and specific roles. A pilot with many flight hours will usually retain a level of control”, Arrigoni notes. “But people with less experience, greater familiarity with AI, or roles focused on planning may rely more heavily on it. This can be both positive and negative: they will certainly be faster and more productive, but in some cases, they may rely too heavily on the system and fail to manage potential risks”.

“For those of us designing support systems, this is an issue that requires careful attention”, the CEO of Deep Blue highlights.

Closely linked to the risk of complacency is the risk of becoming disconnected from the decision-making process. “If an operator relies too heavily on the AI system’s decisions — accepting them even without fully understanding them, simply because they ‘work’ — they risk stepping out of the loop”, Bonelli explains. “The problem is that, in emergency situations, the system pulls back. And not because of technical limitations, but due to legal and ethical constraints: when responsibility is at stake, it must be the human who makes the final decision”.

At that point, the operator suddenly finds themselves required to act without a sufficiently clear understanding of how the situation developed, and without knowing exactly what to do. For this reason, the design of the handover phase — when tasks are transferred from the AI system back to the human operator — is another critical issue. The user must remain involved in the process at all times, at least to a level that allows them to respond consciously if and when necessary.

 

Knowing in order to understand and accept

Alongside the risk of deskilling, we should also ask a more constructive question: which human skills need to be developed to ensure an effective hybrid collaboration? “Very often, AI systems are introduced in a ‘catch-up’ mode: you are given a tool, but you realise you are not fully prepared to use it. You lack awareness of how it works, how it was trained, and what its limits are”, the CEO of Deep Blue notes.

Introducing AI is not enough: people must first be trained so they can truly work in synergy with these new tools. “This also helps overcome resistance to adopting new technology — the same applies to AI as to any other innovation — because it’s natural that if you don’t understand something, you don’t trust it or want it”, Pozzi concludes.
