Artificial Intelligence and Human Factors in Aviation

The advent of jet engines in the 1950s and of fly-by-wire control systems in the 1980s were two turning points in aviation history. A third could be the arrival of Artificial Intelligence (AI), an innovation that many expect to have an unprecedented impact on aviation, provided we understand how to integrate it efficiently and safely with the work of human operators (Human Factors).

 

A ROADMAP FOR THE APPLICATION OF ARTIFICIAL INTELLIGENCE IN AVIATION

The Artificial Intelligence Roadmap of the European Union Aviation Safety Agency (EASA) predicts that the first autonomous commercial air transport operations will take place in 2035. “Perhaps too optimistic a forecast,” says Matteo Cocchioni, a Human Factors expert and consultant at Deep Blue with a master’s degree in AI obtained at the Italian National Research Council. According to EASA’s plan, and based on industry feedback on the AI systems under development, autonomous flight will arrive gradually, following a well-defined roadmap that scales up one level at a time:

 

Level 1 (2022-2025): Artificial Intelligence assists human operators, improving their performance.

Level 2 (2025-2030): human-machine collaboration increases, but the machine still makes no decisions and takes no action.

Level 3 (2030-2035): machines become increasingly autonomous, capable of making decisions and taking action. This will lead to a fully autonomous scenario in which humans intervene only in the design and oversight of AI systems.

EUROCONTROL’s Fly AI Report (2020) presents some 20 Artificial Intelligence applications that are already available or under development (we have also talked about some of them here). Most are designed to optimise flight routes or to improve air traffic forecasts, weather forecasts and passenger transfers at airports. For now, these technologies merely support and assist operators, but their impact on operators’ work will grow as they are deployed, and it therefore deserves proper analysis. Indeed, many believe that the application of AI in aviation should follow a human-centred approach.

 

RISKS AND BENEFITS OF AUTOMATION AS SEEN BY HUMAN FACTORS EXPERTS

“For those who deal with Human Factors, the key issue is understanding what impact different levels of automation will have on the work of air traffic operators,” says Cocchioni. “For example, workload and operator stress will tend to decrease with automation, as long as we are able to create intelligent applications that are actually useful and usable. At the same time, we will be better able to perform increasingly complex tasks, thanks to the ability of algorithms to handle huge amounts of data simultaneously. This is certainly true in the first two scenarios, but it is not a given with full automation, because human ‘flexibility’ in handling unforeseen events will be lost.”

 

The lack of ‘creativity’ is precisely one of the risks associated with automation: algorithms are, and will always be, better at managing nominal situations, where operational and environmental conditions are as expected, but when unforeseen events occur, the analytical and decision-making capabilities of human intelligence remain irreplaceable for the time being. Another risk is the erosion of human skills due to reduced training and practice, a problem that will already affect emergency management in the Level 2 scenario. For a complete picture of the risks and benefits of automation in aviation, it is useful to read the document prepared by the SESAR 2020 Scientific Committee Automation Taskforce on behalf of the SESAR Joint Undertaking (the European public-private partnership managing the SESAR programme for the modernisation of airspace and air traffic control in Europe). The document also highlights critical issues related to how users (pilots, air traffic controllers, etc.) and society at large react to new technologies.

 

Regarding this last aspect, EASA believes that fully integrating Artificial Intelligence into aviation requires proceeding on several fronts: analysing its trustworthiness against the European ethics guidelines (for example with regard to transparency, privacy and data governance, non-discrimination and fairness, and social and environmental well-being); verifying the effectiveness of the algorithms, starting with checks on the incoming data; working on “transparent” Artificial Intelligence (Explainable AI), so that the choices it makes are as comprehensible as possible to humans; and minimising risks, for example by keeping a “person in charge” or by having another, independent AI agent supervise the AI itself.
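
To make that last idea concrete, here is a minimal sketch, in Python, of what an independent agent supervising an AI could look like. Every name, threshold and check in it is an illustrative assumption, not EASA’s actual design:

```python
# Minimal sketch of the "independent supervisor" idea: a second, independent
# monitor vets the primary AI's output before it reaches operations.
# All names, thresholds and checks are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class Advisory:
    action: str          # e.g. "descend to FL280"
    confidence: float    # primary model's self-reported confidence (0..1)
    explanation: str     # rationale shown to the operator (Explainable AI)

def supervise(advisory: Advisory, approved_actions: Set[str]) -> Optional[Advisory]:
    """Independent safety monitor: release the advisory only if it stays
    within a pre-approved operational envelope and is confident enough;
    otherwise hand the decision back to the 'person in charge'."""
    if advisory.action in approved_actions and advisory.confidence >= 0.9:
        return advisory
    return None  # None -> escalate to the human operator
```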

 

HUMAN-SCALE ARTIFICIAL INTELLIGENCE

On the topic of Explainable AI, i.e. Artificial Intelligence that explains its solutions to humans and how it arrived at them, Deep Blue is the coordinator of the Mahalo (Modern ATM via Human-Automation Learning Optimisation) project, which is working on digital assistants that help air traffic controllers identify and resolve conflicts. Simulations with controllers from LFV, the Air Navigation Services of Sweden, and from ANACNA (the Italian National Association of Air Navigation Assistants and Controllers) are in progress. By manipulating the transparency and conformance (correspondence to human behaviour) of the solutions proposed by the digital assistants, while listening to the operators’ opinions and recording qualitative and quantitative feedback such as workload and attention, the researchers are trying to understand how best to design and engineer these automated tools.
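
As a rough illustration of these two experimental “knobs”, the sketch below represents an advisory whose transparency (how much of the rationale is revealed) and conformance can be varied. Field names and levels are assumptions for illustration, not Mahalo’s actual interface:

```python
# Illustrative sketch: a conflict-resolution advisory whose transparency
# level controls how much of the rationale the controller sees.
from dataclasses import dataclass
from typing import List

@dataclass
class ConflictAdvisory:
    resolution: str        # e.g. "turn AZA123 right by 15 degrees"
    rationale: List[str]   # step-by-step explanation of the solution
    conformance: float     # 0..1 similarity to the controller's own strategy

def render(advisory: ConflictAdvisory, transparency: int) -> str:
    """transparency 0 = solution only; 1 = one-line why; 2 = full rationale."""
    n_steps = (0, 1, len(advisory.rationale))[transparency]
    shown = advisory.rationale[:n_steps]
    return advisory.resolution + ("" if not shown else " | why: " + "; ".join(shown))
```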

 

There is a lot of interest in developing Artificial Intelligence tools to support the work of air traffic controllers, which will become increasingly complicated, both because the number of flights will grow and because new types of aircraft, such as drones, will start operating. SafeOPS is one such project. It starts from the case study of the go-around, the manoeuvre in which an aircraft aborts its landing and climbs away just before touching down. This happens for various reasons: for example, weather conditions become unexpectedly unfavourable, or an obstacle suddenly appears on the runway. Deep Blue and the other consortium partners are working on a digital tool able to predict this aborted landing manoeuvre.

 

“The algorithm must give controllers the information they need at the right time and in a clear way. For example, giving a go-around warning far in advance is not useful, because the pilot still has plenty of time to correct the trajectory and land properly; moreover, the controller would not know what to do with that information. Conversely, a prediction given just before the start of the go-around would not add much to the current management of operations,” explains Carlo Abate, data analysis expert at Deep Blue. “In parallel with ‘building’ the algorithm for SafeOPS, we are working with air traffic controllers to identify the most critical situations, where the help of AI may really make a difference. For example, we know it is critical to predict accurately whether or not an aircraft will land, and whether or not it will reach where it needs to be in each flight phase within the planned timeframe. An algorithm providing this information would help controllers manage airspace safely and optimally, i.e. maximising the number of aircraft in flight.”
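
To make the idea of a “confident and timely” prediction concrete, here is a minimal sketch of a go-around predictor trained on synthetic data. The features, thresholds and warning window are illustrative assumptions; the article does not describe SafeOPS’s actual model or data sources:

```python
# Minimal sketch of a go-around predictor on synthetic data; the real
# SafeOPS model and its inputs are not described in this article.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical approach snapshots: [glideslope deviation (dots),
# airspeed deviation (kt), tailwind (kt), runway occupied (0/1)]
X = rng.normal(size=(500, 4))
X[:, 3] = rng.integers(0, 2, size=500)
# Toy labels: unstable approaches to an occupied runway go around more often
y = (np.abs(X[:, 0]) + 0.5 * np.abs(X[:, 1]) + 2 * X[:, 3] > 2.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

def go_around_warning(features, seconds_to_threshold,
                      p_min=0.7, window=(30, 120)):
    """Warn only when the prediction is both confident and timely:
    early enough for the controller to act, late enough to matter."""
    p = model.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    return p >= p_min and window[0] <= seconds_to_threshold <= window[1]
```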

 

Moving from the control tower into the cockpit, and focusing on user acceptance, the European project Harvis, funded by the Clean Sky 2 Joint Undertaking, has recently come to an end. Researchers in the consortium developed two digital assistants to support pilots in landing and in recalculating trajectories. “We were interested in understanding not only the impact of these tools on pilot performance,” explains Stefano Bonelli, Research & Development Manager at Deep Blue, a partner in the project, “but above all in answering these questions: do pilots trust the technology? How safe do they consider it? How does it change their work?” The introduction of digital assistants in cockpits is also intended to support single-pilot operations in the future. “The opinions of the pilots who took part in the Harvis simulations clearly show that digital assistants, although they cannot yet replace a second pilot, can provide important help, especially in supporting decision-making in complex situations or during emergencies.”

 

Other important considerations for the future emerge from the Harvis results. For example, the trajectory-recalculation simulations show the importance of a user-centred approach whose methods and techniques are adapted to interaction with Artificial Intelligence, which differs in several respects from interaction with ‘classical’ automation. One of the most important aspects that emerged concerns the management of the automation level: digital assistants, by collecting and monitoring numerous parameters, help reduce the pilot’s workload, but, paradoxically, relying too much on the technology can limit situational awareness in the cockpit. Striking the right balance is the key to success, especially in terms of safety, which is an absolute must. Turning to one of the crucial issues in today’s Artificial Intelligence debate, namely trust in technology, the pilots in the Harvis project claimed to ‘believe’ in digital assistants, yet the choices they made in the simulations tell a somewhat different story, with pilots not always following the AI’s suggestions. In this respect, the pilots made some interesting suggestions for increasing their trust in the system: more training, to understand how the algorithms ‘think’, and greater control over post-flight data. Research efforts should focus on these aspects so that AI’s full potential can be exploited in aviation.

 
