The role of Human Factors in artificial intelligence

Too little is said about Human Factors, which play a fundamental role in Artificial Intelligence (AI) research, implementation and application. Matteo Cocchioni, Daniele Ruscio and Stefano Bonelli of Deep Blue explain why Human Factors should be given more attention.

“Human Factors try to ‘open’ the black box to show what is inside.” 

THINKING LIKE A MACHINE: NEW DESIGNS FOR INTELLIGENT APPLICATION INTERFACES

In Artificial Intelligence, a very important issue is transparency. “To perform a task, machine learning algorithms are trained on a huge amount of data, in which they manage to detect patterns that neither humans nor conventional statistical techniques can see,” explains Matteo Cocchioni, a Human Factors consultant with a master’s degree in AI obtained at the Italian National Research Council (CNR). “The problem is that we don’t know how they do it. Or rather, we don’t know the parameters of the neural network that are created during training. To simplify, in neural networks there are nodes that receive input data, intermediate nodes that process it, and nodes that produce the result.”
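
To make Cocchioni’s three kinds of nodes concrete, here is a minimal sketch in Python; the layer sizes and the untrained random weights are purely illustrative and come from no system discussed here:

```python
import numpy as np

# A minimal feed-forward network: nodes that receive input data,
# intermediate (hidden) nodes that process it, and output nodes.
# Illustrative random weights stand in for the values a real network
# would learn during training.
rng = np.random.default_rng(seed=0)

n_inputs, n_hidden, n_outputs = 4, 8, 2
W1 = rng.normal(size=(n_inputs, n_hidden))   # input -> intermediate weights
W2 = rng.normal(size=(n_hidden, n_outputs))  # intermediate -> output weights

def forward(x: np.ndarray) -> np.ndarray:
    """Propagate one input vector through the network."""
    hidden = np.tanh(x @ W1)   # intermediate nodes process the input
    return hidden @ W2         # output nodes produce the result

y = forward(np.array([0.2, -1.0, 0.5, 0.3]))
print(y)  # the result is easy to compute, but W1 and W2 do not 'explain' it
```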

“To solve very complex tasks, neural networks with multiple layers of intermediate nodes may be required and, during training, these nodes search for the best possible constellation of parameters, adjusting their importance. Although we know the mathematical equations behind this process, we cannot make sense of the values these parameters end up with. In this sense, AI is like a black box, and there has long been talk of Explainable AI, which responds to the need to make the way an intelligent machine arrives at its result as comprehensible as possible to humans. If an operator, for example a pilot or an air traffic controller, does not understand why the machine suggests a certain solution, they might reject it even if it is the best one, and thus compromise safety.”

“Those who study Human Factors try to ‘open’ the black box to show what is inside,” explains Daniele Ruscio, a cognitive psychologist and expert in assessing Human Factors in human-machine interaction. “When we use a car, is it the fact that we really know what is going on in the combustion engine that reassures us that it will not explode the moment we turn the key? Of course not. We simply trust the technology because it has a long history, there are certification processes, we have prior knowledge about its reliability, it is widely used and there is a certain social perception about its use and danger. It might be the same with the relationship between people and Artificial Intelligence, but we are only at the beginning of this ‘partnership’.”
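
Cocchioni’s ‘constellation of parameters’ can be watched taking shape on a toy problem. The sketch below trains a tiny network by gradient descent; the task (XOR), layer sizes and learning rate are all illustrative and not taken from any system mentioned here. The final weights solve the task, yet reading them tells a human nothing about how:

```python
import numpy as np

# Toy training loop, illustrative only: gradient descent searches for a
# 'constellation of parameters' that solves XOR, a classic task that is
# impossible without intermediate nodes.
rng = np.random.default_rng(seed=1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> intermediate
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # intermediate -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # a different seed or learning rate may stall in a poor local minimum
for _ in range(20000):
    h = np.tanh(X @ W1 + b1)          # forward pass
    p = sigmoid(h @ W2 + b2)
    d_out = (p - y) * p * (1 - p)     # backward pass (squared-error gradient)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(p.ravel(), 2))  # close to [0, 1, 1, 0] if training succeeded
print(W1)  # perfectly well-defined numbers, individually meaningless to a human
```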

“In order to facilitate a ‘healthy’ and effective relationship, we can first of all start thinking about how to communicate the machine’s decisions, by studying new designs for the interfaces of intelligent applications that allow highly complex information to be visualised clearly. This matters especially when the interaction takes place in a critical context of ‘widespread’ social responsibility (where multiple actors with different roles are involved), or where there is no time for explanations and interactions (think of an air traffic controller who relies on AI to give instructions to the pilot during landing or take-off). Trust in AI must be built before use, by training operators; during use, by guiding them with dedicated visualisations towards an understanding of the machine’s ‘thinking’; and afterwards, by evaluating the effectiveness of the individual solutions proposed by the AI (and their explanations) in specific contexts of use.”
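
As a purely hypothetical illustration of pairing the ‘what’ with the ‘why’: none of the names or values below come from MAHALO or any real interface; the sketch only shows the idea of presenting a suggestion together with its confidence and a short rationale.

```python
from dataclasses import dataclass

# Hypothetical data model for showing an AI suggestion to an operator,
# invented for this sketch.
@dataclass
class Advisory:
    action: str          # what the AI suggests
    confidence: float    # 0..1, how sure the model is
    reasons: list[str]   # short, human-readable justifications

def render(adv: Advisory) -> str:
    """Format an advisory so the operator sees the 'why' with the 'what'."""
    lines = [f"SUGGESTED: {adv.action}  (confidence {adv.confidence:.0%})"]
    lines += [f"  because: {r}" for r in adv.reasons[:3]]  # keep it scannable
    return "\n".join(lines)

print(render(Advisory(
    action="Climb AZA123 to FL360",
    confidence=0.87,
    reasons=[
        "predicted loss of separation with DLH456 in ~6 minutes",
        "FL360 is conflict-free for the next 15 minutes",
    ],
)))
```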

HOW THE ROLE OF HUMAN FACTORS EXPERTS WILL CHANGE

Indeed, Human Factors experts have always been involved in the design of new tools and procedures, acting as a ‘bridge’ between technicians and users. They also intervene in change management, preparing operators whenever new tools or procedures are introduced; they evaluate existing ones with a view to optimising human performance; and they train operators in areas such as risk management.

“Everyone expects Human Factors experts to continue doing what they have always done, but with AI this is no longer possible, at least not in the way it was done in the past,” admits Stefano Bonelli, Research & Development Manager and Human Factors expert. “Let’s take change management: when AI tools are introduced, it is no longer conceivable to go to operators and explain that the machine will behave in a certain way, simply because no one knows in advance what kind of result it will provide.”

“After all, that is exactly what is asked of an intelligent machine: to find solutions that humans cannot reach, at least not as quickly. We can only, as Daniele said, work on the design of the new tools so that they give as much information as possible about why they suggest a certain solution. Nor will evaluation be the same. What we used to do in the past was push a tool to its limits, i.e. present it with a very complex situation and see if and how it coped. Now it is more difficult to find critical scenarios to run simulations and make evaluations: intelligent machines do not fail because they have been badly programmed, but because they may have been given data that does not cover all possible scenarios.”
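
One simple way to hunt for such uncovered scenarios, sketched below under invented assumptions (random stand-in data and a made-up five-number scenario encoding), is to measure how far a new input lies from everything the model was trained on; real systems use far more sophisticated out-of-distribution detection.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
# Stand-in 'training set': 1,000 past scenarios, each encoded as 5 numbers
# (the encoding is entirely made up for this sketch).
training_scenarios = rng.normal(size=(1000, 5))

def novelty(scenario: np.ndarray) -> float:
    """Distance to the nearest training example: large = unfamiliar."""
    return float(np.linalg.norm(training_scenarios - scenario, axis=1).min())

typical = np.zeros(5)        # resembles the training data
extreme = np.full(5, 6.0)    # far outside anything seen in training
print(f"typical: {novelty(typical):.2f}   extreme: {novelty(extreme):.2f}")
# A high score flags exactly the kind of 'critical scenario' worth simulating.
```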

“Data is another important issue,” concludes Ruscio. “We cannot leave algorithm development exclusively to programmers, because they might select only certain types of input data and thus introduce a bias into the system that undermines its effectiveness. That is why Human Factors experts should also take part in the research phase, and not only step in for the management, evaluation and ‘physical’ design of the tools.”
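
A toy version of the kind of check this implies: before training, audit how the input data is distributed. The labels, counts and threshold below are invented purely for illustration.

```python
from collections import Counter

# Invented labels: 10,000 training examples for a hypothetical
# landing-prediction model.
training_labels = (["nominal"] * 9200
                   + ["low_visibility"] * 700
                   + ["go_around"] * 100)

counts = Counter(training_labels)
total = sum(counts.values())
for label, n in counts.most_common():
    share = n / total
    flag = "  <-- under-represented?" if share < 0.05 else ""
    print(f"{label:15s} {n:6d}  ({share:.1%}){flag}")
```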

AI AND AVIATION, THE PROJECTS DEEP BLUE IS INVOLVED IN

Deep Blue carries out a wide range of activities in the areas of Artificial Intelligence and Aviation. Several projects are currently underway: MAHALO, ARTIMATION, SafeOps, HARVIS and XMANAI.

With MAHALO, the company is engaged both in research, i.e. in the development of intelligent algorithms for the detection and resolution of conflicts at high altitudes, and in the design of interfaces for AI tools, seeking the right balance between usability, comprehensibility and complexity of situations. 

The focus of ARTIMATION, instead, is air traffic management supported by machine learning algorithms. Here too, controllers struggle to rely on these intelligent systems, as the suggested decisions are often neither intuitive nor understandable. Work will therefore focus on the development of a transparent and interpretable AI model, to foster genuine collaboration between human operators and Artificial Intelligence. 

In SafeOps, AI is used in the airport environment to support tower controllers by predicting whether an approaching aircraft will run into problems and have to abort its landing. In HARVIS, AI supports pilots with an assistant that facilitates decision-making in complex or emergency situations. Finally, the aim of XMANAI is to support the industrial production process with an AI that integrates ethical aspects as well as performance.
