From the experience and expertise of Deep Blue in aviation comes HAIKAI, a company dedicated to developing Artificial Intelligence solutions for complex and safety-critical domains. Domain knowledge, AI technical expertise, and certification methodologies—this is how HAIKAI aims to break into the market.
Artificial Intelligence: A technological bubble?
Everyone is looking for it, everyone wants it. But how many actually use it? Or perhaps a more interesting question would be: who is using it, for what purpose, and with what benefits?
We are talking about Artificial Intelligence (AI)—what else? A highly popular topic these days. Siri, Alexa & co. have brought it into our homes, and in some industries (pharmaceutical research, finance, logistics, energy), it has already carved out its space. However, in complex and safety-critical domains like aviation, the adoption of AI tools must overcome several “obstacles”: building AI expertise, inadequate IT infrastructure, lack of transparency in the most sophisticated algorithms, and above all, the absence of a regulatory framework for their certification, with all the legal responsibility implications that come with it.
“In aviation, as in many other sectors, AI has triggered a real ‘gold rush’, with inflated expectations driven by everyday tools like ChatGPT, but unrealistic in light of several considerations: the maturity level of the technology, governance complexities, and the actual economic or social benefits of its introduction in different sectors,” says Simone Pozzi, CEO of Deep Blue and an expert in Human-AI Teaming. “In fact, there is still a lot of confusion and uncertainty regarding the real applications of this technology in aviation”.
HAIKAI, where AI and Aviation meet
Based on these reflections, Deep Blue has embarked on a new project: HAIKAI, a company specialized in AI solutions mainly for aviation and air traffic management.
“We want to go beyond the current euphoria and analyze with clarity what can be truly useful and applicable in safety-critical contexts like aviation,” continues Pozzi, Deep Blue’s representative in HAIKAI. “In an environment of high technological inflation, we try to bring clarity and guide our partners through the ‘noise’ of expectations, focusing on the actual benefits that AI can bring to a complex sector like this. Before expectations fade, it is necessary to concretely analyze the potential and consequences of these solutions”.
HAIKAI offers its clients AI-based software solutions for safety management, predictive telemetry, and computer vision. Its added value over competitors: it combines deep knowledge of the aviation sector with advanced technical expertise in AI algorithm development.
“We have both AI hardware expertise and 20 years of experience in the aviation sector,” emphasises Daniele Baranzini, partner of HAIKAI, statistician, and expert in advanced machine learning. “We don’t just apply a generic AI model to aviation, but we develop solutions specifically designed for this sector, combining advanced technology and specific expertise. It is precisely our domain knowledge that allows us to design effective solutions to meet the needs of airlines, airports, and air traffic management operators”.
The AI world is polarized between universities and research centers on one side and large industries on the other, with giants like OpenAI or DeepSeek. In between, there are small players. The result is a highly fragmented environment, where breaking through is difficult, especially when trying to enter a closed and traditionally conservative sector like aviation.
“From the experience gained over the years, we know that the AI community and the aviation community speak two different languages and do not understand what one can ask of the other,” admits Pozzi. “There is a need for a complex translation effort, which does not always succeed, and without a real understanding of the sector’s dynamics, achieving concrete results in AI solution development is difficult”.
HAIKAI’s ‘holistic’ approach
Knowing the needs of a sector is not enough to develop an effective technology. A 360-degree approach is necessary.
One example is FOD.Vision, one of the services offered by HAIKAI: advanced software based on AI algorithms, designed to automate the detection and removal of Foreign Object Debris (FOD) from airport surfaces and maintenance hangars.
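The core detection idea, spotting anomalies against an otherwise near-uniform surface, can be toy-sketched as follows. The synthetic image, the planted object, and the fixed deviation threshold are illustrative assumptions only; a production system like FOD.Vision would rely on trained computer-vision models, not a hand-set threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "tarmac" image: near-uniform grey levels with one small,
# brighter foreign object planted at a known spot (purely illustrative).
surface = rng.normal(100.0, 2.0, size=(64, 64))
surface[40:43, 10:13] = 180.0

# Flag pixels that deviate strongly from the typical surface level.
deviation = np.abs(surface - np.median(surface))
mask = deviation > 20.0  # background noise stays far below this threshold

ys, xs = np.nonzero(mask)
print(f"{mask.sum()} suspect pixels, centred near row {ys.mean():.0f}, col {xs.mean():.0f}")
```

Even this toy version shows why the surrounding workflow matters: the tool only flags candidate locations, and an operator still decides what to do with them.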
“We know that this computer vision tool works well and can improve this type of airport operation,” explains Pozzi. “But its effectiveness depends on how it is used, meaning its interaction with operators. This implies a preliminary effort in design, development, and user-centered implementation, as well as ensuring proper training for those who will use the tool.”
This is not a secondary aspect – on the contrary. “The AI Act (the European regulation on Artificial Intelligence) explicitly states that AI system operators must have a minimum level of knowledge to manage this technology safely and responsibly,” adds Baranzini. “Properly training operators (and also revising organizations and company procedures) further reduces the risk associated with AI adoption. HAIKAI, aware of the importance of this issue, organizes AI Training courses precisely to ensure a proper understanding of how AI systems work”.
Returning to FOD.Vision, the software will soon be tested in a major Italian airport, while a leading aerospace company has entrusted HAIKAI with safety management services, particularly software for risk analysis and predictive safety management. “Today, safety-related databases are cumbersome to consult, whereas our software offers a more spontaneous and immediate interaction: you can query it just as you would query ChatGPT,” explains Pozzi. “Above all, it is a private model. It is completely disconnected from the Internet, eliminating any risk of your data ending up on external servers”.
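A minimal sketch of the “private model” idea: everything below runs in-process with no network access, so queries and reports never leave the machine. The sample reports and the bag-of-words relevance scoring are illustrative stand-ins, not HAIKAI’s actual software.

```python
from collections import Counter
import math

# A tiny local store of safety reports (invented examples).
reports = {
    "R-001": "runway incursion during low visibility taxi operations",
    "R-002": "foreign object debris found near hangar apron",
    "R-003": "altitude deviation after autopilot mode confusion",
}

def tokenize(text):
    return text.lower().split()

def score(query, doc):
    # Word-overlap relevance, normalised by document length.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum(min(q[w], d[w]) for w in q)
    return overlap / math.sqrt(len(tokenize(doc)))

def ask(query):
    # Rank reports by relevance; a stand-in for a private language model
    # answering over an internal database, with no external calls.
    ranked = sorted(reports, key=lambda rid: score(query, reports[rid]), reverse=True)
    return ranked[0], reports[ranked[0]]

rid, text = ask("debris on the apron")
print(rid, "->", text)
```

A real deployment would replace the scoring function with a locally hosted model, but the privacy property is the same: the data path never crosses the machine boundary.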
Predictive Telemetry
The most innovative service offered by HAIKAI is predictive telemetry, based on algorithms that operate in a time window ahead of the one the AI model observes. “Today, virtually all AI trains its algorithms on historical data and predicts a target variable from that past data,” clarifies Baranzini. “Predictive telemetry, the true frontier of AI innovation, instead trains the algorithm on future projections: it predicts predictions, anticipating the very predictive profiles, the variables that determine a forecast.”
While standard telemetry works in real time, predictive telemetry goes beyond it: the system is far more anticipatory, providing useful responses even before the need arises. In fields like aerospace, anticipating events even by a few minutes can make the difference between avoiding a risk and a potential disaster. “Think of a pilot, and the ability to predict a spike in mental or physical workload one minute in advance,” adds Baranzini.
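The two-stage idea described above, first projecting the predictive profiles forward and then training the target model on those projections rather than on raw history, might be sketched like this. The synthetic signal, the window sizes, and the linear models are assumptions for illustration, not HAIKAI’s implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic telemetry: a slow oscillation plus noise stands in for a
# workload indicator; entirely illustrative.
t = np.arange(1000)
signal = np.sin(t / 30.0) + 0.1 * rng.standard_normal(t.size)

WINDOW, HORIZON = 20, 5  # look-back window; steps ahead to anticipate

# Build (past window -> future values) training pairs.
X, y_feat, y_target = [], [], []
for i in range(WINDOW, t.size - HORIZON):
    X.append(signal[i - WINDOW:i])
    y_feat.append(signal[i + HORIZON - 1])      # future value of the feature
    y_target.append(signal[i + HORIZON] > 0.8)  # "overload" flag just after
X, y_feat, y_target = np.array(X), np.array(y_feat), np.array(y_target)

# Stage 1: forecast the predictive profile (the feature's future value).
feature_forecaster = LinearRegression().fit(X, y_feat)
projected = feature_forecaster.predict(X).reshape(-1, 1)

# Stage 2: train the target model on the projected features, not raw
# history, so its output refers to a window ahead of real time.
overload_model = LinearRegression().fit(projected, y_target.astype(float))
risk = overload_model.predict(projected[-1:])
print(f"anticipated overload risk {HORIZON} steps ahead: {risk[0]:.2f}")
```

The design point is simply that stage 2 never sees raw past data, only stage 1’s forecasts, which is one way to read “predicting predictions”.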
Great, but… Can we trust it?
Trustworthiness and Certification
One of the biggest barriers to adopting probabilistic algorithms – the most useful but also the most “uncertain” – in aviation is establishing certainty about their reliability. “It’s natural for there to be skepticism about accepting a probabilistic AI system in a safety-critical sector like aviation, where every error can have serious consequences,” emphasizes Baranzini. “Here, we’re not talking about the success of a commercial shoe sale, but about the possibility of an aviation incident. We know that zero risk does not exist, but algorithms must minimize it, and above all, we need to certify this robustness”.
“Typically, an AI system is trained on a dataset and is very good at ‘reasoning’ based on what it knows,” explains Pozzi. “The problem is that it doesn’t know how to handle exceptions. We ensure that our AI systems can ‘see’ and handle exceptional cases. For example, we explain why they return a certain type of output: we justify the algorithm’s responses with references”. This is the issue of trustworthiness: an AI that is robust, safe, transparent, fair, and explainable. “We strive to reassure those purchasing our AI software that it has been developed according to rigorous criteria so that it can manage various contingencies, and we demonstrate how it handles them. This way, the user can decide whether to rely on it or not,” continues Pozzi.
The topic of trustworthiness is closely linked to certification. “Certification means demonstrating to a third-party body, such as regulatory authorities, that the product produces reliable results, meaning it is safe,” explains Pozzi. “How is this done? Partly by measuring the outputs to show that the product’s performance is safe, and partly by proving that the development process was correct, that the initial dataset was appropriate, that tests were conducted to identify exceptions, etc. In other words, it must be demonstrated that the training process was of high quality”.
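In highly simplified form, the output-measurement side of such a demonstration could look like the following: measure performance against ground truth on held-out data, and flag out-of-distribution inputs as exceptions to escalate rather than answer. All data and thresholds below are invented for illustration, not a real certification criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Inputs seen during training define what the system "knows"; anything
# outside that range is treated as an exception, not silently answered.
train_inputs = rng.normal(0.0, 1.0, size=500)
lo, hi = train_inputs.min(), train_inputs.max()

def in_distribution(x):
    return lo <= x <= hi

# Measuring outputs against ground truth on a held-out set, as one piece
# of the performance evidence a certifier would ask for (toy numbers).
predictions = np.array([0.91, 0.88, 0.95, 0.40])
ground_truth = np.array([1, 1, 1, 0])
accuracy = float(np.mean((predictions > 0.5) == ground_truth))

print(f"held-out accuracy: {accuracy:.2f}")
print(f"input 7.5 in distribution: {bool(in_distribution(7.5))}")
```

The second half mirrors the “seeing exceptions” point: a system that can say “this input is outside what I was trained on” gives the user the evidence needed to decide whether to rely on it.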
“At present, no AI system is certifiable in aviation because there are no established guidelines,” stresses Baranzini. “At HAIKAI, we are making a real effort to develop methodologies to answer the question: Is this system certifiable within my industry? We are combining this with a risk analysis study linked to the introduction of a new technology”.
In doing so, HAIKAI is staying ahead of Europe’s regulatory timeline, where AI certification is still a long way off in all sectors (for aviation, it is expected by 2026). “The United States follows a Silicon Valley-driven approach, believing that too many obligations and regulations can hinder innovation,” explains Pozzi. “Europe, on the other hand, is more cautious and has introduced the AI Act, which establishes that AI regulation must be proportional to risk: for high-risk systems, it states that they cannot be used without specific conditions and leaves it up to individual sectors to define their own regulations.” As a result, industries are developing internal methods to demonstrate that their AI systems are safe, while waiting for regulatory authorities to define certification guidelines. HAIKAI is also working to suggest a path forward for the aviation sector. Ultimately, overcoming these barriers – in design, certification, privacy management, and legal responsibility – is essential for AI to gain a real foothold in aviation and other complex sectors.