Breathing Life into Artificial Intelligence and Next Generation Autonomous Aerospace Systems

For many years, artificial intelligence (AI) experts have worked on machine learning (ML) models and on adapting those models so their outputs make sense to humans. A computer that makes decisions can seem intelligent, even intuitive, under certain circumstances; a closer look under the hood, however, reveals how unintelligent AI really is. People have long both romanticized and feared the prospect of machines becoming intelligent enough to overtake humans, yet it might seem we are not even close to this much-prophesied next-generation autonomy. Or are we?

When it comes to image recognition in today’s AI world, a human is required to initially train the underlying system to recognize an object. This is achieved through tagging countless images and recording what the object in each image really is. On detection of the same or a similar object, the AI algorithm can then look up comparable images to see if there is an exact or close match. If there is, it recognizes that object. The problem, however, is that when something changes enough, AI fails to recognize the object for what it truly is. A clear example involves items with complicated geometry, such as human hands. For hands, there is no universal collection of lines or shapes that AI can use for identification; AI must combine many shapes in many configurations to identify hands with a high degree of confidence. It is an interesting mathematical problem for AI scientists. The human brain, however, overcomes it with basic logic.

The K1000ULE has a 16-foot wingspan, measures 9.8 feet in length and has a maximum takeoff weight of 42.5 lbs. (Image: Kraus Hamdani Aerospace)

With enough training, however, AI can handle the basics, using underlying graphics processing unit (GPU) technology to mathematically smash through databases of images and produce an image match with some degree of certainty. We as humans interpret this as intelligence, while the truth is that it is not; it is brute-force pattern matching, and only part of the AI picture.
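To make that brute-force matching concrete, here is a minimal, hypothetical sketch in Python. The feature vectors, labels, and threshold are invented for illustration and are not drawn from any real recognition system: a gallery of human-tagged image features is scanned for the closest match to a new image, and the match counts only if it clears a similarity threshold.

```python
import numpy as np

# Hypothetical gallery: feature vectors extracted from human-tagged training
# images, paired with the labels the human recorded ("hand", "vehicle", ...).
gallery_features = np.random.rand(10_000, 512)                       # 10,000 tagged images
gallery_labels = np.array(["hand", "vehicle", "building"] * 3_334)[:10_000]

def recognize(query_features: np.ndarray, threshold: float = 0.85) -> str:
    """Brute-force match: compare the query against every stored image."""
    # Cosine similarity between the query and every gallery vector at once --
    # exactly the kind of bulk arithmetic a GPU accelerates.
    q = query_features / np.linalg.norm(query_features)
    g = gallery_features / np.linalg.norm(gallery_features, axis=1, keepdims=True)
    similarities = g @ q
    best = int(np.argmax(similarities))
    # If nothing in the gallery is close enough, the system simply fails to
    # recognize the object -- there is no basic logic to fall back on.
    if similarities[best] < threshold:
        return "unknown"
    return str(gallery_labels[best])

print(recognize(np.random.rand(512)))
```

A real pipeline would use embeddings from a trained neural network rather than random numbers, but the core operation is the same: massively parallel similarity arithmetic over a database of human-labeled examples.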

Combining AI with other human traits, including speech and comprehension of written language, runs into problems similar to the image-recognition problem described above. Patterns upon patterns of human-comprehensible linguistic logic are trained into the system. This is why platforms such as those from OpenAI work: each is a pattern-matching machine that breaks our input down into computable chunks and derives a sequential response to the initial query, producing answers in which the human perceives logic.
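The same point can be illustrated for language. The toy sketch below (a deliberately simplistic model, not a description of how any OpenAI system is built) breaks text into computable chunks, counts which token tends to follow which, and then derives a sequential response one token at a time.

```python
import random
from collections import defaultdict

# Tiny illustrative corpus; a real system trains on vast amounts of text.
corpus = "the drone extends the network the network covers the troops the troops move"

# 1. Break the input down into computable chunks (word tokens).
tokens = corpus.split()

# 2. Learn the patterns: which token follows which, and how often.
transitions = defaultdict(list)
for current, nxt in zip(tokens, tokens[1:]):
    transitions[current].append(nxt)

# 3. Derive a sequential response by repeatedly sampling those patterns.
def respond(prompt_word: str, length: int = 8) -> str:
    word, output = prompt_word, [prompt_word]
    for _ in range(length):
        if word not in transitions:
            break
        word = random.choice(transitions[word])
        output.append(word)
    return " ".join(output)

print(respond("the"))
```

Large language models replace these word counts with billions of learned parameters, but the logic a human perceives in the output still emerges from statistical patterns in the training text.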

The K1000ULE is capable of autonomously following ground troops on the move. (Image: Kraus Hamdani Aerospace)

We can, however, train these systems in any way we want, and they can mislearn, or we can inadvertently encode the human biases that exist when the human brain is pattern matching. Not so long ago, for example, Microsoft unveiled “Tay” — a Twitter bot that the company described as an experiment in conversational understanding. It took less than 24 hours for Twitter users to corrupt the untrained and innocent AI chatbot. The more humans interacted with Tay, the “smarter” it became. However, the conversations didn’t stay innocent for long.

Within 24 hours of Tay launching, people started tweeting misogynistic and racist remarks at the bot. Tay learned and started repeating these sentiments back to users, proving the adage “garbage in, garbage out.” It had no idea; in short, the AI logic failed, corrupted by the biases introduced through the training data it was provided.

Stefan Kraus and Fatema Hamdani of Kraus Hamdani Aerospace have envisioned a future where open-source autonomy enables machines through human collaboration and software development. A future where those machines, in turn, become intelligent enough to develop an understanding of “self.” This is the turning point where AI fantasy becomes reality.

In short, the network understands its own capabilities and limitations, and operates within those confines without a human training the system on “known knowns.” The key is to change our human perspective on AI and understand that the intelligence lies not at the component level, such as imagery and language, but in the network itself. By bridging the divide between basic pattern-matching systems and a greater system programmed with an understanding of its own limitations, the network gains the ability to understand what it can and cannot do.
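One way to picture a network that knows what it can and cannot do is a capability envelope that every tasking request is checked against before it is accepted. The sketch below is purely illustrative; the field names and limit values are assumptions made for the example, not the Kraus Hamdani software interface.

```python
from dataclasses import dataclass

@dataclass
class CapabilityEnvelope:
    """Illustrative self-model of one aircraft's limits (values are invented)."""
    max_endurance_hr: float = 30.0
    max_altitude_ft: float = 20_000.0
    max_range_nm: float = 500.0
    sensors: tuple = ("EO/IR", "comms-relay")

@dataclass
class Task:
    duration_hr: float
    altitude_ft: float
    range_nm: float
    required_sensor: str

def within_limits(envelope: CapabilityEnvelope, task: Task) -> bool:
    """Accept a task only if it fits inside the platform's own envelope."""
    return (task.duration_hr <= envelope.max_endurance_hr
            and task.altitude_ft <= envelope.max_altitude_ft
            and task.range_nm <= envelope.max_range_nm
            and task.required_sensor in envelope.sensors)

task = Task(duration_hr=12, altitude_ft=15_000, range_nm=200, required_sensor="EO/IR")
print(within_limits(CapabilityEnvelope(), task))   # True -> the network takes the task
```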

With every system connected to that network having an AI sublayer for basic pattern matching, and communicating those patterns back to the network, systems have the potential to receive that “match” and, further, to have an effect on the real world. As an example, should a system identify an intruder through pattern matching within a particular scenario, another system can then be dispatched autonomously to neutralize that threat using its own AI. Every aspect of such an operation is based on some form of GPU-enabled pattern matching.
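At the network level, that flow (an edge match reported upward, then another asset dispatched in response) could be sketched roughly as follows. The class names, confidence threshold, and coordinates are hypothetical and exist only to show the shape of the logic.

```python
import math
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    lat: float
    lon: float
    sensors: tuple
    available: bool = True

@dataclass
class Detection:
    label: str          # the pattern match reported by an edge AI sublayer
    confidence: float
    lat: float
    lon: float

def dispatch(detection: Detection, fleet: list, needed_sensor: str):
    """Pick the nearest available asset carrying the sensor needed to respond."""
    candidates = [a for a in fleet if a.available and needed_sensor in a.sensors]
    if detection.confidence < 0.7 or not candidates:
        return None   # not confident enough, or nothing suitable to send
    return min(candidates,
               key=lambda a: math.hypot(a.lat - detection.lat, a.lon - detection.lon))

fleet = [Asset("K1", 33.00, -114.20, ("EO/IR",)),
         Asset("K2", 33.30, -114.40, ("EW",))]
intruder = Detection("intruder", 0.92, 33.29, -114.41)
print(dispatch(intruder, fleet, needed_sensor="EW"))   # K2 is sent to respond
```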

What the operator sees when looking at the Kraus Hamdani Aerospace ground control station screen. (Image: Kraus Hamdani Aerospace)

The system that Kraus Hamdani Aerospace is developing is similar to Skynet from the iconic Terminator movies. The difference, however, is that this system is no longer science fiction. Kraus Hamdani’s long-endurance, fully autonomous, 100-percent electric aircraft, the K1000ULE, has the capability to team with other aircraft and systems within its network and to provision autonomous, self-healing networks. This hyper-enables the humans interacting with or tasking the systems to communicate effectively without providing any input to the system.

During the Project Convergence 2021 event, the Kraus Hamdani team demonstrated the ability of the K1000ULE to successfully act as the communication extension and provide the radio bridging needed across disparate radios on different land and air assets (air-to-air and air-to-land). (Image: Kraus Hamdani Aerospace)

The Kraus Hamdani network learns as it operates, continually improving itself and repositioning to ensure the best possible network coverage for participating people and systems alike. It is an organic system that is aware of its own capabilities and limitations and that simultaneously positions the assets in the network based on that knowledge and on the task or problem it is trying to solve. Even as the dynamic environment changes, the network learns and adapts at a speed the human operator could not mathematically match.
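A greatly simplified sketch of the continual repositioning idea: treat network users as points on a map and iteratively nudge each relay aircraft toward the centroid of the users it currently serves. This is a toy optimization loop with invented numbers, not the onboard algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
users = rng.uniform(0, 100, size=(40, 2))    # ground users needing coverage (km grid)
relays = rng.uniform(0, 100, size=(3, 2))    # aircraft acting as comms relays

for _ in range(20):                          # the network keeps re-optimizing as it operates
    # Each user attaches to its nearest relay ...
    dists = np.linalg.norm(users[:, None, :] - relays[None, :, :], axis=2)
    assignment = dists.argmin(axis=1)
    # ... and each relay repositions toward the centroid of the users it serves.
    for k in range(len(relays)):
        served = users[assignment == k]
        if len(served):
            relays[k] = served.mean(axis=0)

final = np.linalg.norm(users[:, None, :] - relays[None, :, :], axis=2).min(axis=1)
print(f"Worst user-to-relay distance after repositioning: {final.max():.1f} km")
```

In practice the objective would include link budgets, terrain, airspace, and energy state rather than straight-line distance, but the pattern of sensing, re-planning, and moving is the same.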

The compute power of a collection of GPU-based systems enables the Kraus Hamdani network to solve problems fully autonomously across billions of variables. The network achieves outcomes autonomously by comprehending sensory input from its payloads and understanding its own capabilities and limitations. While this is glorified pattern matching at the edge, the network can extract logic from those patterns for a greater effect.

As an example, should an adversarial radar system be detected by an electronic-support sensor onboard a K1000ULE, the network can ingest that sensory input and decide to further qualify the radar system by cueing an additional sensor. The K1000ULE is powered by lithium-ion batteries and photovoltaic cells, with a range of 600 to 1,000 miles at a cruising speed of 40 kts and an operating ceiling of 20,000 feet.

By autonomously dispatching other systems with network-selected, relevant sensors to the perceived threat, multi-sensor input is obtained to confirm the threat with far greater confidence. The network can then take further steps to act on that threat, including denying the radar its capabilities through electronic-warfare tactics. This is a chain of events unfolding across an ever-changing landscape, driven by a network with human-like, real-time comprehension of threats and the environment.
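The value of cueing a second sensor can be illustrated with a simple, idealized Bayesian update: if each sensor independently reports the emitter with some likelihood ratio, the fused confidence is substantially higher than either sensor alone. The prior and likelihood ratios below are assumed values chosen only for the illustration.

```python
def fuse(prior: float, *likelihood_ratios: float) -> float:
    """Combine independent sensor evidence in odds form (idealized model)."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.30               # initial belief before any detection is reported
es_lr, eo_lr = 8.0, 12.0   # assumed likelihood ratios for the ES and cued EO/IR sensors
print(f"ES only:    {fuse(prior, es_lr):.2f}")         # ~0.77
print(f"ES + EO/IR: {fuse(prior, es_lr, eo_lr):.2f}")  # ~0.98
```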

The K1000ULE’s solar-powered autonomous flight and networking capabilities were previously proven during the U.S. Army’s 2021 Project Convergence technology demonstrations at Yuma Proving Ground, Arizona. During the demonstration, the K1000ULE flew for 30 hours continuously, and the network it created was used by a soldier on the front line of an air-assault scenario to request a Hellfire missile strike from a nearby MQ-1C Gray Eagle drone.

Think IBM Deep Blue versus Garry Kasparov in a game of chess. Now imagine the team at Kraus Hamdani expanding on that concept to include the real world and the problems faced in real-world threat environments such as war zones or disaster-stricken areas. We require services and need to fulfill complex and sometimes dangerous missions. Our AI-enabled network holds the promise of solving these highly complex problems in ways that humans, constrained by the limitations of the human brain, could not.

This article was written by Fatema Hamdani, Co-Founder and CEO, and Stefan Kraus, Co-Founder and CTO, Kraus Hamdani Aerospace. For more information, visit here.