Tuning Up AI’s ‘Understanding’ to Make Safer ADAS, AVs
Kognic’s advanced interpretation of sensor data helps artificial intelligence and machine learning recognize the human thing to do.
In December 2023, Kognic, the Gothenburg, Sweden-based developer of a software platform to analyze and optimize the massively complex datasets behind ADAS and automated-driving systems, was in Dearborn, Michigan, to accept the Tech.AD USA award for Sensor Perception Solution of the Year. The company doesn’t make sensors, but one might say it makes sense of the data that comes from sensors.
Kognic, established in 2018, is well-known in the ADAS/AV software sector for its work to help developers extract better performance from, and enhance the robustness of, the safety-critical “ground-truth” information gleaned from petabytes upon petabytes of sensor-fusion datasets. Kognic CEO and co-founder Daniel Langkilde espoused a path for improving artificial intelligence-reliant systems based on “programming with data instead of programming with code.”
There’s a broad term that probably is unfamiliar to most people not immersed in software or artificial-intelligence development: AI alignment. It’s the part of AI safety research that attempts to ensure AI systems are aligned with human intent and values. Langkilde asserted in a Forbes article in fall 2023 that “In its capacity to power self-driving cars, AI has not lived up to consumer expectations. The problem is not one of intent. The problem is there is no single way to drive.”
SAE Media spoke with Langkilde about what AI alignment really means and what will be required from software to improve the safety and performance of ADAS and high-level driving automation.
What exactly is AI alignment and is it a new thing?
Langkilde: It’s a rather new term. It’s a new concept for most people. In the AI community, I guess people interchangeably use ‘AI safety’ and ‘AI alignment.’ There are a few different interpretations, I suppose, because it’s an emerging field, but typically AI alignment is about ensuring that the behavior of an AI system is consistent with either the preferences or goals of humans.
So as these systems become more capable, it becomes more opaque what is in fact the ‘policy’ by which the system operates, and that increases the importance of being able to probe that and ensure that it actually does what you intended it to do. I’m an engineer. I’d love to think the world is well-behaved and easily quantified and so forth. You realize that it’s not. It’s ambiguous and it’s subject to a lot of interpretation.
[Driving] still is a very subjective task. There are many types of negotiations between drivers and judgment calls about the viability of a path, the intention of other objects and so forth. So I think under ideal circumstances, self-driving vehicles are actually already here. I mean, Waymo works really well.
As ‘outsiders,’ if we look at what you’re trying to do now with AI alignment, is it teaching machine learning how to learn?
Langkilde: It needs to understand human preference. I guess I prefer to be very precise here. Typically, with today’s machine learning, you put together a dataset, you select a type of neural network or something, and you train that on the entire dataset. It takes days or weeks. It uses huge amounts of GPU resources.
For most self-driving-car companies, and certainly almost all ADAS products, scenario interpretation is typically not based on machine learning, or at least not neural networks. That’s because the tech stack is typically divided into three pieces. You have first a perception system that you train to understand the world around you. That is pretty much 100% machine learning today, because that’s where modern deep learning turns out to be very powerful – to understand the camera images and lidar point clouds and radar reflections, deep learning works really well.
So you put together a large dataset, you label it very carefully, and then you train a machine learning model using supervised learning. That’s step number one. Step number two is that you try to predict where everything will be going; your perception system gives you a snapshot of the world, and then you try to immediately make a prediction of where everything is going to go. This has to happen very, very fast because you don’t have a lot of time – you have 10 milliseconds or something to make a prediction. That’s also a task that machine learning is actually pretty well-suited for: predicting where all the trajectories are going.
The third step is the planning. You then need to figure out, based on your goals, how to navigate the scene as an agent – what is the preferred trajectory for yourself? That’s the part that, as of right now, is actually fairly rule-based, or at least more traditional in its design.
As of right now, I’m fairly sure an end-to-end trained neural network that does perception, prediction and planning as one big chunk wouldn’t be good enough. And it appears that the leading robotaxi companies agree with that. But it’s possible that such a solution will overtake other options very quickly.
Either way, the designer of the system needs to carefully consider what sort of data they base their product on, because it doesn’t really matter if it’s end-to-end trained or if it’s the three P’s: perception, prediction, planning.
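To make that three-P decomposition concrete, here is a minimal Python sketch of how such a stack could be organized. It is illustrative only: the class names, function signatures, and the constant-velocity prediction placeholder are assumptions made for this article, not Kognic’s platform or any production perception-prediction-planning system.

```python
# Illustrative sketch of the "three P's" Langkilde describes:
# perception -> prediction -> planning. All names here are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    object_id: int
    position: Tuple[float, float]   # (x, y) in meters, ego-relative
    velocity: Tuple[float, float]   # (vx, vy) in m/s

@dataclass
class PredictedTrajectory:
    object_id: int
    future_positions: List[Tuple[float, float]]  # over the planning horizon

def perceive(camera_frame, lidar_points, radar_returns) -> List[DetectedObject]:
    """Step 1: perception. In practice a deep-learning model trained with
    supervised learning on carefully labeled sensor-fusion data."""
    raise NotImplementedError("stand-in for a trained neural network")

def predict(objects: List[DetectedObject],
            horizon_s: float = 3.0, steps: int = 10) -> List[PredictedTrajectory]:
    """Step 2: prediction. Must run within a few milliseconds; often also learned.
    Here a naive constant-velocity rollout stands in for a learned model."""
    dt = horizon_s / steps
    return [
        PredictedTrajectory(
            obj.object_id,
            [(obj.position[0] + obj.velocity[0] * dt * k,
              obj.position[1] + obj.velocity[1] * dt * k)
             for k in range(1, steps + 1)],
        )
        for obj in objects
    ]

def plan(trajectories: List[PredictedTrajectory]) -> List[Tuple[float, float]]:
    """Step 3: planning. Today largely rule-based: choose an ego trajectory
    that satisfies explicit safety and comfort constraints."""
    raise NotImplementedError("stand-in for a rule-based planner")
```

Whether a team keeps these stages separate or trains them end-to-end, the point Langkilde makes still holds: the behavior of every stage is ultimately shaped by the data it was built on.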
The 2023 Cruise robotaxi incident in San Francisco – in which a pedestrian struck by another vehicle was subsequently dragged by a Cruise robotaxi – seems like an example of a lack of AI alignment, because it appears the vehicle’s intelligence did not understand how to react to that situation.
Langkilde: It’s fair to assume that that is not something they had ever really seen in their training and testing data. I’m pretty sure that was what you would call an edge case that they were not prepared to handle.
There’s nothing magical going on inside a machine-learning system. It’s a set of parameters that are fine-tuned based on a dataset to the best of the computer’s ability to replicate known things. If you can’t crisply define what your expected behavior is, you will suffer from unwanted events.
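Langkilde’s point about parameters fit to a dataset can be shown with a toy example. The snippet below is a hypothetical illustration unrelated to any real driving stack: it fits a two-parameter linear model by gradient descent, which reproduces the training region well but offers no guarantee far outside it – the essence of the edge-case problem.

```python
# Toy illustration: a learned model is just parameters fit to a dataset,
# so it can only reproduce patterns present in that data.
import random

def train_linear_model(samples, steps=5000, lr=0.01):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in samples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in samples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Training data covers only x in [0, 1]; the fitted parameters replicate that
# region, but nothing guarantees sensible behavior on an unseen "edge case"
# such as x = 10, far outside the training distribution.
data = [(x / 100, 3.0 * (x / 100) + 0.5 + random.gauss(0, 0.05)) for x in range(100)]
w, b = train_linear_model(data)
print(f"learned w={w:.2f}, b={b:.2f}; prediction at x=10: {w * 10 + b:.2f}")
```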
When we started Kognic six years ago and talked to the German OEMs, they basically described what is now becoming the big problem for the robotaxi companies: they have tried these kinds of technologies for many years, but building a working safety case has always been very difficult.
Does Kognic have competitors?
Langkilde: So, it depends a little bit. It’s a big world, so it’s hard to definitively claim you’re unique, I guess, because there’s always someone who’s intersecting a little bit here and there. The thing that we believe makes us stand out in the global market is, first of all, the combination of safety-critical applications, mobility and shaping datasets. We are the only company that is exclusively focused on what we call ‘embodied AI’ and the tools required to shape such datasets.
We focus on products that have a physical manifestation and that require sensor fusion, so camera, lidar, radar, with the intention of navigating the world. So if it’s a niche, we are the global leader for sure – and possibly we’re even the only one, actually. It makes it easier to be the leader, I guess.
Who are your customers?
Langkilde: Full-line customers we can publicly talk about: we are the global software platform of choice for Bosch, which obviously is a major tier one in the automotive community. So all of the [SAE] L2, L3 and L4 systems development that Bosch is engaged in is based on our dataset-management platform. Another example is Qualcomm, which actually has become a rather aggressive player in the ADAS game.
In the background of our Bosch deal [are OEMs such as] Ford and Volkswagen, who are major customers of the Bosch-developed perception systems. With Qualcomm, it’s BMW. With Volvo Cars, we are a direct supplier or vendor or partner or whatever they prefer we call it. We also have Continental and Kodiak. So we work with a mixture – it can either be an OEM, a tier one, or a sensor maker or an L4 player.
Where do you see the larger picture of AV development in 36 months? Is it going to be in a better place?
Langkilde: If we talk about the automation of driving as a whole – as opposed to only robotaxis – starting from the top, Waymo will carry on Waymo-ing and they will be just fine. They will gradually increase their operational design domain, including LA [Los Angeles] and maybe a few more cities. The challenge Waymo has is lowering the cost of expanding the operational design domain to a point where it’s actually profitable to drive someplace. Right now, it’s massively unprofitable.
Then there’s Aurora and Kodiak. I think both will successfully do [commercial] deliveries in the next 36 months. Cruise, I think, will survive because they are dependent [on GM], so they can’t fold. They are betting it will improve GM’s access to great ADAS, which it’s probably doing.
I think Ford is going to emerge as a much stronger ADAS player than previously – because of its Latitude organization and the bet they are making to vertically integrate a lot of things.
I also believe Mercedes is doing really well.
When it comes to consumer experience, I think, first of all, penetration of L2 ADAS is like 4.5% of new-vehicle sales. So for the most part, the industry will just increase penetration and hope the take rate is enough to fund further development.