Data Drives Driverless Truck Launch
Smart diagnostics, advanced validation help support the reliability metrics required to gain confidence that autonomous trucks are ready for the road.
Transforming raw data into high-quality structured data is a critical path to properly fueling machine-learning (ML) models and deploying artificial-intelligence (AI) applications across autonomous fleets. Companies are working to overcome data challenges to ensure their ML algorithms can produce the AI required to achieve widespread SAE Level 4 and 5 operations.
“When our trucks drive on the road, they’re collecting terabytes upon terabytes of data, and we need to get that up into the cloud and into the hands of our engineers, ultimately,” said Brandon Moak, co-founder and CTO of autonomous technology developer Embark, whose recently launched prototype SAE Level 4 tractor is shown above. The startup uses “active learning” techniques to identify the most relevant detections and provide the most useful insights into critical edge cases.
“You can think of active learning as a way for us to understand the ways in which our machine-learning models are failing,” Moak explained. “We can actually sample our data using this technology to build high-quality datasets that are lower in volume but higher in quality to get more performance out of our systems.”
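Moak’s description maps onto a common active-learning pattern: score logged data by how uncertain the model is about it, then keep only the most informative samples for labeling and retraining. The sketch below is a minimal illustration of that idea under assumed interfaces, not Embark’s pipeline; the frame objects, the `predict_proba` model method and the labeling budget are all hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability vector; higher means less certain."""
    return -sum(p * math.log(p + 1e-12) for p in probs)

def select_for_labeling(frames, model, budget=1000):
    """Rank logged frames by model uncertainty and keep only the top `budget`.

    `frames` is any iterable of logged sensor frames, and `model.predict_proba(frame)`
    is assumed to return a list of per-object class-probability vectors -- both are
    stand-ins for whatever the real logging and inference interfaces look like.
    """
    scored = []
    for frame in frames:
        per_object = model.predict_proba(frame)
        score = max((entropy(p) for p in per_object), default=0.0)
        scored.append((score, frame))
    scored.sort(key=lambda pair: pair[0], reverse=True)  # most uncertain first
    return [frame for _, frame in scored[:budget]]
```

The result is the kind of dataset Moak describes: lower in volume, but concentrated on the cases where the model is currently weakest.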
The level of preprocessing required to make sure the raw data is useful for machine learning is a key challenge, said Tom Tasky, director of Intelligent Mobility at FEV North America. The supplier has a patented preprocessing and analytics solution that handles up to 40 TB of data per vehicle per day in an L3 Pilot project taking place in Europe. The amount of time and effort involved in this part of the process can be underestimated, he said.
“Once you start developing and looking at the data, you see it might be poor quality and how much additional software effort is required,” Tasky said. “To really understand the sensors, the quality output, any limitations in certain environmental conditions, things like that really need to be factored in to make sure you have time to account for it prior to running some expensive tests.”
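The preprocessing effort Tasky describes often begins with unglamorous quality gates that run before any analytics. The sketch below shows one plausible shape for such a gate, assuming lidar-style sweep records with a timestamp, return count and maximum range; the field names and thresholds are illustrative, not FEV’s pipeline.

```python
from dataclasses import dataclass

@dataclass
class LidarRecord:
    timestamp_s: float   # capture time in seconds
    num_returns: int     # points in this sweep
    max_range_m: float   # farthest valid return

def quality_gate(records, min_returns=20_000, min_range_m=60.0, max_gap_s=0.2):
    """Split raw sweeps into usable data and flagged segments.

    Thresholds are placeholders: too few returns or a short maximum range may
    indicate rain, fog or a dirty sensor window, while a timestamp gap suggests
    a dropped frame.
    """
    usable, flagged = [], []
    prev_t = None
    for rec in records:
        issues = []
        if rec.num_returns < min_returns:
            issues.append("sparse sweep")
        if rec.max_range_m < min_range_m:
            issues.append("short range (weather or blockage?)")
        if prev_t is not None and rec.timestamp_s - prev_t > max_gap_s:
            issues.append("timestamp gap")
        prev_t = rec.timestamp_s
        (flagged if issues else usable).append((rec, issues))
    return usable, flagged
```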
Field data analytics are extremely valuable for the development and production of optimized components, as well as for identifying design weaknesses, performing status analysis and enabling predictive maintenance. “Especially if you look at ADAS features, these are new components being developed and you don’t necessarily have the same reliability data in different applications being introduced,” Tasky explained. “Discovering that information is extremely valuable to [determine] trends and the life cycle of these new components.”
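For new components without an established reliability history, even a simple trend check on a logged health metric can feed predictive maintenance. The snippet below is a generic sketch of that idea; the notion of an operating-hours-versus-metric log is an assumption for illustration.

```python
def degradation_trend(samples):
    """Least-squares slope of a component health metric over operating hours.

    `samples` is a list of (hours, metric) pairs -- for example a sensor's
    self-reported noise floor collected from the field. A persistently positive
    slope can trigger maintenance before the component actually fails.
    """
    n = len(samples)
    if n < 2:
        return 0.0
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    den = sum((x - mean_x) ** 2 for x, _ in samples)
    return num / den if den else 0.0
```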
The digital twin also is a useful tool for identifying potential failures and root-causing issues. A more complex example, according to Tasky, combines onboard monitoring with offboard failure analysis and a digital twin.
“You enable this through the use of a gateway that has the connectivity aspects to communicate to the digital twin in the cloud,” he said. “There’s a lot involved with setting up this infrastructure, which we help our customers with today. But the value of this is identifying failures well in advance during development or even field data in fleets.”
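The gateway role Tasky describes is essentially a bridge: collect onboard signals, package them, and push them to a cloud twin that mirrors the vehicle’s state for offboard failure analysis. The sketch below shows that flow in the abstract; the publish callable, topic name and payload fields are assumptions, not FEV’s actual interface.

```python
import json
import time

class TelemetryGateway:
    """Minimal onboard gateway that forwards signals to a cloud digital twin.

    `publish` is any callable taking (topic, payload_bytes) -- for example a thin
    wrapper around an MQTT or HTTPS client. Topic and field names are made up
    for illustration.
    """
    def __init__(self, vehicle_id, publish, batch_size=50):
        self.vehicle_id = vehicle_id
        self.publish = publish
        self.batch_size = batch_size
        self._buffer = []

    def record(self, signal_name, value):
        """Buffer one signal sample and flush when the batch is full."""
        self._buffer.append({"t": time.time(), "signal": signal_name, "value": value})
        if len(self._buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send all buffered samples to the cloud twin as one payload."""
        if not self._buffer:
            return
        payload = json.dumps({"vehicle_id": self.vehicle_id,
                              "samples": self._buffer}).encode("utf-8")
        self.publish(f"twin/{self.vehicle_id}/telemetry", payload)
        self._buffer = []
```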
Smart diagnostics
As automated-vehicle (AV) developers began combining more advanced sensors, such as lidars, infrared cameras and other L4-specific sensors, for redundancy and higher integrity, diagnostic capabilities initially lagged, according to Ananda Pandy, technical specialist for ADAS and autonomy at ZF.
“The focus was on improving the computing ability and hardware development for the ‘virtual driver’ and how to scale that development,” Pandy explained. “The diagnostic capabilities of the vehicle actuation systems were still at the same level as how it was developed for the core ADAS functions and were dependent on the safety driver in the vehicle during these development phases.”
As AV development enters the “shakeout” phase, where the integrated vehicle platform accumulates significantly more mileage and infrastructure setup begins, the focus shifts more to creating the reliability metrics necessary to build confidence prior to rollout.
“Diagnostics play a major role in supporting these reliabilities, and it’s not just the number of miles driven without interventions or the number of trips being completed,” Pandy said. “It’s imperative to have the predictive diagnostics and not have any latent failures in the system in order to make the call for the driverless launch.”
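Pandy’s point is that fleet reliability is more than one headline number. The sketch below aggregates a few of the metrics he mentions from an assumed trip log; the record fields and the latent-fault flag are placeholders for whatever a real fleet database would contain.

```python
def reliability_summary(trips):
    """Aggregate basic reliability metrics from a list of trip records.

    Each trip is assumed to be a dict with `miles`, `interventions`,
    `completed` and `latent_faults` keys -- hypothetical names used only
    for this illustration.
    """
    miles = sum(t["miles"] for t in trips)
    interventions = sum(t["interventions"] for t in trips)
    completed = sum(1 for t in trips if t["completed"])
    latent = sum(t["latent_faults"] for t in trips)
    return {
        "miles_per_intervention": miles / interventions if interventions else float("inf"),
        "trip_completion_rate": completed / len(trips) if trips else 0.0,
        "latent_faults_observed": latent,  # should trend to zero ahead of driverless launch
    }
```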
Smart diagnostic features should include a “self test” at the beginning of the journey to check for any pre-existing faults, Pandy said. A passing test can be a condition required to enter autonomous configuration. Self-test can be done either by simulating inputs internally within the fail-operational steering system, for example, or can be executed by the virtual driver before requesting autonomous configuration.
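In code, that gating logic can be as simple as refusing the mode change unless every actuation system reports a clean self-test. The sketch below assumes actuator objects exposing a `name` and a `self_test()` method that returns any pre-existing fault codes; the interface is hypothetical, not ZF’s.

```python
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"
    AUTONOMOUS = "autonomous"

def request_autonomous_mode(actuators):
    """Run each actuation system's self-test and gate the mode change on the results.

    `actuators` is assumed to be a list of objects with `name` and `self_test()`,
    where `self_test()` returns a list of pre-existing fault codes (empty if
    healthy) -- e.g. a fail-operational steering system simulating its inputs
    internally before the journey starts.
    """
    faults = {}
    for actuator in actuators:
        codes = actuator.self_test()
        if codes:
            faults[actuator.name] = codes
    if faults:
        return Mode.MANUAL, faults  # stay in manual and report faults to the virtual driver
    return Mode.AUTONOMOUS, {}
```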
After a successful self-test, “dynamic diagnostics” can continuously check the validity of the inputs required to detect normal steering effort. Feedback can be provided to the virtual driver, and vehicle trajectory planning can be adjusted to handle any latent faults. “For example, by going to a safer speed before entering a tighter curve or ramp. This can help ensure safe maneuvering of the vehicle,” he said.
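One simple way a planner can act on that feedback is to lower its lateral-acceleration budget when a steering-related input is reported degraded, which caps speed ahead of a tight curve or ramp. The sketch uses the standard curve-speed relation v = sqrt(a_lat · r); the acceleration budgets are illustrative numbers, not ZF parameters.

```python
def plan_speed(nominal_speed_mps, curve_radius_m, steering_input_valid,
               nominal_lat_accel=2.5, degraded_lat_accel=1.5):
    """Cap the planned speed for an upcoming curve when a steering input is degraded.

    Uses v = sqrt(a_lat * r), with a lower lateral-acceleration budget if the
    dynamic diagnostics report an invalid or degraded steering-effort input.
    """
    a_lat = nominal_lat_accel if steering_input_valid else degraded_lat_accel
    curve_limit_mps = (a_lat * curve_radius_m) ** 0.5
    return min(nominal_speed_mps, curve_limit_mps)
```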
Another important feature is a “fail operational window” that can be different for different actuation systems. “A general best practice could be to report the validated time that is available for the virtual driver to bring the vehicle to a safe stop when fail operation is initiated by a particular actuation system,” Pandy noted. “This information can be used by other actuation systems, or it could be used for compliance purposes to ensure that the vehicle was indeed brought to a safe state within the fail-operational window that was provided.”
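On the consuming side, the virtual driver can compare the reported fail-operational window against its own estimate of the time needed to reach a safe stop. A minimal sketch of that check follows; the deceleration and margin values are placeholders, not validated figures.

```python
def can_stop_within_window(current_speed_mps, fail_op_window_s,
                           comfortable_decel_mps2=2.0, margin_s=2.0):
    """Check whether a safe stop fits inside the reported fail-operational window.

    Time to stop at constant deceleration is v / a; the margin covers reaction
    and actuation delays. Both numbers are illustrative assumptions.
    """
    time_to_stop_s = current_speed_mps / comfortable_decel_mps2
    return time_to_stop_s + margin_s <= fail_op_window_s
```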
Over-the-air troubleshooting during the early stages of development and integration work also is key. “A master diagnostic message that consolidates the status of a fail-operational system, including the fault codes, fail-operational time window, and performance measures, can help other subsystems to plan for safe actions as well as roadside inspections,” Pandy said. “This can be similar to the existing diagnostic message, but it is tailored for AVs such that it can be easily integrated with the V2V or V2I [vehicle-to-vehicle or vehicle-to-infrastructure] communications.”
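A consolidated message of the kind Pandy describes might look something like the data structure below. The field names and JSON encoding are assumptions made for this sketch; a real implementation would follow whatever V2V/V2I message set and serialization the fleet actually uses.

```python
from dataclasses import dataclass, field
from typing import List
import json

@dataclass
class MasterDiagnosticMessage:
    """Consolidated status of one fail-operational system, in the spirit of the
    master diagnostic message described in the article. All fields are illustrative."""
    system_name: str                      # e.g. "steering" or "braking"
    fail_operational_active: bool
    fail_op_window_s: float               # validated time available to reach a safe stop
    fault_codes: List[str] = field(default_factory=list)
    performance_measures: dict = field(default_factory=dict)

    def to_bytes(self) -> bytes:
        """Serialize for broadcast to other subsystems or roadside inspection units."""
        return json.dumps(self.__dict__).encode("utf-8")
```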
Release testing and homologation
A sequence of testing is necessary to achieve the software quality and robustness required for ADAS and AV systems: model-in-the-loop (MIL), software-in-the-loop (SIL), component, domain and vehicle hardware-in-the-loop (HIL), and finally real-vehicle testing.
“It’s about running a lot of tests and finding the critical edge cases that you need to validate the software that needs to be deployed,” said Jace Allen, director of ADAS/AD Engineering and Business Development for dSpace, Inc. “Integral to all of this is really trying to get a truck to certification or homologation.”
dSpace participates in standardization organizations, including ISO, and collaborates with companies such as TÜV to offer broad expertise from system requirements to release testing and homologation. The company also expanded its partnership with BTC Embedded Systems to offer a new web-based solution, SIMPHERA, that uses simulation to validate and homologate autonomous-driving systems.
Available for use in general customer projects starting in the second half of 2021, the SIMPHERA simulation and validation environment integrates the BTC ScenarioPlatform, which creates scenarios and generates and evaluates tests based on coverage. The high level of abstraction makes it possible to express thousands of test cases with just one abstract scenario, the companies claim.
BTC’s automated test generation functionality uses advanced technology such as model checking, AI and intelligent weakness detection, which allows test cases to be generated based on statistical methods and meaningful coverage metrics. Compared to random or “brute force” test generation approaches, this strategy “considerably” reduces the amount of test data and delivers clear metrics, says BTC, even with regard to future homologation criteria.
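The idea of expressing thousands of concrete test cases through one abstract scenario generally comes down to parameterization: the abstract scenario fixes the structure (say, a highway cut-in) while parameter ranges span the variations, and coverage is tracked over those parameters. The sketch below shows that pattern generically; it is not SIMPHERA’s or BTC’s API, and the scenario parameters are invented for illustration.

```python
import itertools

# One "abstract" cut-in scenario described only by parameter ranges.
# Parameter names and values are illustrative, not a real scenario format.
ABSTRACT_CUT_IN = {
    "ego_speed_mps": [20.0, 25.0, 30.0],
    "cut_in_gap_m": [5.0, 10.0, 20.0, 40.0],
    "cut_in_speed_delta_mps": [-5.0, 0.0, 5.0],
    "road_friction": [0.4, 0.7, 1.0],
}

def concrete_scenarios(abstract):
    """Expand an abstract scenario into every concrete parameter combination."""
    names = sorted(abstract)
    for values in itertools.product(*(abstract[n] for n in names)):
        yield dict(zip(names, values))

def parameter_coverage(executed_params, abstract):
    """Fraction of each parameter's values exercised by the executed test cases."""
    return {
        name: len({p[name] for p in executed_params}) / len(values)
        for name, values in abstract.items()
    }

if __name__ == "__main__":
    cases = list(concrete_scenarios(ABSTRACT_CUT_IN))
    print(f"{len(cases)} concrete test cases from one abstract scenario")
```

Smarter generators prune this grid with the kinds of statistical and weakness-detection methods BTC describes, so coverage targets are met with far fewer runs than brute force.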
“It’s the same methodology we’ve talked about for SIL and HIL – test asset reuse,” Allen explained. “I can define my sensors, my vehicles, my scenario, the interface to my SUTs [systems under test] and then run whatever simulations I want so that I can find those edge cases, so that I can evaluate my AV-system safety according to SOTIF [Safety of the Intended Functionality] and so forth.”