Metamoto Goes Deep on AV Simulation

Simulation based on actual vehicle physics and sensors continues to expand, replacing potentially dangerous miles of real-world testing.

In Metamoto Designer, clients create ego vehicles with different configurations, and sensor placement can be parameterized on a per-test basis. (Metamoto)

Autonomous-vehicle (AV) development companies frequently boast about the millions of miles their test vehicles travel in real-world testing. The claims reflect a long-held, more-is-better mentality when it comes to AV miles in the wild. But that was before a few high-profile accidents sent the AV industry into the so-called trough of disillusionment. “Two years ago, we spent a lot more time presenting the argument for simulation,” said Daniel Schambach, head of design and co-founder of Metamoto, based in Redwood City, Calif. “Now, people realize that we need simulation. There’s a lot more due diligence.”

Metamoto’s Analyzer is a visual debugger that allows customers to review tests, determining why tests passed or failed. (Metamoto)
Metamoto’s Designer allows customers to rapidly create visual scenarios that can be scheduled to run in the Director program. Tests can be run for both initial development and regression testing. (Metamoto)

The timing for the shift to simulation was perfect for Metamoto, a three-year-old startup with fewer than a dozen employees. The company is on a mission “to build a testing platform that’s scalable, physics-based and highly parameterizable.” Other simulation players in the space – Applied Intuition, Cognata, Tass Prescan, VTD and others – also provide simulation tools and services, although with varying degrees of high-fidelity graphics and use of physics to train and test AV algorithms.

A matter of physics

After Schambach loads a sample simulation, you might think it’s a sanitized version of Grand Theft Auto or another title created by a game-creation engine such as Unreal or Unity. But AV-simulation tools depict a vehicle, sensors and roadways not based solely on visual appeal, but according to actual physics. For example, a simulated wet road versus a dry one not only looks different but also has unique physical characteristics. “When a car drives over a puddle, the coefficient of friction of its wheels changes because it’s on a wet surface,” said Schambach.

Using Metamoto’s Designer tool, we set up our ego vehicle – the term used to designate the specific vehicle being tested (in simulation or the real world). It’s just like choosing a character in a video game and then customizing its costume and gear. The most common AV platforms, like the Chrysler Pacifica, Ford Fusion and Chevrolet Bolt, are readily available from Metamoto. About 40 more have been created and a new make/model can be added within a day. After the ego vehicle is selected, it can then be equipped with cameras, lidar, radar and other sensors.
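The configuration step described above – pick a platform, then bolt on sensors – can be pictured as a small data structure. Metamoto has not published its API at this level of detail, so the class names, fields, and values below are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Sensor:
    """A simulated sensor, matched to a real-world device spec."""
    kind: str            # "camera", "lidar" or "radar"
    model: str           # e.g. a specific 32-channel lidar product
    position_m: tuple    # (x, y, z) mounting point on the ego vehicle, meters
    yaw_deg: float = 0.0 # mounting orientation

@dataclass
class EgoVehicle:
    """The vehicle under test, chosen from a library of makes/models."""
    platform: str
    sensors: list = field(default_factory=list)

# Build a hypothetical Pacifica-based test rig.
ego = EgoVehicle(platform="Chrysler Pacifica")
ego.sensors.append(Sensor(kind="lidar", model="32-channel", position_m=(0.0, 0.0, 1.9)))
ego.sensors.append(Sensor(kind="camera", model="forward-rgb", position_m=(1.2, 0.0, 1.4)))
```

Because sensor placement is itself a parameter, the same `EgoVehicle` can be re-instantiated per test with the mounting points varied.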

Each sensor, depending on where it’s placed on the ego vehicle, generates data that’s fed into an AV company’s actual driving algorithm. A simulated sensor’s capabilities are matched to specific, real-world lidar equipment that exists in the marketplace. Made-to-order sensor specifications can be established by the client, typically an AV company. Metamoto’s customer base also includes sensor makers who can place their device on a range of vehicles and in an array of scenarios to see how it performs.

In simulation, the same scenario can be simultaneously run with countless variations for time of day, weather, and numerous other conditional variabilities. Each instance can be individually evaluated. (Metamoto)
The camera angle for test scenarios allows for viewing the scene in different ways, each with a video-game-like clarity. (Metamoto)
Daniel Schambach, head of design and co-founder of Metamoto. (Metamoto)

“You want your cameras to behave and record data the way they would in the real world,” said Schambach. However, for clients working on control and path planning – rather than perception – the ego vehicle can alternatively utilize object-level simulation. It’s a way to bypass the variability of sensors and their location and get right to the ground truth. Object-level simulation also runs faster.

A test of, say, a Waymo-like Pacifica with a customized set of sensors can be placed in a scene utilizing detailed high-definition maps from specific domains, like downtown San Francisco or the outskirts of Peoria. HD maps are sourced from vendors like Atlatech, Civil Maps, or Here. Then a scenario is created.

For example, the ego vehicle can be challenged to make a blind left turn during rush hour. The cars in the oncoming traffic can be set to white, red, blue or a combination of colors. Or the line of approaching vehicles can be entirely changed. Weather conditions, time of day, traffic patterns and nearly a thousand other parameters can be adjusted. “Think of simulation as a giant mixing board,” said Schambach.
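A scenario like the blind left turn above is, in effect, a bundle of named settings on that “mixing board.” The dictionary below is a hypothetical sketch of what such a bundle might contain – the keys and values are invented for illustration, not Metamoto’s actual schema:

```python
# Hypothetical scenario description: a blind left turn at rush hour.
scenario = {
    "map": "downtown_sf",                  # HD-map domain
    "maneuver": "unprotected_left_turn",
    "time_of_day": "17:30",
    "weather": "clear",
    "traffic_density": "heavy",
    "oncoming_vehicle_colors": ["white", "red", "blue"],
}

# Any knob can be changed independently to produce a new variant.
rainy_variant = {**scenario, "weather": "rain"}
```

With nearly a thousand adjustable parameters, the real value is that each knob can be turned without touching the others.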

Test, tweak, test again

Schambach hits “preview” on a test scenario set in Peoria. Before the time-consuming test is executed, he can scrub through the ego vehicle’s trip. “If we want more traffic on the road, we go back into Designer and add more vehicles,” he said. When the scenario matches the testing requirements, we watch an animation of the simulated ego AV trying to make the unprotected left.

In the real world, that would happen one time and then another – over and over again, perhaps many thousands of times, until engineers and test drivers feel like sufficient data has been collected. However, in simulation, the same left turn can simultaneously happen a thousand times, with each turn occurring at a slightly different time of day, using a mix of sensors, with various levels of traffic, under changing weather conditions or at different speeds. Pick your variables.

“You could run your test with a 32-channel lidar and then you could run the exact same test with 16 channels simultaneously,” Schambach said, explaining the concept of parallelization that makes simulation so powerful: multiple alternate realities co-exist.
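The “pick your variables” idea amounts to a Cartesian product over parameter lists: every combination becomes one simulated run, and all runs can execute in parallel. A minimal sketch, with invented parameter names:

```python
import itertools

# Hypothetical parameter sweep over one left-turn scenario.
times_of_day = ["dawn", "noon", "dusk", "night"]
weathers = ["clear", "rain", "fog"]
lidar_channels = [16, 32]

runs = [
    {"time_of_day": t, "weather": w, "lidar_channels": c}
    for t, w, c in itertools.product(times_of_day, weathers, lidar_channels)
]
print(len(runs))  # 4 * 3 * 2 = 24 variants of the same left turn
```

Three short lists already yield 24 variants; with hundreds of parameters, the combinatorics explain why the same turn can “happen a thousand times” at once.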

For the sake of processing efficiency, noise models might simulate the effect of rain rather than programming every droplet; the GPS signal can be degraded to approximate the effect of traveling past tall buildings. Parameters can be set to the degree to which the sensor stack can understand the ground truth – a kind of built-in failure rate. The possibilities are nearly as limitless and messy as reality itself. “We can inject noise or other failure models into any part of an AV,” said Schambach.

Analyze this

After a set of tests is run, the imagery and data are downloaded. One can look through the multiple tests visually or examine the raw sensor and camera data. In our test demo, the ego vehicle sideswiped a vehicle exiting a roundabout – an obvious candidate situation for in-depth study. Using Metamoto’s Analyzer tool, we could establish a wide range of thresholds for success or failure.

For example, an ego vehicle coming within a meter of a pedestrian might be acceptable in some domains, like in traffic crawling through an urban environment, but not in a school zone with free-flowing traffic. After we replayed video from an onboard camera, it appeared that our ego vehicle paused at the roundabout but not long enough. “Knowing where things fail sooner rather than later is ideal,” said Schambach. “In some cases, it helps to work backward from there.”
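Domain-dependent pass/fail thresholds of that kind reduce to a lookup plus a comparison. The sketch below is illustrative only – the threshold values and names are assumptions, not Metamoto Analyzer’s actual criteria:

```python
# Hypothetical per-domain minimum pedestrian clearance, in meters.
MIN_PEDESTRIAN_CLEARANCE_M = {
    "urban_crawl": 1.0,   # slow, dense traffic: 1 m may be acceptable
    "school_zone": 3.0,   # free-flowing school zone: demand more margin
}

def run_passes(domain, min_observed_distance_m):
    """A run passes if the ego never came closer to a pedestrian
    than the threshold defined for its driving domain."""
    return min_observed_distance_m >= MIN_PEDESTRIAN_CLEARANCE_M[domain]

run_passes("urban_crawl", 1.2)  # True: 1.2 m clears the 1.0 m bar
run_passes("school_zone", 1.2)  # False: same distance fails the 3.0 m bar
```

The same recorded run can thus pass in one domain and fail in another, which is why Analyzer lets customers set the thresholds rather than hard-coding them.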