Dining on Data

Processing the enormous data stream flowing through AVs in real time is increasingly the job of NVIDIA’s mighty GPUs. Danny Shapiro relishes the feast.

Danny Shapiro: “We want to make sure you have this whole security cocoon around the car, but you need to make it affordable.” (Image: Sebastian Blanco)

Every year, NVIDIA hosts its GPU Technology Conference (GTC), a multi-day news conference/expo to highlight the advances the tech company has made in the past 12 months. In 2018, the big automotive news at GTC was the introduction of DRIVE Constellation, a simulation system designed to “train” the artificial intelligence (AI) algorithms used for autonomous driving.

One year and countless code updates later, NVIDIA announced at GTC 2019 that the Toyota Research Institute-Advanced Development (TRI-AD) would be the first publicly named customer for DRIVE Constellation. Danny Shapiro, NVIDIA’s senior director of automotive, could have used the moment as an excuse to celebrate. Instead, he was all business as he sat down with SAE’s Autonomous Vehicle Engineering and contemplated the work still to do.

“We will never be at the end,” he said when asked about the state of NVIDIA’s AI technology for autonomous vehicles. “Things will always get better. We’ve designed this open platform that’s designed to get over-the-air updates just like your phone. Your phone gets better, and the same will happen in your car.”

Shapiro can make this all sound relatively simple, but he knows the evolution from today’s ADAS and SAE Level 2+ automated driving systems to tomorrow’s Level 4 and 5 autonomous vehicles will require a massive investment of money and time from a tremendous number of companies. That’s why recent NVIDIA AV announcements have not been about getting production vehicles on the road, but rather about how other companies will now be doing their own testing and production.

Deep neural networks

NVIDIA Drive modules for Bosch (left, with finned heat-sink case) and Volvo. (Image: Sebastian Blanco)

Shapiro said “hundreds and hundreds” of companies use the NVIDIA Drive platform, but many keep their connection to NVIDIA private. Some that are happy to share the spotlight displayed their cars at GTC, including TuSimple, NIO and Xpeng. Automotive suppliers Bosch, Continental and Veoneer, as well as Volvo, also displayed NVIDIA-based supercomputers designed for cars.

NVIDIA Drive is an open platform, which allows the production-intent computer modules to differ depending on what each supplier thinks will work best for its own customers. They have different connection ports, for example, because each partner’s customers will have their own radar, lidar or camera sensors. The cooling methods are also up to the clients, which is why some of the boxes use fans while others are plumbed for liquid cooling. Forced-air cooling can work, Shapiro noted, but liquid cooling the NVIDIA-based boxes is a logical solution for AVs, especially since cars already offer on-board liquid cooling and the thermal-management challenges of sensors and data processing will only continue to grow.

“Everything is getting faster and faster, and by nature everything is unprecedented,” he said. “We are bringing supercomputing down into the car now. We’ve really focused on our chip and making it highly energy-efficient. We’re delivering 30 trillion operations per second on a single chip—and consuming 30 watts, [equivalent to] a bathroom nightlight bulb. As the scaling of performance goes up, there is more energy consumption. But I think at this point, for robotaxis and others, we’re the most energy-efficient, high-performance solution on the market.”

As different as the supplier boxes are, what they all have in common is that they are moving toward production. The Continental box, for example, will be in production by the end of 2020. Every next-generation Volvo with SAE Level 2 automated-driving tech as standard equipment uses a system based on NVIDIA Drive. And Mercedes-Benz announced at CES 2019 that it would develop its AI Cockpit and autonomous-driving capability using a centralized NVIDIA system that replaces dozens of ECUs, Shapiro said.

It’s important that NVIDIA and its partners do not rush solutions to market, Shapiro said, adding that safety remains paramount.

“We’re bringing this technology into current cars to prevent humans from doing bad things, whether they intend to or not,” he said. “Falling asleep, driver distraction, all these things are being built into our Level 2+ solutions to enable us to see the benefits very soon.”

Those solutions are all based on the work NVIDIA has done with AI and deep neural networks. DRIVE Constellation is essentially two servers that talk to each other. One simulates a virtual world and the sensors (e.g., cameras, lidar and radar) that collect data on the objects in that world. The other runs a self-driving-car AI that uses that data to decide how to move from point A to point B. DRIVE Constellation allows NVIDIA’s customers to test more AVs in more situations in less time.
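In code terms, that closed loop might look something like the minimal sketch below. It illustrates the hardware-in-the-loop idea rather than NVIDIA’s actual API; all class and method names here are assumptions.

```python
# Hypothetical sketch of a Constellation-style closed loop: one process
# renders the virtual world and its sensors, the other runs the driving AI.
# All names and interfaces are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    camera: bytes   # simulated camera image
    lidar: bytes    # simulated lidar point cloud
    radar: bytes    # simulated radar returns

@dataclass
class Control:
    steering: float  # radians
    throttle: float  # 0..1
    brake: float     # 0..1

class WorldSimulator:
    """Server 1: simulates the virtual world and renders sensor data."""
    def render_sensors(self, t: float) -> SensorFrame:
        return SensorFrame(camera=b"", lidar=b"", radar=b"")
    def apply(self, cmd: Control) -> None:
        pass  # advance the simulated vehicle using the AI's commands

class DrivingAI:
    """Server 2: the AV stack under test; consumes sensors, emits controls."""
    def decide(self, frame: SensorFrame) -> Control:
        return Control(steering=0.0, throttle=0.1, brake=0.0)

def run_episode(sim: WorldSimulator, av: DrivingAI, steps: int, dt: float = 0.033):
    # Lockstep loop: sensor data flows out, driving decisions flow back in.
    for i in range(steps):
        frame = sim.render_sensors(t=i * dt)
        sim.apply(av.decide(frame))

run_episode(WorldSimulator(), DrivingAI(), steps=1000)
```

Running the two halves in lockstep is what lets a data center full of such units iterate through scenarios far faster than a physical test fleet could.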

“We’re still going to want real test vehicles, but now we can have a data center of these [DRIVE Constellation units] and we can be deploying virtual cars anywhere in the world instantly without having to ship a car or wait for it to rain or only be able to test at sunset two minutes a day,” he explained. “We can sit here and just iterate over and over and over, 24 hours of sunset.”

These virtual tests are made more effective because of the way NVIDIA works with its partners to ingest all sorts of data. This could be maps (from HERE or TomTom), virtual vehicle dynamics (from IPG) or traffic (from Cognata, an Israeli startup). All of this data gives NVIDIA’s AI even more information to work with. One example of a better end result, Shapiro said, is a deep neural network that can be refined not just to detect a pedestrian but to detect whether that pedestrian is looking at a phone, which causes the AI to react in a different manner.
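One way to picture that kind of refinement is a detector with a second output head for the distraction attribute. The sketch below is a hypothetical toy model; the architecture and names are invented, not NVIDIA’s network.

```python
# Hypothetical sketch: a detection backbone with an extra "distracted
# pedestrian" attribute head. Architecture and names are assumptions.
import torch
import torch.nn as nn

class PedestrianNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.is_pedestrian = nn.Linear(32, 1)     # head 1: pedestrian present?
        self.on_phone = nn.Linear(32, 1)          # head 2: looking at a phone?

    def forward(self, image):
        feats = self.backbone(image)
        return torch.sigmoid(self.is_pedestrian(feats)), torch.sigmoid(self.on_phone(feats))

net = PedestrianNet()
ped_prob, phone_prob = net(torch.rand(1, 3, 224, 224))
# A planner could widen its safety margin when both probabilities are high.
```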

Simulation and experiments

NVIDIA’s partners get benefits, too. DRIVE Constellation can be used to quickly adjust the number and placement of virtual sensors to see which setup works best. “We can say, let’s put three cameras on the front and one on each corner. Or two radars here,” he said. “We can run these scenarios, get data back and this is so valuable for our customers trying to define what’s the optimal configuration.

“We want to make sure you have this whole security cocoon around the car, but you need to make it affordable, right? So, what types of sensors, what grade of sensors, what resolution? They can experiment a lot to maximize the results.”
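A minimal sketch of that kind of sensor-configuration experiment might look like the following, where candidate layouts are enumerated and each is scored in simulation. The layouts, unit costs and scoring function below are all invented for illustration.

```python
# Hypothetical sketch: sweep candidate sensor layouts in simulation and
# rank them by coverage per dollar. All numbers here are invented.
from itertools import product

CAMERA_LAYOUTS = [("front x3",), ("front x3", "corners x2")]
RADAR_LAYOUTS = [("front x1",), ("front x2",)]

def simulate_coverage(cameras, radars) -> float:
    """Stand-in for a simulated-scenario score (e.g., detection coverage)."""
    return 0.6 + 0.1 * len(cameras) + 0.15 * len(radars)

def cost(cameras, radars) -> float:
    return 200 * len(cameras) + 150 * len(radars)  # invented unit costs

results = []
for cams, rads in product(CAMERA_LAYOUTS, RADAR_LAYOUTS):
    results.append((simulate_coverage(cams, rads) / cost(cams, rads), cams, rads))

best = max(results)
print("best coverage-per-cost layout:", best[1], best[2])
```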

One example, Shapiro said, is ultra-high-resolution radar. It obviously offers more data than standard radar, and the GPU can handle all of it. But in simulation, engineers can see whether that extra resolution actually benefits the AI.

“I think the reality is that there’s a lot of development experimentation going on,” he opined. “I think that’s where Constellation and the simulation helps. They can experiment with different resolution radar, for example, and see, ‘Are we seeing a benefit? Are we seeing greater accuracy?’ Maybe we don’t need such high resolution.”
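That experiment could amount to a simple sweep over simulated radar resolutions, stopping once the accuracy gains flatten out. The sketch below is hypothetical; the resolutions and the accuracy model are invented.

```python
# Hypothetical sketch: sweep radar resolution in simulation and stop paying
# for resolution once accuracy gains flatten out. Numbers are invented.
def evaluate_accuracy(resolution_points: int) -> float:
    """Stand-in for running simulated scenarios at a given radar resolution."""
    return min(0.99, 0.80 + 0.05 * (resolution_points // 512))

prev = 0.0
for resolution in (512, 1024, 2048, 4096):
    acc = evaluate_accuracy(resolution)
    gain = acc - prev
    print(f"{resolution:5d} points -> accuracy {acc:.2f} (gain {gain:+.2f})")
    if gain < 0.01:  # diminishing returns: extra resolution isn't helping
        print("no meaningful benefit beyond", resolution, "points")
        break
    prev = acc
```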

Whatever sensors the Tier 1s and OEMs end up using, to NVIDIA’s system it’s just data. You can’t take a neural net trained on radar and run it on lidar, but as long as the data is the same type or the system is trained to fuse multiple data types, Shapiro said it will work.
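A hedged sketch of that idea: each sensor modality gets its own trained model, and a downstream step fuses their outputs, so a radar-trained network never has to consume lidar data. Everything below is illustrative, not NVIDIA’s implementation.

```python
# Hypothetical sketch: per-modality networks whose outputs are fused.
# A radar-trained model never sees lidar data; fusion happens downstream.
from typing import Callable, Dict, List

# Each modality maps raw sensor data to per-object confidence scores.
# These lambdas stand in for separately trained networks.
MODALITY_MODELS: Dict[str, Callable[[bytes], List[float]]] = {
    "camera": lambda raw: [0.9, 0.2],
    "radar":  lambda raw: [0.8, 0.4],
    "lidar":  lambda raw: [0.85, 0.3],
}

def fuse(frame: Dict[str, bytes]) -> List[float]:
    """Late fusion: average per-modality confidences for each tracked object."""
    scores = [MODALITY_MODELS[name](raw) for name, raw in frame.items()
              if name in MODALITY_MODELS]
    return [sum(col) / len(col) for col in zip(*scores)]

print(fuse({"camera": b"", "radar": b"", "lidar": b""}))  # fused confidences
```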

“We love data,” he said. “That’s what the GPU does. It just eats it up.” NVIDIA’s challenge is to get the GPU to correctly process and manage all of that data.

NVIDIA’s software-development team is dedicated to the company’s automotive business, but its work ties into what NVIDIA does in other fields. Its automotive technologies, including the DRIVE operating system, the DRIVE AV stack for autonomous vehicles and the DRIVE IX stack for facial monitoring, are built on the same deep neural networks the company uses for things like its Metropolis smart-city cloud platform, its intelligent video analytics and its medical-imaging AI.

“How do you get an AI to read a CAT scan?” he asked. “Well, you train it on patterns, and we do the same thing for cancer cell detection as we do for pedestrian detection. We’re able to leverage many different parts of the company. It’s all data. It’s all algorithms.”

During his presentation at GTC 2019, NVIDIA CEO Jensen Huang called autonomous vehicles “one of the world’s great computational challenges.” He told a story about the history of big data processing where, “eventually, Moore’s Law started to slow, and the data kept getting bigger and bigger.”

When it comes to autonomous vehicles, processing the ever-increasing terabytes of data from connected cars can really only be done with NVIDIA’s GPUs, Shapiro claimed.

“The GPU is the only path forward to handle the massive amount of data coming into a vehicle that needs to be processed in real time,” he said. “We want to be able to understand precisely what’s happening in the environment, so as the complexity keeps going up and computational requirements keep going up, our GPUs are the only way to handle that complexity moving forward.”