Autonomy’s Computing Edge

Reducing latency by processing data closer to devices using it will play a huge role in autonomous safety, with Stellantis a first mover in edge-computing trials.

Stellantis will be one of the first OEMs to trial edge computing publicly, using localized infrastructure to alert drivers to nearby emergency vehicles. (Stellantis)

Computing clouds get more headlines, but edge computing is likely to play a more crucial minute-by-minute role in the autonomous-vehicle (AV) future. Roughly defined as bringing the processing of data physically closer to the devices requiring it, edge computing will form a critical part of the digital infrastructure required to make AVs a safe and responsive transportation option. If computing clouds are the centralized processing centers forming the hub of a network wheel, then edge computing exists out towards the end of the wheel’s spokes, closer to where data is acted on or being gathered.

Digital networks can transfer data at the speed of light, which sounds fast until a split-second traffic decision is required – and the cloud-based server feeding data to your AV is half a world away. Latencies measured in milliseconds might not dampen a web-browsing experience, but not knowing that the bridge you’re about to cross just iced over, or that another vehicle just pulled out to block the intersection you’re entering, could prove genuinely dangerous. Edge computing seeks to leverage locality to improve the relevancy and immediacy of data.

Out of the cloud

Edge computing is a natural extension of cloud computing, which seeks to offload processing requirements from a device (e.g., a smartphone or a vehicle) to scalable server farms that can occupy entire city blocks and sit thousands of miles from the device using the data. Cloud architectures permit access to enormous computing resources, with the only real downside being latency.

“If you think about the difference in latency of going to the corner store versus going across town to the big Costco, you have a shorter trip. You have it right there where it's much faster,” explained Josh Johnson, an enterprise architect with Akamai Technologies, an edge-computing concern and provider of one of the world's largest distributed-computing platforms. “And the same thing with the data. It's much lower latency when you have the data and processing close to where the user is, as opposed to going halfway across the country to get to an on-premise data center or a cloud region.”

Johnson, who has prior experience with automotive OEMs, noted that latency is typically measured in thousandths of a second, but that’s still a crucial interval. “If you look at trying to set up a central location, the speed of light sounds quick, but then you look across continents and that can easily be 200 milliseconds,” he said. “Doesn't sound like a lot, but when you have a lot of that going back and forth at a time, it all adds up, and a lot of decisions need to be made quicker than that.”
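For a rough sense of scale, the propagation delay alone can be estimated from the path length. The sketch below is a back-of-the-envelope calculation, assuming light in fiber travels at roughly two-thirds of its vacuum speed and ignoring routing, queuing and processing overhead; it shows why an edge node in the same metro area responds in a fraction of a millisecond, while an intercontinental round trip approaches the figures Johnson cites.

```python
# Back-of-the-envelope latency estimate: how long a request takes just to
# travel the wire. Ignores routing hops, queuing and processing time.

SPEED_OF_LIGHT_KM_S = 300_000          # vacuum, km/s
FIBER_FRACTION = 2 / 3                 # light in fiber moves at roughly 2/3 c

def round_trip_ms(distance_km: float) -> float:
    """One request/response pair over a fiber path of the given length."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION)
    return 2 * one_way_s * 1000        # seconds -> milliseconds

# Illustrative distances (assumptions, not figures from the article).
for label, km in [("edge node, same metro", 50),
                  ("regional cloud, cross-country", 4_000),
                  ("distant cloud, across continents", 15_000)]:
    print(f"{label:35s} ~{round_trip_ms(km):6.1f} ms round trip")
```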

Edge computing curtails long-distance data travel by localizing computing capability, be it via nearby infrastructure or more closely located data servers. On top of the distance aspect is volume: edge setups can help reduce overall network loads by processing more data locally. “Beyond latency, it's just a matter of handling that amount of data going across to a central location,” Johnson said. “I've seen estimates of 25 gigabytes an hour or more for data generated from autonomous vehicles. Imagine if you have that per vehicle; that's going to add up when you figure how many vehicles are on the road at once and what that would take to be able to send that data to somewhere that's a central location.”

“Just being able to process that data at the edge, without leaving the ISP [internet service provider] in many cases, is really important,” he continued. “You're not constrained by bandwidth with the backbone of the internet, or constrained by the bandwidth of a single data center or a few regions.”
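Johnson’s 25-gigabyte-per-hour figure can be turned into a rough aggregate-load estimate. The vehicle counts below are illustrative assumptions rather than numbers from the article, but they show how quickly centralized backhaul becomes impractical.

```python
# Rough aggregate-load estimate built from the ~25 GB/hour-per-vehicle figure
# quoted above. Vehicle counts are hypothetical, chosen only for illustration.

GB_PER_VEHICLE_PER_HOUR = 25

def sustained_gbps(vehicles: int,
                   gb_per_hour: float = GB_PER_VEHICLE_PER_HOUR) -> float:
    """Average sustained throughput (gigabits/s) needed to backhaul every byte."""
    gigabits_per_hour = vehicles * gb_per_hour * 8   # gigabytes -> gigabits
    return gigabits_per_hour / 3600                  # per hour -> per second

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} vehicles -> ~{sustained_gbps(n):,.0f} Gbps of continuous backhaul")
```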

Autonomy’s vehicle network

Where edge computing dovetails so effectively with AVs is in maintaining local computing ability for the data vehicles will generate and share vehicle-to-vehicle (V2V), and in distributing information gathered from nearby vehicle-to-everything (V2X) infrastructure. “That ties into autonomous vehicles and V2V or V2X communication, where you want to be communicating with the vehicles in an area. It can be important to keep that data and that communication local in that region,” Johnson said.

According to Johnson, the goal for edge-server suppliers such as Akamai is optimization: ensuring all devices within a physical zone are able to communicate effectively, and relieving the device of unnecessary computing loads. “The right data, at the right place, at the right time,” as he labeled it. “Parallel to that, you're offloading from the device so you can do potentially more-powerful processing than what you want to do on the device directly. Or maybe you're working with combined data from multiple devices, and you may not want to send data from one vehicle into another vehicle, just for security reasons.”

For automakers, reduced computing loads can equal reduced cost, as embedded computing capability can be minimized. Reducing processing bandwidth also frees up the potential for delivering more content to the vehicle, such as monetized entertainment features. “In addition to the vehicles, there's the human occupants that are going to be consuming more media – watching videos or what have you,” Johnson explained. “Autonomous vehicles are going to lead to increases in media consumption and the content delivery is auxiliary but related.”

The infrastructure also can improve robustness for less time-sensitive data. “It's definitely more than just real-time,” Johnson said. “For OTA [over-the-air] updates for manufacturers, we're improving the performance of it by having it right there at the edge. It's a much more reliable connection to the last mile, as opposed to across the internet. Various middle-mile issues can crop up that can cause packet loss, data not getting transmitted. Having that presence of the edge [creates] the most reliable route.”

“And if you're having the same update delivered to multiple vehicles, we can store that data locally at the edge to just offload that bandwidth as well,” Johnson said. “So you're not downloading the same update from the server every time. The edge has it cached and stored locally, so it can be more efficient.”
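The caching behavior Johnson describes can be illustrated with a toy sketch: the first request for an update makes one trip to the origin server, and every subsequent vehicle in the same edge region is served the locally stored copy. The class and function names below are hypothetical; this is not Akamai’s actual API.

```python
# Toy sketch of edge caching for OTA updates, under the assumptions stated
# above. First request pulls from the origin; later vehicles hit the cache.

class EdgeNode:
    def __init__(self, origin_fetch):
        self._origin_fetch = origin_fetch   # callable that downloads from origin
        self._cache = {}                    # update_id -> payload bytes

    def get_update(self, update_id: str) -> bytes:
        if update_id not in self._cache:             # cache miss: one origin trip
            self._cache[update_id] = self._origin_fetch(update_id)
        return self._cache[update_id]                # cache hit: served locally

origin_downloads = 0

def fetch_from_origin(update_id: str) -> bytes:
    global origin_downloads
    origin_downloads += 1
    return b"firmware-image-bytes"           # stand-in for the real payload

edge = EdgeNode(fetch_from_origin)
for vehicle in range(500):                   # 500 vehicles in the same metro area
    edge.get_update("uconnect-2021.09")
print(f"origin downloads: {origin_downloads}")   # -> 1, not 500
```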

Applying the edge

Stellantis will be one of the first OEMs to publicly leverage edge computing in vehicles, announcing in September 2021 two trials that will employ the technology. The programs focus on safety: in-vehicle alerts via the Uconnect infotainment system related to pedestrian and intersection data, and notifications tied to emergency-vehicle proximity. Both trials employ V2X connectivity and the edge platform Stellantis labels MEC (Multi-access Edge Computing).

The first program will demonstrate the MEC platform’s 5G cellular connectivity, permitting localized systems to make decisions at the point of data collection. The setup will use on-site cameras and sensors at intersections to provide data outside the reach of a single vehicle’s on-board sensor suite. The MEC system can then locally process and communicate safety risks to both on-site pedestrians and approaching vehicles. The goal is to evaluate faster data-exchange infrastructures for increased levels of vehicle autonomy, along with future applications for new connected services.
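The data flow described above can be sketched schematically: roadside sensors feed a local MEC node, which decides at the point of collection whether to push an alert to nearby vehicles and pedestrians, with no round trip to a distant cloud. All names and message formats below are hypothetical; this is not Stellantis or 5GAA code.

```python
# Schematic sketch of a local intersection alert flow, per the description
# above. Names are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str     # e.g. "pedestrian", "vehicle"
    lane: str     # which approach or crosswalk the object occupies

class IntersectionMecNode:
    def __init__(self, subscribers):
        self.subscribers = subscribers      # nearby vehicles / pedestrian apps

    def on_sensor_frame(self, detections):
        # The decision happens locally, at the point of data collection.
        for d in detections:
            if d.kind == "pedestrian" and d.lane == "crosswalk-north":
                self._broadcast("Pedestrian in north crosswalk")

    def _broadcast(self, message):
        for deliver in self.subscribers:
            deliver(message)

node = IntersectionMecNode(subscribers=[lambda m: print("In-vehicle alert:", m)])
node.on_sensor_frame([Detection("pedestrian", "crosswalk-north")])
```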

Stellantis will be working with 5G Automotive Association (5GAA) partners that include Intel, Verizon, Harman, Altran, Telus and American Tower. The testing will involve a pair of 2021 Jeep Wrangler 4xe plug-in hybrid vehicles equipped with Uconnect, and take place at the University of Michigan’s Mcity test facility, with plans to expand into the Detroit area with cooperation from the Michigan Department of Transportation.

The second trial will make use of the Safety Cloud system from HAAS Alert, the Chicago-based software-as-a-service (SaaS) provider of real-time automotive collision-prevention data for public-safety and roadway fleets. Engineering teams from Stellantis collaborated with HAAS Alert to pilot the feature, which delivers warnings to the vehicle’s Uconnect system when emergency vehicles or roadway hazards tracked by the system are in close proximity.

Initial tests will leverage company-owned, MY2018-and-newer models equipped with Uconnect in metro Detroit. The pilot project will measure the effectiveness of delivering alerts to in-car screens and the service’s impact on driver safety, as well as identify opportunities for improvement. Pending the results from the pilot, Stellantis may develop a commercial rollout plan.

“Greater connectivity speeds, improved hardware and expanded software expertise have opened new opportunities for Stellantis with safety systems being one of the many areas we focus on,” said Mamatha Chamarthi, head of software business and product management at Stellantis. She noted the company will continue to leverage “smart and strategic partnerships” to capitalize on next-generation systems and prove out technology.

According to Johnson at Akamai, edge computing’s speedier exchange of digital data will continue to grow in importance while opening new avenues for safer vehicles and an improved mobility experience. “I don't want to say ‘most important,’ because obviously, your vehicle’s not going to go out without wheels, either,” he couched. “But I think it all comes together to kind of complete that picture and really have a better product, a better consumer experience and drive innovation overall.”