IAA 2025: Arbe Chipset Improves Automated Driving Decisions

Automated vehicles around the world face stricter regulations, requirements Arbe aims to meet with cameras, radar, and extensive processing power.

Arbe's chipset uses its large channel array to give next-gen radars the power to elevate L2+ driving to higher safety levels. (Arbe)

Perception radar company Arbe was at IAA Mobility in Munich this year to press the case that customers can and should trust automated vehicles. One reason is the global trend toward stricter regulations from NHTSA, Euro NCAP, and regulators in China, which now require automated vehicles to safely handle demanding use cases that current sensors do not cover, according to Arbe co-founder and CTO Noam Arkind. Arkind told SAE Media that one such category is detecting vulnerable road users (VRUs) in poor weather and lighting conditions.

“We know from recent tests that a lot of Chinese cars, for example, failed VRU detections in the dark,” he said. “Camera alone doesn’t really have reliable pedestrian detection in a dark situation. Radar is a great sensor. It's very sensitive. It's not dependent on weather conditions or lighting conditions, but it's noisy, it's low resolution, and it's hard to use.”

Arbe’s solution, Arkind said, is to offer a perception system with the highest number of channels and the lowest cost per channel. The company’s 2K ultra-high-resolution Phoenix Perception Radar, for example, has 2,304 virtual channels, a forward range of 350 m (1,148 ft), and a range resolution between 7.5 cm and 60 cm (3 in and 24 in). This kind of high-resolution radar allows the system to recognize that something is in front of the vehicle even in a highly reflective environment, such as when a metal-heavy guardrail lies ahead.
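For context, those resolution figures map directly onto sweep bandwidth through the standard FMCW relation ΔR = c/2B. The short sketch below, which is generic radar math rather than anything from Arbe's spec sheet, back-computes the bandwidths the quoted numbers imply.

```python
# Back-of-the-envelope check: FMCW range resolution vs. sweep bandwidth.
# Uses the standard relation delta_R = c / (2 * B); this is generic radar
# theory, not a formula published by Arbe.

C = 299_792_458.0  # speed of light, m/s

def bandwidth_for_resolution(delta_r_m: float) -> float:
    """Sweep bandwidth (Hz) implied by a given range resolution (m)."""
    return C / (2.0 * delta_r_m)

for delta_r in (0.075, 0.60):  # 7.5 cm and 60 cm, as quoted above
    b_ghz = bandwidth_for_resolution(delta_r) / 1e9
    print(f"range resolution {delta_r * 100:4.1f} cm -> ~{b_ghz:.2f} GHz sweep")

# 7.5 cm implies ~2 GHz of sweep; 60 cm implies ~0.25 GHz. Both fit
# comfortably inside the 76-81 GHz automotive band mentioned later.
```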

Arbe co-founder and CTO Noam Arkind at IAA Mobility 2025. (Sebastian Blanco)

“What makes a radar (an) imaging radar is basically the resolution, the ability to really resolve all use cases in a high probability, in high confidence, and not need to resolve ambiguities or things like that,” Arkind said. “If you have low resolution in the radar and high resolution in the camera, then fusing them is really inefficient. The way we see it is, the resolution for the radar needs to be high enough so the fusion will be effective.”
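To make that resolution-matching argument concrete, here is a rough comparison, under stated assumptions, of a radar virtual array's angular resolution against a camera's per-pixel angular resolution. The 96 × 24 virtual-array split and the camera parameters are hypothetical values chosen for illustration; Arbe has not published this layout.

```python
import math

# Illustrative only: compare the angular resolution of a uniform linear
# virtual array against a camera's per-pixel angular resolution. The array
# split (96 azimuth x 24 elevation = 2,304 virtual channels) is a
# hypothetical layout assumed for the arithmetic, not Arbe's design.

WAVELENGTH = 3e8 / 79e9    # ~3.8 mm at 79 GHz (mid-band)
N_AZIMUTH = 96             # assumed azimuth virtual elements
SPACING = WAVELENGTH / 2   # classic half-wavelength element spacing

# Rayleigh-style beamwidth of a uniform array: lambda / aperture (radians).
aperture = N_AZIMUTH * SPACING
radar_res_deg = math.degrees(WAVELENGTH / aperture)

# Assume a 1920-pixel camera covering a 120-degree horizontal field of view.
camera_res_deg = 120 / 1920

print(f"radar azimuth resolution ~{radar_res_deg:.2f} deg/bin")
print(f"camera angular resolution ~{camera_res_deg:.3f} deg/pixel")
# The radar is coarser than the camera, but only by a factor of roughly 20:
# one radar bin maps to a small, usable patch of pixels rather than
# smearing across a whole object, which is what makes fusion effective.
```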

Radar is good at intention estimation, Arkind said, because it can tell the instant something starts to move. Its traditional weakness is resolution, a problem Arbe claims to have solved by giving the system the ability to detect free space from radar alone, something Arkind said is unique to imaging radars and to Arbe's radar specifically.
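The article does not say how Arbe computes free space, but a common textbook formulation is a polar map: within each azimuth sector, everything closer than the nearest detection is treated as drivable. A minimal sketch of that generic idea, not Arbe's implementation:

```python
import math
from collections import defaultdict

def free_space_map(detections, sector_deg=2.0, max_range=100.0):
    """Generic polar free-space map from a radar point cloud.

    detections: iterable of (x, y) points in meters, sensor at the origin.
    Returns {sector_index: free range in meters} per azimuth sector; a
    sector with no detection is implicitly free out to max_range.
    """
    nearest = defaultdict(lambda: max_range)
    for x, y in detections:
        rng = math.hypot(x, y)
        sector = int(math.degrees(math.atan2(y, x)) // sector_deg)
        nearest[sector] = min(nearest[sector], rng)
    return dict(nearest)

# Example: a guardrail-like cluster off to the left, one car straight ahead.
points = [(30.0, 0.5), (12.0, 6.0), (13.0, 6.4), (14.0, 6.8)]
for sector, rng in sorted(free_space_map(points).items()):
    print(f"sector {sector:+d}: free out to {rng:.1f} m")
```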

Arbe designed and now sells three types of chips for its systems: transmitters, receivers, and processors, along with the accompanying software. The company’s processor chip can process real-time data from up to 2,304 virtual channels, deliver more than 10,000 detections at 20 frames per second, and sustain an equivalent processing throughput of 3 Tbps. The transmitters and receivers operate in the 76-81 GHz band, with 12 channels per receiver chip and 24 per transmitter chip. Arbe says its chipset features the industry’s largest channel array, comprising 48 transmit and 48 receive channels, along with a dedicated processor for handling the massive data streams. Processing four-dimensional information from the radar and two-dimensional data from the cameras requires new software, Arkind said.
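The channel arithmetic is easy to verify: in a MIMO radar, every transmit/receive pair behaves as one virtual element, so the virtual array size is the product of the physical channel counts. The chip counts in the sketch below (two transmitter chips, four receiver chips) are an assumption, one combination consistent with the figures quoted above.

```python
# Sanity-check the channel arithmetic. The per-chip channel counts come
# from the article; the chip counts (2 Tx, 4 Rx) are assumed here as one
# combination consistent with the quoted 48 x 48 array.

TX_CH_PER_CHIP, RX_CH_PER_CHIP = 24, 12
TX_CHIPS, RX_CHIPS = 2, 4  # assumed chip counts

tx = TX_CHIPS * TX_CH_PER_CHIP   # 48 physical transmit channels
rx = RX_CHIPS * RX_CH_PER_CHIP   # 48 physical receive channels
virtual = tx * rx                # each Tx/Rx pair is one virtual element

print(f"{tx} Tx x {rx} Rx -> {virtual} virtual channels")
# Prints 2304, matching the 2,304 virtual channels quoted for the
# Phoenix Perception Radar.
```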

“The azimuth/elevation that are really good in camera are okay in this radar for fusion, but the range Doppler, the range of motion, are really good with radar, and the power comes from the fusion of attaching a range Doppler estimation for each object in the camera,” he said.
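One simple way to picture that object-level fusion is to project radar detections into the image plane and attach the nearest one's range and Doppler to each camera detection. The sketch below is an illustrative association scheme with assumed data structures, not a description of Arbe's pipeline.

```python
from dataclasses import dataclass

# Illustrative camera-radar fusion: attach a range and radial-velocity
# (Doppler) estimate to each camera detection. The association rule
# (nearest projected detection inside the bounding box) is an assumption.

@dataclass
class RadarDet:
    u: float          # image column after projection into the camera (px)
    v: float          # image row (px)
    range_m: float    # measured range
    doppler_mps: float  # radial velocity

@dataclass
class CameraBox:
    left: float
    top: float
    right: float
    bottom: float
    label: str

def attach_range_doppler(boxes, radar_dets):
    """For each camera box, pick the nearest-in-range radar detection whose
    projection falls inside the box; returns (box, det-or-None) pairs."""
    fused = []
    for b in boxes:
        inside = [d for d in radar_dets
                  if b.left <= d.u <= b.right and b.top <= d.v <= b.bottom]
        fused.append((b, min(inside, key=lambda d: d.range_m, default=None)))
    return fused

boxes = [CameraBox(400, 200, 560, 420, "pedestrian")]
dets = [RadarDet(480, 300, 42.5, -1.3), RadarDet(700, 310, 80.0, 0.0)]
for box, det in attach_range_doppler(boxes, dets):
    if det:
        print(f"{box.label}: {det.range_m} m, {det.doppler_mps} m/s radial")
```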

Arbe’s system doesn’t need lidar, Arkind said, though adding that sensor could be important for OEMs and suppliers that need redundancy.

“In terms of sensitivity, radar is more sensitive than lidar,” he said. “In terms of resolution, camera is higher resolution than the lidar. So if you have a good fusion of radar and camera, then you have higher sensitivity, motion estimation and high resolution from the camera. The combination of all these data streams means the lidar is not adding new value. It's good for redundancy, but from a performance perspective, radar and camera should give you the entire performance level of the need for [automated driving] decision making.”




This article first appeared in the December 2025 issue of Automotive Engineering Magazine (Vol. 12 No. 9).
