Simulation Developer rFpro Mimics Vehicle Sensors
Using what it calls ‘ray-tracing’ software, rFpro enables the training of ADAS systems entirely by simulation. The company says eight of the top 10 OEMs are using it.
U.K.-based simulation systems developer rFpro recently launched its new ‘ray-tracing’ simulation rendering technology, designed to replicate the way that vehicle sensors “see” the world. The company believes it is the first to achieve this and that the technology will enable ADAS and automated-vehicle systems to be trained and developed entirely via simulation, reducing development time and cost in the process.
rFpro’s background is in simulation for racecar development. Perhaps inevitably, having spent so much time developing systems for leading race teams, the company began looking at other ways to apply the technologies and techniques it had already developed.
“Originally, Honda were using it in their racing programs and Toyota were using it in their racing programs in the U.S. and in Europe, as well as for [World Endurance Championship] programs,” rFpro Technical Director Matt Daley told SAE Media. “They all realized the power of that transfer of technology between this high-fidelity simulation developed for racing and asked, ‘Why can’t we use this for road cars?’ We now have eight of the top 10 OEMs using rFpro for a variety of different vehicle development programs.
“In about 2017, this term ADAS (Advanced Driver Assistance System) was there. People were having lane-keeping assist systems that everyone was turning off because they were just annoying rather than useful, and we started getting phone calls from customers asking if we could simulate these. That just made us stop and think again,” Daley continued.
“There’s plenty more to come as well as we start to integrate very specific sensor models,” said Daley. “It’s now not just about putting the customer’s chassis model and tire models into the simulation. They are buying really safety-critical components like camera sensors, lidar sensors and radar sensors. We have to build very detailed models of those and put them into the simulation as well.”
The vehicle model is computed at some 1,000 Hz, meaning that the full dynamics of the car and its acceleration states are calculated every millisecond. The model also is fed with a high-definition road surface. “Underneath each of those tires you have an independent road input, and that’s running through what we call our rFpro terrain server,” said Daley.
This server takes the position of every wheel and calculates a volume rather than taking a single point sample; it looks at the tire’s volume against the ground using a one-centimeter (0.39-in) grid.
“We’re modeling that under each tire and therefore passing back a very detailed road shape to those tires individually,” explained Daley. “That’s still at 1,000 Hz, once a millisecond, and that’s going back up through the system, through the suspension and the chassis; the vehicle models are calculating their accelerations, and that fundamentally is what gets passed over to the simulator driver.
“What is the acceleration of the vehicle? It’s vertical, it’s horizontal, it’s pitch, it’s roll. The other thing the simulator driver gets back is the steering feel; you feel the torque in the steering. So, you provide torque and position input yourself, and the system reacts by loading up that wheel to the right level.”
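To make the idea concrete, here is a minimal Python sketch of the kind of per-tire sampling the terrain server performs: at each 1 ms step, road heights are queried on a 1 cm grid across every tire’s contact patch rather than at a single point. The function names, patch dimensions and placeholder road surface are illustrative assumptions, not rFpro’s actual API.

```python
import numpy as np

GRID_RES = 0.01          # 1 cm terrain grid resolution (figure from the article)
STEP_HZ = 1000           # vehicle model update rate: once per millisecond


def road_height(x, y):
    """Stand-in for a terrain lookup of road surface height at (x, y)."""
    return 0.02 * np.sin(40 * x) * np.cos(40 * y)   # placeholder surface


def sample_contact_patch(wheel_x, wheel_y, patch_len=0.20, patch_width=0.18):
    """Sample road heights on a 1 cm grid under one tire's contact patch,
    rather than taking a single point under the wheel centre."""
    xs = np.arange(wheel_x - patch_len / 2, wheel_x + patch_len / 2, GRID_RES)
    ys = np.arange(wheel_y - patch_width / 2, wheel_y + patch_width / 2, GRID_RES)
    gx, gy = np.meshgrid(xs, ys)
    return road_height(gx, gy)            # 2-D height map passed to the tire model


# One 1 ms tick: each wheel gets its own detailed road-shape patch.
wheel_positions = [(1.3, 0.8), (1.3, -0.8), (-1.3, 0.8), (-1.3, -0.8)]
patches = [sample_contact_patch(x, y) for x, y in wheel_positions]
```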
Multiple image generators
For dynamic driver-in-the-loop simulators, the rFpro system uses nine independent image generators (IGs), which basically are PCs with graphics cards, each generating an image. Each renders a small portion of the 180-degree screen. “The 180-degree screen is divided by nine. You’ve got 20 degrees per screen, but because you need the screens to overlap with each other, you need to render more than 20 degrees and then blend the two together so you get a seamless image. So, each IG is probably giving 30 degrees to allow for five degrees of overlap at each edge,” said Daley.
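The arithmetic behind that split can be sketched quickly: 180 degrees across nine IGs is 20 degrees each, extended by a blending margin on each side. The helper below is an illustrative assumption about how such render spans might be computed, not rFpro’s implementation.

```python
TOTAL_FOV = 180.0   # degrees across the simulator screen
NUM_IGS = 9         # independent image generators
OVERLAP = 5.0       # degrees of overlap at each edge, blended between neighbours


def ig_render_spans(total_fov=TOTAL_FOV, num_igs=NUM_IGS, overlap=OVERLAP):
    """Return the (start, end) angle each IG must render so that adjacent
    images overlap enough to be blended into one seamless picture."""
    per_ig = total_fov / num_igs                 # 20 degrees of screen each
    spans = []
    for i in range(num_igs):
        start = i * per_ig - overlap             # render past the segment edges
        end = (i + 1) * per_ig + overlap
        spans.append((max(start, 0.0), min(end, total_fov)))
    return spans


for i, (s, e) in enumerate(ig_render_spans()):
    print(f"IG {i}: renders {s:5.1f} to {e:5.1f} deg  ({e - s:.0f} deg total)")
```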
Simulation is not just about a faithful recreation of what humans can see. Simulation systems designers such as rFpro need to factor in camera sensors, lidar sensors and radar sensors and how these devices map their surroundings. Then the appropriate machine-learning training data for those sensors can be generated.
Daley likens the training data to how you train children, using dogs as an example, “You train your children by showing them a picture of a dog and saying, ‘Dog, and here’s a different dog.’ It's still a dog, but it's a different dog and I might point at a real dog, or I might point at a picture of a dog. You're giving your child training data, which is the input image and the answer.”
The team at rFpro realized a few years ago that in addition to real-time simulations, the engineering data needed to be correct for safety-critical situations. “We can reuse the vehicle simulations, the traffic control, we can reuse the locations that we’ve built, but what we need to do is allow us to do these lighting calculations. We started from the ground up with a blank sheet of paper, and we wrote a dedicated ray-trace rendering engine that now sits alongside the real-time engine. They are two independent engines that you swap between. This is not a hybrid solution,” explained Daley.
While humans sample what we see in one go, a camera sensor samples the scene line by line down the chip, like a cathode-ray-tube television screen that builds the picture line by line. Each pixel is therefore sampled at a slightly different time, which means the image cannot be reproduced by the real-time engine. “You get a ‘rolling shutter’ effect,” said Daley. “So, when you have fast-moving objects, it creates a motion blur. It is critical to replicate this phenomenon as that is how the sensor ‘sees’ the world. That’s why we took a blank sheet of paper and started again with a dedicated ray-traced engine designed for sensor simulation.”
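A simple way to picture rolling-shutter sampling is to render each scanline at a slightly later timestamp than the one above it. The Python sketch below assumes a hypothetical `scene_at(t)` snapshot renderer and an arbitrary per-line delay; it is a toy illustration of the effect, not rFpro’s ray-tracing engine.

```python
import numpy as np


def render_rolling_shutter(scene_at, rows=480, cols=640,
                           line_time_us=30.0, t_start=0.0):
    """Build one frame the way a rolling-shutter imager does: each scanline
    samples the scene at a slightly later time than the line above it.
    scene_at(t) is assumed to return the whole scene as a (rows, cols) array
    rendered at time t (e.g. one ray-traced snapshot)."""
    frame = np.empty((rows, cols))
    for row in range(rows):
        t_row = t_start + row * line_time_us * 1e-6   # each line is later in time
        frame[row, :] = scene_at(t_row)[row, :]       # keep only that scanline
    return frame


def scene_at(t, rows=480, cols=640, speed_px_per_s=2000.0):
    """Toy scene: a vertical pole sweeping sideways at constant speed."""
    img = np.zeros((rows, cols))
    col = int(100 + speed_px_per_s * t) % cols
    img[:, col] = 1.0
    return img


frame = render_rolling_shutter(scene_at)   # the vertical pole comes out slanted
```

Because the bottom scanlines are captured several milliseconds after the top ones, the moving pole in the toy scene appears tilted in the finished frame, which is the same distortion Daley describes later for real poles passed by the vehicle.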
It is not purely about ray tracing. Light from the vehicle needs to be simulated, as well as light from the sun and other light sources on the road. This has proved useful to headlight manufacturers, who have used the simulation technology to reduce the amount of road testing required, cutting the need to work with prototype headlights on actual roads and the time engineers spend working during the hours of darkness.
Applicable to lidar
Models can be built for lidar too. “Lidar on the vehicle is moving around; it’s shooting individual rays, individual laser beams, into the scene to collect distance information and reflectivity about what it is hitting. In the same way that light has a level, lasers come back with a reflection strength, so you can build all that into a simulation and really replicate the physical devices,” said Daley.
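As an illustration of the principle, the sketch below casts beams from a single-layer spinning lidar against a flat ground plane and reports a range plus a crude return strength per beam. The geometry, reflectivity value and 1/r² falloff are simplifying assumptions, not a model of any particular device or of rFpro’s lidar simulation.

```python
import numpy as np

REFLECTIVITY = 0.4     # assumed diffuse reflectivity of the target surface


def ground_hit_distance(origin, direction):
    """Distance along a ray to a flat ground plane at z = 0, or None."""
    if direction[2] >= 0.0:
        return None                         # ray never reaches the ground
    return -origin[2] / direction[2]


def lidar_scan(origin, elevation_deg=-2.0, azimuth_steps=360):
    """One revolution of a single-layer spinning lidar: each beam reports
    (azimuth, range, return strength) - the distance and reflection-strength
    pair the article says every laser shot brings back."""
    elev = np.radians(elevation_deg)
    returns = []
    for step in range(azimuth_steps):
        az = 2 * np.pi * step / azimuth_steps
        direction = np.array([np.cos(az) * np.cos(elev),
                              np.sin(az) * np.cos(elev),
                              np.sin(elev)])
        dist = ground_hit_distance(origin, direction)
        if dist is not None:
            strength = REFLECTIVITY / dist ** 2   # crude 1/r^2 intensity falloff
            returns.append((az, dist, strength))
    return returns


points = lidar_scan(origin=np.array([0.0, 0.0, 1.8]))   # sensor 1.8 m above the road
```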
Because digital images are built up from multiple sampled images, other issues arise when dealing with light sources that are not constant, such as LEDs. Those used in traffic lights flicker at about 200 Hz, for instance, so if the scene is sampled with exposures of one or two milliseconds, traffic lights that flicker every five milliseconds won’t always be lit in the generated images. Lengthening the exposure to, say, 10 milliseconds compensates for the flicker, but it introduces motion blur, whether from a static source with objects passing it or from a moving source imaging static objects, and in simulated testing that blur needs to be simulated too.
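The interaction between exposure length and LED flicker can be demonstrated numerically. The sketch below assumes a 200 Hz flicker with a 50% duty cycle (the duty cycle is an assumption; the article gives only the frequency) and reports what fraction of an exposure window the LED is actually lit.

```python
LED_FREQ_HZ = 200.0      # traffic-light LED flicker frequency (article figure)
DUTY_CYCLE = 0.5         # fraction of each cycle the LED is on (assumed)


def led_on_fraction(exposure_start_s, exposure_s,
                    freq_hz=LED_FREQ_HZ, duty=DUTY_CYCLE):
    """Fraction of the exposure window during which the LED is emitting,
    sampled numerically over the window; 0 means the light looks dark."""
    period = 1.0 / freq_hz
    samples = 1000
    lit = 0
    for i in range(samples):
        t = exposure_start_s + exposure_s * i / samples
        phase = (t % period) / period
        lit += 1 if phase < duty else 0
    return lit / samples


# A 1 ms exposure can start during the LED's off half-cycle and miss it entirely;
# a 10 ms exposure always spans at least one full 5 ms cycle.
print(led_on_fraction(0.0026, 0.001))   # short exposure: ~0.0, light appears off
print(led_on_fraction(0.0026, 0.010))   # long exposure: ~0.5, light always captured
```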
“You can’t just post-process an image and give something motion blur,” said Daley. “It doesn’t appear in the right way. You have to model it into the way that you’re sampling the world. You’re keeping the pixels open for a period of time, and things move. So, you often see it at the edge of images where the relative movement next to you is fast, so your own motion, or where vehicles are passing in front of you. Motion blur depends on the type of imager, so some imagers have what we call a global shutter. They sample the world for all lines at the same time. Most of them don’t; most of them have rolling shutters, where the top line of pixels is sampled a microsecond before the next line, and a microsecond before the next line, so by the time you’ve got from the top to the bottom of the image you’re at quite a significant time difference. A pillar, a pole, actually becomes slanted in the image because you’ve passed it.
“Now, if you’re training a safety-critical system to recognize the world, you can’t ignore some of the fundamental problems you have with the way that they are engineered, so you have to model that in. Again, that is where the ray tracer helps. It’s about enabling us to sample the world in exactly the same way as a physical chip does,” Daley explained.
“The ray tracer has given us the sensor’s image as accurately as possible. We need to give customers the answers as well, so you have to create bounding boxes, very simplistic sets of training data, 2D bounding boxes. Then you also do things like semantic segmentation. People are manually labeling images; they’re taking half an hour per frame to draw around a set of pixels to say, ‘That was a car.’ In simulation you get it instantaneously, so we’re able to not only produce the highest-quality sensor data, but also the most accurate and cheapest way to label it. The combination of the two is immensely powerful.”
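Because a renderer knows which object every pixel belongs to, labels fall out of the simulation for free. The sketch below derives 2D bounding boxes from a hypothetical per-pixel instance-ID map; the data layout is an assumption for illustration, not rFpro’s output format.

```python
import numpy as np


def bounding_boxes_from_id_map(instance_id_map, class_of_instance):
    """Derive 2D bounding boxes directly from a renderer's per-pixel
    instance IDs, instead of drawing around pixels by hand.
    instance_id_map is an (H, W) integer array; 0 means background.
    class_of_instance maps an instance ID to a class name, e.g. 'car'."""
    boxes = []
    for inst_id in np.unique(instance_id_map):
        if inst_id == 0:
            continue
        rows, cols = np.nonzero(instance_id_map == inst_id)
        boxes.append({
            "class": class_of_instance[inst_id],
            "x_min": int(cols.min()), "y_min": int(rows.min()),
            "x_max": int(cols.max()), "y_max": int(rows.max()),
        })
    return boxes


# Toy frame: instance 1 is a car occupying a block of pixels.
id_map = np.zeros((480, 640), dtype=int)
id_map[200:300, 100:250] = 1
print(bounding_boxes_from_id_map(id_map, {1: "car"}))
```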