Simulation plays a vital role in developing autonomous transport. It allows teams to test early, iterate quickly, and explore situations that would be difficult, impractical, or dangerous to test in the physical world.
Yet simulation is only valuable if we can trust that it produces reliable inputs to the autonomous driving software. If simulated sensors do not behave like the sensors on the real vehicle, the results will be misleading, and without validated sensor models it becomes harder to rely on those results when we move from virtual testing to real-world operations.
Validated sensor models help ensure that when autonomy software is tested in a virtual environment, it is tested with sensor outputs that closely reflect real-world sensor behavior. Sensor signals are the data the sensors send to the virtual driver about its surroundings; for the simulation results to be meaningful, they need to match real-world behavior closely enough for the intended purpose. This is critical for safe autonomous operations in the real world.
Feng Liu is an experienced simulation engineer at Volvo Autonomous Solutions.
A sensor model is a mathematical description of how a sensor works and behaves. In simulation, it recreates what a real sensor would output when it “looks” at a specific environment, obstacle, or moving object. In essence, that means modeling the physics of the sensor: how it detects the world around it and turns what it detects into data.
For simulation testing, each component needs its own model. The environment, such as trees, rocks, landscape details, roads, and signs, is carefully reproduced. The truck itself is also simulated, along with sensors such as LiDAR and radar. But that is not enough: some sensors contain built-in software that filters and processes signals before the data reaches the virtual driver, and that internal processing shapes what the sensor outputs. The goal of the sensor model is therefore not only to capture the physics of sensing but also to reproduce how the sensor behaves in practice.
At a practical level, the model should match basic characteristics such as resolution and sampling rate. If the real LiDAR scans at a certain frequency, the simulation should do the same. If the real sensor has a specific scan pattern or number of laser lines, the model should reflect that. When those fundamentals align, simulation becomes a more reliable environment for development.
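As a minimal illustration, those fundamental characteristics can be written down as an explicit, checkable configuration that the simulated sensor must share with the real one. The parameter names and values below are hypothetical, not those of any specific LiDAR:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class LidarSpec:
    """Basic characteristics a simulated LiDAR should share with the real unit."""
    scan_rate_hz: float       # how often a full scan is produced
    num_channels: int         # number of laser lines
    horizontal_res_deg: float # angular spacing of points in a scan line
    max_range_m: float        # maximum detection distance

def mismatched_fundamentals(model: LidarSpec, real: LidarSpec) -> list[str]:
    """Return the names of any fundamental parameters that disagree."""
    real_values = asdict(real)
    return [k for k, v in asdict(model).items() if v != real_values[k]]

# Hypothetical example: the model uses half the channel count of the real unit.
real = LidarSpec(scan_rate_hz=10.0, num_channels=64,
                 horizontal_res_deg=0.2, max_range_m=120.0)
model = LidarSpec(scan_rate_hz=10.0, num_channels=32,
                  horizontal_res_deg=0.2, max_range_m=120.0)

print(mismatched_fundamentals(model, real))  # -> ['num_channels']
```

Making the check explicit turns "the fundamentals align" from an informal judgment into something that can be asserted automatically whenever either the model or the sensor hardware changes.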
Validating a sensor model means checking, in a structured way, whether the model behaves like the real sensor. This is an iterative process that builds confidence over time, and it starts with understanding both the sensor and the environment it will operate in. Acceptance criteria for what constitutes a sufficient match are defined in advance, based on the intended use of the simulation and the specific questions it is being asked to answer.
The first step is to study the sensor specifications to understand what the sensor is designed to do. For a LiDAR, that might mean how it scans, how frequently it updates, and what kinds of signals it sends and receives.
Then they take the sensor out into the real world and record what it outputs. They do this not only on clear days, but also in rain, fog, snow, and dust: conditions that tend to challenge autonomy. This is where you learn what the sensor actually does, including the effects of any built-in processing inside it. For example, some LiDARs use internal software to filter out reflections from dust, while others handle those conditions differently. This in turn affects what the sensor outputs.
Once the engineers have real recordings, they compare the simulated sensor output against what the real sensor produced in the same type of situation. They use defined measures to make that comparison practical, such as how accurately the sensor reports distance, how stable its detections are, or how its signal strength changes with conditions. In other words, the real sensor becomes the reference point for the sensor model.
If the match is not good enough, the model is adjusted and the comparison is repeated. Over time, this loop helps the simulated sensor to capture the real sensor’s limitations and characteristics. That is what makes simulation results meaningful, because the autonomy software is being tested with sensor inputs that reflect reality.
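The compare-adjust-repeat loop described above can be sketched in a few lines. The metric (mean absolute range error), the acceptance threshold, and the single bias parameter being tuned are all illustrative choices for this sketch, not a prescribed validation method:

```python
import statistics

def mean_abs_error(real_ranges, sim_ranges):
    """Defined measure: average absolute difference in reported distance."""
    return statistics.fmean(abs(r - s) for r, s in zip(real_ranges, sim_ranges))

def validate(real_ranges, simulate, params, threshold_m=0.05, max_iters=20):
    """Iteratively adjust the model until its output matches the real
    recording within the acceptance criterion defined in advance."""
    for _ in range(max_iters):
        sim_ranges = simulate(params)
        err = mean_abs_error(real_ranges, sim_ranges)
        if err <= threshold_m:
            return params, err               # model accepted at this criterion
        params["range_bias_m"] -= 0.5 * err  # crude correction step
    return params, err                       # best effort after max_iters

# Toy demonstration: a simulator whose only flaw is a constant range bias.
real_ranges = [10.0, 20.0, 30.0]
sim_fn = lambda p: [r + p["range_bias_m"] for r in real_ranges]
params, err = validate(real_ranges, sim_fn, {"range_bias_m": 0.4})
print(round(err, 3))
```

In practice the comparison would cover many recorded scenarios and several measures at once, but the structure is the same: a quantified gap, a pre-agreed threshold, and a loop that only stops when the model is good enough for its intended use.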
Sensor models can be built in different ways, depending on what you need the simulation to do. One approach is physics-based modeling, where you describe how the sensor works and how signals interact with the world, such as how light or radar waves travel and reflect off different materials.
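In its simplest form, a physics-based model propagates a signal to a surface and computes what comes back. The sketch below models only free-space loss and surface reflectivity for a diffuse target; it deliberately omits beam divergence, incidence angle, and atmospheric attenuation, and its constants are illustrative:

```python
import math

def lidar_return_power(emit_power_w, range_m, reflectivity, aperture_m2=1e-3):
    """Simplified LiDAR link budget: for a diffuse target filling the beam,
    the return signal falls off with the square of range and scales with
    the target's reflectivity."""
    if range_m <= 0:
        raise ValueError("range must be positive")
    return emit_power_w * reflectivity * aperture_m2 / (math.pi * range_m ** 2)

# A dark target at 40 m returns far less power than a bright one at 20 m:
# the ratio is (0.8 / 0.1) * (40 / 20)**2 = 32x.
near_bright = lidar_return_power(1.0, 20.0, reflectivity=0.8)
far_dark = lidar_return_power(1.0, 40.0, reflectivity=0.1)
print(near_bright > far_dark)  # True
```

The strength of this style of model is exactly what the paragraph above says: because the behavior follows from the physics, it generalizes to materials and geometries that were never recorded.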
Another approach is data-driven modeling, where large sets of real sensor recordings are used to build a statistical model of the sensor’s behavior. Data-driven models can be very effective within the types of conditions they have been designed for, while physics-based models help explain behavior in a more general way. Therefore, engineers often combine both approaches and choose different levels of detail, so the simulation is realistic enough for the task while still fast enough to run at scale.
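A data-driven component can be as simple as fitting the statistics of a real sensor's measurement errors and replaying them in simulation. The sketch below fits a bias and spread from recordings against known distances; the Gaussian noise assumption and the toy numbers are illustrative, not drawn from any actual sensor:

```python
import random
import statistics

def fit_noise_model(recorded_ranges, true_ranges):
    """Fit a simple statistical noise model (bias and spread) from
    recordings of a real sensor against ground-truth distances."""
    residuals = [r - t for r, t in zip(recorded_ranges, true_ranges)]
    return statistics.fmean(residuals), statistics.stdev(residuals)

def simulate_range(true_range, bias, sigma, rng=random):
    """Apply the learned noise statistics to an ideal simulated measurement."""
    return true_range + rng.gauss(bias, sigma)

# Toy recordings: the real sensor reads roughly 3 cm long, with small spread.
truth = [10.0, 10.0, 10.0, 10.0]
recorded = [10.04, 10.02, 10.03, 10.03]
bias, sigma = fit_noise_model(recorded, truth)

rng = random.Random(0)  # seeded for reproducibility
print(round(simulate_range(10.0, bias, sigma, rng), 3))
```

This captures what the sensor does in the recorded conditions without explaining why, which is why such models are trusted only within the range of conditions they were fitted on.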
Just as both physics-based and data-driven approaches offer engineers flexibility in building sensor models, each can be implemented at varying levels of fidelity. Fidelity describes how closely a sensor model replicates real-world sensor behavior, and is a key consideration when determining whether a model is appropriate for its intended application. By carefully choosing the level of fidelity—whether high or low—engineers ensure the simulation provides meaningful results for the development tasks at hand.
A high-fidelity model tries to reproduce more details in how a sensor behaves, including the kinds of imperfections you see in real life. A lower-fidelity model focuses on the essentials and leaves out some of those details to keep the simulation simpler and faster. Comparing the model with real sensor behavior is what shows whether that level of fidelity is sufficient.
That distinction matters because sensor models are used for different purposes during development. Sometimes engineers are checking the basics: does the autonomy software run correctly from start to finish, and are all signals connected and flowing as they should? For that kind of work, the sensor model does not need to capture every nuance. A simpler model is often enough to move quickly and find issues earlier.
Other times the goal is to understand performance. If a team is investigating why object detection gets worse in certain situations, then fidelity becomes more important. In those cases, the fidelity of the model can be the difference between a useful simulation and a misleading one. Heavy snow, for instance, can be extremely difficult to model credibly with the game engines commonly used for real-time simulation. But it can be modeled with more advanced methods, which allow engineers to study the sensors' detection performance in snowy conditions.
This creates a constant trade-off between detail and speed. In general, the more faithfully a sensor model reproduces real-world behavior, the more computing power and time the simulation requires. That can limit how many scenarios engineers are able to test. Simpler models run faster and make it possible to explore more cases, but they may leave out details that matter for specific questions. By comparing the model with real sensor behavior, engineers can decide what level of fidelity is sufficient for the task and what the simulation can, and cannot, be trusted to answer.
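One common way to manage this trade-off is to put fidelity behind a shared interface, so the same test scenario can run with either a fast ideal model or a slower, more detailed one. The structure below is a generic sketch of that idea, not any specific simulation framework:

```python
from typing import Protocol

class SensorModel(Protocol):
    def measure(self, true_range_m: float) -> float: ...

class IdealLidar:
    """Low fidelity: returns ideal ranges, fast enough for large-scale runs."""
    def measure(self, true_range_m: float) -> float:
        return true_range_m

class BiasedLidar:
    """Higher fidelity: adds a validated range bias, at extra cost per call.
    (A real high-fidelity model would add far more than a constant bias.)"""
    def __init__(self, bias_m: float):
        self.bias_m = bias_m
    def measure(self, true_range_m: float) -> float:
        return true_range_m + self.bias_m

def run_scenario(model: SensorModel, true_ranges):
    """The scenario code is identical regardless of the fidelity chosen."""
    return [model.measure(r) for r in true_ranges]

print(run_scenario(IdealLidar(), [10.0, 20.0]))
print(run_scenario(BiasedLidar(bias_m=0.05), [10.0, 20.0]))
```

Keeping the interface fixed lets engineers pick the cheapest model that the validation evidence says is sufficient for each question, and swap in a higher-fidelity one only where it matters.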
The true value of sensor modeling is that it enables simulations that not only reflect real-world behavior, but also quantify how accurate that reflection is. By rigorously comparing sensor models against real measurements, we build confidence that simulation results are reliable. This tight link between modeling and validation lets teams iteratively improve models, uncover issues earlier, and check fixes virtually before committing to physical tests, which take time and are costly. When engineers understand how closely simulation matches reality, they can make faster, better decisions and focus their efforts where it matters most.
At V.A.S., we use sensor models across the full product lifecycle, combining high-fidelity physics-based models with data-driven statistical models to support efficient development and a safe product. This is especially critical in demanding environments such as mining and quarry operations, where dust, snow, fog, and rain can rapidly change the conditions for safe operation. With simulations based on reliable sensor models, engineers can predict system performance in adverse conditions, verify robustness, and take action to ensure safe operations.
Despite the core role simulation plays, it is important not to treat it as standalone proof of safety. At V.A.S. we believe that safety should rest on a multi-pronged approach, where confidence comes from validated simulations, physical testing, structured safety architecture reviews, and continuous operational monitoring. In this way each layer of evidence reinforces the others and closes the gap between the predicted and real-world performance of our autonomous transport solutions.