Digital twins: the ultimate virtual proving ground

Yufei Zhang
2025-06-11

Autonomous vehicles promise transformative gains in safety, efficiency, and productivity. Delivering on that promise requires developing and validating systems that can safely navigate the real world. Validation is one of the biggest challenges in autonomy. It’s not just about making a system work: it’s about proving, with confidence, that it will behave safely and predictably under a wide range of conditions. Real-world testing alone is expensive, hard to replicate, time-consuming, and potentially dangerous. To accelerate development and mitigate risks, Volvo Autonomous Solutions (V.A.S.) employs digital simulations — a powerful way to test autonomous driving (AD) systems across thousands of scenarios safely and efficiently.

 

At the heart of this approach are digital twins: high-fidelity virtual replicas of trucks and their environments. With digital twins, we can accelerate virtual driver development and enable fast, scalable testing, helping us build safe autonomous transport solutions.

 

What is a digital twin?

A digital twin is a realistic model of a real-life place or object. At V.A.S., the concept includes two key components: an environment twin – a collection of 3D models representing the physical environment, used to build test scenarios tailored to the operational design domain (ODD) – and a vehicle twin, which digitally mirrors the truck, the trailer, and its sensor frames. Together, these twins form a simulation environment where every parameter, from road friction to lighting, can be controlled and customized. This lets us test edge cases, repeat scenarios, and accelerate learning across the full development cycle.

 

So how do we build a digital twin? Using a computer game engine! Game engines are optimized for building complex 3D environments, providing real-time rendering and built-in physics. This allows us to create visually and behaviorally realistic digital twins for autonomous vehicle testing. Their programmability enables full control over weather, lighting, and road conditions, making it easy to run and repeat complex scenarios.
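
To make that programmability concrete, here is a minimal sketch of how a test scenario might be parameterized and replayed. The ScenarioConfig fields, value ranges, and engine calls are illustrative assumptions, not our actual tooling or a specific game-engine API.

```python
from dataclasses import dataclass

@dataclass
class ScenarioConfig:
    time_of_day_h: float     # 0-24, drives sun position and lighting
    rain_intensity: float    # 0.0 (dry) to 1.0 (downpour)
    road_friction: float     # tire-road friction coefficient
    fog_visibility_m: float  # visibility distance in metres
    random_seed: int         # fixes randomness so runs are repeatable

def run_scenario(engine, config: ScenarioConfig) -> dict:
    """Apply one parameter set to the simulated world and run it to completion."""
    engine.set_lighting(time_of_day_h=config.time_of_day_h)
    engine.set_weather(rain=config.rain_intensity,
                       fog_visibility_m=config.fog_visibility_m)
    engine.set_road_friction(config.road_friction)
    engine.seed(config.random_seed)
    return engine.run_until_done()  # logged metrics for later analysis

# The same configuration can be re-run after every software update,
# which is what makes regressions easy to catch.
night_rain = ScenarioConfig(time_of_day_h=22.0, rain_intensity=0.8,
                            road_friction=0.4, fog_visibility_m=300.0,
                            random_seed=42)
```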
 

Building the virtual world

Building the virtual world begins with aerial photogrammetry: flying camera drones over the site we want to replicate. These drones capture hundreds of high-resolution images from various angles. The 2D photos are stitched into accurate 3D meshes using computer vision algorithms, forming the foundation of the area’s digital twin. However, raw photogrammetry struggles with objects that change over time, such as season-specific vegetation. Scanning them as-is could inadvertently train the virtual driver on a narrow snapshot of the year, so we replace such elements with game-engine assets like leaves, trees, and bushes. This gives us the flexibility to change seasons, tweak plant density, or adjust lighting conditions without rescanning the site. As a result, we can efficiently conduct diverse testing in our digital twin of the test site.

 

Building the virtual truck

The virtual truck is built from Volvo’s computer-aided design (CAD) data, which defines the geometry of the chassis, wheels, suspension, and mounting points. Realistic textures are added, and a high-fidelity dynamics model simulates how the truck behaves on the road, capturing suspension movement, tire grip, and drivetrain response.
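
To give a feel for what the dynamics layer computes, below is a deliberately simplified kinematic sketch of how control inputs move a truck forward in time. The real vehicle twin is far richer – suspension, tire grip, drivetrain – and the parameter values here are illustrative only.

```python
import math
from dataclasses import dataclass

@dataclass
class TruckState:
    x: float        # position [m]
    y: float        # position [m]
    heading: float  # yaw angle [rad]
    speed: float    # longitudinal speed [m/s]

def step_kinematic(state: TruckState, accel_mps2: float, steer_rad: float,
                   wheelbase_m: float = 4.0, dt_s: float = 0.01) -> TruckState:
    """Advance the truck by one time step using a kinematic bicycle approximation."""
    new_x = state.x + state.speed * math.cos(state.heading) * dt_s
    new_y = state.y + state.speed * math.sin(state.heading) * dt_s
    new_heading = state.heading + (state.speed / wheelbase_m) * math.tan(steer_rad) * dt_s
    new_speed = state.speed + accel_mps2 * dt_s
    return TruckState(new_x, new_y, new_heading, new_speed)
```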

 

We then add digital twins of the truck’s sensors, such as radar, LIDAR, Global Navigation Satellite Systems (GNSS), and Inertial Measurement Units (IMU). These mathematically model how their real-world counterparts perceive the environment, but in a simplified, efficient way. While we don’t simulate every radar wave or photon, we model the outputs with a high degree of realism. This ensures the virtual driver receives reliable input during simulation, just as it would in the real world.
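
As a rough illustration of what “modeling the outputs rather than every photon” can mean, the sketch below turns perfect ground-truth ranges into a plausible lidar measurement by adding noise and occasional dropped returns. The function name and noise figures are assumptions for the example, not our production sensor models.

```python
import numpy as np

def simulate_lidar_scan(true_ranges_m: np.ndarray,
                        noise_std_m: float = 0.02,
                        dropout_prob: float = 0.001,
                        max_range_m: float = 200.0,
                        rng=None) -> np.ndarray:
    """Perturb ground-truth ranges so they look like a real (imperfect) lidar scan."""
    rng = rng or np.random.default_rng()
    ranges = true_ranges_m + rng.normal(0.0, noise_std_m, size=true_ranges_m.shape)
    dropped = rng.random(true_ranges_m.shape) < dropout_prob
    ranges[dropped] = np.nan  # missing returns
    return np.clip(ranges, 0.0, max_range_m)
```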

 

Testing the virtual driver

The purpose of building a simulation with digital twins is to evaluate and improve the virtual driver. The AD software is responsible for the actions of the self-driving vehicle – throttling, steering, and braking – based on input from the sensors. During the early design phase, we use idealized models to verify basic behavior. Once the design matures, we introduce “fault injection”, deliberately simulating errors, communication losses, or extreme weather. This challenges the system with real-world unpredictability and builds robustness. We learn from the outcome and update the virtual driver software accordingly. The updated software is then retested in simulation to ensure improvements before physical deployment.

 

Suppose you want to test how the virtual driver handles losing a LIDAR sensor in freezing rain. Doing that in the real world is risky, hard to control, and even harder to replicate after a software update. In contrast, with a digital twin, you simply adjust parameters – rain intensity and sensor configurations, for instance – and run the scenario in seconds.
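
As a sketch of how such a fault could be injected into a simulation loop – the names, timings, and data shapes are illustrative, not our actual tooling – a sensor feed can simply be suppressed for a configured time window:

```python
from dataclasses import dataclass

@dataclass
class FaultInjection:
    sensor: str        # e.g. "lidar_front"
    start_s: float     # when the fault begins, in simulation time
    duration_s: float  # how long the sensor stays silent

def maybe_drop(frame, sensor_name: str, sim_time_s: float, faults: list):
    """Return None while a configured fault is active; otherwise pass the frame through."""
    for f in faults:
        if (f.sensor == sensor_name
                and f.start_s <= sim_time_s < f.start_s + f.duration_s):
            return None  # the virtual driver sees a silent sensor
    return frame

# Example: the front lidar goes dark for 10 s, 30 s into a freezing-rain run.
faults = [FaultInjection(sensor="lidar_front", start_s=30.0, duration_s=10.0)]
```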

 

This type of simulation provides a “God mode” view with perfect ground truth – the precise, internally known data within the simulation – for all positions, object shapes, and environmental parameters. Comparing the virtual driver’s sensor readings and estimates directly against that ground truth helps us evaluate how well it is performing.
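
A minimal sketch of that comparison, assuming the virtual driver’s estimated object positions and the simulator’s exact positions are available as arrays (the names and metrics are illustrative):

```python
import numpy as np

def position_error_m(estimated_xy: np.ndarray, ground_truth_xy: np.ndarray) -> dict:
    """Compare estimated object positions against the simulator's exact ground truth."""
    errors = np.linalg.norm(estimated_xy - ground_truth_xy, axis=1)
    return {
        "mean_error_m": float(errors.mean()),
        "p95_error_m": float(np.percentile(errors, 95)),
        "max_error_m": float(errors.max()),
    }
```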

 

Software and hardware in the loop

To refine the AD system’s behavior, we’ve created a closed-loop simulation setup, where the same control signals sent by the virtual driver can be directed either to a real truck or its digital twin. In a closed-loop simulation, the output of the system is fed back into the virtual environment to influence the next set of inputs in real time, creating a continuous loop of action and reaction. This setup allows us to observe how the software performs in both physical and virtual environments. We use any mismatches to pinpoint weaknesses or bugs, letting us fine-tune the system before deployment.
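
Conceptually, one tick of such a closed loop looks like the sketch below. The world and virtual_driver objects and their method names are placeholders for illustration; the point is that the driver’s output feeds straight back into the next simulation step.

```python
def closed_loop_step(world, virtual_driver, dt_s: float = 0.01):
    """One tick of a closed-loop simulation: sense -> decide -> act -> advance."""
    sensor_data = world.render_sensors()           # what the twin's sensors "see"
    controls = virtual_driver.decide(sensor_data)  # throttle, brake, steering
    world.apply_controls(controls)                 # actuate the virtual truck
    world.advance(dt_s)                            # physics moves the world forward
    return controls

# The same `controls` could instead be routed to a real truck, which is what
# lets us compare behavior across physical and virtual environments.
```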

 

At the heart of our testing process is closed-loop Software-in-the-Loop (SIL) simulation. This method runs the virtual driver software inside a simulation containing digital twins of the site and truck. The AD system reacts to simulated environments and decides when to steer, brake, or accelerate. Every moment is recorded – what the sensors “see,” how the environment appears, and what decisions the software makes. These insights help us detect flaws and improve algorithms more effectively than in the physical world.

 

We build on SIL testing with Hardware-in-the-Loop (HIL) testing. In HIL, physical components like electronic control units (ECUs) and computing hardware are connected to the simulator alongside the virtual driver. These HIL rigs operate the virtual truck just as they would a physical one, offering a hybrid testbed where software meets real hardware constraints. This uncovers hardware-software integration issues, latency effects, and timing bugs that SIL alone might miss. Together, SIL and HIL form the foundation of our advanced simulation strategy: a closed-loop, full-stack validation approach, enabling us to fine-tune the system before real-world testing.

 

Balancing fidelity and building confidence

Simulation fidelity describes how closely a scenario mirrors reality. High-fidelity simulations are more realistic but slower and resource-intensive, while lower-fidelity ones run faster but sacrifice detail. The goal is to find the right balance: realistic enough to be useful, yet efficient enough to scale. Even if the virtual driver performs correctly in 99% of simulations, that doesn’t guarantee real-world success. Such a rate builds confidence, but it is a necessary condition for safe deployment, not a sufficient one.

 

To strengthen confidence, we balance fidelity with broad scenario coverage. High fidelity ensures accuracy under realistic conditions, while wide coverage tests the system’s performance across diverse situations. We stress-test edge cases by exaggerating road conditions, weather, or visibility to evaluate how the system handles the unexpected. Simulation doesn’t replace physical testing, but it helps us focus and refine what needs to be verified in the real world.

 

Enhancing customer value

The benefits of digital twins extend beyond testing: they help us design better operations for our customer sites. We can build full-scale mining sites complete with terrain, equipment, and traffic patterns, and run production shifts virtually.

 

This predictive capability helps us answer our customers' important questions before construction begins. Should we add another hauler? Will traffic rerouting reduce idle time? Can we improve fuel efficiency by redesigning traffic routes? Simulating these choices leads to safer, more efficient, and more sustainable operations. 
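As a toy example of the kind of question this can answer – not our actual site simulation, and with purely illustrative numbers – a few lines are enough to see when adding another hauler stops paying off because the loader becomes the bottleneck:

```python
def tonnes_per_shift(n_haulers: int,
                     load_min: float = 4.0,         # loader serves one hauler at a time
                     haul_cycle_min: float = 20.0,  # drive out, dump, return
                     payload_t: float = 40.0,
                     shift_min: float = 480.0) -> float:
    """Throughput is limited by whichever saturates first: the haulers or the loader."""
    hauler_limited = n_haulers * shift_min / (haul_cycle_min + load_min)
    loader_limited = shift_min / load_min
    return payload_t * min(hauler_limited, loader_limited)

for n in range(1, 8):
    print(n, round(tonnes_per_shift(n)))  # throughput flattens once the loader saturates
```

A full site simulation adds traffic interactions, queuing, and terrain, but the underlying question is the same.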

 

Looking ahead

Procedural generation, a method for creating 3D environments through algorithms, and AI can make digital twins even more powerful. AI is already used to perform scenario mining – automatically combining environmental parameters to uncover failure points. This allows us to test edge cases at scale and continuously challenge the virtual driver.
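
A minimal sketch of scenario mining as a randomized search over environment parameters – the parameter ranges and the run_scenario callable are assumptions for illustration, and in practice the search is guided rather than purely random:

```python
import random

def mine_failures(run_scenario, n_trials: int = 1000, seed: int = 0) -> list:
    """Sample parameter combinations and keep the ones where the virtual driver fails."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_trials):
        params = {
            "road_friction":    rng.uniform(0.1, 0.9),
            "rain_intensity":   rng.uniform(0.0, 1.0),
            "fog_visibility_m": rng.uniform(30.0, 2000.0),
            "time_of_day_h":    rng.uniform(0.0, 24.0),
        }
        result = run_scenario(**params)
        if not result["passed"]:
            failures.append({**params, "reason": result["reason"]})
    return failures  # candidate edge cases worth a closer look or physical verification
```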

 

In the future, we may craft test sites using natural language prompts – “Generate a snow-covered, 3 km² mining site” – and instantly run the scenario. AI could also create new test scenarios to uncover blind spots and adapt models on the fly.

 

Digital twins aren’t replacements – they’re accelerators. They broaden what we can test and validate, helping us build smarter autonomous vehicles, guided by safety.