Kevin McNamara
At Parallel Domain, our mission is clear: Advance machine perception by providing the best simulation platform in the world. Today, I want to share some thoughts on why this mission matters more than ever, particularly as AI systems are becoming ubiquitous in our daily lives.
Cars with lane-keeping assistance, drones delivering groceries, door cameras that alert you to a delivery, an autonomous Waymo that swerves to avoid a child running into the street. Embodied AI is finding its way into every facet of our lives, whether we directly use the technology or not. All of these systems depend on some form of perception: the ability of an AI system to see its surroundings. As these systems reach large-scale deployment in the real world, comprehensive testing has become paramount to ensuring their safe operation. But we’re not testing enough.
Why not? The combinatorial explosion of places, people, things, behaviors, and environmental conditions is so massive that companies can’t exhaustively test their systems in the real world. The time, money, and physical danger of doing so are prohibitive.
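To put rough, illustrative numbers on it: even a coarse taxonomy of, say, 100 locations, 50 agent types, 20 behaviors, and 10 combinations of weather and lighting multiplies out to 100 × 50 × 20 × 10 = 1,000,000 distinct scenarios, and real deployments face variation far finer-grained than that.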
This is where simulation can become transformative, with the promise of testing unlimited scenarios safely in a virtual world. However, previous approaches have struggled to achieve the realism and scale needed for teams to trust that virtual results translate to real-world performance. But what if we could recreate the real world as a series of digital twins that give us the knobs and controls to test all of these permutations? It would almost be like driving one mile to generate a thousand miles of useful tests.
The biggest problem with traditional 3D simulation is that it’s very difficult to scale the variety of virtual worlds and scenarios while maintaining a sufficient level of realism. Our new approach, combining digital twin generation with controllable 3D simulation, provides a breakthrough in simulation realism and scalability. Let’s discuss both.
Scale. What if we could transform modest amounts of data collected in the real world into massive virtual test suites? Our PD Replica technology enables users to convert their existing fleet data into realistic simulations, and then use our API to create the novel variations needed for valuable testing. Going beyond static digital twins, our simulation engine allows teams to create systematic variations of their real-world captures. This approach transforms each piece of fleet data into a foundation for extensive testing. Examples might include changing the weather, lighting, or time of day, adding new vehicles and pedestrians, or altering the behaviors of agents already in the scene.
This multiplicative approach means you can leverage fleet data to create much larger, richer, and more challenging test scenarios, to an extent that would be impossible to capture in the real world.
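To make the multiplicative idea concrete, here is a minimal sketch of what generating variations from a single capture could look like. The Scenario class and its fields are illustrative placeholders, not the actual PD Replica API:

```python
from dataclasses import dataclass, replace
from itertools import product

@dataclass(frozen=True)
class Scenario:
    """Illustrative stand-in for a digital twin of one real capture."""
    capture_id: str
    weather: str = "clear"
    time_of_day: str = "noon"
    pedestrians: int = 0

# One real drive becomes the seed for many virtual tests.
twin = Scenario(capture_id="fleet-drive-0421")

weathers = ["clear", "rain", "fog"]
times = ["noon", "dusk", "night"]
crowds = [0, 5, 20]

# 3 x 3 x 3 = 27 scenario permutations from a single capture.
variations = [
    replace(twin, weather=w, time_of_day=t, pedestrians=p)
    for w, t, p in product(weathers, times, crowds)
]
print(f"{len(variations)} test scenarios from 1 real drive")
```

Each additional axis of variation multiplies the suite again, which is where the one-mile-to-a-thousand-miles leverage comes from.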
Realism. For these simulations to have value, they have to be sufficiently realistic: the AI system has to react to a simulated test in the same way it would to a real one. We are ultimately interested in ensuring real-world performance. This new approach with digital twins affords us a new type of comparison: perception performance can be measured on both the simulated drive and the original drive that created it, providing an apples-to-apples comparison between virtual and real-world performance that was never possible before.
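Framed in code, that comparison is a paired evaluation: run the same perception stack on the original capture and on its twin, score both against the same ground truth, and look at the gap. A minimal sketch, where `model`, `metric`, and the aligned frame lists are assumptions for illustration (`metric` stands in for whatever score a team actually tracks, such as detection mAP):

```python
def sim_to_real_gap(model, real_frames, twin_frames, ground_truth, metric):
    """Score one perception model on a real drive and on its digital
    twin, against shared ground truth, and report the difference."""
    real_score = metric([model(f) for f in real_frames], ground_truth)
    twin_score = metric([model(f) for f in twin_frames], ground_truth)
    # A small gap is evidence the twin is realistic enough for results
    # on its variations to transfer to the road.
    return real_score - twin_score
```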
Our recent research has shown that these digital twins provide a significant boost to realism and data efficacy, benchmarked against open datasets as well as real-world data from one of the world’s largest Tier 1 suppliers. Stay tuned for a full blog post on this next month.
Once these virtual test suites are created, there are two simulation approaches that serve different but complementary purposes:
Open-Loop Testing
In open-loop testing, sensor data is generated from predefined scenarios without real-time feedback from the perception system. Think of it as playing back a recording where the scene unfolds exactly as scripted, regardless of how the perception system responds. This approach is particularly valuable for systematic verification of perception outputs and for regression testing, since every model version is scored against identical inputs.
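Structurally, an open-loop harness is just playback plus scoring. A toy sketch, with a threshold "detector" standing in for a real perception model and brightness values standing in for frames (all names here are hypothetical):

```python
def open_loop_eval(model, frames, labels):
    """Open-loop: play back a fixed scenario and score the model.
    The scene unfolds identically no matter what the model outputs."""
    hits = sum(model(f) == y for f, y in zip(frames, labels))
    return hits / len(frames)

# Toy stand-ins: a "frame" is a brightness value, the label says
# whether a pedestrian is present.
frames = [12, 85, 40, 97, 3]
labels = [False, True, False, True, False]

def detector(frame):
    return frame > 50

# Regression gate: every model version sees identical inputs, so
# any score change is attributable to the model, not the test.
assert open_loop_eval(detector, frames, labels) == 1.0
```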
Closed-Loop Testing
Closed-loop testing creates a dynamic environment where the perception system’s outputs directly influence how the scenario unfolds. Like a real vehicle on the road, each decision affects what happens next. This approach enables validation of emergent real-world behavior and of end-to-end system integration.
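The defining difference shows up in the loop structure: each simulation step consumes the system’s previous action before producing the next observation. A toy sketch of that feedback loop, with a one-dimensional car-following scenario in place of a real simulator (ToyWorld and policy are illustrative, not part of any real tool):

```python
class ToyWorld:
    """Toy 1-D scenario: the ego vehicle closes on a lead vehicle."""
    def __init__(self):
        self.gap = 30.0  # metres to the lead vehicle

    def reset(self):
        self.gap = 30.0
        return self.gap

    def step(self, action):
        self.gap -= action  # accelerating shrinks the gap, braking grows it
        return self.gap

    def collided(self):
        return self.gap <= 0.0

def policy(gap):
    """Toy perceive-and-decide step: close in, then brake when near."""
    return 2.0 if gap > 10.0 else -1.0

def run_closed_loop(world, policy, steps=100):
    """Closed-loop: the system's output feeds back into the simulator,
    so each decision changes what it observes next."""
    obs = world.reset()
    for _ in range(steps):
        obs = world.step(policy(obs))  # action alters the next observation
        if world.collided():
            return False  # the scenario diverged into a crash
    return True

print("run completed safely:", run_closed_loop(ToyWorld(), policy))
```

In an open-loop test the crash branch can never be reached by the model’s own choices; in a closed-loop test it is exactly those choices that are under evaluation.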
The choice between open and closed-loop testing often depends on the development stage and specific testing goals. Most mature perception testing strategies employ both approaches: open-loop for systematic verification and regression testing, and closed-loop for validating real-world behavior and system integration.
The ultimate goal of thorough perception testing is safety. Reliable perception is foundational to autonomous systems making the right decisions in critical situations. By enabling comprehensive testing of perception systems across countless scenarios, we’re helping to build a future where autonomous vehicles can consistently identify and respond to their environment safely and effectively.
Want to try PD Replica for yourself? We offer a free trial for qualified individuals where you can get started building your own simulated test suite. Fill out the form below to connect with someone on our team, or send us a message on our contact-us page.