Our mission at Parallel Domain (PD) is to empower developers to test and validate their autonomous systems with the most realistic and controllable sensor simulation software on the market. At the heart of our offering is PD Replica, a technology that creates pixel-accurate digital twins of real-world locations by leveraging customer-provided sensor logs and captures. PD Replica employs advanced scene reconstruction and rendering techniques to provide developers with realistic simulation environments suitable for open- and closed-loop testing and validation of perception models. However, while PD Replica Sim can programmatically generate scene variations, the breadth of environmental controls available today is still limited. This is where the NVIDIA Cosmos Transfer world foundation model (WFM) can expand the range of variations available through text-based prompts.
PD Replica Sim delivers high-fidelity, fully controllable simulations tailored to the needs of autonomous system developers. It supports synchronized multi-sensor (camera, lidar, radar), multi-modal (RGB, segmentation, bounding boxes, depth, keypoints) data streams. This allows teams to run repeatable, deterministic tests with precise control over scenario configuration, capabilities that are critical for tasks like regression testing, scenario tuning, and compliance-driven validation (such as NCAP).
With pixel-level realism and accurate scene annotations (including depth, bounding boxes, keypoints, segmentation, and motion vectors), PD Replica Sim enables developers to validate not just high-level planning behavior, but perception stack performance itself. This level of fidelity and control allows teams to shift more testing from the road to simulation, thereby saving time, reducing costs, and speeding up development.
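To make the kind of scenario configuration and annotation requests described above concrete, here is a minimal, hypothetical sketch of how a deterministic scenario with synchronized sensors and per-frame annotations might be declared. The `Scenario`, `Camera`, `Lidar`, and annotation names below are illustrative placeholders, not the actual PD Replica Sim API.

```python
# Illustrative only: these classes stand in for a scenario-configuration API
# and are NOT the actual PD Replica Sim interface.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Camera:
    name: str
    resolution: Tuple[int, int]
    fps: int
    annotations: List[str] = field(default_factory=list)

@dataclass
class Lidar:
    name: str
    channels: int
    rotation_hz: int

@dataclass
class Scenario:
    replica_location: str        # digital twin reconstructed from customer logs
    weather: str
    time_of_day: str
    random_seed: int             # fixed seed -> repeatable, deterministic runs
    sensors: List[object] = field(default_factory=list)

# A clear-weather baseline scenario with synchronized camera + lidar streams
# and the per-frame annotations needed for perception validation.
baseline = Scenario(
    replica_location="example_city_block_replica",   # hypothetical twin name
    weather="clear",
    time_of_day="noon",
    random_seed=42,
    sensors=[
        Camera(
            name="front_wide",
            resolution=(1920, 1080),
            fps=30,
            annotations=["rgb", "segmentation", "depth",
                         "bounding_boxes", "keypoints", "motion_vectors"],
        ),
        Lidar(name="roof_lidar", channels=128, rotation_hz=10),
    ],
)

print(baseline.replica_location, len(baseline.sensors))
```

Running the same scenario object with the same seed would, in this sketch, be the mechanism that makes regression tests repeatable from run to run.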
Autonomous systems must operate safely across an extraordinary range of real-world scenarios. Validating performance under varying weather conditions, different lighting, and diverse dynamic situations is critical. However, capturing all these variations through real-world data alone is impractical, costly, or simply impossible. This limitation significantly impacts both comprehensive testing (closed-loop simulations) and the diversity of simulation-based datasets used for perception and prediction models.
Recent advancements in generative AI are beginning to address this critical industry challenge. Generative models are capable of synthesizing highly realistic images and videos by gradually refining noise into structured outputs. Crucially, these models can condition their generation on various input modalities such as semantic segmentation masks, depth maps, edge maps, or blurred images. By doing so, they aim to enable targeted and realistic modifications to environmental elements, such as weather or lighting, while striving to preserve overall scene fidelity.
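As a rough intuition for how such conditioning works, the toy sketch below iteratively refines noise toward an image while a stack of control signals (segmentation, depth, edges) steers each refinement step. The `toy_denoiser` is a stand-in for a trained network, not a real diffusion model.

```python
# Toy illustration of conditioned iterative refinement (not a real diffusion model).
import numpy as np

H, W = 64, 64
rng = np.random.default_rng(0)

# Control signals that constrain the generation (stand-ins for real maps).
segmentation = rng.integers(0, 4, size=(H, W)).astype(np.float32) / 3.0
depth        = np.linspace(0.0, 1.0, H * W, dtype=np.float32).reshape(H, W)
edges        = (rng.random((H, W)) > 0.95).astype(np.float32)
controls     = np.stack([segmentation, depth, edges])          # (3, H, W)

def toy_denoiser(x, controls, t):
    """Stand-in for a trained network: nudges the noisy image toward the
    structure implied by the control stack, more strongly as noise decreases."""
    target = controls.mean(axis=0)            # crude 'structure' from controls
    return x + (1.0 - t) * (target - x) * 0.2

# Start from pure noise and gradually refine it into a structured output,
# with every step guided by the same control signals.
x = rng.standard_normal((H, W)).astype(np.float32)
for step in range(50):
    t = 1.0 - step / 50.0                     # noise level from 1.0 down to 0.0
    x = toy_denoiser(x, controls, t)

print("final output range:", float(x.min()), float(x.max()))
```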
We see significant potential in generative AI to complement and enhance our PD Replica pipeline. These technologies could dramatically scale the ability to generate varied and realistic scenarios, addressing the growing needs of developers for robust simulation environments and diverse synthetic training datasets.
One particularly promising multi-control generative AI model is NVIDIA’s recently released Cosmos Transfer world foundation model. Cosmos Transfer generates video conditioned on structured, ground-truth inputs such as segmentation masks, depth maps, and edge maps, enabling controlled changes to environments and lighting.
These input modalities provide Cosmos Transfer with detailed structural guidance, helping to maintain core scene characteristics while modifying environmental details such as weather, lighting, or dynamic elements. This approach aligns closely with the needs we have identified from our customers, who seek efficient, scalable methods of generating realistic variants of real-world scenes.
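In practice, a variation request of this kind amounts to pairing the structural controls rendered by the simulator with a text prompt describing the desired change. The snippet below is a hypothetical job specification meant only to illustrate the shape of such a request; the field names and paths are placeholders and do not reflect the actual Cosmos Transfer interface.

```python
# Hypothetical variation request pairing simulator-rendered controls with a
# text prompt; field names are illustrative, not the actual Cosmos Transfer API.
import json

variation_request = {
    "source_clip": "replica_sim/front_wide/clear_noon.mp4",
    "controls": {
        "segmentation": "replica_sim/front_wide/segmentation.mp4",
        "depth":        "replica_sim/front_wide/depth.mp4",
        "edges":        "replica_sim/front_wide/edges.mp4",
    },
    # The prompt describes the environmental change; the controls preserve
    # scene layout, geometry, and dynamic agents.
    "prompt": "The same street scene during heavy rain at dusk, wet asphalt "
              "reflecting headlights, overcast sky, light fog in the distance",
    "control_strength": 0.8,   # how strictly outputs should follow the controls
}

print(json.dumps(variation_request, indent=2))
```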
In partnership with NVIDIA, we have been exploring the use of Cosmos Transfer in conjunction with PD Replica Sim. Early evaluations suggest that diffusion models like Cosmos may become an applicable technology for generating realistic weather and lighting variations from single-scene inputs, for example, transforming scenes captured in clear conditions into convincing variations with rain, fog, or snow. The Cosmos technology is early in its development and showing initial promise, and we look forward to improvements in scene semantic stability, multi-camera support, and temporal stability.
Below are some early examples demonstrating realistic variations in weather conditions and lighting scenarios:
As the technology continues to evolve, we see significant potential ahead: as diffusion models mature, particularly with enhanced control mechanisms and improved fidelity, we foresee Cosmos Transfer becoming a powerful complement to the PD Replica workflow. Improvements in multi-camera synchronization, temporal stability, and accuracy in preserving the original scene context could dramatically enhance developers’ capabilities.
We at Parallel Domain are very excited about our collaboration with NVIDIA and the trajectory of Cosmos Transfer. As we continue our exploration, we anticipate significant customer benefits through more scalable, flexible, and realistic scenario creation. This will help our customers accelerate both model training and closed-loop simulation capabilities.
Together, Parallel Domain and NVIDIA aim to empower developers to comprehensively test and validate autonomous systems, ultimately driving safer and more robust performance in real-world conditions.
We look forward to sharing further developments as generative AI continues to advance, moving closer to enabling realistic, scalable scenario variations with precision and fidelity.