Platform Overview

The Parallel Domain platform is a one-stop shop for perception developers who want to generate, manage, and use synthetic data to improve the training and testing of their perception models. Our platform offers a range of tools that let you:

  • Generate synthetic data for various camera, LiDAR, and radar sensors. Using Batch or Step, you get fine-grained control over the environment, lighting, sensor configuration, agent behavior, and annotations.
  • Visually explore datasets and annotations with the Visualizer tool, making it easy to spot patterns and outliers in your data.
  • Unlock a whole new perspective with Map Explorer, an interactive tool that lets you explore the worlds we've created from an aerial view. Plan drone scenarios and send them to our APIs to generate data with ease.
  • Access and query data for machine learning workflows using the PD SDK, streamlining your data pipeline.
  • Integrate our synthetic data generation engines with your simulation stack via an API, enabling closed-loop perception testing.

With the Parallel Domain platform, perception developers have the tools they need to generate high-quality synthetic data and take their perception models to the next level.


All sensor data is captured within virtual worlds procedurally generated from real-world map data. This lets worlds contain deep synthetic complexity while maintaining realism. As a result, data captured within Parallel Domain's virtual worlds contains the complex and noisy details needed to accurately model the imperfections of the real world. Learn more by visiting our Worlds page.


Parallel Domain virtual worlds are brought to life with a rapidly growing library of dynamic agents, roadside props, and weather and lighting conditions. Learn more by visiting our Content section.

Put it all together, and you'll soon be creating rich, realistic synthetic data of your own.

We look forward to seeing what you build! Feel free to reach out if you have any questions.