Artificial intelligence has the potential to significantly improve the quality of life for billions of people, but training and testing machines to accurately and reliably perceive the world suffers from a critical bottleneck: collecting and labeling data. Autonomous vehicle companies alone spend billions of dollars per year collecting driving data and then paying labeling vendors, some of which enlist over 1 million human data labelers, to outsource the labor of drawing and tracing the annotations that the algorithms need to learn. This is neither a scalable nor a reliable approach – collecting and labeling data in this way is inaccurate, inflexible, and unsafe.
Parallel Domain’s synthetic data generation platform is the solution. Data that was once time-consuming, expensive, and dangerous to collect and label is now at the developer’s fingertips. This technology is faster, produces smarter AI, and offers massively better economics, all without requiring developers to leave their desks.
“Synthetic data is key to making cars and robots smarter. Thanks to its high degree of realism and flexibility, the Parallel Domain platform enables rapid exploration of cutting-edge Machine Learning ideas. Furthermore, its cost effectiveness enables accelerated paths to transfer and deployments at scale. This combination — flexibility, realism, and scalability — makes the Parallel Domain platform really unique and a huge advantage for us to develop the future of robot autonomy.” – Adrien Gaidon, Senior Machine Learning Manager, Toyota Research Institute
Our customers include some of the world’s top AI companies, auto manufacturers, delivery companies, and more. With a suite of APIs and software tools, the Parallel Domain platform provides value throughout the development cycle and across multiple teams within an organization. From initial prototypes to the heavy lifting of model training, from integration testing and deployment to model maintenance, Parallel Domain data is powering customers’ autonomous systems across the development spectrum.
With talented veterans from the world’s top graphics and AI companies such as Apple, Pixar, Microsoft, and Amazon, Parallel Domain’s technology generates synthetic data at a level of realism and scale that surpasses anything in the industry. This quality is critical to improving the performance of computer vision algorithms – a fact that we have been able to demonstrate with our customers. It is profound to see that as we improve our data generators, we improve our customers’ model performance. We look forward to sharing more about these results in future posts.
Our platform is packaged as a modular API that is purpose-built to integrate into existing data pipelines. Our mission is to augment your workflow, not replace it. Our API modes enable customers to generate configurable, accurate, and rich synthetic sensor data on-demand:
Batch: Generate large batches of labeled data with a simple API call. Train more accurate and reliable models by specifying the distribution of data that you need, day to night, rain to shine, cars to bicycles, and more.
Step: Directly control each individual frame by integrating with your simulator or log playback to provide controllable sensor data. Train and test your systems in a live loop, enabling continuous integration testing and live training.
Stream: An interactive sensor stream that responds to user input in real-time, enabling customers to pilot interactive and immersive experiences.
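To make the Batch mode concrete, here is a minimal sketch of how a distribution-style request might be expressed in code. All class names, fields, and values below are illustrative assumptions for the sake of the example, not Parallel Domain's actual API:

```python
# Hypothetical sketch of a Batch-mode request. The class, field, and
# annotation names are illustrative assumptions, not the real API.
from dataclasses import dataclass, field


@dataclass
class BatchRequest:
    num_scenes: int
    # Fraction of scenes per condition; each distribution sums to 1.0.
    weather: dict = field(default_factory=lambda: {"clear": 0.7, "rain": 0.3})
    time_of_day: dict = field(default_factory=lambda: {"day": 0.5, "night": 0.5})
    annotations: tuple = ("2d_bbox", "semantic_segmentation")


def plan_batch(req: BatchRequest) -> dict:
    """Expand a distribution spec into per-condition scene counts."""
    def split(total: int, dist: dict) -> dict:
        return {condition: round(total * frac) for condition, frac in dist.items()}

    return {
        "weather": split(req.num_scenes, req.weather),
        "time_of_day": split(req.num_scenes, req.time_of_day),
        "annotations": list(req.annotations),
    }


plan = plan_batch(BatchRequest(num_scenes=1000))
print(plan["weather"])      # {'clear': 700, 'rain': 300}
print(plan["time_of_day"])  # {'day': 500, 'night': 500}
```

The key idea is that the developer declares the *distribution* of data they need – weather, time of day, object mix – and the platform resolves that into concrete labeled scenes, rather than the developer curating each sample by hand.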
In an afternoon, developers can experiment with new ontologies, new dataset distributions, and new sensor configurations, and obtain massive quantities of data – efforts that would otherwise have taken months. In many cases, companies don't even attempt those kinds of changes because of the cost of re-collecting and re-labeling real-world data. With this accelerated development cycle, the whole paradigm for machine learning changes, enabling customers to make rapid and more frequent breakthroughs, ultimately fueling innovation and accelerating the path to autonomy. Read more about our products.
Our customers are seeing major improvements and undeniable results from synthetic data. When our customers add Parallel Domain synthetic data to their pipeline, they directly improve their systems’ ability to perform critical vision tasks, such as detecting a bicyclist at night or determining the state of a traffic light. Our synthetic data is used across the development spectrum. Customers have been able to train initial perception algorithms to perform tasks such as traffic light classification, emergency vehicle detection, and more, with nothing but Parallel Domain synthetic data. In other cases, customers mix synthetic and real-world data together, seeing as much as a 45% reduction in error rates. In all cases, our customers save significant time they would otherwise spend curating real-world data, allowing them to instead focus on what they do best: developing their autonomous systems.
We’re excited to welcome Lindel Eakman and Ryan McIntyre from Foundry Group and Kevin Dunlap from Calibrate Ventures to our team, and we are grateful for the continued support of Costanoa Ventures, Ubiquity Ventures, and Toyota AI Ventures, who participated in the round.
If you’re bringing computer vision to market, please get in touch!
If you’re as fascinated by these problems as we are, come join us!