We turn millions of consumer XR headsets into a scalable, gamified training ground for robots
Physical AI is exploding, but real-world teleoperation cannot keep up with the demand.
AI still requires human validation, but doing this in the physical world is incredibly slow.
Today, robotics companies hire gig workers to record themselves doing chores with iPhones strapped to their heads — $15/hr, limited variety, privacy concerns, and no physics ground-truth. We replace that with something fundamentally better.
How it compounds
SIM XR captures real human intelligence — the way people naturally grasp, manipulate, and reason — inside a physics-accurate virtual environment. The result: high-quality training data at a fraction of the cost.
Leveraging the global install base of 20M+ VR/XR headsets and full-body suits. No need to fly operators to a lab.
Using $500 consumer hardware instead of $50,000 custom teleoperation rigs.
Physics-based simulation (Isaac Lab) and cloud delivery (CloudXR) ensure the data transfers seamlessly to the real world.
Three forces converged to make this moment possible — and irreversible.
Millions of consumer VR devices are gathering dust. This is a massive, distributed workforce waiting to be activated.
Hardware existed, but physics didn't. Now, NVIDIA Isaac Lab + Cloud GPUs make real-time simulation possible at scale.
We bridge the gap between the XR community and robotics labs. We turn "gamers" into "trainers".
A three-sided platform connecting robotics companies, VR developers, and a global crowd of operators.
Submits a task spec and budget. Receives a validated, benchmarked dataset ready for model training.
Distributes tasks, validates data quality, and routes payments. We keep the margin — our flywheel grows with every episode.
Global users play gamified tasks in VR. Their actions are recorded as clean physics trajectories — paid per validated episode.
Watch how a single VR session generates thousands of validated training episodes for physical AI models.
Our XR & metaverse expertise lets us wrap any data collection task in a compelling game loop — immersive, rewarding, and genuinely fun. Users participate not just for pay, but because it's engaging.
We are building the engine that generates infinite, photorealistic training worlds for Vision-Language-Action models.
We replay validated human trajectories inside simulation to generate ground-truth synthetic sensor data (RGB-D, LiDAR) for training Vision-Language-Action models.
We combine 3DGS for absolute photorealism with rigid-body physics, creating digital twins indistinguishable from reality.
Using NVIDIA Cosmos World Foundation Models, we generate millions of environment variations, massively scaling imitation and reinforcement learning.
Produced and curated XR events across Europe and CIS. Managed synchronized multi-headset deployments (up to 65 devices) for audiences of 500,000+ people. Founder of Film XR (Estonia/France).
Hands-on research in 3D Gaussian Splatting and Volumetric Video for XR/Film/Animation. The core technology of SIM XR's long-term vision is already in active development.
Member of ATAS Emerging Media Peer Group. Mentor, Venice Biennale College Cinema – Immersive (2025). Fulbright Graduate Fellow.
We're building the training layer for physical AI. Join early to shape the platform.
We're working with robotics companies that need high-quality teleoperation datasets for training robot policies. If you're building robots and need demonstration data, join the early access list.
If you're an engineer, researcher, or XR developer interested in the project, feel free to reach out.
We're working with early robotics partners. Let's talk.
Get in Touch →