Nomadic raises $8.4 million to wrangle the data pouring off autonomous vehicles



To build the autonomous machines of the future, sometimes your model needs a model. 

Companies developing self-driving cars, robots manipulating the physical environment, or autonomous construction equipment collect thousands, if not millions, of hours of video data for evaluation and training. 

Organizing and cataloging that video is now a job for humans, who have to watch all of it. Even fast-forwarding, that doesn’t scale. Nomadic AI, a startup founded by CEO Mustafa Bal and CTO Varun Krishnan, wants to solve problems for customers who have 95% of their fleet data sitting in archives.

The challenge becomes harder when looking for edge cases — the most valuable data depicts events that rarely occur and can befuddle inexperienced physical AI models.

Nomadic is working to solve that problem with a platform that turns footage into a structured, searchable dataset through a collection of vision language models. That, in turn, allows for better fleet monitoring and the creation of unique datasets for reinforcement learning and faster iteration.

The company announced an $8.4 million seed round Tuesday at a post-money valuation of $50 million. The round was led by TQ Ventures, with participation from Pear VC and Jeff Dean, and will allow the company to onboard more customers and continue refining its platform. Nomadic also won first prize at Nvidia GTC’s pitch contest last month. 

The two founders, who met as Harvard computer science undergrads, “kept running into the same technical challenges again and again at our jobs” at companies like Lyft and Snowflake, Bal told TechCrunch. 


“We are providing folks insight on their own footage, whatever drives their own AVs [and] robots,” he said. “That is what moves these autonomous systems builders forward, not random data.”

Imagine, for example, trying to fine-tune an AV’s understanding that it can run a red light if a police officer is directing it to do so, or isolating every time that vehicles drive under a specific type of bridge. Nomadic’s platform allows these incidents to be identified both for compliance purposes and to be fed directly into training pipelines. 

Customers like Zoox, Mitsubishi Electric, Natix Network, and Zendar are already using the platform to develop intelligent machines. Antonio Puglielli, the VP of Engineering at Zendar, said that Nomadic’s tool allowed the company to scale up its work much faster than the alternative of outsourcing, and that its domain expertise set it apart from competitors.

This kind of model-based, auto-annotation tool is emerging as a key workflow for physical AI. Established data labeling firms like Scale, Kognic, and Encord are developing AI tools to do this work, while Nvidia has released a family of open-source models, Alpamayo, that can be adapted to tackle the problem.

Krishnan argues that his company’s tool is more than a labeler; it is an “agentic reasoning system: you describe what it needs and it figures out how to find it,” using multiple models to understand action taking place and put it in context. Nomadic’s backers expect the startup’s focus on this specific infrastructure to win out.  

“It’s the same reason Salesforce doesn’t build its own cloud and Netflix doesn’t build its own [content distribution facilities],” Schuster Tanger, a partner at TQ Ventures who led the round, told TechCrunch. “The second an autonomous vehicle company tries to build Nomadic internally, they’re distracted from what makes them win, which is the robot itself.”

Tanger praised Nomadic’s talent, noting that Krishnan is an international chess master ranked as the world’s 1,549th-best player. Krishnan, meanwhile, brags that all of the company’s dozen or so engineers have published scientific papers.

Now, they’re hard at work developing specific tools, like one that understands the physics of lane changes from camera footage, or another that derives more precise locations for a robot’s grippers in a video. The next challenge, from the point of view of Nomadic and its customers, is to develop similar tools for non-visual data like lidar sensor readings, or to integrate sensor data across multiple modes. 

“Juggling around terabytes of video, slamming that against hundreds of 100 billion plus parameter models, and then extracting their accurate insights, is really insanely difficult,” Bal said. 
