The Case for a Robotic Cambrian Explosion
What might AI software engineering tools change about the robotics thesis?

Something strange has happened to software development over the last year: the machines have taken over. Software development is faster and easier than ever with the aid of tools like Claude Code and Cursor. Similar changes are coming to other industries. AI tool use will come for computer-aided design (CAD) tools next, making it easier to design a robot or other hardware in software, write the bridges to connect it to simulation, and deploy it.
This challenges a couple of major assumptions that have been driving massive robotics investment over the last few years, so I think the idea is worth exploring.
The Case for One Robot
It’s a cliche that you have the same iPhone and the same laptop as Elon Musk. Scale, right now, is king in software and hardware. Electronics products are impeccably engineered, optimized, and designed for manufacturing at scale.
One might assume the same logic would apply to robotics; in fact, one might assume it would be even more extreme. Robots need real-world data: robot models are data-hungry, and there is a huge data gap to be overcome.
Co-designing robot sensors, hardware, and models is already very important for good real-world performance. This means that deploying one robot everywhere, an approach championed by Tesla and Figure, has a lot of appeal: you solve these hard problems once, then scale, scale, scale until the robot is affordable and incredibly capable. There are clear parallels to how humans “work,” in the sense that humans are employed across a wide variety of industries and to solve a wide variety of problems.
This is only sort of how humans work, though. Humans use tools. Incredibly inhuman automation is vastly more productive than humans at a wide variety of problems, from package sorting to harvesting crops. Humans provide the intelligence, but increasingly it is our minds, not our bodies, that actually perform the labor.
The humanoid-only champions have an answer here: the robots will be using human tools. I am deeply skeptical of this: if robots are doing all the work, why would you keep designing tools to support legacy hardware (humans) forever?
Initial Conditions
Let’s start with a few assumptions:
Software development will become effectively “free” over the next year, meaning that you can use AI coding agents to produce any code you can clearly specify, given about a month and a few thousand USD.
Simulations will continue to improve, to the point where they’re operationally indistinguishable from reality, at least on a mechanical level, given some real-world data to “seed” them.
Cross-embodiment learning — a “single brain” for all robots — works reasonably well, and it’s not hard to deliver intelligence for a new form factor.
AI agents will improve their ability to use CAD tools for hardware and electrical design, and will be able to “close the loop” with a simulator like NVIDIA Isaac, MuJoCo, or potentially a new upstart like Lucky (a minimal sketch of such a loop follows this list).
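To make that last assumption concrete, here is a minimal, hypothetical sketch of what “closing the loop” might look like: a toy agent sweeps a single design parameter, rebuilds an MJCF model, and scores each candidate in MuJoCo. The model, the fitness metric, and the parameter sweep are all illustrative stand-ins for what a real design agent would do.

```python
import mujoco  # pip install mujoco

def build_arm_mjcf(link_length: float) -> str:
    """Build a minimal MJCF model for a one-link arm.
    The single design parameter (link length) stands in for a real CAD export."""
    return f"""
    <mujoco>
      <worldbody>
        <body name="link" pos="0 0 0.5">
          <joint name="hinge" type="hinge" axis="0 1 0"/>
          <geom type="capsule" fromto="0 0 0 {link_length} 0 0" size="0.03"/>
        </body>
      </worldbody>
      <actuator>
        <motor joint="hinge" gear="10"/>
      </actuator>
    </mujoco>"""

def evaluate(link_length: float, steps: int = 500) -> float:
    """Score one candidate design: simulate under a constant torque command
    and return the joint angle reached (a toy fitness metric)."""
    model = mujoco.MjModel.from_xml_string(build_arm_mjcf(link_length))
    data = mujoco.MjData(model)
    for _ in range(steps):
        data.ctrl[:] = 1.0
        mujoco.mj_step(model, data)
    return float(data.qpos[0])

# A toy "agent": sweep the design parameter and keep the best candidate.
# A real agent would propose full geometries, not a single scalar.
best = max([0.2, 0.3, 0.4, 0.5], key=evaluate)
print(f"best link length: {best:.2f} m")
```

The point is not the toy arm; it is that once design, simulation, and evaluation are all scriptable, an agent can iterate on hardware the way it already iterates on code.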
What would be possible in such a world?
The Robotic Cambrian Explosion

The Cambrian Explosion was a rapid evolutionary event, a period roughly 539-520 million years ago when a vast number of animal phyla appeared in the fossil record for the first time.
If these conditions hold, if agents can be trusted with hardware design and can be given reasonably accurate simulations of real-world tasks, then instead of seeing a few humanoids appear everywhere, we would see a massive diversity of robot form factors.
After all, the motors that power a robot dog, a humanoid, and an industrial pick-and-place machine don’t need to be all that different. One might easily imagine a world where a robotics company builds modular components like LimX’s TRON 2, which can be recombined in many different ways according to the needs of a particular task. They might go further, with a list of 4-5 supported actuators and a handful of common sensors, with shells and wiring harnesses designed on demand.
This would mean that instead of a single humanoid in a factory, you would see a family of dozens of interrelated robots: big cargo haulers, fast-moving inventory-checkers, loaders and packers, all built out of the same core parts and running on the same hardware. If you want one for your home, maybe it uses lower-powered actuators and cheaper sensors, but it’s still the same mind and the same parts, as sketched below.
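As a sketch of what this parts-bin approach might look like in software, here is a hypothetical configuration layer: a small shared catalog of actuators and sensors recombined into task-specific robot specs. Everything here, the part names, torque figures, and prices, is invented for illustration and does not reflect LimX’s actual components.

```python
from dataclasses import dataclass

# Hypothetical shared parts catalog: a few supported actuators and sensors.
# All names, specs, and prices are invented for illustration.
ACTUATORS = {
    "high_torque": {"peak_nm": 120, "cost_usd": 900},
    "mid_torque": {"peak_nm": 40, "cost_usd": 300},
    "low_torque": {"peak_nm": 10, "cost_usd": 80},
}
SENSORS = {"depth_cam": 250, "rgb_cam": 40, "imu": 15}

@dataclass
class RobotSpec:
    """One robot in the family: a recombination of catalog parts."""
    name: str
    actuators: list[str]  # keys into ACTUATORS
    sensors: list[str]    # keys into SENSORS

    def unit_cost(self) -> int:
        return (sum(ACTUATORS[a]["cost_usd"] for a in self.actuators)
                + sum(SENSORS[s] for s in self.sensors))

# The same catalog yields very different robots: a warehouse cargo hauler
# versus a cheaper home variant with lower-powered actuators and sensors.
hauler = RobotSpec("cargo_hauler", ["high_torque"] * 6, ["depth_cam", "imu"])
home = RobotSpec("home_helper", ["low_torque"] * 6, ["rgb_cam", "imu"])
print(hauler.unit_cost(), home.unit_cost())  # 5665 vs 535
```

Only the shells and wiring harnesses would be bespoke; the expensive engineering lives in the shared catalog.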
Importantly, if cross-embodiment learning works (and to an extent, it seems to), these robots would still all be sharing data and making each other more intelligent.
This seems to me like a much more “AI-first” way of viewing robotics hardware development, even if in many ways it’s still far-fetched.
Why It Might Not Happen
A few potential issues:
Maybe contact-rich simulation remains intractable. Despite the best efforts of NVIDIA and World Labs and so on, perhaps contact physics never becomes quite good enough to fully automate the design process. This would make the economics of our robotics Cambrian Explosion much worse: you would need much more rigorous real-world testing, with more room for human engineering and intuition in the process.
Human data is more useful for humanoids. But we’ve actually seen from recent work that you can get incredibly good results with in-domain data alone, and there’s also a large and growing body of work on cross-embodiment learning.
In-domain data might be too important. Basically, cross-embodiment learning might be good enough for a demo, but what if it’s never good enough for true, high-reliability cross-platform performance? This would, again, make the economics of this Cambrian Explosion harder, since you’d need lots of real-world data for each form factor (if it can all be done in simulation, you’re fine).
And there’s some evidence for this one: in a recent blog post, Physical Intelligence reported achieving their best results by mixing downstream customer data into the pretraining for their foundation model. Similarly, in the NVIDIA example above, cross-embodiment robot data may or may not have helped any more than other egocentric data.
This is a pattern we have actually seen many times: robotics problems tend to be little isolated “islands” of data, with less evidence of positive transfer between them than you would hope. On the other hand, this will almost certainly improve as we “fill in the gaps” with more robot deployments.
Final Thoughts
The truth will likely be a mix of the two: some general-purpose platforms, plus a much greater variety of robots for a wide range of tasks. For now, at least, real-world data seems irreplaceable, but this could change fast, and the field is changing constantly.
The real test, I think, will be whether data-driven simulation really takes off and proves it can handle complex contact dynamics. If it does, it will be much easier to train specialized robots in simulation, which reduces the risk and means you can produce a much wider range of robots without all the expensive real-world deployments and data collection.
But please let me know what you think below.


Great piece, Chris. The Cambrian Explosion framing is compelling, maybe even more than you intended. Because the other half of the Cambrian story is massive extinction. Most of those phyla didn't make it. The explosion produced the diversity, but what survived was determined by something else entirely.
I believe simulation will keep getting better, and it's actually one of the fields I'm working in right now, so I'd bet the explosion happens. I'm also really curious what determines which form factors survive it.
I wrote about Wizard of Oz setups in robotics in the context of that viral Chinese video. It’s true that we are so close, and yet so far, from being able to control these robots.
I want to get better at writing about these kinds of topics, just like you. I really admire your writing, Chris. Topics like this are hard to find, and yet they always fascinate me.