Discussion about this post

Neural Foundry

Excellent point on how error compounding makes offline datasets basically useless for robots. The lack of standardized benchmarks really does make it feel like we're all cherry-picking to show off our best results. When I was testing manipulation policies, the sim-to-real gap was always far bigger than any paper suggested, especially with contact dynamics.

Avik De

"authoring tasks in simulation is hard" This is such an excellent and easy-to-forget point. It's easy to underestimate how long it takes to make the simulation "work", whatever that even means.

I used to think that the DARPA challenges were helpful both in quantifying progress and in seeding innovation — do you think that model still makes sense?
