I had the privilege of competing in the ANA Avatar XPRIZE four years ago, and as you know, the core challenge was teleoperation. Beyond the invaluable lessons we learned — such as the fact that the technology must ultimately outperform the mediocre human teleoperator — the challenges have only grown: if you commit to logging outgoing commands, you must also log the return signals and the surrounding perception data.
Transfer learning is the only viable path forward, and this is where not just open-source code but an open telerobotics experience becomes essential: it is an absolute must.
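To make the logging point concrete, here is a rough sketch of what a paired log record could look like; the structure and field names are purely illustrative and not tied to any particular stack:

```python
from dataclasses import dataclass
import time

# Illustrative only: one record that keeps the outgoing command, the return
# signals, and the surrounding perception data together, so a teleop episode
# can later be replayed or reused for learning.
@dataclass
class TeleopLogRecord:
    timestamp: float   # when the command was sent
    command: dict      # outgoing operator command (e.g. end-effector or joint targets)
    feedback: dict     # return signals (joint states, force/torque, error codes)
    perception: dict   # surrounding sensor data (camera frames, depth, audio, ...)

def log_step(episode: list, command: dict, feedback: dict, perception: dict) -> None:
    """Append one synchronized command/feedback/perception triple to the episode."""
    episode.append(TeleopLogRecord(time.time(), command, feedback, perception))
```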
TeleHug from México 🥳🥳🥳
I do think that teleoperation is vastly improved! The web infrastructure built out during the pandemic is incredibly powerful, and you can now do reasonably good teleop across continents. Although realistically, I think no one wants to do this -- 1X seems to plan to have all its teleoperators in the continental United States for now.
My opinion is that general-purpose autonomy will look like Waymo, which is to say it's a sliding scale, not a binary. Teleop will be used occasionally for a long time.
Insightful. Given the clear divide between ambitious goals and current dexterous manipulation capabilities, where do you see the primary bottleneck: fundamental AI algorithms or more robust hardware-software integration? Your balanced perspective, acknowledging both the optimism for useful home robots and the significant work ahead, is truly refreshing and well-articulated.
I am pretty optimistic about the trajectory algorithms are on; I think on the grand scale the "base model + RL" approach that worked for LLMs will work here. But we likely need to make RL algorithms much more efficient, or use other tricks to make it tractable.
Really good blog post. The whole "is data scaling enough" debate is so (justifiably) heated nowadays, and I can't wait to see what the policies of the next few years have to offer. The aim right now is probably IL initialization with RL on top, but will this really deliver general robot policies? The big-world hypothesis, and the explosive distributional shift that comes with it, is what really bugs me. I guess we need to put much more emphasis on continual learning approaches; for now, per-robot and per-environment fine-tuning will be the only way to deploy somewhat useful policies.
Yeah, I think the recipe for working robots will, in the end, look similar to LLMs: get a good enough base model (which requires imitation data at huge scale), and you can start scaling up reinforcement learning to close the last gap.
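As a toy sketch of that two-stage recipe (made-up shapes, a tiny MLP, and a crude reward-weighted update standing in for a real RL algorithm such as PPO):

```python
import torch
import torch.nn as nn

# Toy illustration of the two-stage recipe: behavior cloning on demonstration
# data, then reinforcement learning on top. The policy and dimensions are made
# up; a real system would use a large vision-language-action model.
obs_dim, act_dim = 64, 8
policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def bc_step(obs: torch.Tensor, expert_action: torch.Tensor) -> float:
    """Stage 1: imitation learning (behavior cloning) on large-scale teleop data."""
    loss = nn.functional.mse_loss(policy(obs), expert_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def rl_step(obs: torch.Tensor, action: torch.Tensor, reward: torch.Tensor) -> float:
    """Stage 2: a simple reward-weighted update on the pretrained policy; a real
    pipeline would use PPO or another proper RL algorithm."""
    log_prob = -((policy(obs) - action) ** 2).sum(dim=-1)  # Gaussian log-prob up to a constant
    loss = -(reward * log_prob).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```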
Like, I know people doubt AI code, but when playing around with game dev I'm always amazed that I can give Cursor or some other coding agent a high-level instruction like "improve enemy AI so they use cover more, and also clean up all this dogshit code and refactor so I can understand it" and mostly expect it to work.
We will get there with robots, but we are a few cycles away, and the first step is just deploying a lot of robots and getting the right data.
Good article. I think what freaks me out about these is: why do they have to look like a human? It looks like a mannequin from a horror movie. If I came down in the morning and saw that in my kitchen... AI is this freaky!!
I actually love the look of the Figure 03, and NEO is growing on me. There are a lot of options, and probably the friendliest-looking is the Fourier GR-3, which is pretty cute.
It'd be strange if you could order one to your own specific design, look, etc. No need to wonder where that goes! On a serious note: we have human rights; could we get to a time when NEO has AI rights? Jeepers, I'm going too deep, need to get back to shallow waters...
I personally don't think this is crazy; I posted about it on my personal blog for Bad Takes Only: https://cpaxton.substack.com/p/we-need-to-talk-about-consciousness
We live in interesting times. This is a technology I’ve waited for. But I’ll wait another year for people to test it lol.