ICRA 2025 was bigger and more serious
The robotics field is changing fast, and the flagship IEEE robotics conference demonstrated some of the recent trends.
ICRA — the IEEE International Conference on Robotics and Automation — is the flagship and largest robotics conference in the world. Held every year, it has a special place in the robotics research community. This is the conference where, back in 2024, Unitree unveiled its G1 humanoid, which you have undoubtedly seen in many videos since. It was also a year absolutely dominated by AI: the best paper award went to Open X-Embodiment, a massive data aggregation and end-to-end learning project whose author list included just about every well-known professor at the conference, and some more besides.
This year, the organizing committee was clear that things would be different. Seth Hutchinson even opened the conference by saying there would be “no AI hype” this year. And there was, in fact, a pretty noticeable difference.
Last year we saw a lot of huge promises from various companies, and the whole event was overshadowed by the launch of Physical Intelligence and shakeups in the robotics teams at tech companies like Google and Meta; this year, the teams that were present showed much more concrete results.
While last year’s best paper was a big moonshot VLA project, this year’s best learning paper was “Robo-DM: Data Management For Large Robot Datasets,” a project centered on the long-term reality that we have a lot of work ahead of us in creating, curating, and maintaining large-scale robotics data [1].

Again, this isn’t about training a VLA. It’s about data compression and retrieval: reducing the size of robot trajectory data and improving the time it takes to load that data from disk for training.
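To make the problem concrete, here’s a toy sketch of the size/speed trade-off at the heart of this kind of work: even naive lossless compression shrinks redundant camera streams considerably, at the cost of decode time every time the dataloader reads a trajectory. This is my own minimal illustration using numpy and zlib, not Robo-DM’s actual format, which is considerably more sophisticated.

```python
import time
import zlib

import numpy as np

# Hypothetical trajectory: 1,000 timesteps of a 64x64 RGB camera frame
# plus 7 joint angles. Consecutive frames are near-duplicates, as they
# often are in real robot data, so compression has a lot to work with.
frame = np.linspace(0, 255, 64 * 64 * 3, dtype=np.uint8).reshape(64, 64, 3)
images = np.stack([np.roll(frame, t, axis=0) for t in range(1000)])
joints = np.zeros((1000, 7), dtype=np.float32)

raw = images.tobytes() + joints.tobytes()
t0 = time.perf_counter()
packed = zlib.compress(raw, level=6)
t1 = time.perf_counter()
zlib.decompress(packed)  # the cost a dataloader pays per read
t2 = time.perf_counter()

print(f"raw:        {len(raw) / 1e6:.1f} MB")
print(f"compressed: {len(packed) / 1e6:.1f} MB")
print(f"compress: {t1 - t0:.3f} s, decompress: {t2 - t1:.3f} s")
```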
Compare that to last year’s best paper [2]:

This was, undoubtedly, a cool and ambitious project, but it speaks to a field that was, in my opinion, in a very different place: captivated by the idea of big data solving robotics (something we will revisit), but not the reality of it.
The actual best papers of ICRA 2025 were:
Marginalizing and Conditioning Gaussians Onto Linear Approximations of Smooth Manifolds with Applications in Robotics [3]
MAC-VO: Metrics-aware Covariance for Learning-based Stereo Visual Odometry [4]
I won’t go into these, but they again represent quality work and a sort of retreat from the “data solves everything” maximalism I think we saw last year. You can check out last year’s awards and this year’s awards online.
Robot Hardware
There was some really impressive robotic hardware at ICRA this year. One thing that caught my eye was the new hand from Sharpa robotics:
Sharpa is planning to sell their 1:1 scale, 22-DoF humanoid robot hand by the end of this year, probably for somewhere between $35k and $50k (tariffs permitting). It was a really impressive piece of hardware, capable of great precision and dexterity, and it has touch sensors built in. Find out more about Sharpa on their website.
Unitree had impressive demos, as one might expect. It’s hard to believe that the G1 was barely walking this time last year, because now it was sprinting down pathways and throwing punches. They have strong fast followers in Booster Robotics and Fourier Robotics; Fourier had their robot sprinting as well (though mostly within their booth), and Booster had an entertaining demo in which their robot kicked a soccer ball.
Astribot showed a live VR teleop demo, and even let conference attendees control the robot. It’s an incredibly responsive system, doing things like tossing balls around and pulling or pushing tables.
Hardware Trends
Mobile manipulators are in. We saw plenty of arms, grippers, and hands, but one trend I noticed was that there were many more of these Astribot-style “mermaid” robots, with a jointed torso attached to a heavy, wide, flat base. These bases have lots of issues when deployed in real environments, in my opinion - they’re way too wide - but they make a ton of sense in a research lab because they’re very stable and fairly safe to be around. One famous example of this “mermaid” style of robot that you may be familiar with is the 1X Eve.
Research Trends
I noticed a lot of great work on tactile sensing. The aforementioned Sharpa hand has touch sensors, and we saw impressive demos of tactile sensing from Lerrel Pinto’s lab, including their previous-generation tactile sensor [6] and its successor, the 3D-printed “E-Flesh.” I also liked the RUKA Hand [7], a humanlike open-source end effector you can build in a few hours.
We also presented DynaMem [8] at the conference, which I think is very exciting work; it’s one of the few robotics projects I’ve actually seen work “out of the box” in my own home. I consider work like this crucial because world representations are central to building useful, real-world robots — something I’ve written about before and will undoubtedly write about again.
What does the world representation for home robots look like?
For them to be useful assistants, robots must be able to understand their environment. Vision-language models like GPT-4o are actually quite good at this, at least from a single image; unfortunately, they’re still not great at reasoning about the spatial relationships between objects or about how a scene changes over time.
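To make that concrete, here’s a toy sketch of one shape such a representation could take: a spatio-semantic memory mapping 3D locations to language-queryable feature vectors. This is a minimal illustration of the general idea, not DynaMem’s implementation; the embed() function below is a hypothetical stand-in for a real vision-language encoder like CLIP.

```python
import zlib

import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Stand-in for a CLIP-style encoder: average of per-word random
    # vectors seeded by a stable hash, so phrases sharing words overlap.
    vecs = []
    for word in text.lower().split():
        rng = np.random.default_rng(zlib.crc32(word.encode()))
        vecs.append(rng.standard_normal(dim))
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

# Toy spatio-semantic memory: (x, y, z) location -> feature vector.
memory = {
    (0.4, 1.2, 0.0): embed("red mug on the kitchen counter"),
    (2.1, 0.3, 0.0): embed("laundry basket by the door"),
    (1.0, 2.5, 0.4): embed("stack of books on the shelf"),
}

def locate(query: str) -> tuple:
    """Return the stored location whose features best match the query."""
    q = embed(query)
    return max(memory, key=lambda p: float(memory[p] @ q))

print(locate("mug"))  # should point at the mug's location
```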
In general, sensing and interacting with the world was a strong theme: we saw lots of great SLAM work, growing excitement about tactile hands and dexterous manipulators, and many systems for robot teleoperation. There was less end-to-end work and less excitement about VLAs and giant transformers than last year, but a persistent interest in capable mobile manipulators.
Who paid for all this?

The sponsors behind ICRA are largely logistics companies: VisionNav, Symbotic, Amazon, Berkshire Grey. Industrial robotics showed up as well. The trend extended to the papers, too: the winner of the best automation award [5] was “Physics-Aware Robotic Palletization with Online Masking Inference,” in which the authors used reinforcement learning to perform a realistic palletization task, stacking boxes as they might arrive in a logistics context.
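As a sketch of the core idea, here’s a minimal version of action masking for box placement: the policy scores every candidate cell, and physically infeasible placements are masked out before sampling. This is my own toy illustration, not the paper’s method; the grid size, box footprint, and flat-support rule are all illustrative assumptions.

```python
import numpy as np

PALLET = (8, 8)             # pallet discretized into an 8x8 grid of cells
heights = np.zeros(PALLET)  # current stack height at each cell

def feasible_mask(box_w: int, box_d: int, max_h: float = 5.0) -> np.ndarray:
    """True wherever the box's corner can go: in bounds, fully supported
    on a flat surface, and below the height limit."""
    mask = np.zeros(PALLET, dtype=bool)
    for i in range(PALLET[0] - box_w + 1):
        for j in range(PALLET[1] - box_d + 1):
            footprint = heights[i:i + box_w, j:j + box_d]
            flat = footprint.max() == footprint.min()
            mask[i, j] = flat and footprint.max() < max_h
    return mask

def masked_sample(logits: np.ndarray, mask: np.ndarray):
    """Sample a placement from policy logits, restricted to feasible cells."""
    masked = np.where(mask, logits, -np.inf)
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    idx = np.random.default_rng(1).choice(probs.size, p=probs.ravel())
    return np.unravel_index(idx, PALLET)

logits = np.random.default_rng(0).standard_normal(PALLET)  # stand-in policy output
i, j = masked_sample(logits, feasible_mask(box_w=2, box_d=3))
heights[i:i + 2, j:j + 3] += 1.0  # commit the placement to the height map
print(f"placed a 2x3 box at cell ({i}, {j})")
```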

Many of the humanoid robotics companies were there, but not as sponsors - mostly to sell hardware to the labs and teams using these robots. It’s interesting to me how little of the trendy side of robotics shows up in these lists. We never see a Figure or a Tesla; the closest thing to Tesla was Zoox (so, Amazon again).
Final Thoughts
This was the most selective ICRA ever, with only about 35% of papers accepted. That may sound high to those accustomed to computer vision or machine learning venues, but it’s actually very low for a robotics conference: the prestigious Conference on Robot Learning hovers around a 37-40% acceptance rate, and ICRA itself is usually at 40-45%. This was due to the size of the venue more than anything, but it’s still noteworthy.
It’s been interesting to watch the focus of the conference shift: less emphasis on massive models that solve everything, more hardware innovation, and more attention to mobile manipulators. I am excited to see where the field goes from here.
References
[1] Chen, K., Fu, L., Huang, D., Zhang, Y., Chen, L. Y., Huang, H., ... & Goldberg, K. (2025). Robo-DM: Data Management For Large Robot Datasets. arXiv preprint arXiv:2505.15558. [arxiv] [github]
[2] O’Neill, A., Rehman, A., Maddukuri, A., Gupta, A., Padalkar, A., Lee, A., ... & Chen, M. (2024, May). Open X-Embodiment: Robotic learning datasets and RT-X models. In 2024 IEEE International Conference on Robotics and Automation (ICRA) (pp. 6892-6903). IEEE.
[3] Guo, Z. C., Forbes, J. R., & Barfoot, T. D. (2024). Marginalizing and Conditioning Gaussians onto Linear Approximations of Smooth Manifolds with Applications in Robotics. arXiv preprint arXiv:2409.09871.
[4] Qiu, Y., Chen, Y., Zhang, Z., Wang, W., & Scherer, S. (2024). MAC-VO: Metrics-aware Covariance for Learning-based Stereo Visual Odometry. arXiv preprint arXiv:2409.09479.
[5] Zhang, T., Wu, Z., Chen, Y., Wang, Y., Liang, B., Moura, S., ... & Zhan, W. (2025). Physics-Aware Robotic Palletization with Online Masking Inference. arXiv preprint arXiv:2502.13443. [arxiv] [github]
[6] Pattabiraman, V., Cao, Y., Haldar, S., Pinto, L., & Bhirangi, R. (2024). Learning Precise, Contact-Rich Manipulation through Uncalibrated Tactile Skins. arXiv preprint arXiv:2410.17246. [website] [arxiv]
[7] Zorin, A., Guzey, I., Yan, B., Iyer, A., Kondrich, L., Bhattasali, N. X., & Pinto, L. (2025). RUKA: Rethinking the Design of Humanoid Hands with Learning. arXiv preprint arXiv:2504.13165. [website] [arxiv]
[8] Liu, P., Guo, Z., Warke, M., Chintala, S., Paxton, C., Shafiullah, N. M. M., & Pinto, L. (2024). DynaMem: Online Dynamic Spatio-Semantic Memory for Open World Mobile Manipulation. arXiv preprint arXiv:2411.04999. [website]