2 Comments
Sep 24 · Liked by Chris Paxton

Thanks for the write up Chris!

I just wanted to note that, in terms of the symbolic planning, the LLM didn't help with correctness, but it did help significantly with speed. More critically, it's what enabled translation from natural language to predicates. Once given a logical goal, we could always use the graph-search-based planner from our previous work, but the robot never needed to!

That's because what we found is that the LLM never gave us logically inconsistent plans. We can verify any generated plan against simple mutual-exclusion rules on the predicates, making sure it never creates conflicting states like `on(A,B) & on(B,A)`. If this check failed, we could always fall back to the graph search to find the skill sequence.
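To make the idea concrete, here is a minimal sketch of that kind of mutual-exclusion check, with fallback to a separate planner if verification fails. The function names, state representation, and the `graph_search_planner` stub are all my own illustrative assumptions, not the actual implementation:

```python
# Illustrative sketch only: verify an LLM-generated plan's predicate states
# against a simple mutual-exclusion rule, falling back to a (stubbed)
# graph-search planner when verification fails. Names are assumptions.

def on(a, b):
    # Represent the predicate on(a, b) as a tuple.
    return ("on", a, b)

def violates_mutex(state):
    """True if the state contains a conflicting pair like
    on(A, B) together with on(B, A)."""
    preds = set(state)
    for name, a, b in preds:
        if name == "on" and a != b and ("on", b, a) in preds:
            return True
    return False

def verify_plan(plan_states):
    """Accept a plan only if every intermediate state is consistent."""
    return all(not violates_mutex(s) for s in plan_states)

def graph_search_planner(goal):
    # Placeholder for the graph-search fallback from prior work.
    raise NotImplementedError

def get_skill_sequence(llm_plan_states, goal):
    # Prefer the (fast) LLM plan; fall back to uninformed search otherwise.
    if verify_plan(llm_plan_states):
        return llm_plan_states
    return graph_search_planner(goal)

consistent = [{on("A", "table")}, {on("A", "B")}]
conflicting = [{on("A", "B"), on("B", "A")}]
print(verify_plan(consistent))    # True
print(verify_plan(conflicting))   # False
```

In practice the check is cheap relative to planning, so it costs almost nothing to run on every LLM-generated plan before execution.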

I think this is also a place for interesting improvement: we could check more complex logical implications as a form of verification. Further, we could run a more informed search to repair issues with the LLM plan instead of simply switching to a completely separate, uninformed search algorithm.

And we agree on the need for feedback, hopefully we'll have an update on that in a few months. :)

author

Thanks for the correction!
