Dexterous In-hand Manipulation by Guided Exploration using Simple Sub-skill Controllers

Recently, reinforcement learning has led to dexterous manipulation skills of increasing complexity. Nonetheless, learning these skills in simulation still exhibits poor sample efficiency, which stems from the fact that these skills are learned from scratch without the benefit of any domain expertise. In this work, we aim to improve the sample efficiency of learning dexterous in-hand manipulation skills using controllers derived from domain knowledge. To this end, we design simple sub-skill controllers and demonstrate improved sample efficiency using a framework that guides exploration toward relevant parts of the state space by following actions from these controllers. We are the first to demonstrate learning hard-to-explore finger-gaiting in-hand manipulation skills without the use of an exploratory reset distribution.
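
For illustration only, below is a minimal sketch of what controller-guided exploration can look like in a rollout loop: with some probability the agent follows one of the simple sub-skill controllers instead of the learned policy, steering data collection toward relevant parts of the state space. All names (SubSkillController, guided_rollout, p_guide) and the environment interface are hypothetical assumptions, not the paper's implementation.

    # Hypothetical sketch of exploration guided by simple sub-skill controllers.
    # Not the authors' code; names and interfaces are assumptions.
    import numpy as np

    class SubSkillController:
        """A simple hand-designed controller, e.g. a scripted reorientation primitive."""

        def __init__(self, action_dim: int, seed: int = 0):
            self.rng = np.random.default_rng(seed)
            # Placeholder "gains"; a real controller would map hand/object state
            # to joint position or torque targets.
            self.gains = self.rng.normal(size=action_dim)

        def act(self, obs: np.ndarray) -> np.ndarray:
            # Toy proportional rule standing in for a finger-gaiting sub-skill.
            return np.tanh(self.gains * obs[: self.gains.shape[0]])

    def guided_rollout(env, policy, controllers, p_guide=0.3, horizon=200, rng=None):
        """Collect one episode, occasionally following a sub-skill controller."""
        rng = rng or np.random.default_rng()
        obs = env.reset()
        trajectory = []
        controller = None
        for _ in range(horizon):
            # Occasionally switch into (or out of) controller-guided mode.
            if rng.random() < p_guide:
                controller = (controllers[rng.integers(len(controllers))]
                              if controller is None else None)
            action = controller.act(obs) if controller is not None else policy(obs)
            next_obs, reward, done = env.step(action)
            trajectory.append((obs, action, reward, next_obs, done))
            obs = next_obs
            if done:
                break
        return trajectory  # fed to the RL update of choice as usual

The collected transitions can then be used by any standard RL algorithm; the only change in this sketch is how exploratory actions are chosen during data collection.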

Paper


Team

ROAM Lab, Columbia University

BibTeX

TBD

Contact

If you have any questions, please feel free to contact Gagan Khandate.