PlayFusion: Skill Acquisition via Diffusion from Language-Annotated Play

Lili Chen*    Shikhar Bahl*   Deepak Pathak
Carnegie Mellon University
Conference on Robot Learning (CoRL) 2023
* equal contribution

We evaluate our approach on 7 different environments, including 3 real-world settings. Results of running our policy are shown below; all goals are unseen at training time.

Play Data Collection

We collect language-annotated play data using teleoperation. This process is fast and efficient (< 1 hr per task).

Abstract

Learning from unstructured and uncurated data has become the dominant paradigm for generative approaches in language or vision. Such unstructured and unguided behavior data, commonly known as play, is also easier to collect in robotics but much more difficult to learn from due to its inherently multimodal, noisy, and suboptimal nature. In this paper, we study this problem of learning goal-directed skill policies from unstructured play data which is labeled with language in hindsight. Specifically, we leverage advances in diffusion models to learn a multi-task diffusion model to extract robotic skills from play data. Using a conditional denoising diffusion process in the space of states and actions, we can gracefully handle the complexity and multimodality of play data and generate diverse and interesting robot behaviors. To make diffusion models more useful for skill learning, we encourage robotic agents to acquire a vocabulary of skills by introducing discrete bottlenecks into the conditional behavior generation process. In our experiments, we demonstrate the effectiveness of our approach across a wide variety of environments in both simulation and the real world.



Method

PlayFusion extracts useful skills from language-annotated play by introducing discrete bottlenecks in both the language embedding and the diffusion model's U-Net. We generate robot trajectories via an iterative denoising process conditioned on the language instruction and the current state.
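
To make this concrete, below is a minimal, hypothetical sketch of conditional action generation with a discrete (vector-quantized) bottleneck on the language conditioning. It is not the released PlayFusion implementation: a small MLP stands in for the U-Net, and all dimensions, the noise schedule, and the names (VQBottleneck, NoisePredictor, sample_actions) are assumptions made purely for illustration.

```python
# Illustrative sketch only: conditional denoising diffusion over action
# sequences with a discrete bottleneck on the language embedding.
import torch
import torch.nn as nn


class VQBottleneck(nn.Module):
    """Discretize a continuous embedding by snapping it to its nearest codebook vector."""
    def __init__(self, num_codes: int = 64, dim: int = 128):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (B, dim). Nearest code by L2 distance.
        dists = torch.cdist(z, self.codebook.weight)       # (B, num_codes)
        z_q = self.codebook(dists.argmin(dim=-1))          # (B, dim)
        # Straight-through estimator: forward uses z_q, gradients flow to z.
        return z + (z_q - z).detach()


class NoisePredictor(nn.Module):
    """Predicts the noise added to an action chunk, conditioned on state,
    discretized language embedding, and diffusion timestep (U-Net stand-in)."""
    def __init__(self, action_dim: int, horizon: int, state_dim: int, lang_dim: int = 128):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        in_dim = horizon * action_dim + state_dim + lang_dim + 1
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.Mish(),
            nn.Linear(256, 256), nn.Mish(),
            nn.Linear(256, horizon * action_dim),
        )

    def forward(self, noisy_actions, state, lang, t):
        x = torch.cat([noisy_actions.flatten(1), state, lang,
                       t.float().unsqueeze(-1)], dim=-1)
        return self.net(x).view(-1, self.horizon, self.action_dim)


@torch.no_grad()
def sample_actions(model, vq, state, lang_emb, horizon=8, action_dim=7, steps=50):
    """DDPM-style iterative denoising: start from Gaussian noise and refine."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    lang = vq(lang_emb)                                    # discrete skill code
    a = torch.randn(state.shape[0], horizon, action_dim)   # start from pure noise
    for t in reversed(range(steps)):
        t_batch = torch.full((state.shape[0],), t)
        eps = model(a, state, lang, t_batch)               # predict added noise
        # Standard DDPM posterior mean update.
        a = (a - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            a = a + betas[t].sqrt() * torch.randn_like(a)
    return a


# Usage: denoise an 8-step action chunk for a single (assumed 10-D) state.
state = torch.randn(1, 10)
lang_emb = torch.randn(1, 128)     # embedding of the hindsight language label
actions = sample_actions(NoisePredictor(7, 8, 10), VQBottleneck(), state, lang_emb)
print(actions.shape)               # torch.Size([1, 8, 7])
```

In this sketch, the vector-quantized codebook plays the role of the discrete bottleneck: snapping language embeddings to a finite set of codes encourages a reusable vocabulary of skills, while the iterative denoising handles the multimodality of play data.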

