ALAN : Autonomously Exploring Robotic Agents
in the Real World

Russell Mendonca         Shikhar Bahl         Deepak Pathak
Carnegie Mellon University
ICRA 2023


Robotic agents that operate autonomously in the real world need to continuously explore their environment and learn from the collected data with minimal human supervision. While it is possible to build agents that learn in such an unsupervised manner, current methods struggle to scale to the real world. Thus, we propose ALAN, an autonomously exploring robotic agent that can perform many tasks in the real world with little training and interaction time. This is enabled by measuring environment change, which reflects object movement while ignoring changes in the robot's own position. We use this metric directly as an environment-centric exploration signal, and we also maximize the uncertainty of predicted environment change, which provides an agent-centric exploration signal. We evaluate our approach in two different real-world play-kitchen settings, where it enables a robot to efficiently explore, discover manipulation skills, and perform tasks specified via goal images.
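The two signals described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function names, the use of a robot mask to discount the arm's own motion, and the ensemble-variance proxy for uncertainty are all assumptions made for clarity.

```python
import numpy as np

def environment_change(obs_before, obs_after, robot_mask):
    """Mean pixel change outside the robot's silhouette (hypothetical metric).
    Masking out the robot keeps the signal environment-centric: moving
    only the arm scores zero, while moving objects scores high."""
    diff = np.abs(obs_after.astype(float) - obs_before.astype(float))
    diff[robot_mask] = 0.0  # ignore changes caused by the robot's own body
    return diff.mean()

def exploration_reward(obs_before, obs_after, robot_mask, ensemble, action):
    """Combine the measured change (environment-centric signal) with the
    disagreement of an ensemble predicting that change (agent-centric
    signal). `ensemble` is a list of hypothetical predictor functions."""
    measured = environment_change(obs_before, obs_after, robot_mask)
    preds = np.array([model(obs_before, action) for model in ensemble])
    uncertainty = preds.var()  # high where change is hard to predict
    return measured + uncertainty
```

Under this sketch, the agent is drawn both to states where it actually moves objects and to states where it cannot yet predict how much the environment will change.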

Method Overview

Exploration Time-Lapse

ALAN gets better at manipulation by exploring how to interact with objects.

Achiever Performance

After autonomous exploration, the robot can perform tasks involving multiple objects in a zero-shot manner via goal reaching.