Research theme
Learned communication
Differentiable inter-agent communication — letting robots learn what to share with their teammates directly from the downstream task.
Rather than hand-designing a protocol, we treat inter-agent communication as a differentiable component of the policy itself. Graph neural networks give us the right inductive bias: each robot aggregates messages from its local neighborhood, and the content of those messages is learned end-to-end from the downstream task — perception, control, or planning.
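The message-passing structure described above can be sketched in a few lines. This is a minimal illustration, not the actual architecture from our papers: the dimensions, weight matrices, and `communication_round` function are hypothetical, and in a real system the weights would be trained end-to-end by backpropagating the downstream task loss through the whole graph.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (illustrative, not from any of our systems).
F, M = 8, 4  # per-robot feature size, message size

# "Learnable" weights; here random, but in practice trained end-to-end
# from the downstream perception/control/planning objective.
W_msg = rng.normal(scale=0.1, size=(F, M))       # decides what to share
W_upd = rng.normal(scale=0.1, size=(F + M, F))   # fuses own state + messages

def communication_round(x, adjacency):
    """One round of learned message passing.

    x:         (N, F) array of per-robot features.
    adjacency: (N, N) 0/1 matrix; adjacency[i, j] = 1 if robot i
               can hear robot j (its local neighborhood).
    """
    msgs = np.tanh(x @ W_msg)                       # (N, M) outgoing messages
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    agg = (adjacency @ msgs) / deg                  # mean over neighbors
    return np.tanh(np.concatenate([x, agg], axis=1) @ W_upd)

# Three robots in a line: 0-1-2 (robot 0 never hears robot 2 directly;
# stacking rounds lets information propagate across multiple hops).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = rng.normal(size=(3, F))
x = communication_round(x, A)
print(x.shape)  # (3, 8)
```

Because each robot only aggregates over its own neighborhood, the same learned weights apply to any team size or topology, which is exactly the inductive bias that makes graph neural networks a good fit here.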
Our work has shown that this style of learned communication scales to real-world problems: cooperative visual perception where robots share spatial information without needing overlapping camera views (CoViS-Net), shared monocular SLAM across a team (DVM-SLAM), and heterogeneous teams where different robot classes learn distinct communication roles (HetGPPO). A recurring research question is how to make these learned protocols generalize beyond the team size, topology, and environment seen at training time.