Robot Learning under Misspecification

confidence-aware robot learning from human input

To enable robots to use human input as guidance on desired behaviors, system designers typically equip them with a representation of the possible objectives a person could care about. However, these designers, and hence the robot learning techniques they employ, operate on the assumption that the human's desired objective can always be captured by the robot's representation. In our work, we investigate what the robot can do when this assumption breaks. We propose a method in which the robot reasons explicitly about how well it can explain human input given its hypothesis space, and uses that situational confidence to decide how it should incorporate the input.
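To illustrate the idea, here is a minimal sketch of confidence-tempered belief updating over a discrete hypothesis space. The observation model, the confidence heuristic (likelihood of the input under the best-fitting hypothesis), and all names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def likelihood(theta, human_input):
    """Probability of the observed human input under objective theta.

    Here each hypothesis is a scalar preferred value and the input is a
    noisy observation of it -- a stand-in for a real observation model.
    """
    return np.exp(-0.5 * (human_input - theta) ** 2)

def confidence(hypotheses, human_input):
    """Situational confidence: how well the BEST hypothesis explains the
    input. Near 1 when some hypothesis fits well, near 0 when none does."""
    return max(likelihood(th, human_input) for th in hypotheses)

def update(prior, hypotheses, human_input):
    """Confidence-tempered Bayesian update: when confidence is low, the
    likelihood is flattened and the input barely moves the belief."""
    c = confidence(hypotheses, human_input)
    liks = np.array([likelihood(th, human_input) ** c for th in hypotheses])
    post = prior * liks
    return post / post.sum(), c

hypotheses = [0.0, 1.0, 2.0]                # candidate objectives
prior = np.ones(len(hypotheses)) / 3.0      # uniform initial belief

# A well-explained input shifts the belief toward the matching hypothesis.
post_good, c_good = update(prior, hypotheses, 1.1)

# An input no hypothesis explains yields low confidence, so the belief
# stays close to the prior instead of being corrupted by the misfit input.
post_bad, c_bad = update(prior, hypotheses, 10.0)
```

When the input is far outside what any hypothesis can explain, the confidence term drives the update toward a no-op, which is the qualitative behavior described above: poorly explained human input is down-weighted rather than forced into the robot's representation.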

Project materials: