Learning Movements by Imitation from Event-Based Visual Prediction


In this talk, I introduce a new method for robots to learn movements by imitation from visual prediction. The method has two phases: first, a visual prediction model for a given movement is learned from a single demonstration in which only visual input is sensed; second, the robot reproduces the movement by minimizing the visual prediction error.
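The two phases can be sketched as follows. This is a minimal illustration, not the published implementation: the visual features, the linear next-frame predictor, the additive action model, and the candidate-action search are all simplifying assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Phase 1: learn a visual prediction model from a single demonstration ---
# Hypothetical demonstration: a sequence of low-dimensional visual feature vectors.
T, d = 50, 8
demo = np.cumsum(0.1 * rng.normal(size=(T, d)), axis=0)  # stand-in for sensed visual input

# Least-squares fit of a linear next-frame predictor: demo[t+1] ~ demo[t] @ W.
X, Y = demo[:-1], demo[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict(frame):
    """Predicted next visual frame under the learned model."""
    return frame @ W

# --- Phase 2: imitate by minimizing the visual prediction error ---
# Hypothetical action model: an action perturbs the current visual frame additively.
def step(frame, action):
    return frame + action

def choose_action(frame, candidates):
    """Pick the candidate action whose outcome best matches the model's prediction."""
    errors = [np.linalg.norm(step(frame, a) - predict(frame)) for a in candidates]
    return candidates[int(np.argmin(errors))]

frame = demo[0]
candidates = [rng.normal(scale=0.2, size=d) for _ in range(64)]
best = choose_action(frame, candidates)
```

Here the imitation step is a simple search over sampled candidate actions; any optimizer that reduces the prediction error would fill the same role.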

This method was published at IEEE BIOROB 2018.