Robotics and Biology Laboratory

Recurrent State Representation Learning with Robotic Priors

People

Marco Morik
Divyam Rastogi
Rico Jonschkowski
Oliver Brock

Motivation

In robotics, sensory inputs are high-dimensional, yet only a small subspace of them is relevant for guiding a robot's actions. Previous work by Jonschkowski and Brock (2015) presented an unsupervised learning method that extracts a low-dimensional representation using robotic priors, which encode knowledge about the physical world in a loss function. However, their method and other approaches for learning such representations rely on a Markovian observation space.
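As an illustration of how robotic priors can be expressed as loss terms, the sketch below implements simplified versions of two of them (temporal coherence and causality) in numpy. The exact pairing of time steps and the loss forms are assumptions for illustration, not the precise formulation from the original work:

```python
import numpy as np

def temporal_coherence_loss(states):
    """Temporal coherence prior: states should change gradually,
    so penalize large differences between consecutive states."""
    deltas = np.diff(states, axis=0)            # (T-1, d) state changes
    return np.mean(np.sum(deltas ** 2, axis=1))

def causality_loss(states, actions, rewards):
    """Causality prior (simplified): time steps with the same action but
    different reward should map to distant states, since the state must
    explain the reward difference."""
    loss, count = 0.0, 0
    T = len(actions)
    for t1 in range(T):
        for t2 in range(t1 + 1, T):
            if actions[t1] == actions[t2] and rewards[t1] != rewards[t2]:
                dist = np.linalg.norm(states[t2] - states[t1])
                loss += np.exp(-dist)           # high when states are close
                count += 1
    return loss / max(count, 1)

# Toy 10-step trajectory with 2-D states, 4 discrete actions, binary rewards.
rng = np.random.default_rng(0)
states = rng.normal(size=(10, 2))
actions = rng.integers(0, 4, size=10)
rewards = rng.integers(0, 2, size=10)
print(temporal_coherence_loss(states), causality_loss(states, actions, rewards))
```

Minimizing a weighted sum of such terms shapes the learned state space without any ground-truth labels.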

Description of Work

We extend the idea of robotic priors to non-Markovian observation spaces. To this end, we train a recurrent neural network on trajectories so that it learns to encode past information in its hidden state, which allows us to learn a Markovian state space. To train this network, we combine and modify existing robotic priors so that they work in non-Markovian environments. We test our method in a 3D maze environment. To evaluate the quality of the learned state representation, we introduce a validation network that maps the learned states to the ground truth.
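The two components described above can be sketched as follows: a recurrent encoder that folds the observation history into a state, and a validation step that checks how well the ground truth can be recovered from the learned states. This is a minimal numpy sketch; the actual work presumably uses a trained recurrent network and a learned validation network, whereas here a vanilla RNN with random weights and a linear least-squares fit stand in for them:

```python
import numpy as np

class RecurrentEncoder:
    """Minimal vanilla RNN mapping an observation history o_1..o_t to a
    low-dimensional state s_t, carrying past information in its hidden
    state (weights are random here; training is omitted)."""
    def __init__(self, obs_dim, state_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (state_dim, obs_dim))
        self.W_h = rng.normal(0.0, 0.1, (state_dim, state_dim))
        self.b = np.zeros(state_dim)

    def encode(self, observations):
        h = np.zeros_like(self.b)
        states = []
        for o in observations:  # one recurrent step per observation
            h = np.tanh(self.W_in @ o + self.W_h @ h + self.b)
            states.append(h.copy())
        return np.stack(states)

def validation_error(states, ground_truth):
    """Fit a map from learned states to ground truth and report the mean
    squared residual (lower = the representation captures the ground truth
    better). A linear map is used here as a stand-in for the validation
    network."""
    X = np.hstack([states, np.ones((len(states), 1))])  # add bias column
    W, *_ = np.linalg.lstsq(X, ground_truth, rcond=None)
    return np.mean((X @ W - ground_truth) ** 2)

# Toy usage: encode a 6-step trajectory of 4-D observations into 2-D states.
obs = np.random.default_rng(1).normal(size=(6, 4))
states = RecurrentEncoder(obs_dim=4, state_dim=2).encode(obs)
print(states.shape)
```

Because the recurrent state summarizes the history, the validation error measures how much task-relevant information the representation retained, without ever using the ground truth during representation learning.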