Aditya Bhatt

Email: e-mail query

Room: MAR 5.065
Telephone: +49.30.314-73118

Interests

Deep Reinforcement Learning (RL) has made large strides in recent years, making it possible to learn controllers for complex or unstructured observation and action spaces.

Unfortunately, Deep RL methods still require considerable amounts of data, either via unrealistically long durations of real-world interaction or via optimization in parallel simulations, to find acceptable policies for robotics tasks, an area in which classical robot control methods excel.

Impressive demonstrations of Deep RL also rely on powerful compute infrastructure that is not widely available. While such results are certainly encouraging, it should not be too surprising that, given a good enough representational substrate (e.g., a neural network), expensive computation can find a parameter configuration that solves a problem.

What are the missing ingredients that will make robot learning work quickly and cheaply, with minimal interaction, directly on physical robots? Are there broadly applicable algorithmic priors that could speed up learning across many tasks? I hope to combine ideas from probabilistic robotics, representation learning, and intelligent exploration methods to answer these questions.

My immediate focus is to tackle the problem of dexterous in-hand manipulation using the RBO Hand 2 and 3.
