Manipulating objects as dexterously as humans do remains an open problem in robotics - not so much in carefully controlled settings such as factories, but in everyday household environments, so-called unstructured environments.
Interactive Perception, as the name suggests, is about acting to improve perception. The fundamental assumption is that perception and action cannot be separated, but form a complex that must be studied in its entirety. Following this approach, we try to design robots that explore their environment actively, in a way that is reminiscent of how a baby explores a new toy.
Online Interactive Perception
We developed an RGB-D-based online algorithm for the interactive perception of articulated objects. In contrast to existing solutions to this problem, the online nature of the algorithm permits perception during the interaction and addresses a number of shortcomings of existing methods. Our algorithm consists of three interconnected recursive estimation loops; their interplay is the key to the robustness of our approach, as the feedback they provide can be used to adapt the robot's behavior.
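To illustrate the idea of a recursive estimation loop running during the interaction, here is a minimal sketch: a 1-D Kalman filter that tracks a single joint variable (say, a door's opening angle) from noisy per-frame measurements. The class name, noise values, and measurement sequence are illustrative assumptions, not the actual algorithm:

```python
class JointStateFilter:
    """Minimal 1-D Kalman filter sketch: one recursive estimation loop
    could track a joint variable online from noisy RGB-D measurements."""

    def __init__(self, q=1e-3, r=1e-2):
        self.x = 0.0   # estimated joint angle (rad)
        self.p = 1.0   # estimate variance (starts large: we know nothing)
        self.q = q     # process noise: the joint may move between frames
        self.r = r     # measurement noise of the perceived angle

    def update(self, z):
        # Predict: the angle persists, uncertainty grows a little.
        self.p += self.q
        # Correct: blend the prediction with the new measurement z.
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Feed in a short sequence of noisy angle measurements (made up).
f = JointStateFilter()
for z in [0.10, 0.12, 0.11, 0.13]:
    est = f.update(z)
```

Because the estimate and its variance are updated every frame, the robot can react mid-interaction, e.g. slow down when the variance stays high.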
Contact: Manuel Baum, Aravind Battaje
Generating Task-directed Interactive Perception Behavior
Robots should not merely process sensor information passively and then act on the knowledge they extract from that sensor stream. They should (inter)actively shape the stream by adapting their behavior so that it becomes maximally informative for solving their tasks. We research how to generate such behavior both by engineering it and by learning it. In particular, we are interested in the relation between task-solving behavior, task-directed exploration, and information-based exploration. It is important for a robot to focus its exploration on task-relevant variables: purely information-based exploration would generally not focus the robot's limited resources (due to its embodiment) enough for its behavior to be efficient.
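A common way to formalize "maximally informative for the task" is to pick the action with the lowest expected posterior entropy over a task-relevant variable. The following sketch is a toy example under assumed sensor models (the hypotheses, actions, and probabilities are all made up): the robot is unsure whether a joint is revolute or prismatic, and a "push" is more informative than a passive "look":

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Belief over a task-relevant hypothesis: joint type.
belief = {"revolute": 0.5, "prismatic": 0.5}

# Hypothetical sensor models P(observation | hypothesis, action):
# pushing produces discriminative motion; looking barely discriminates.
sensor = {
    "push": {"revolute": {"arc": 0.9, "line": 0.1},
             "prismatic": {"arc": 0.1, "line": 0.9}},
    "look": {"revolute": {"arc": 0.55, "line": 0.45},
             "prismatic": {"arc": 0.45, "line": 0.55}},
}

def expected_posterior_entropy(action):
    """Average uncertainty that remains after taking `action`."""
    h = 0.0
    for obs in ("arc", "line"):
        p_obs = sum(belief[hyp] * sensor[action][hyp][obs] for hyp in belief)
        post = [belief[hyp] * sensor[action][hyp][obs] / p_obs for hyp in belief]
        h += p_obs * entropy(post)
    return h

best = min(("push", "look"), key=expected_posterior_entropy)
```

Task-directed exploration would additionally restrict this computation to the variables the current task actually depends on, rather than reducing uncertainty everywhere.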
Contact: Manuel Baum, Aravind Battaje
Active Outcome Recognition
Robotic behavior can not only reveal properties of the environment, such as kinematic degrees of freedom; it can also be adapted to better understand the interaction between robot and environment itself. Robots need to assess the outcomes of their own actions, and the sensory data created by task-directed behavior may not suffice for this estimation. We therefore work on adapting robotic behavior so that the outcomes of actions become easier to estimate. This is a problem of learning interactive perception.
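One simple way to think about "easier to estimate" is outcome separability: prefer the behavior whose success and failure cases produce clearly distinguishable sensor readings. The sketch below is purely illustrative (behavior names, readings, and the crude separability score are assumptions):

```python
def separability(successes, failures):
    """Distance between class means relative to within-class spread;
    a crude, illustrative score of how distinguishable outcomes are."""
    mean = lambda xs: sum(xs) / len(xs)
    spread = lambda xs, m: sum(abs(x - m) for x in xs) / len(xs)
    ms, mf = mean(successes), mean(failures)
    s = spread(successes, ms) + spread(failures, mf) + 1e-9
    return abs(ms - mf) / s

# Gripper-force readings after two candidate grasp behaviors (made up):
# first list = successful grasps, second list = failed grasps.
behaviors = {
    "fast_grasp": ([0.9, 1.1, 1.0], [0.7, 0.9, 0.8]),   # outcomes overlap
    "slow_grasp": ([1.8, 2.0, 1.9], [0.2, 0.3, 0.25]),  # outcomes separate
}

best = max(behaviors, key=lambda b: separability(*behaviors[b]))
```

A robot optimizing such a score would adapt its behavior toward executions whose outcomes it can actually verify from its own sensors.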
Contact: Manuel Baum, Aravind Battaje
Acquiring Kinematic Background Knowledge with Relational Reinforcement Learning
If a robot faces a novel, unseen object, it must first acquire information about the object’s kinematic structure by interacting with it. But there is an infinite number of possible ways to interact with an object. The robot therefore needs kinematic background knowledge: knowledge about the regularities that hint at the kinematic structure.
We developed a method for the efficient extraction of kinematic background knowledge from interactions with the world. We use relational model-based reinforcement learning, an approach that combines concepts from first-order logic (a relational representation) and reinforcement learning. Relational representations allow the robot to conceptualize the world as object parts and their relationships, and reinforcement learning enables it to learn from the experience it collects by interacting with the world. Using this approach, the robot is able to collect experiences and extract kinematic background knowledge that generalizes to previously unseen objects.
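To make the generalization benefit of a relational representation concrete, here is a toy sketch (all predicates, object names, and the rule itself are hypothetical, not the learned knowledge from the paper): a rule phrased over predicates like `graspable` and `connected` applies to any object with the same relational structure, including ones never seen before:

```python
# Relational facts describing two different articulated objects.
drawer = {("part", "front"), ("part", "body"),
          ("connected", "front", "body"), ("graspable", "front")}
door = {("part", "leaf"), ("part", "frame"),
        ("connected", "leaf", "frame"), ("graspable", "leaf")}

def suggest_interaction(state):
    """A rule stated relationally: if a graspable part is connected to
    another part, pulling it may reveal the kinematic joint type."""
    for (pred, *args) in state:
        if pred == "graspable":
            part = args[0]
            if any(f[0] == "connected" and part in f[1:] for f in state):
                return ("pull", part)
    return ("explore", None)
```

The same rule fires on the drawer and on the door, even though their parts have different names: knowledge lives in the relations, not in the ground objects.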
Contact: Manuel Baum, Aravind Battaje
Funding
Alexander von Humboldt professorship - awarded by the Alexander von Humboldt foundation and funded through the Federal Ministry of Education and Research (BMBF), July 2009 - June 2014.