
TU Berlin


Predicting Interaction Points for Robot Exploration from RGB-D Data


Motivation

A large part of our everyday environment consists of rigid bodies and the kinematic joints that connect them and restrict their relative motion. Together, these joints and bodies form kinematic structures called articulated objects, such as cupboards, drawers, and doors. Robots that aim to work in our homes or offices need to be able to explore, analyze, and manipulate such kinematic structures.


In order to reveal and learn about the kinematic joints that constrain the motion of an articulated object, a robot needs to generate motion of the bodies attached to the joints. For example, to determine whether a door is a sliding or a hinged door, and in which direction it opens or closes, a robot needs to interact with the handle and/or the surface of that door to pull or push it. If the interaction is appropriate and causes motion, the robot can perceive the motion constraints and estimate the type and properties of the kinematic joint.
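
To illustrate the estimation step, the following is a minimal sketch (not part of the thesis description) of how a joint type could be classified from the observed trajectory of a tracked point on the moving body. It assumes a 2D simplification: a prismatic joint moves the point along a line, a revolute joint along a circular arc, so we fit both models and compare residuals. The function name `classify_joint` and the Kasa circle fit are illustrative choices, not prescribed by the topic.

```python
import numpy as np

def classify_joint(traj):
    """Classify a 2D point trajectory as 'prismatic' or 'revolute'.

    traj: (N, 2) array of observed positions of a point on the moving body.
    Fits a line and a circle to the trajectory and returns the model
    with the smaller mean squared residual.
    """
    pts = np.asarray(traj, dtype=float)
    # Line fit via PCA: the smallest singular value of the centered data
    # measures the spread orthogonal to the best-fit line.
    centered = pts - pts.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    line_res = s[-1] ** 2 / len(pts)
    # Circle fit (Kasa method): x^2 + y^2 = 2ax + 2by + c, linear in (a, b, c).
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r2 = c + a ** 2 + b ** 2  # squared radius; non-positive means degenerate fit
    if r2 <= 0:
        circ_res = np.inf
    else:
        circ_res = np.mean((np.hypot(x - a, y - b) - np.sqrt(r2)) ** 2)
    return 'prismatic' if line_res <= circ_res else 'revolute'
```

A drawer trajectory (straight segment) would come out 'prismatic', a door trajectory (arc) 'revolute'; a real system would additionally handle noise, 3D trajectories, and model selection with a complexity penalty.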


Exploring by interacting randomly with the environment is very inefficient. Instead, the robot can create hypotheses about where and how to interact, and which forces to exert, that are more likely to reveal kinematic joints, given previous experiences or predefined perceptual priors (e.g. saliency). The focus of this thesis would be on generating such hypotheses.


Description of work

You would work on a module that takes RGB-D images or videos as input and predicts promising interaction points (grasping poses, pushing directions, etc.) for the robot's end-effector. This problem could be tackled in two ways:

1.) Using machine learning with human-labeled training data consisting of RGB-D images annotated with promising grasping points. Here you would train a neural network to predict grasping poses.

2.) Engineering a Computer Vision pipeline that creates a feature description of the scene and computes promising grasping poses.
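
As a rough illustration of the second approach, a very simple hand-engineered baseline could rank pixels by depth discontinuity, since depth edges in an RGB-D image often coincide with graspable structure such as handles or drawer fronts. The function name and the heuristic below are assumptions for the sketch, not part of the topic description:

```python
import numpy as np

def propose_interaction_points(depth, k=5):
    """Return the (row, col) pixels with the strongest depth edges.

    depth: HxW array of depth values in meters (hypothetical input format).
    Depth discontinuities are used as a crude proxy for promising
    interaction points; a real pipeline would add surface normals,
    color features, and reachability checks.
    """
    gy, gx = np.gradient(depth.astype(float))   # depth gradients per axis
    edge = np.hypot(gx, gy)                     # gradient magnitude
    top = np.argsort(edge, axis=None)[::-1][:k] # k strongest edge pixels
    return [tuple(np.unravel_index(i, depth.shape)) for i in top]
```

Such a heuristic could also serve as a perceptual prior (in the sense of the saliency priors mentioned above) to seed the hypotheses that the learned model refines.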


Requirements

Strong C++ and/or Python programming skills

Computer Vision

Machine Learning

Robotics (ROS)


Contact

Manuel Baum

Roberto Martín-Martín

Oliver Brock
