
Open Theses

Robotics Related

Problem Solving Using a Multi-Strategy Approach

11. May 2022

Solving diverse tasks in different environments is something humans can do with ease. The same cannot be said about robots: the range of problems a single robot can solve is usually very narrow. One widely accepted explanation for human superiority in problem solving is the multi-strategy framework. It assumes that the mind has a repertoire of different strategies that it uses to solve cognitive and behavioral tasks, and that the appropriate strategy is selected based on perceived cues from the task environment. Although this theory is well studied, the mechanisms underlying the selection of the right strategy from the toolbox are still not well understood. The framework of strategy selection as rational metareasoning, introduced by Lieder & Griffiths (2017), proposes a model of how this selection could be realized. It assumes that the choice of strategy from the toolbox is based on a subjective assessment of each strategy's accuracy and cost, which trade off against each other: knowing the accuracy and cost of each strategy, an optimal choice can be made. This method has been analyzed and validated on several simple tasks, such as gambling. Whether the theory also applies to more natural tasks, such as escaping an escape room or opening a lockbox, is still unknown.
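
As a rough illustration of the selection rule only, the following Python sketch picks the strategy whose subjectively estimated reward minus cost is highest. The strategy names and numbers are made up for illustration and are not taken from Lieder & Griffiths or from any concrete task.

from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    expected_reward: float  # subjective estimate of the strategy's accuracy/payoff
    expected_cost: float    # subjective estimate of its time/effort cost

def select_strategy(toolbox):
    """Pick the strategy with the best estimated reward-cost trade-off."""
    return max(toolbox, key=lambda s: s.expected_reward - s.expected_cost)

toolbox = [
    Strategy("take-the-best heuristic", expected_reward=0.7, expected_cost=0.1),
    Strategy("weighted-additive rule",  expected_reward=0.9, expected_cost=0.5),
    Strategy("random guess",            expected_reward=0.5, expected_cost=0.0),
]

print(select_strategy(toolbox).name)   # -> take-the-best heuristic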

Human-like grasping with robot-like reactiveness

25. April 2022

If we want robots to help with daily chores, they first need to interact physically with their environment, and second, they need to cope with imperfections in both their motion and their perception. This is a difficult problem, because models of the environment and of dynamic interactions are either weak approximations or too complex to plan with robustly and efficiently.

Distance estimation using fixation and event camera

20. April 2022

Humans navigate and interact with the 3D world robustly without complicated 3D sensors such as lidars, relying only on the 2D sensors in their eyes. Compared (rather naively) to widely available camera sensors, the human retina has vastly diminished capabilities in terms of resolution, refresh rate, and so on. How, then, can humans interact with the 3D world so robustly? One way is to exploit regularities in 3D space that are unlocked by actively moving in specific ways. Gaze fixation, the act of keeping the gaze locked on one object at a time while moving, is such a movement of the body and eyes, and it can be shown to be a very useful behavior for extracting relevant 3D properties of the world. Event cameras, also inspired by human vision, can sense visual motion efficiently, especially tiny motions. Gaze fixation and event cameras both mimic human embodiment in certain ways and can help a robot interact with the 3D world as effortlessly as humans do.
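
One geometric regularity that fixation unlocks can be sketched as follows: if the camera translates while counter-rotating to keep a fixated point centered, the distance to that point is roughly the translation speed perpendicular to the gaze direction divided by the rotation rate. The Python snippet below illustrates only this simplified relation; it assumes a known translation speed and fixation rate and ignores the event-camera processing itself.

def fixation_distance(v_perp, omega):
    """Distance to the fixated point, assuming the camera translates with
    speed v_perp (component perpendicular to the gaze direction, in m/s)
    while rotating at omega (rad/s) to keep that point centered:
    omega ~= v_perp / Z  =>  Z ~= v_perp / omega."""
    return v_perp / omega

# Example: moving sideways at 0.2 m/s while the gaze rotates at 0.1 rad/s
# to stay locked on the object -> the object is roughly 2 m away.
print(fixation_distance(0.2, 0.1))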

When Robots Know What to Reach for - Contact-Based Motion Planning

28. May 2020

We have all tried to navigate a dark room in the middle of the night. Even when we know the room, it is challenging because our visual perception is highly uncertain. Uncertain perception and motion make motion planning an even harder problem. But reaching out to the wall and letting it guide our hand is a simple and robust strategy, because contact not only reduces the uncertainty about our location, it also guides our motion.
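
As a toy illustration of how a single contact can collapse uncertainty, the sketch below (with made-up numbers and a hypothetical wall at a known position) represents the belief about the robot's 2D position as a particle cloud and conditions it on touching that wall.

import numpy as np

rng = np.random.default_rng(0)

# Belief about the robot's 2D position as particles, with large
# uncertainty in both axes (e.g. while moving through a dark room).
particles = rng.normal(loc=[1.5, 0.0], scale=[0.5, 0.5], size=(1000, 2))
print("std before contact:", particles.std(axis=0))

# Touching a wall known to lie at x = 2.0 fixes our x-coordinate up to
# a small sensing error; only the uncertainty along the wall remains.
WALL_X = 2.0
particles[:, 0] = WALL_X + rng.normal(scale=0.01, size=len(particles))
print("std after contact: ", particles.std(axis=0))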

Event panorama

11. January 2022

Image panoramas are created from RGB images. By warping a set of images taken from different camera positions into the same shared coordinate frame, the images may be combined into a larger composite image, a panorama of the scene. A similar idea can be applied to events, but instead of a typical image panorama we obtain a large 'edge' image capturing the contours of the object edges that triggered events on the event camera's sensor. By aligning events over time we get a better estimate of the scene's structure, and conversely the scene's structure helps us align the events accurately.
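
A minimal sketch of this accumulation step is given below. It assumes, unlike a real system that would have to estimate the motion, a camera that purely rotates about its vertical axis at a known constant rate, and it uses a simple linear angle-to-column mapping instead of a proper panoramic projection; resolution, focal length, and event data are made up.

import numpy as np

H, W = 180, 240            # event-camera resolution (illustrative)
F = 200.0                  # focal length in pixels
OMEGA = 0.5                # known yaw rate in rad/s
PANO_W = 1200              # panorama width in pixels

def accumulate(events, pano_h=H, pano_w=PANO_W):
    """events: rows of (x, y, t); returns an edge-density panorama."""
    pano = np.zeros((pano_h, pano_w))
    for x, y, t in events:
        # Undo the rotation at time t: shift the column by the yaw angle
        # expressed in pixels, then wrap into the panorama.
        col = int(x + F * OMEGA * t + 0.5) % pano_w
        pano[int(y), col] += 1
    return pano

# A vertical edge seen repeatedly while the camera rotates triggers events
# at shifting columns; warping them back stacks them into a single column.
ts = np.linspace(0.0, 1.0, 500)
events = np.stack([100 - F * OMEGA * ts, np.full_like(ts, 90.0), ts], axis=1)
pano = accumulate(events)
print(pano[90].argmax(), pano[90].max())   # events pile up at column 100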

Learning to Manipulate Articulated Objects From Human Demonstration

25. April 2022

Robots operating in everyday human environments need to interact with articulated objects such as locks and drawers. These articulated objects are usually designed specifically so that humans can use them easily. To this end, the function of many objects is encoded in their kinematic structure. For example, there are over 1000 types of scissors with distinct appearances and sizes, but their kinematic model remains the same. This enables humans to transfer experience between objects of the same type. The same insight can be leveraged to learn robot manipulation skills from human demonstrations: using kinematic models of objects as the underlying representation for Learning from Demonstration, we can impart similar transfer abilities to robots, as shown in this video: https://www.youtube.com/watch?v=Ya1DlGQ5hOU.
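
A minimal sketch of what a kinematic model as a shared representation could look like in code; the class layout, joint parameters, and object names are illustrative assumptions, not the lab's actual implementation.

from dataclasses import dataclass
import numpy as np

@dataclass
class RevoluteJoint:
    axis: np.ndarray      # rotation axis of the joint (unit vector)
    origin: np.ndarray    # a point on that axis
    angle: float = 0.0    # current joint configuration

    def actuate(self, delta):
        """Advance the joint state; a demonstrated opening motion on one
        object transfers to any object with the same kinematic model."""
        self.angle += delta
        return self.angle

# Two visually different pairs of scissors share the same kinematic
# structure: a single revolute joint. A skill demonstrated on one
# (e.g. 'rotate the joint by 0.5 rad') therefore applies to the other.
kitchen_scissors = RevoluteJoint(axis=np.array([0.0, 0.0, 1.0]),
                                 origin=np.array([0.0, 0.0, 0.0]))
garden_shears = RevoluteJoint(axis=np.array([0.0, 0.0, 1.0]),
                              origin=np.array([0.1, 0.0, 0.0]))
for obj in (kitchen_scissors, garden_shears):
    print(obj.actuate(0.5))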

Improving In-Hand Manipulation Skills With Constraints Provided by a Second Thumb

21. April 2022

The thumb can be considered the most important finger of the human hand for handling objects. Compared to the other fingers, the thumb has a much greater range of motion, which allows it to flexibly place contacts and exert a variety of forces on an object held in the hand. The thumb design of the latest RBO Hand 3 (RH3) allows our hand to behave more skillfully than prior versions. In this work, you will investigate a two-thumb design of the RH3, with the goal of flexibly providing contacts to improve in-hand manipulation capabilities.
