Compensating the Pneumatic Actuation Noise by Using Strain and Acoustic Signals
Master Thesis
Veronika Pavlova
- We are attempting to combine the liquid-metal strain sensors (top) and the active acoustic sensor (bottom) to achieve improved robustness and increased measurement accuracy.
- © RBO
Motivation
To estimate the robot state and its environment, we can use different sensors. For example, we can measure the sound inside soft pneumatic actuators and use small changes in the sound signal to sense a wide range of actuator properties (Active Acoustic Sensor). Alternatively, strain sensors attached to the robot finger measure the deformation of the soft hull (Strain Sensor). While each sensor has shown promising results in different tasks, some disturbances lead to data corruption or loss. To achieve robust sensor predictions, we combine multiple sensors. Additionally, this fusion might give us more information about the pneumatic actuator's state and its interaction with the environment, and may thereby lead to more precise contact location detection.
Description of Work
We have developed two primary sensors in the lab, which we use for different robotics tasks.
First, we have the Active Acoustic Sensor, which has a speaker and a microphone attached inside the finger, at the tip and the bottom of the chamber. The speaker generates a sound, and if the finger deforms or makes contact, the sound recorded by the microphone varies. These changes give us precise predictions of the finger state, e.g., the applied force, deformation, or even temperature. However, one disadvantage is that external sound sources, such as the inflation/deflation sound, corrupt the recordings.
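To make this concrete, the following is a minimal sketch of how spectral features could be extracted from one microphone window; it is not the lab's actual pipeline, and the sample rate, window length, and feature choice are assumptions made for illustration.

```python
# Minimal sketch (hypothetical, not the lab's pipeline): extract log-magnitude
# spectrum features from a microphone window recorded while the speaker plays
# a known excitation signal. Sample rate and window length are assumed values.
import numpy as np

SAMPLE_RATE = 48_000   # Hz, assumed
WINDOW_SIZE = 4_096    # samples per analysis window, assumed

def acoustic_features(window: np.ndarray) -> np.ndarray:
    """Return the log-magnitude spectrum of one microphone window.

    Contact or deformation changes the finger's acoustic response,
    which shows up as shifts in these spectral magnitudes.
    """
    spectrum = np.fft.rfft(window * np.hanning(len(window)))
    return np.log1p(np.abs(spectrum))

# Example with synthetic data standing in for a real recording.
fake_window = np.random.randn(WINDOW_SIZE)
features = acoustic_features(fake_window)
print(features.shape)  # (WINDOW_SIZE // 2 + 1,)
```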
Second, we have four strain sensors on the top and sides of the soft hull. They measure deformations of the finger via changes in their electrical resistance. The sensors are filled with liquid metal, which sometimes disconnects and leads to data loss.
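One simple way to handle such dropouts is to flag implausible resistance readings before they reach a downstream model. The sketch below assumes an open circuit appears as a very large or non-finite reading; the threshold and the example values are hypothetical.

```python
# Minimal sketch, assuming a disconnected liquid-metal channel shows up as an
# implausibly high (open-circuit) resistance. Threshold is a made-up value.
import numpy as np

OPEN_CIRCUIT_THRESHOLD = 1e6  # ohms, assumed cutoff for "disconnected"

def mask_strain_channels(resistances: np.ndarray) -> np.ndarray:
    """Return a boolean mask marking which strain channels are still valid."""
    return np.isfinite(resistances) & (resistances < OPEN_CIRCUIT_THRESHOLD)

readings = np.array([1.2e3, 1.5e3, np.inf, 2.0e3])  # third channel disconnected
print(mask_strain_channels(readings))  # [ True  True False  True]
```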
The goal is to combine the acoustic and strain sensors to achieve the same (or even better) contact localization accuracy at all times and across varying tasks. The work may even go further and use the richer state information from both sensors to, e.g., classify objects by attributes such as density or shape.
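As an illustration only, the sketch below shows one straightforward fusion strategy: early fusion by concatenating acoustic and strain features and fitting an off-the-shelf regressor for the contact location. The thesis may use an entirely different approach, and all data, feature sizes, and model choices here are synthetic assumptions.

```python
# Minimal fusion sketch (not the thesis method): concatenate acoustic and
# strain features and regress the contact location. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_samples = 200
acoustic = rng.normal(size=(n_samples, 2049))  # e.g., log-magnitude spectra
strain = rng.normal(size=(n_samples, 4))       # four strain channels
contact_location = rng.uniform(0.0, 1.0, size=n_samples)  # along the finger

X = np.hstack([acoustic, strain])              # simple early fusion
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X, contact_location)
print(model.predict(X[:3]))
```

In practice, the strain-channel validity mask from above could also be appended to the feature vector, so a learned model has the chance to rely on the acoustic signal whenever a strain channel drops out, and vice versa.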