We investigate human and robot perception, with the goal of developing a constructive understanding of robust perceptual information processing. To do this, we follow a transdisciplinary approach bridging psychology and robotics. We model a mechanism of human vision that manifests as an observable visual phenomenon and derive initial predictions from the model. We then collect psychophysical data to test these predictions and, where the data deviate, adjust the model to account for the new observations. We repeat this process incrementally (the analytic-synthetic loop), gradually deepening our understanding of the perceptual mechanisms underlying human vision.
Based on insights gained from investigating multiple phenomena and mechanisms, we will produce an algorithmic model of human perception capable of replicating larger parts of human vision. We will also develop robot perception algorithms that leverage insights about the human perceptual system to advance the state of the art in the synthetic disciplines.
Our main motivation for pursuing this line of research is the striking match between characteristics of human vision and robot vision. There are substantial similarities, at different levels of abstraction, between the information processing architecture of the visual cortex and that of interactive perception models developed in robotics, here at RBO [previous publication, another current project focused on leveraging the same information processing models]. Given this match, the insights we derive from models built on information processing patterns from robotics should bear a high degree of relevance to human vision.
For variations of (van Lier, 2009) go here.
For variations of (Suchow, 2011) go here.
This project is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2002/1 "Science of Intelligence" - project number 390523135.