Computer Vision and Remote Sensing

Current projects

Hierarchical modularized vision system for perception-action loops

Visual understanding is a key component of biological and synthetic intelligent systems. Because visual sensors (of any kind) provide high-dimensional data vectors with structural relationships between vector elements, e.g. multi-channel 2D images, the analysis of visual data is unavoidably a search problem in highly complex spaces. This is especially true when the visual input has a time component, as in the visual system of an acting agent.
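To give a rough sense of scale, the sketch below computes the raw dimensionality of a short multi-channel video clip; frame size, frame rate, and clip length are illustrative assumptions, not parameters of our system.

```python
# Minimal sketch (hypothetical numbers) of why visual data analysis is a
# search problem in a very high-dimensional space: even a short, modest-
# resolution multi-channel video clip spans millions of dimensions.

channels, height, width = 3, 224, 224      # assumed RGB frame size
fps, seconds = 30, 2                       # assumed short clip
frames = fps * seconds

dims_per_frame = channels * height * width # 150,528 dimensions per frame
dims_per_clip = frames * dims_per_frame    # ~9 million dimensions per clip

print(f"Dimensions per frame:   {dims_per_frame:,}")
print(f"Dimensions per 2 s clip: {dims_per_clip:,}")
```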

Knowledge-augmented face perception

Face perception and categorization are fundamental to social interactions. In humans, input from facial features is integrated with top-down influences from other cognitive domains, such as expectations, memories, and contextual knowledge. For instance, whether a face is perceived as depicting an angry expression may depend on prior knowledge about the context (Aviezer et al., 2007) or the person (Abdel Rahman, 2011; Suess, Rabovsky, & Abdel Rahman, 2014). Furthermore, humans have a strong tendency to infer traits such as trustworthiness directly from faces.
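As a purely illustrative sketch of such top-down integration (not the project's actual model), a contextual prior can be combined with bottom-up evidence from facial features via a simple Bayesian update; all probabilities below are made-up values.

```python
# Illustrative sketch of knowledge-augmented face perception: bottom-up
# evidence from facial features is combined with a top-down contextual
# prior via Bayes' rule. All numbers are hypothetical.

def posterior_angry(lik_angry, lik_neutral, prior_angry):
    """P(angry | face, context) via Bayes' rule over two hypotheses."""
    prior_neutral = 1.0 - prior_angry
    evidence = lik_angry * prior_angry + lik_neutral * prior_neutral
    return lik_angry * prior_angry / evidence

# The same ambiguous expression (weak evidence for "angry") ...
lik_angry, lik_neutral = 0.6, 0.4

# ... is interpreted differently depending on prior contextual knowledge.
print(posterior_angry(lik_angry, lik_neutral, prior_angry=0.2))  # calm context   -> ~0.27
print(posterior_angry(lik_angry, lik_neutral, prior_angry=0.8))  # hostile context -> ~0.86
```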

Social responsiveness and its effects on learning in human-human and human-robot interaction

This project combines research from educational psychology and computer vision to examine principles of socially responsive teaching behavior in social learning situations. Perceiving and appropriately reacting to social cues facilitates effective knowledge transfer between interaction partners. We are developing computational models of socially responsive behavior in learning situations that will allow for in-depth analyses of the relations between socially responsive teaching behavior and student engagement, emotion, and cognitive performance in human-human and human-robot interactions.
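As a hedged illustration of the kind of analysis such models enable (not our actual pipeline), the sketch below relates hypothetical per-window teacher responsiveness and student engagement scores with a simple correlation.

```python
# Hypothetical sketch: responsiveness and engagement are represented as
# time-aligned per-window scores (e.g. one value per 10 s of interaction)
# and their relation is summarized by a correlation. All data are made up.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-window scores extracted from video (values in [0, 1]).
teacher_responsiveness = rng.uniform(0.2, 1.0, size=60)
student_engagement = 0.5 * teacher_responsiveness + rng.normal(0, 0.1, size=60)

r = np.corrcoef(teacher_responsiveness, student_engagement)[0, 1]
print(f"Correlation between responsiveness and engagement: {r:.2f}")
```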

Completed projects

Mouse Lock Box

Our research is concerned with the automatic annotation of videos from a laboratory experiment. Mice are presented with so-called lockboxes, small puzzle boxes containing a food reward, and have to manipulate a series of interlocked movable objects to open them. We improve tracking algorithms that tell us where the mouse is at any given time. The methods also determine the state of the lockbox, so that we automatically acquire informative time series about the behavior. In addition, we are interested in computing 3D coordinates of the scene. Together, this allows us to process large amounts of video data with minimal human labor.
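The sketch below illustrates the general idea of per-frame automatic annotation using OpenCV background subtraction; the file name, thresholds, and the single-animal assumption are illustrative and do not reflect our actual tracking methods.

```python
# Minimal sketch of automatic per-frame annotation with OpenCV: background
# subtraction locates the largest moving blob (assumed to be the mouse) and
# its centroid is logged as a time series.

import cv2

cap = cv2.VideoCapture("lockbox_trial.mp4")   # hypothetical recording
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

trajectory = []                               # (frame_index, x, y) samples
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    mask = cv2.medianBlur(mask, 5)            # suppress small noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            trajectory.append((frame_idx, cx, cy))
    frame_idx += 1

cap.release()
print(f"Tracked the mouse in {len(trajectory)} of {frame_idx} frames")
```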