Researcher

Markus Flierl

Visual Sensor Networks

Visual information plays a prominent role in our daily lives. This is not surprising, as human beings are ocular-centric: we aim to use our visual sense as efficiently as possible, and we rely on visual information to communicate our messages. Images and video have changed the way we see the world. An image captures the impression of a moment; video adds another dimension, capturing the constant change of our world. Still, images and video let us perceive the world as “one-dimensional”. Endowed with binocular vision, human beings benefit from more than one view of the world. Multi-view imagery adds yet another dimension, capturing constant change from various perspectives.
The research on Visual Sensor Networks investigates distributed visual communication, with emphasis on both source coding and transmission over networks. In particular, this project considers visual communication of natural dynamic 3D scenes. Spatially distributed video sensors capture a dynamic 3D scene from multiple viewpoints; each sensor encodes its signal and transmits the data over the network to a central decoder, which reconstructs the dynamic 3D scene. The sensor network should exploit the correlation among the many observations of the scene, and communication among the visual sensors should further enhance the network's efficiency. The project addresses questions such as how to sample the dynamic 3D scene efficiently, and which messages the video sensors have to exchange to maximize their efficiency.
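The gain from exploiting inter-view correlation can be illustrated with a minimal sketch (not one of the project's actual algorithms): under a toy 1-D disparity model, two cameras observe shifted versions of the same signal, so after disparity compensation only a small residual remains to be coded, instead of the second view's full energy. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "scene": two cameras observe the same signal, offset by a disparity.
scene = rng.standard_normal(256)
disparity = 3
view_a = scene[disparity:]   # reference camera
view_b = scene[:-disparity]  # second camera, a shifted view of the same scene

def best_shift(ref, target, max_shift=8):
    """Exhaustively search the shift that minimizes the residual energy
    between the target view and a shifted copy of the reference view."""
    best_err, best_d = np.inf, 0
    for d in range(-max_shift, max_shift + 1):
        err = np.sum((target - np.roll(ref, d)) ** 2)
        if err < best_err:
            best_err, best_d = err, d
    return best_d

# Independent coding of view_b would spend bits on its full energy;
# disparity-compensated prediction leaves only a small residual.
d = best_shift(view_a, view_b)
residual = view_b - np.roll(view_a, d)
ratio = np.sum(residual**2) / np.sum(view_b**2)
```

Here `ratio` is far below one: nearly all of the second view's energy is predictable from the reference view, which is the correlation a distributed coding scheme aims to exploit without the sensors exchanging their raw signals.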
Beyond communication tasks, Visual Sensor Networks may serve other applications: as the views of the cameras overlap, multi-view image sequence data may be used to track objects in 3D space or to estimate the motion field of voxels. Centralized algorithms for these problems are known, but due to the large data volume generated by dense camera arrays, such algorithms may not be feasible. Finally, the signal desired at the fusion center of the Visual Sensor Network will also shape its design: efficient reconstruction for driving a holographic display with all camera signals imposes different constraints than rendering a single novel view.

Pictures

Scientific Advisory Board, mentors, and researchers at the Annual Review Meeting, 05 November 2015, at the MPI-INF in Saarbrücken.

The Advisory Board and Researchers at the Annual Review Meeting on February 11, 2014 at Stanford University.

The Advisory Board and Researchers at the Review Meeting on February 14, 2012 at Stanford University.