Siyu Tang

I am a research group leader in the Department of Perceiving Systems at the Max Planck Institute for Intelligent Systems. My group is funded by the DFG through the CRC 1233 on Robust Vision.

I am interested in the intersection of computer vision and machine learning, with a focus on holistic visual scene understanding. In particular, I am interested in analyzing and modeling people in complex visual scenes.

Offers: I am looking for highly motivated PhD students and interns. I also have projects for bachelor's and master's theses. If you are interested, please contact me directly or send your application to ps-apply@tuebingen.mpg.de

News:

I will be an area chair for ACCV 2018.

I received an Early Career Research Grant to start my own research group at the Max Planck Institute for Intelligent Systems and the University of Tübingen; details coming soon. I am looking for highly motivated PhD students and interns!

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract

This paper considers the task of articulated human pose estimation of multiple people in real-world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity to each other.
This joint formulation is in contrast to previous strategies that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation over a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single-person and multi-person pose estimation.
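To give a feel for the partitioning-and-labeling idea, here is a minimal brute-force sketch: each part candidate receives a body-part label while candidates are simultaneously grouped into person clusters, minimizing a sum of unary and pairwise costs. The paper solves this as an integer linear program with learned potentials; the two-part label set, the four candidates, and all cost values below are made-up assumptions purely for illustration.

```python
# Toy illustration of joint "partition and label": assign each part
# candidate a body-part label AND group candidates into person clusters,
# minimizing total cost.  The real method solves an integer linear
# program; here we simply enumerate all (labeling, partition) pairs for
# a tiny hypothetical instance.  All costs are invented for this sketch.
from itertools import product

PARTS = ("head", "neck")  # two body-part classes for the toy example
N = 4                     # four part candidates (detections)

# unary[c][p]: cost of assigning part label p to candidate c
unary = [
    {"head": 0.1, "neck": 0.9},  # candidate 0 looks like a head
    {"head": 0.8, "neck": 0.2},  # candidate 1 looks like a neck
    {"head": 0.2, "neck": 0.7},
    {"head": 0.9, "neck": 0.1},
]

# pair[(a, b)]: cost of placing candidates a and b in the SAME cluster
# (negative = attraction, e.g. a geometrically compatible head/neck pair)
pair = {
    (0, 1): -1.0, (0, 2): 0.5, (0, 3): 0.6,
    (1, 2): 0.4, (1, 3): 0.5, (2, 3): -1.0,
}

def partitions(items):
    """Yield every partition of `items` into non-empty clusters."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        # put `first` into each existing cluster in turn...
        for i, cluster in enumerate(smaller):
            yield smaller[:i] + [cluster + [first]] + smaller[i + 1:]
        # ...or open a new cluster (a new "person") for it
        yield smaller + [[first]]

def cost(labeling, partition):
    total = sum(unary[c][labeling[c]] for c in range(N))
    for cluster in partition:
        for i, a in enumerate(cluster):
            for b in cluster[i + 1:]:
                total += pair[(min(a, b), max(a, b))]
    return total

# Exhaustive search over all labelings and all partitions.  The number
# of persons is not fixed in advance: it emerges from the best partition.
labeling, partition = min(
    ((lab, part) for lab in product(PARTS, repeat=N)
     for part in partitions(list(range(N)))),
    key=lambda s: cost(*s),
)
print("labels:", dict(enumerate(labeling)))
print("persons:", sorted(sorted(c) for c in partition))
```

With these toy costs, the minimizer groups the attracting pairs into two person clusters, so the number of people is inferred rather than assumed. Brute force is exponential and only viable for a handful of candidates, which is why the paper resorts to an integer linear program.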

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.