Developing solutions for comprehensive human visual understanding in in-the-wild scenarios, regarded as one of the most fundamental problems in computer vision, could have a crucial impact on many industrial application domains, such as autonomous driving, virtual reality, video surveillance, human-computer interaction, and human behavior analysis. For example, human parsing and pose estimation are often regarded as the very first steps for higher-level activity/event recognition and detection. Nonetheless, a large gap remains between what real-life applications need and what modern computer vision techniques can achieve. The goal of this workshop is to allow researchers from the field of human visual understanding and other disciplines to present their progress, exchange ideas, and co-develop novel approaches that could shape the future of this area and further advance the performance and applicability of the resulting systems in real-world conditions.
To stimulate progress on this research topic and attract more talent to work on it, we will also provide the first standard human parsing and pose benchmark on a new large-scale Look Into Person (LIP) dataset. This dataset is both larger and more challenging than similar previous ones: it contains 50,000 images with elaborate pixel-wise annotations covering 19 comprehensive semantic human part labels, as well as 2D human poses with 16 dense key points. The images, collected from real-world scenarios, contain humans appearing with challenging poses and views, heavy occlusion, various appearances, and low resolution. Details on the annotated classes and examples of our annotations are available at http://sysu-hcp.net/lip. The challenge is held in conjunction with CVPR 2018 in Salt Lake City. Challenge participants with the most successful and innovative entries will be invited to present at this workshop.
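As a rough illustration, the sketch below shows how these two annotation types (pixel-wise part label maps and 16-key-point poses) might be loaded. It is a minimal example, not an official loader: the file layout, file names, and JSON structure are assumptions made for this sketch; consult the dataset page above for the actual release format.

```python
# Hypothetical reader for LIP-style annotations. The file layout and JSON
# structure below are assumed for illustration only -- see
# http://sysu-hcp.net/lip for the actual release format.
import json

import numpy as np
from PIL import Image

NUM_PARTS = 19      # semantic human part labels; pixel value 0 assumed background
NUM_KEYPOINTS = 16  # dense 2D pose key points per person


def load_parsing_mask(png_path):
    """Read a per-pixel label map as an HxW uint8 array in [0, NUM_PARTS]."""
    mask = np.array(Image.open(png_path), dtype=np.uint8)
    if mask.max() > NUM_PARTS:
        raise ValueError("unexpected part label id in %s" % png_path)
    return mask


def load_pose(json_path, image_id):
    """Read one image's pose as a (NUM_KEYPOINTS, 3) array of (x, y, visible)."""
    with open(json_path) as f:
        poses = json.load(f)  # assumed: {image_id: flat list of 16*3 floats}
    return np.asarray(poses[image_id], dtype=np.float32).reshape(NUM_KEYPOINTS, 3)
```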
Regarding the viability of this workshop, its topic is both attractive and active, and many researchers are likely to attend (we conservatively estimate 100 attendees based on the past publication record on related topics). It is related to, yet clearly different from, past workshops, as explained below. In addition, we have received confirmations from many renowned professors and researchers in this area, who are either glad to give a keynote speech (as listed in the program) or have kindly offered help. We believe this workshop will be very successful and will significantly benefit the progress of this research area.
Time | Schedule |
---|---|
Location: | Room 250 D-E |
08:30-08:40 | Opening remarks and welcome |
08:40-09:00 | The Look Into Person (LIP) challenge introduction and results |
09:00-09:15 | Oral talk 1: Second place in the single-person pose estimation challenge (Track 3), Speaker: Zhenqi Xu (ByteDance AI Lab) |
09:15-10:00 | Invited talk 1: Xian-Sheng Hua, Distinguished Engineer/VP, Alibaba Group |
10:00-10:30 | Poster session and coffee break (Hall A; Halls 1-4) |
10:30-11:15 | Invited talk 2: Visual Commonsense Reasoning, Speaker: Yixin Zhu, Postdoctoral Scholar, VCLA lab at UCLA |
11:15-11:30 | Oral talk 2: Winner of the single-person (Track 3) & multi-person (Track 4) pose estimation challenges, Speaker: Wu Liu (JD AI Research) |
11:30-11:45 | Oral talk 3: Winner of the single-person (Track 1) & multi-person (Track 2 & Track 5) human parsing challenges, Speaker: Yunchao Wei (University of Illinois Urbana-Champaign) |
11:45-14:00 | Lunch (Hall A; Halls 1-4) |
14:00-14:30 | Invited talk 3: Jimei Yang, Adobe Research |
14:30-14:45 | Oral talk 4: Second place in the multi-human pose estimation challenge (Track 4), Speakers: Sheng Jin (Tsinghua University) and Wentao Liu (Tsinghua University) |
14:45-15:15 | Poster session and coffee break (Hall A; Halls 1-4) |
15:15-15:45 | Invited talk 4: Jia Deng, University of Michigan |
15:45-16:15 | Awards & Future Plans |
Important Dates

Format Requirements

Submission Details

Look Into Person: Single-Person Human Parsing Challenge (Track 1, 50,462 images)
Look Into Person: Multi-Person Human Parsing Challenge (Track 2, 38,280 images)
Look Into Person: Single-Person Human Pose Estimation Challenge (Track 3, 50,462 images)
Look Into Person: Multi-Human Pose Estimation Challenge (Track 4, 25,403 images)
Look Into Person: Fine-Grained Multi-Human Parsing Challenge (Track 5, 25,403 images)
Please feel free to send any questions or comments to:
gongk3@mail2.sysu.edu.cn, liych28@mail2.sysu.edu.cn, xiaodan1@cs.cmu.edu, zhaojian90@u.nus.edu, jianshu@u.nus.edu