One of the ultimate goals of computer vision techniques is to augment humans in a variety of application fields. Developing solutions for comprehensive human-centric visual applications in in-the-wild scenarios, regarded as one of the most fundamental problems in computer vision, could have a crucial impact on many industrial application domains, such as virtual reality, human-computer interaction, human motion analysis, and advanced robotic perception. Human-centric understanding, including human parsing/detection, pose estimation, and relationship detection, is often regarded as the very first step toward higher-level activity/event recognition and detection. Nonetheless, a large gap remains between what real-life applications need and what modern computer vision techniques can deliver. Taking a further step, research advances in virtual reality and 3D graphics analysis are urgently needed for advanced human-centric analysis. For example, 2D/3D clothes virtual try-on simulation systems that seamlessly fit various clothes onto 3D human body shapes have attracted considerable commercial interest. Human motion synthesis and prediction can bridge the virtual and real worlds, for example by simulating virtual characters that mimic human behaviors, or by enabling robots to interact with humans more intelligently through causal inference over human activities. The goal of this workshop is to allow researchers from the fields of human-centric understanding and 2D/3D synthesis to present their progress, communicate, and co-develop novel ideas that could shape the future of this area and further advance the performance and applicability of the resulting systems in real-world conditions.
We will also organize the third large-scale Look Into Person (LIP) challenge, which includes the following competition tracks: single-person human parsing, single-person pose estimation, multi-person human parsing, multi-person video parsing, multi-person pose estimation, and clothes virtual try-on. This third LIP challenge extends the previous LIP challenges at CVPR 2017 and CVPR 2018 by additionally covering a video human parsing track and a 2D/3D clothes virtual try-on benchmark. For single-person human parsing and pose estimation, we will provide 50,000 images with elaborate pixel-wise annotations covering 19 semantic human part labels, as well as 2D human poses with 16 dense keypoints. For the multi-person human parsing track, we will provide another 50,000 images of crowded scenes with 19 semantic human part labels. For video-based human parsing, 3,000 video shots of 1-2 minutes each will be densely annotated with 19 semantic human part labels. For multi-person pose estimation, the dataset contains 25,828 images (on average 3 persons per image) annotated with 2D human poses of 16 dense keypoints (each keypoint carries a flag indicating whether it is visible (0), occluded (1), or out of the image (2)) together with head and instance bounding boxes. Our new image-based clothes try-on benchmark targets fitting new in-shop clothes onto a person image and generating a try-on video that shows the clothes on the person from different viewpoints. The benchmark will contain around 25,000 pairs of front-view person pictures and top clothing images for training and 3,000 clothes-person pairs for testing. For image-based virtual try-on, quantitative performance will be measured via a human subjective perceptual study. For video-based virtual try-on, the benchmark will be evaluated via AMT human evaluation. The images, collected from real-world scenarios, contain humans with challenging poses and viewpoints, heavy occlusions, diverse appearances, and low resolutions. Details on the annotated classes and examples of our annotations are available at https://vuhcs.github.io/. This challenge will be released before January 2019 to enable participants to evaluate their techniques. The challenge is held in conjunction with CVPR 2019, Long Beach, CA. Challenge participants with the most successful and innovative entries will be invited to present at this workshop.
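To illustrate the keypoint annotation scheme described above, the minimal sketch below reads per-person keypoints and counts the visibility flags (0 = visible, 1 = occluded, 2 = out of image). The JSON layout and all field names (`people`, `keypoints`, `head_bbox`, `instance_bbox`, `file_name`) are hypothetical assumptions for illustration only and do not describe the released annotation format.

```python
import json

# Visibility flags as described for the multi-person pose track:
# 0 = visible, 1 = occluded, 2 = out of image.
FLAG_NAMES = {0: "visible", 1: "occluded", 2: "out of image"}


def summarize_pose_annotations(path):
    """Summarize a hypothetical JSON annotation file for the pose track.

    Each person is assumed to carry a 'keypoints' list of 16 (x, y, flag)
    triples plus 'head_bbox' and 'instance_bbox' entries; the actual field
    names in the released files may differ.
    """
    with open(path) as f:
        annotations = json.load(f)

    for image in annotations:
        for person in image.get("people", []):
            counts = {name: 0 for name in FLAG_NAMES.values()}
            for x, y, flag in person["keypoints"]:  # 16 keypoints per person
                counts[FLAG_NAMES[flag]] += 1
            print(image["file_name"], counts,
                  "head bbox:", person.get("head_bbox"),
                  "instance bbox:", person.get("instance_bbox"))


if __name__ == "__main__":
    # Hypothetical file name, used here only for illustration.
    summarize_pose_annotations("lip_mpp_train_annotations.json")
```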
Regarding the viability of this workshop, the topic is attractive and active, and many researchers are likely to attend; based on a conservative estimate from past publication records on related topics, we expect around 100 attendees. The workshop is related to, yet clearly different from, past workshops, as explained below. In addition, we have received confirmation from many renowned professors and researchers in this area, who are either glad to give a keynote speech (as listed in the program) or have kindly offered to help. We believe this workshop will be very successful and will significantly benefit the progress of this research area.
Time | Schedule |
---|---|
Location: 102A | Date: Sunday, 16 June 2019, 08:30-17:15 |
08:30-08:40 | Opening remarks and welcome |
08:40-09:00 | The Look Into Person (LIP) challenge introduction and results |
09:00-09:45 | Oral talk 1: Winner of single-person / multi-person / video human parsing challenge |
09:45-10:00 | Oral talk 2: Winner of pose estimation challenge |
10:00-10:30 | Poster session and coffee break |
10:30-11:00 | Invited talk 1: Shiry Ginosar, PhD, UC Berkeley |
11:00-11:30 | Invited talk 2: Michael Black, Professor, Max Planck Institute |
11:30-12:00 | Oral talk 3: Winner of pose estimation and 2nd place of single person parsing |
12:00-13:30 | Lunch |
13:30-14:00 | Invited talk 3: Alex Schwing, Assistant Professor, UIUC |
14:00-14:30 | Invited talk 4: Jianchao Yang, Director, ByteDance AI Lab. |
14:30-14:45 | Oral talk 4: Winner of image-based multi-pose virtual try-on challenge |
14:45-16:15 | Poster session and coffee break |
16:15-16:45 | Invited talk 5: Katerina Fragkiadaki, Assistant Professor, CMU |
16:45-17:15 | Awards & Future Plans |
Important Dates |
|
|
|
Format Requirements |
Format: Papers should use the CVPR style, with page limits including figures and tables. Papers that are at most 4 pages *including references* do not count as a dual submission; reviewed workshop papers longer than 4 pages do count as a publication.
|
|
|
|
Submission Details |
|
|
|
http://47.100.21.47:9999/index.php |
Please feel free to send any questions or comments to:
donghy7 AT mail2.sysu.edu.cn, xdliang328 AT gmail.com