Introduction

    Developing solutions for comprehensive human visual understanding in the wild, regarded as one of the most fundamental problems in computer vision, could have a crucial impact on many industrial application domains, such as autonomous driving, virtual reality, video surveillance, human-computer interaction, and human behavior analysis. For example, human parsing and pose estimation are often regarded as the very first steps for higher-level activity/event recognition and detection. Nonetheless, a large gap remains between what real-life applications need and what modern computer vision techniques can achieve. The goal of this workshop is to allow researchers from the fields of human visual understanding and other disciplines to present their progress, communicate, and co-develop novel ideas that may shape the future of this area and further advance the performance and applicability of such systems in real-world conditions.

    To stimulate progress on this research topic and attract more talent to work on it, we will also provide the first standard human parsing and pose benchmark on a new large-scale Look Into Person (LIP) dataset. This dataset is both larger and more challenging than similar previous ones: it contains 50,000 images with elaborate pixel-wise annotations covering 19 comprehensive semantic human part labels, along with 2D human poses with 16 dense key points. The images, collected from real-world scenarios, contain humans appearing in challenging poses and views, with heavy occlusion, varied appearance, and low resolution. Details on the annotated classes and examples of our annotations are available at http://hcp.sysu.edu.cn/lip/. The challenge is held in conjunction with CVPR 2017 in Honolulu, Hawaii. Challenge participants with the most successful and innovative entries will be invited to present at this workshop.
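
    To give a concrete picture of the annotation scheme, the short Python sketch below shows one way such annotations might be inspected. It assumes the parsing labels are stored as single-channel PNG label maps whose pixel values index the semantic parts (0 for background, 1-19 for the parts) and that a pose is a flat list of 16 (x, y) keypoint coordinates; these assumptions, along with all file and part names, are illustrative only, and the authoritative format is documented at http://hcp.sysu.edu.cn/lip/.

        # A minimal sketch of reading LIP-style annotations. The PNG label-map
        # encoding (values 0-19, 0 = background), the flat 16-keypoint layout,
        # and all file/part names are assumptions for illustration only.
        import numpy as np
        from PIL import Image

        PART_NAMES = ["background"] + ["part_%d" % i for i in range(1, 20)]  # hypothetical names

        def part_pixel_counts(label_map_path):
            """Count how many pixels each semantic part label covers."""
            labels = np.array(Image.open(label_map_path))  # H x W integer array
            ids, counts = np.unique(labels, return_counts=True)
            return {PART_NAMES[i]: int(c) for i, c in zip(ids, counts)}

        def pose_to_array(flat_keypoints):
            """Reshape a flat list of 16 (x, y) pairs into a 16 x 2 array."""
            return np.asarray(flat_keypoints, dtype=float).reshape(16, 2)

        if __name__ == "__main__":
            print(part_pixel_counts("example_parsing.png"))  # hypothetical file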

    Regarding the viability of this workshop, the topic is both attractive and active, and we expect many researchers to attend (conservatively, around 100 attendees, based on the past publication record on related topics). It is related to, yet clearly distinct from, past workshops, as explained below. In addition, we have received confirmations from many renowned professors and researchers in this area, who are either glad to give a keynote speech (as listed in the program) or have kindly offered help. We believe this workshop will be very successful and will significantly benefit the progress of this research area.


Topics of interest

Submissions are expected to address human-centric visual perception and processing tasks, which include but are not limited to:

  • Multi-person parsing and pose estimation
  • 2D/3D human pose estimation from single RGB/depth images or videos
  • Pedestrian detection in the wild
  • Human action recognition and trajectory recognition/prediction
  • Human re-identification in crowd videos and cross-view cameras
  • 3D human body shape estimation and simulation
  • Human clothing and attribute recognition
  • Person re-identification, face recognition/verification in surveillance videos
  • Novel datasets for performance evaluation and/or empirical analyses of existing methods
  • Advanced applications of human understanding, including autonomous cars, event recognition and prediction, robotic manipulation, indoor navigation, image/video retrieval and virtual reality.

Tentative Schedule

  • 08:30-08:40           Opening remarks and welcome
  • 08:40-09:00           The Look Into Person (LIP) challenge introduction and results
  • 09:00-09:15           Oral talk 1: Winner of LIP challenge
  • 09:15-10:00           Invited talk 1: Alan Yuille, Johns Hopkins University
  • 10:00-10:30           Poster session and coffee break
  • 10:30-11:15           Invited talk 2: Trevor Darrell, University of California, Berkeley
  • 11:15-12:00           Invited talk 3: Xiaogang Wang, Chinese University of Hong Kong
  • 12:00-13:00           Lunch
  • 13:00-13:45           Invited talk 4: Yaser Sheikh, Carnegie Mellon University
  • 13:45-14:30           Invited talk 5: Shuicheng Yan, National University of Singapore, Qihoo/360
  • 14:30-15:15           Invited talk 6: Abhinav Gupta, Carnegie Mellon University
  • 15:15-15:45           Poster session and coffee break
  • 15:45-16:30           Invited talk 7: Shaogang Sean Gong, Queen Mary University of London
  • 16:30-16:45           Oral talk 2
  • 16:45-17:00           Oral talk 3
  • 17:00-17:15           Awards & Future Plans

Submission

Paper Submission


  1. Double-blind review: Reviewing is double-blind, so please avoid providing information that may identify the authors in the acknowledgments (e.g., co-workers and grant IDs) and in the supplemental material (e.g., titles in the movies or attached papers). Avoid providing links to websites that identify the authors. Please read the example paper egpaper_for_review.pdf for detailed instructions on how to preserve anonymity.
  2. Requirements:
    ① Your paper should not be more than 8 pages (excluding references).
    ② The maximum size of the abstract is 4000 characters.
    ③ The paper must be PDF only (maximum 30MB).
    ④ The supplementary material can be either PDF or ZIP only (maximum 100MB).
    ⑤ If your submission has co-authors, please make sure that the email addresses you enter for them correspond exactly to their account names (assuming they have created accounts). This ensures that your co-authors can see your submission when they log in. Co-authors must also have their conflict domains entered.
  3. All papers must be submitted in IEEE format using the templates provided. Submitted papers should not have been published or accepted elsewhere, nor be under review elsewhere.
  4. For more detailed instructions on paper submission, please consult the CVPR 2017 web page.
  5. All papers should be submitted via our CMT site.
  6. Deadlines:
    ① Paper Submission Deadline: May 5th, 2017.
    ② Paper Acceptance Notification: May 10th, 2017.
    ③ Camera-ready Paper Submission Deadline: May 18th, 2017.

Challenge Submission

· The details of each task and the submission format, as well as the whole LIP dataset, are provided on our evaluation server. The submission deadline is June 4th, 2017. To guarantee the fairness of the LIP challenge at our CVPR'17 workshop, submitted results are visible only to their submitters; all results will be presented on the leaderboard after the submission deadline. Please contact us at lip17-organizers@googlegroups.com if you have any questions about submission.

· Please visit our LIP website to submit your work.


Main Organizers

Xiaodan Liang
 xiaodan1@cs.cmu.edu
Shenghua Gao
 gaoshh@shanghaitech.edu.cn
Xiaohui Shen
 xshen@adobe.com
Wei-Shi Zheng
 wszheng@ieee.org
Wanli Ouyang
 wlouyang@ee.cuhk.edu.hk

Co-Chairs

  • Liang Lin, Professor, Sun Yat-sen University, linliang@ieee.org
  • Jiashi Feng, Assistant Professor, National University of Singapore, elefjia@nus.edu.sg
  • Fernando De la Torre, Research Associate Professor, Carnegie Mellon University, ftorre@cs.cmu.edu
  • Timothy Hospedales, Associate Professor, University of Edinburgh, t.hospedales@qmul.ac.uk

Contact

       Please feel free to send any questions or comments to lip17-organizers@googlegroups.com