Human-centric Trustworthy Computer Vision

From Research to Applications




In Conjunction with ICCV 2021

October 17, 2021, Virtual

News!

  • May 10, 2021:  The workshop website is launched and the call for papers is released.

  • June 15, 2021:  The workshop date has been announced.





Overview


How can we define, pursue, and evaluate trustworthy technologies for human-centric computer vision tasks?

   With the rapid technical progress in computer vision and the spread of vision-based applications over the past several years, human-centric computer vision technologies, such as person re-identification, face recognition, and action recognition, are quickly becoming essential enablers for many fields. Although they bring great value to individuals and society, they also encounter a variety of novel ethical, legal, social, and security challenges. In particular, in recent years, multimedia sensing technologies, together with large-scale computing and storage infrastructures, have been rapidly producing a wide variety of human-centric big data, which provides rich knowledge to support the development of these applications. Meanwhile, such data contains a large amount of personal, private information, raising concerns about the safety and trustworthiness of computer vision technologies. Consequently, trustworthy computer vision has been attracting increasing attention from academia and industry. It focuses on human-oriented, fair, robust, interpretable, and responsible vision technologies, and is at the core of next-generation artificial intelligence (AI).

   The goals of this workshop are to: 1) bring together state-of-the-art research on human-centric vision analysis for trustworthy AI; 2) call for a coordinated effort to understand the opportunities and challenges emerging in human-centric trustworthy vision technologies; 3) explore fairness, robustness, interpretability, and accountability from a human-oriented perspective; 4) showcase innovative methodologies and ideas; 5) introduce interesting real-world human-oriented trustworthy systems and applications; and 6) give insight into industry's practice of trustworthy AI for human-centric vision and discuss future directions. We solicit original contributions in all fields of trustworthy human analysis to help us better understand the nature of vision algorithms for real-world applications. We hope the workshop offers a timely collection of research updates to benefit researchers and practitioners working in the broad computer vision, pattern recognition, and trustworthy AI communities.




Call for papers

   We invite submissions to the ICCV 2021 Workshop on Human-centric Trustworthy Computer Vision: From Research to Applications (HTCV'21), which brings researchers together to discuss human-oriented, fair, robust, interpretable, and responsible technologies for human-centric vision analysis. We solicit original research and survey papers of 5 to 8 pages (excluding references and appendices). Each submitted paper will undergo double-blind peer review by at least three reviewers. All accepted papers will be presented as either oral or poster presentations, with a best paper award, and will appear in the CVF open access archive. Paper submission is through the HTCV'21 CMT site and must follow the same policies and submission guidelines described in the ICCV'21 Author Guidelines. Papers that violate anonymity, do not use the ICCV submission template, or exceed 8 pages (excluding references and appendices) will be rejected without review. In submitting a manuscript to this workshop, the authors acknowledge that no paper substantially similar in content has been submitted to another workshop or conference during the review period.
  The scope of this workshop includes, but is not limited to, the following topics:

  • Adversarial attack and defense in face recognition and person re-identification
  • Explainable face and body analysis, generation, and editing
  • Robust human body and face representation learning
  • Face anti-spoofing and deep-fake detection
  • Robust gait and action recognition
  • Secure federated learning
  • Robustness against evolving attacks in computer vision
  • Fairness analysis for data and models of face or human recognition
  • Algorithms, frameworks, and tools for human-centric trustworthy computer vision



Important Dates

Description            Date (Pacific Time)
Submission Deadline    August 4, 2021
Decisions to Authors   August 18, 2021
Camera-ready Due       August 27, 2021
Workshop Date          October 17, 2021



Invited speakers



Prof. Mario Fritz
Saarland University, Germany
Prof. Angjoo Kanazawa
UC Berkeley, USA
Prof. Zhen Lei
NLPR, CASIA, China
Prof. Karthik Nandakumar
MBZUAI, Abu Dhabi
Prof. Albert Ali Salah
Utrecht University, Netherlands



Organizers

Jingen Liu
JD AI Research, USA
Sifei Liu
Nvidia Research, USA
Wu Liu
JD AI Research, China
Nicu Sebe
UniTN, Italy
Hailin Shi
JD AI Research, China



Committee Chairs

Qian Bao
JD AI Research, China
Yibo Hu
JD AI Research, China



PC members



If you have any questions, feel free to contact <huyibo871079699@gmail.com>.