Speaker Details

Rama Chellappa

Johns Hopkins University

Bio:
Prof. Rama Chellappa is a Bloomberg Distinguished Professor with joint appointments in the Department of Electrical and Computer Engineering and the Department of Biomedical Engineering (School of Medicine) at Johns Hopkins University (JHU). Prior to joining JHU in August 2020, he was a Distinguished University Professor and a Minta Martin Professor of Engineering at the University of Maryland (UMD), College Park, MD. He received the B.E. (Hons.) degree in Electronics and Communication Engineering from the University of Madras, India, in 1975 and the M.E. (with Distinction) degree from the Indian Institute of Science, Bangalore, India, in 1977. He received the M.S.E.E. and Ph.D. degrees in Electrical Engineering from Purdue University, West Lafayette, IN, in 1978 and 1981, respectively. From 1981 to 1991, he was a faculty member in the Department of EE-Systems at the University of Southern California (USC). From 1991 to 2020, he was a Professor of Electrical and Computer Engineering (ECE) and an affiliate Professor of Computer Science at UMD, College Park. He was also affiliated with the Center for Automation Research, the Institute for Advanced Computer Studies (Permanent Member), and the Applied Mathematics and Scientific Computing Program. His current research interests are computer vision, machine learning, and artificial intelligence, with applications in face recognition, 3D modeling from video, image- and video-based recognition of objects, events, and activities, medical imaging, and domain adaptation and generalization.

Keynote Title:
Bias Mitigation in AI Systems.

Keynote Abstract:
While deep learning-based methods have produced high-performing AI systems, their propensity to be biased with respect to subpopulations and covariates such as age and gender has raised serious concerns about their deployability. In this talk, I will discuss the sources of bias in modern-day data-driven AI systems and present adversarial training and knowledge distillation-based methods for bias mitigation. Examples from face recognition and health care will be presented.

David Crandall

Indiana University

Bio:
David Crandall is the Luddy Professor of Computer Science in the Luddy School of Informatics, Computing, and Engineering at Indiana University. He is a member of the Computer Science, Informatics, Cognitive Science, and Data Science programs, and adjunct faculty in Statistics. He is Director of Graduate Studies for the Computer Science Ph.D. and M.S. programs. He is founding director of the Center for Machine Learning, and a member of the Digital Science Center and the Center for Complex Networks and Systems. He obtained his Ph.D. in Computer Science from Cornell University in 2008, and was a Postdoctoral Research Associate at Cornell from 2008 to 2010. He received the B.S. and M.S. degrees in Computer Science and Engineering from the Pennsylvania State University in 2001, and was a Senior Research Scientist at Eastman Kodak Company from 2001 to 2003. Since joining IU in 2010, he has been PI or Co-PI on over $17.5 million in research grants and contracts from the National Science Foundation, the Lilly Endowment, Yahoo, Google, Facebook, NVIDIA, the U.S. Intelligence Advanced Research Projects Activity (IARPA), the U.S. Navy, NASA, the IU Office of the Vice President for Research, the Defense Threat Reduction Agency, the Office of Naval Research, the Electronics and Telecommunications Research Institute, the Air Force Office of Scientific Research, the Indiana Innovation Institute (IN3), and Eastman Kodak Company. He has published over 200 technical articles in top international venues, and has received best paper awards or nominations at CVPR, WWW, CHI, ICCV, and ICDL. He is an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and the IEEE Transactions on Multimedia, and has been an Area Chair for CVPR, ICCV, ECCV, WACV, AAAI, ICML, NeurIPS, and IJCAI. He has received an NSF CAREER Award (2013), two Google Faculty Research Awards (2014 and 2020), an IU Trustees Teaching Award (2017), a Grant Thornton Fellowship (2019), and a Luddy Professorship (2021).

Keynote Title:
Training Data Bias, through the Eyes of a Child.

Bernt Schiele

Max Planck Institute for Informatics

Bio:
Bernt Schiele has been a Max Planck Director at the MPI for Informatics and a Professor at Saarland University since 2010. He studied computer science at the University of Karlsruhe, Germany. He worked on his master's thesis in the field of robotics in Grenoble, France, where he also obtained the "diplôme d'études approfondies d'informatique". In 1994 he worked in the field of multi-modal human-computer interfaces at Carnegie Mellon University, Pittsburgh, PA, USA, in the group of Alex Waibel. In 1997 he obtained his PhD from INP Grenoble, France, under the supervision of Prof. James L. Crowley in the field of computer vision. The title of his thesis was "Object Recognition using Multidimensional Receptive Field Histograms". Between 1997 and 2000 he was a postdoctoral associate and Visiting Assistant Professor in the group of Prof. Alex Pentland at the Media Laboratory of the Massachusetts Institute of Technology, Cambridge, MA, USA. From 1999 until 2004 he was an Assistant Professor at the Swiss Federal Institute of Technology in Zurich (ETH Zurich). Between 2004 and 2010 he was a Full Professor in the Computer Science Department of TU Darmstadt.

Keynote Title:
Addressing Imbalance Problems, Robustness and Interpretability of Deep Learning in Computer Vision.

Keynote Abstract:
Computer Vision has been revolutionized by Machine Learning and, in particular, Deep Learning. End-to-end trainable models achieve top performance across a wide range of computer vision tasks and settings. While recent progress is remarkable, current deep learning models are data hungry, lack robustness, and are hard to interpret. In this talk I will discuss several lines of work on learning from imbalanced and scarce data, on measuring and addressing the robustness of deep learning models, and on achieving higher interpretability.