Speaker Details

Aylin Caliskan

University of Washington

Bio:
Aylin Caliskan is an Assistant Professor at the University of Washington Information School and an Adjunct Assistant Professor at the Paul G. Allen School of Computer Science and Engineering. Previously, she was an Assistant Professor of Computer Science at George Washington University. Caliskan studies the mechanisms underpinning information transfer between human society and artificial intelligence (AI). Her research interests lie in AI ethics, AI bias, computer vision, natural language processing, and machine learning. By developing transparency-enhancing algorithms that detect and quantify human-like associations and biases learned by machines, she investigates the reasoning behind AI representations and decisions. Her publication in Science demonstrated that semantics derived from language corpora contain human-like biases. Her work on machine learning's impact on fairness and privacy received best talk and best paper awards, and she was selected as a Rising Star in EECS at Stanford University. Caliskan holds a Ph.D. in Computer Science from Drexel University's College of Computing & Informatics and a Master of Science in Robotics from the University of Pennsylvania. She was a Postdoctoral Researcher and a Fellow at Princeton University's Center for Information Technology Policy. She was named a Nonresident Fellow in Governance Studies at the Brookings Institution in 2021, and in 2023 she was recognized as one of the 100 Brilliant Women in AI Ethics and honored with an IJCAI Early Career Spotlight.

Keynote Title:
Transparency in AI Ethics.

Keynote Abstract:
Transparency-enhancing methods in AI ethics have revealed that language, vision, speech, multi-modal, and generative AI models trained on sociocultural data inevitably embed and reproduce the implicit biases documented in social cognition, pertaining to gender, race or ethnicity, social class, age, ability, sexuality, nationality, religion, concepts, and intersectional associations. Despite user attempts to counter biases and institutional efforts to implement system safeguards, text-to-image generators continue to perpetuate biases. Although ChatGPT incorporates bias mitigation strategies, its translation of the gender-neutral Turkish sentences 'O bir doktor. O bir hemşire.' into the biased English 'He is a doctor. She is a nurse.' underscores the challenge of addressing biases within AI systems. With easily accessible generative AI models amplifying bias at scale and increasingly influencing life outcomes and opportunities, advancing transparency in AI ethics is necessary for the comprehensive analysis and mitigation of biases.
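As a concrete illustration of one transparency-enhancing method, the sketch below computes the effect size behind the Word Embedding Association Test (WEAT) introduced in Caliskan's Science paper. This is a minimal sketch: the random vectors and the commented word lists are placeholders for illustration, not the study's actual data or word sets.

```python
import numpy as np

def cos(a, b):
    # Cosine similarity between two word vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def assoc(w, A, B):
    # s(w, A, B): mean similarity of word w to attribute set A minus to set B.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # WEAT effect size d: difference of mean associations of target sets X and Y,
    # normalized by the standard deviation over all target words.
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy 4-d embeddings standing in for real word vectors (placeholders).
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))   # e.g., career-related target words
Y = rng.normal(size=(8, 4))   # e.g., family-related target words
A = rng.normal(size=(8, 4))   # e.g., male-name attribute words
B = rng.normal(size=(8, 4))   # e.g., female-name attribute words
print(weat_effect_size(X, Y, A, B))
```

With real embeddings, X and Y would hold target words and A and B attribute words; a larger positive d indicates a stronger differential association between the target and attribute concepts.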

Kai-Wei Chang

UCLA

Bio:
Kai-Wei Chang is an associate professor at the University of California, Los Angeles. His research interests lie in designing robust machine learning methods for large and complex data and in developing fair and accountable language processing technologies for social-good applications. Kai-Wei has published extensively in NLP, AI, and ML, and his work has been widely cited and covered by media outlets such as Wired, NPR, and MIT Technology Review. His honors include being named an AAAI Senior Member (2023), a Sloan Research Fellowship (2021), the EMNLP Best Long Paper Award (2017), the KDD Best Paper Award (2010), and research awards from Google (2021), Facebook (2019), Amazon (2019), and the Okawa Foundation (2018). Kai-Wei earned his Ph.D. from the University of Illinois at Urbana-Champaign in 2015 and was a postdoctoral researcher at Microsoft Research in 2016. More information is available at http://kwchang.net.

Keynote Title:
Bias and Exclusivity in Large Language Models.

Keynote Abstract:
The rise of Large Language Models (LLMs) has revolutionized creative writing and personalized interactions. However, these powerful tools carry a hidden risk: amplifying societal biases embedded in their training data. Without adequate measures to quantify and mitigate these biases, the widespread use of these models may inadvertently magnify prejudice or harmful implicit biases associated with sensitive demographic attributes, including gender. This talk will explore metrics and datasets for evaluating gender bias in language generation models. We will review existing bias measurements, demonstrate the inconsistencies between intrinsic and extrinsic bias metrics, and propose a comprehensive evaluation framework to measure bias. Additionally, this presentation will address the challenges of gender exclusivity and the representation of non-binary genders in NLP, alongside a critical examination of gender bias in LLM-generated content such as recommendation letters.
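To make the intrinsic/extrinsic distinction concrete, here is a deliberately crude extrinsic-style measurement: counting gendered pronouns in model outputs across prompts. This is an illustrative stand-in, not one of the metrics or datasets from the talk, and the example generations are hypothetical.

```python
import re
from collections import Counter

# Hypothetical generated texts; in practice these would come from an LLM
# prompted with templates such as "Write a recommendation letter for {name}."
generations = [
    "He is a brilliant researcher and a natural leader.",
    "She is warm, caring, and a delight to work with.",
]

MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}

def gender_term_ratio(texts):
    # Crude extrinsic metric: relative frequency of male vs. female pronouns
    # in generated text. Real evaluations use far richer lexicons and metrics.
    counts = Counter()
    for t in texts:
        for tok in re.findall(r"[a-z']+", t.lower()):
            if tok in MALE:
                counts["male"] += 1
            elif tok in FEMALE:
                counts["female"] += 1
    total = sum(counts.values()) or 1
    return {k: v / total for k, v in counts.items()}

print(gender_term_ratio(generations))
```

A metric this simple also illustrates the talk's point about gender exclusivity: a binary male/female lexicon cannot represent non-binary genders at all, so the choice of measurement itself encodes assumptions.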

Ming-Ching Chang

University at Albany - SUNY

Bio:
Dr. Ming-Ching Chang is an Associate Professor and Co-Director of the CVML Lab in the Department of Computer Science, College of Nanotechnology, Science, and Engineering (CNSE) at the University at Albany, State University of New York (SUNY). His research has been funded by DARPA, IARPA, NIJ, VA, and GE Global Research, and he has rich experience leveraging expertise from multiple domains to carry out multi-disciplinary programs and projects. He has received multiple paper awards from international conferences, including the IEEE MIPR 2023 Best Student Paper Award, the AI City Challenge 2017 Honorary Mention Award, the IEEE WACV 2012 Best Student Paper Award, and the IEEE AVSS 2011 Best Paper Award Runner-Up. He frequently serves as a program chair, area chair, and referee for leading journals and conferences. He is a core organizer of the AI City Challenge, a multi-year (2017-2023) series of IEEE CVPR workshops. He served as Program Chair of the IEEE AVSS conference (2019, 2024), lead General Chair (2024) and lead TPC Chair (2022) of the IEEE MIPR conference, and Area Chair of the IEEE ICIP conference (2017, 2019-2024), and he was an Outstanding Area Chair of the ICME 2021 conference. He has chaired the steering committee of the IEEE AVSS conference since 2022. He has authored more than 130 peer-reviewed journal and conference publications, 7 US patents, and 15 disclosures. He is a Senior Member of the IEEE.

Keynote Title:
Enhanced Learning with Instance-Dependent and Long-Tail Noisy-Label Problems.

Keynote Abstract:
In the evolving landscape of machine learning, effectively utilizing noisy-labeled data is crucial for real-world applications. This presentation discusses two lines of research: the first addresses learning with instance-dependent noisy labels, and the second tackles long-tail noisy-label learning problems.

Part 1: Learning with Instance-Dependent Noisy (IDN) Labels

Noisy-Label Learning (NLL) methods often struggle to differentiate between clean and noisy samples and overlook the intricate patterns in clean data that nonetheless incur substantial losses. We focus on datasets with IDN labels, where mislabeling probabilities correlate with visual appearance. By explicitly distinguishing clean from noisy and easy from hard samples, we treat small-loss samples as easy examples. We then employ anchor hallucination to identify and correct the labels of hard samples. The corrected hard samples, together with the easy samples, are used for subsequent Semi-Supervised Learning (SSL). Experiments on synthetic and real-world IDN datasets show superior performance over other state-of-the-art methods, as illustrated by the small-loss sketch below.
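The small-loss criterion mentioned above is a common device in NLL. The sketch below shows one standard way to realize it, fitting a two-component Gaussian mixture to per-sample losses and treating the low-loss component as easy/clean; the anchor-hallucination step for hard samples is not reproduced here, and the threshold and synthetic losses are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_easy_hard(per_sample_losses, thresh=0.5):
    """Small-loss criterion: fit a 2-component GMM to per-sample training
    losses and treat samples likely drawn from the low-loss component as
    easy/clean; the rest are hard candidates for label correction and SSL."""
    losses = np.asarray(per_sample_losses).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    clean_comp = int(np.argmin(gmm.means_.ravel()))  # low-mean component = clean
    p_clean = gmm.predict_proba(losses)[:, clean_comp]
    easy = p_clean > thresh   # small-loss, likely clean
    hard = ~easy              # large-loss, to be corrected or used unlabeled
    return easy, hard

# Synthetic per-sample losses: a mostly-clean bulk plus a noisy high-loss tail.
np.random.seed(0)
losses = np.concatenate([np.random.gamma(2, 0.1, 900),
                         np.random.gamma(8, 0.5, 100)])
easy, hard = split_easy_hard(losses)
print(easy.sum(), "easy samples,", hard.sum(), "hard samples")
```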

Part 2: Addressing Long-Tail Noisy Label Learning Problems

Real-world NLL problems often exhibit class imbalance and long-tail distributions, posing challenges for accurate learning. Prior approaches that rely on predictions made from noisy long-tailed data inevitably introduce errors. Our two-stage approach combines soft-label refurbishing with multi-expert ensemble learning. In the first stage, robust soft-label refurbishing acquires unbiased features through contrastive learning and makes preliminary predictions using a classifier trained with a Balanced Noise-tolerant Cross-entropy (BANC) loss. In the second stage, our label-refurbishment method produces soft labels for multi-expert ensemble learning, offering a principled solution to the long-tail noisy-label problem. Experiments across benchmarks, including simulated and real-noise long-tail datasets, demonstrate remarkable accuracy, surpassing existing state-of-the-art methods in handling noisy-label, long-tail datasets.
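The exact form of the BANC loss is not specified in the abstract. As a hedged illustration of the rebalancing idea it builds on, the sketch below implements a class-balanced cross-entropy using the "effective number of samples" weighting of Cui et al. (CVPR 2019), which is a different, noise-agnostic formulation and only a stand-in for BANC.

```python
import torch
import torch.nn.functional as F

def class_balanced_ce(logits, targets, class_counts, beta=0.999):
    """Cross-entropy reweighted by the 'effective number of samples'
    (Cui et al., CVPR 2019): weight_c is proportional to
    (1 - beta) / (1 - beta ** n_c), so rare classes get larger weights.
    NOTE: this is NOT the talk's BANC loss, only an illustration of
    rebalancing cross-entropy on long-tail data."""
    counts = torch.as_tensor(class_counts, dtype=torch.float32)
    eff_num = 1.0 - torch.pow(beta, counts)
    weights = (1.0 - beta) / eff_num
    weights = weights / weights.sum() * len(counts)  # normalize to mean 1
    return F.cross_entropy(logits, targets, weight=weights)

# Toy long-tail setup: 3 classes with 1000 / 100 / 10 training samples.
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
print(class_balanced_ce(logits, targets, [1000, 100, 10]))
```

Under this weighting, the 10-sample tail class contributes far more per example than the 1000-sample head class, which is the behavior a balanced loss needs before any noise-tolerance terms are added.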