Dept. of Computer Science, Central Washington University, USA
The objectives of this tutorial are: (1) to present major topics of research in explainable AI models for high-stakes tasks with a human in the loop, and (2) to motivate and explain topics of emerging importance in this area to the HCI community. The tutorial materials will be available online to participants.
Explainable AI/Machine Learning (ML) is about more than enabling humans to understand why a model makes certain decisions; it also involves matching this understanding with users' domain knowledge. It requires a human in the loop: (1) to test the model's consistency with domain knowledge and (2) to ensure reliable and trustworthy models.
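One form such a human-in-the-loop consistency test can take is checking a model's predictions against a domain rule. Below is a minimal illustrative sketch, assuming a hypothetical credit-risk setting where the domain rule is that predicted risk must not decrease as the debt-to-income ratio rises; the model and rule here are invented stand-ins, not methods from the tutorial itself.

```python
# Illustrative domain-knowledge consistency check (hypothetical toy model and rule).
# Domain rule assumed here: predicted credit risk is non-decreasing in debt_ratio.

def risk_model(income: float, debt_ratio: float) -> float:
    """Toy black-box risk score in [0, 1], a stand-in for a trained model."""
    score = 0.8 * debt_ratio - 0.000005 * income
    return min(1.0, max(0.0, score))

def check_monotonic_in_debt(model, income, ratios):
    """Return pairs of scores that violate 'risk is non-decreasing in debt_ratio'."""
    scores = [model(income, r) for r in sorted(ratios)]
    violations = []
    for prev, curr in zip(scores, scores[1:]):
        if curr < prev:  # risk dropped although the debt ratio rose
            violations.append((prev, curr))
    return violations

# A domain expert (the human in the loop) reviews any reported violations.
violations = check_monotonic_in_debt(risk_model, income=50_000,
                                     ratios=[0.1, 0.3, 0.5, 0.7])
print("consistent with domain rule" if not violations
      else f"violations: {violations}")
```

If violations are found, the expert decides whether the model or the stated rule needs revision; this keeps the consistency check grounded in domain knowledge rather than in the model alone.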
This is especially important for high-stakes tasks, in which incorrect decisions lead to significant harm to individuals or society. Examples include medical diagnosis (cancer, heart disease), financial decision-making (credit risk, fraud detection), autonomous vehicles (pedestrian detection), criminal justice (sentencing), and disaster response planning (hurricanes, earthquakes), among others.
The topics of emerging importance in explainable AI/ML models for high-stakes tasks include:
An explanation must be both exact (accurately describing the model's inner workings and decision-making process) and convincing to the user. If either property is lacking, the result is only a quasi-explanation that does not serve the explanation goal. Many currently popular AI/ML explanation methods produce only quasi-explanations. This tutorial will present current methods along with ways to overcome their deficiencies.
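A common source of quasi-explanations is a local surrogate: a simple model fit around one input that may be convincing but is not exact globally. The sketch below is a hand-rolled illustration of this general idea (not a method endorsed by the tutorial): a linear approximation of a nonlinear model is faithful near the point it was fit at, but misrepresents the model elsewhere.

```python
# Sketch of why a local surrogate can yield only a quasi-explanation.
# The 'black box' and surrogate are hypothetical illustrations.

def black_box(x: float) -> float:
    """Nonlinear model: the decision actually depends on x squared."""
    return x * x

def local_surrogate(x0: float, eps: float = 1e-3):
    """Fit a linear approximation f(x) ~ a*x + b around x0 (finite differences)."""
    slope = (black_box(x0 + eps) - black_box(x0 - eps)) / (2 * eps)
    intercept = black_box(x0) - slope * x0
    return slope, intercept

a, b = local_surrogate(1.0)  # surrogate 'explains' the model near x = 1
err_near = abs((a * 1.05 + b) - black_box(1.05))  # small error near x0
err_far = abs((a * 3.0 + b) - black_box(3.0))     # large error away from x0
print(f"slope ~ {a:.2f}; error near x0: {err_near:.4f}; far from x0: {err_far:.2f}")
```

The surrogate's small local error can make it convincing, yet it is not an exact description of the model's decision process, which is precisely the gap between an explanation and a quasi-explanation.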
For researchers and students, this tutorial offers familiarity with these new developments and new opportunities to enhance their own research inspired by these methods. For practitioners, the benefits lie in opportunities to apply these methods to real-world tasks.
The target audience for the tutorial includes researchers, students, and practitioners with basic knowledge of Machine Learning.
Dr. Boris Kovalerchuk is a professor of Computer Science at Central Washington University, USA. His publications include four books published by Springer: "Data Mining in Finance" (2000), "Visual and Spatial Analysis" (2005), "Visual Knowledge Discovery and Machine Learning" (2018), and "Integrating Artificial Intelligence and Visualization for Visual Knowledge Discovery" (2022), chapters in the Data Mining/Machine Learning Handbooks (2006, 2010, 2023), and over 200 other publications. His research and teaching interests are in AI, machine learning, visual analytics, visualization, uncertainty modeling, image and signal processing, and data fusion. Dr. Kovalerchuk has been a principal investigator of research projects in these areas, supported by US Government agencies. He served as a senior visiting scientist at the US Air Force Research Laboratory and as a member of several expert panels at international conferences and panels organized by US Government bodies. Dr. Kovalerchuk delivered relevant tutorials at IJCNN 2017, HCII 2018, KDD 2019, ODSC West 2019, WSDM 2020, IV 2020 and 2023, and IJCAI/PRICAI 2021 and 2023. Presenter website, CWU Visual Knowledge Discovery Lab at GitHub