40 years of HCI International. Join us in Washington DC to celebrate

T02: Explainable AI for High-stake tasks with Human-in-the-loop

Saturday, 29 June 2024, 08:30 - 12:30 EDT (Washington DC)


Prof. Boris Kovalerchuk

Dept. of Computer Science, Central Washington University, USA



The objectives of this tutorial are: (1) to present major topics of research in explainable AI models for high-stakes tasks with human-in-the-loop, and (2) to motivate and explain topics of emerging importance in this area to the HCI community. The tutorial materials will be available to participants online.


Content and benefits:

Explainable AI/Machine Learning (ML) is much more than allowing humans to understand why models make certain decisions; it also means allowing users to match this understanding with their domain knowledge. This requires a human in the loop: (1) to test the consistency of the model with domain knowledge and (2) to ensure reliable and trustworthy models.
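As a minimal sketch of point (1), consider a domain-knowledge consistency check that a human expert might run against a model. The scenario, feature names, and constraint below are illustrative assumptions, not taken from the tutorial: in a credit-risk setting, raising income alone should not lower an approval score.

```python
# Hedged sketch (illustrative, not from the tutorial): testing a model's
# consistency with a simple monotonicity constraint from domain knowledge,
# so a human-in-the-loop can inspect any violations.

def check_monotonicity(model_fn, sample, feature_idx, delta=1.0):
    """Return True if increasing one feature does not decrease the score."""
    base = model_fn(sample)
    bumped = list(sample)
    bumped[feature_idx] += delta
    return model_fn(bumped) >= base

# A toy "black-box" scoring function standing in for a trained model.
def toy_model(x):
    income, debt = x
    return 0.8 * income - 0.5 * debt

sample = [50.0, 20.0]  # hypothetical applicant: [income, debt]
consistent = check_monotonicity(toy_model, sample, feature_idx=0)
print(consistent)  # a False here would flag the case for expert review
```

In practice such checks would be run over many samples and constraints; violations are where human review of the model is most needed.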

This is especially important for high-stakes tasks, where incorrect decisions lead to significant harm to individuals or society. Examples include medical diagnosis (cancer, heart disease), financial decision-making (credit risk, fraud detection), autonomous vehicles (pedestrian detection), criminal justice (sentencing), disaster response planning (hurricanes, earthquakes), and others.

The topics of emerging importance in explainable AI/ML models for high-stakes tasks include:

  1. Methodology of explaining black-box models.
  2. Computational methods to explain AI/ML models.
  3. Human-in-the-loop.
  4. Visual knowledge discovery as a major human-in-the-loop approach to build better models and prevent catastrophic errors.

An explanation must be exact (accurately describing the model’s inner workings and decision-making process) and convincing to the user. If either of these properties is lacking, it is only a quasi-explanation that does not serve the explanation goal. Many currently popular AI/ML explanation methods are only quasi-explanations. This tutorial will present current methods along with ways to overcome their deficiencies.
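A small illustration of this gap, under assumptions of my own (not an official tutorial example): a local linear surrogate, in the spirit of popular post-hoc explainers, can be convincing near one input yet fail to be exact about the model as a whole.

```python
# Hedged illustration: a local linear "explanation" of a deliberately
# nonlinear black box. It matches the model near the explained point
# but misstates its behavior elsewhere -- a quasi-explanation.

def black_box(x):
    return x * x  # the model's true (quadratic) decision logic

def local_linear_surrogate(f, x0, h=0.01):
    """Fit a tangent-line 'explanation' of f around x0: (slope, intercept)."""
    slope = (f(x0 + h) - f(x0 - h)) / (2 * h)
    intercept = f(x0) - slope * x0
    return slope, intercept

slope, intercept = local_linear_surrogate(black_box, x0=3.0)

# Near x0 = 3.0 the surrogate agrees closely with the model...
print(abs((slope * 3.1 + intercept) - black_box(3.1)))   # small error
# ...but far from x0 it badly misrepresents the model's behavior.
print(abs((slope * -3.0 + intercept) - black_box(-3.0)))  # large error
```

The surrogate is convincing (a simple linear story) but not exact, which is precisely the failure mode of a quasi-explanation.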

For researchers and students, the benefits of this tutorial are becoming familiar with these new developments and finding new opportunities to enhance their own research, inspired by these methods. For practitioners, the benefits lie in the opportunities to apply these methods to real-world tasks.


Target Audience:

The target audience for the tutorial includes researchers, students, and practitioners with basic knowledge of Machine Learning.


Bio Sketch of Presenter:

Dr. Boris Kovalerchuk is a professor of Computer Science at Central Washington University, USA. His publications include four books published by Springer: "Data Mining in Finance" (2000), "Visual and Spatial Analysis" (2005), "Visual Knowledge Discovery and Machine Learning" (2018), and "Integrating Artificial Intelligence and Visualization for Visual Knowledge Discovery" (2022), chapters in the Data Mining/Machine Learning Handbooks (2006, 2010, 2023), and over 200 other publications. His research and teaching interests are in AI, machine learning, visual analytics, visualization, uncertainty modeling, image and signal processing, and data fusion. Dr. Kovalerchuk has been a principal investigator of research projects in these areas supported by US Government agencies. He served as a senior visiting scientist at the US Air Force Research Laboratory and as a member of several expert panels at international conferences and panels organized by US Government bodies. Dr. Kovalerchuk delivered relevant tutorials at IJCNN 2017, HCII 2018, KDD 2019, ODSC West 2019, WSDM 2020, IV 2020 and 2023, and IJCAI/PRICAI 2021 and 2023. See the presenter's website and the CWU Visual Knowledge Discovery Lab on GitHub.


References:

  1. Kovalerchuk B. Visual Knowledge Discovery and Machine Learning. Springer Nature, 2018.
  2. Kovalerchuk B, Nazemi K, Andonie R, Datia N, Banissi E (Eds.). Integrating Artificial Intelligence and Visualization for Visual Knowledge Discovery. Springer, 2022.
  3. Kovalerchuk B, Ahmad MA, Teredesai A. Survey of explainable machine learning with visual and granular methods beyond quasi-explanations. In: Explainable Artificial Intelligence: A Perspective of Granular Computing. Springer, 2021:217-267. https://arxiv.org/pdf/2009.10221
  4. Rizzo M, Veneri A, Albarelli A, Lucchese C, Nobile M, Conati C. A Theoretical Framework for AI Models Explainability with Application in Biomedicine. 2023. https://arxiv.org/pdf/2212.14447
  5. CWU Visual Knowledge Discovery Lab software on GitHub: https://github.com/CWU-VKD-LAB