40 years of HCI International. Join us in Washington DC to celebrate

T15: Aspects of Dependable Machine Learning for Safety Critical Applications in Industrial Settings

Note: Cancelled by the presenters


 

Franziska Wolny (short bio)

Georg Siedel (short bio)

Weijia Shao (short bio)

Sven Jacob (short bio)

Seyedfakhredin Musavishavazi (short bio)

Silvia Vock (short bio)

Federal Institute for Occupational Safety and Health
Unit Workplaces, Safety of Machinery, Operational Safety
AI Junior Research Team
Dresden, Germany

 

Objectives:

After the tutorial the participants will:

  • Have a thorough overview of aspects of safe Machine Learning (ML) for industrial applications
  • Be familiar with the regulatory requirements for safety-critical ML applications in industrial automation and the existing standardization landscape
  • Understand how selected safe ML design and assessment methodologies work
  • Be enabled to take action towards the safe and sound development of ML for industrial automation

 

Content and Benefits:

The tutorial aims to provide an overview of methods, metrics and actions to ensure the safe and trustworthy design of Machine Learning (ML) applications that are intended to be part of safety-critical functions in machinery and other industrial applications. The safe design and deployment of ML algorithms that can potentially impact the safety and health of workers requires a risk management process throughout the lifecycle of the ML component. Unlike in conventional software design, the measures to be taken at different stages of the lifecycle are mostly not yet standardised, and some are still under development.

 

Target Audience:

  • HCI researchers, young ML developers and policy makers interested in the safety aspects of current and future HCI approaches
  • Safety experts interested in the safe integration of ML in industrial automation

 

Additional platform or tool(s) to be used by the tutors:

Jupyter notebooks (expected: use of the Google Colab platform)

 

List of materials or devices required by the participants:

Participants are encouraged to bring their own laptops and follow along.

 

Selected relevant publications

Siedel, Georg, Stefan Voß, and Silvia Vock. "An overview of the research landscape in the field of safe machine learning." ASME International Mechanical Engineering Congress and Exposition. Vol. 85697. American Society of Mechanical Engineers, 2021.

Siedel, Georg, et al. "Utilizing class separation distance for the evaluation of corruption robustness of machine learning classifiers." arXiv preprint arXiv:2206.13405 (2022).

Siedel, Georg, Silvia Vock, and Andrey Morozov. "Investigating the Corruption Robustness of Image Classifiers with Random Lp-norm Corruptions." arXiv preprint arXiv:2305.05400 (2023).

Ding, S., Morozov, A., Vock, S., Weyrich, M., & Janschek, K. (2020). Model-based error detection for industrial automation systems using LSTM networks. In Model-Based Safety and Assessment: 7th International Symposium, IMBSA 2020, Lisbon, Portugal, September 14–16, 2020, Proceedings 7 (pp. 212-226). Springer International Publishing.

Invited talk: Schlicht, L., and Vock, S. "Cyber-physical systems and AI – A new challenge for risk assessment?" DHM: Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, Session S106: Legislative and Normative Framework for AI-enabled HCI – Implications and Questions from an OSH Perspective. 24th International Conference on Human-Computer Interaction, 26 June – 1 July 2022.

Bio Sketch of Presenters:

Franziska Wolny
wolny.franziska@baua.bund.de
Franziska Wolny is a research assistant at the German Federal Institute for Occupational Safety and Health. She is involved in the regulatory aspects of AI in safety-critical functions in machinery, such as the Machinery Regulation (EU) 2023/1230. Her background is in experimental physics and semiconductor physics in both industrial and research contexts. Her current research interest is in the field of dependable object recognition in image and LIDAR data.

Georg Siedel
siedel.georg@baua.bund.de
Georg Siedel is a research associate at the German Federal Institute for Occupational Safety and Health and a PhD student with the University of Stuttgart. His research focus is on the robustness of vision models. This includes data augmentation strategies for training robust models and methods for the evaluation of their robustness.

Weijia Shao
shao.weija@baua.bund.de
Dr. Weijia Shao is a postdoctoral researcher at the Federal Institute for Occupational Safety and Health. He received his PhD in Computer Science from the Technical University of Berlin and has been actively involved in research projects on adversarial attacks and explainable AI. In the upcoming tutorial at HCI, Dr. Shao will delve into the adversarial robustness of machine learning methods.

Sven Jacob
jacob.sven@baua.bund.de
Sven Jacob is part of the junior research group and is currently working on drift detection methods for high-dimensional time-series data. The aim is to identify changes in the underlying data-generating process of an industrial application which may result in malfunction.

Seyedfakhredin Musavishavazi
Musavishavazi.seyedfakhredin@baua.bund.de
Seyedfakhredin Musavishavazi, a junior researcher at the Federal Institute for Occupational Safety and Health, is actively engaged in research on uncertainty quantification in machine learning, aiming to tackle current safety challenges. His prior studies and research have primarily centered on supervised machine learning models, with publications on the secure classification of data and on enhancing model robustness.

Silvia Vock
vock.silvia@baua.bund.de
Dr. Silvia Vock is team leader of the junior research group "AI reliability" at the German Federal Institute for Occupational Safety and Health. Her research interests are in the area of ML testing and safe ML design for industrial applications. Her team is working on methods and metrics for assessing the dependability of ML algorithms as a basis for the risk assessment of machines with safety-critical ML components.