Opening Keynote


Simone Stumpf

Engineering responsible AI

Many AI technologies are now being integrated into everyday life. However, how can we ensure that this AI is "responsible"? In this talk, I will review current efforts to develop responsible AI, focusing on explanations, fairness and auditing, and offer suggestions for how we can improve engineering approaches in this area.

Dr. Simone Stumpf is Professor of Responsible and Interactive AI in the School of Computing Science at the University of Glasgow. She has a long-standing research focus on user interactions with machine learning systems. Her research includes self-management systems for people living with long-term conditions, developing teachable AI systems for people who don’t have a technical background, and investigating Responsible AI development, including AI fairness. Her work has contributed to shaping the field of Explainable AI (XAI) through the Explanatory Debugging approach for interactive machine learning, which provides design principles for better human-computer interaction, and through investigating the effects of greater transparency. The prime aim of her work is to empower all users to use AI effectively.

Closing Keynote


Jürgen Ziegler

Engineering Interactive Systems in the Age of Uncertainty

The rapidly growing use of intelligent technologies in the development of interactive systems can offer useful new functionality but also increases uncertainty for users. While conventional interactive systems create uncertainty mainly through inadequately engineered functionality and user interfaces, intelligent systems inherently introduce uncertainty by applying probabilistic methods that mostly operate as opaque black-box models. Uncertainty can significantly lower user experience and trust in the system and may create harmful effects at an individual or societal level. In this talk, we will discuss sources of uncertainty and strategies for coping with it from a user-centric perspective. Taking the example of recommender systems, principles for bridging the gulf of uncertainty in user-system interaction will be proposed. Specifically, the talk will address approaches for fostering user understanding of the decision space, for enabling user control and exploration, and for progressing from explaining AI functions to a co-evolution of human and intelligent system.

Jürgen Ziegler is a full professor in the Department of Computer Science and Applied Cognitive Science at the University of Duisburg-Essen where he directs the Interactive Systems Research Group. His main research interests lie in the areas of human-computer interaction, human-AI cooperation, recommender systems, information visualization, and health applications.

Jürgen Ziegler holds a diploma degree in electrical engineering and biocybernetics from the University of Karlsruhe and a doctoral degree from the University of Stuttgart. Prior to joining the University of Duisburg-Essen, he was head of the Competence Center for Software Technology and Interactive Systems at the Fraunhofer Institute for Industrial Engineering (IAO) in Stuttgart. Among various other scientific roles, he was founding editor and editor-in-chief of i-com - Journal of Interactive Media (De Gruyter) from 2001 until 2021. He is currently chair of the German Special Interest Group on User-Centred Artificial Intelligence.