Many AI technologies are now being integrated into everyday life. But how can we ensure that this AI is "responsible"? In this talk, I will review current efforts to develop responsible AI, focusing on explanations, fairness, and auditing, and offer suggestions for how we can improve engineering approaches in this area.
Dr. Simone Stumpf is Professor of Responsible and Interactive AI in the School of Computing Science at the University of Glasgow. She has a long-standing research focus on user interactions with machine learning systems. Her research includes self-management systems for people living with long-term conditions, teachable AI systems for people without a technical background, and Responsible AI development, including AI fairness. Her work has helped shape the field of Explainable AI (XAI) through the Explanatory Debugging approach to interactive machine learning, which provides design principles for better human-computer interaction, and through investigations of the effects of greater transparency. The prime aim of her work is to empower all users to use AI effectively.
The rapidly growing use of intelligent technologies in the development of interactive systems can create useful new functionality for users but also increases uncertainty in their use. While conventional interactive systems create uncertainty for users mainly through inadequately engineered functionality and user interfaces, intelligent systems inherently introduce uncertainty by applying probabilistic methods that mostly operate as opaque black-box models. Uncertainty can significantly lower user experience and trust in the system and may create harmful effects at an individual or societal level. In this talk, we will discuss sources of uncertainty and questions of how to cope with it from a user-centric perspective. Taking recommender systems as an example, principles for bridging the gulf of uncertainty in user-system interaction will be proposed. Specifically, the talk will address approaches for fostering user understanding of the decision space, for enabling user control and exploration, and for progressing from explaining AI functions to a co-evolution of humans and intelligent systems.
Jürgen Ziegler is a senior full professor in the Faculty of Computer Science at the University of Duisburg-Essen, where he conducts research in the field of Intelligent Interactive Systems. His main research interests lie in the areas of human-computer interaction, human-AI cooperation, recommender systems, social media, and information visualization. Among his numerous scientific roles, he was founding editor and editor-in-chief of i-com - Journal of Interactive Media from 2001 until 2021. He is the founding chair and current vice-chair of the German Special Interest Group on User-Centred Artificial Intelligence.