The ACM Symposium on Engineering Interactive Computing Systems (EICS) is the primary venue for research contributions at the intersection of Human-Computer Interaction (HCI) and Software Engineering. EICS 2025 is the seventeenth edition of the conference. Established as a sponsored ACM SIGCHI conference in 2009, EICS has one of the longest histories as a scientific venue in the field of Human-Computer Interaction. EICS was created as a continuation and merger of several series of conferences, symposia, and workshops, including CADUI (Computer-Aided Design of User Interfaces), CCL (the IFIP Working Conference on Command Languages), DSV-IS (Design, Specification and Verification of Interactive Systems), EHCI (the IFIP International Conference on Engineering for Human-Computer Interaction), and TAMODIA (Tasks, Models and Diagrams).
EICS has become the primary venue for rigorous contributions, and for the dissemination of research results, at the intersection of user interface design, software engineering, and computational interaction. EICS 2025 welcomed contributions in the form of full papers, technical notes, late-breaking results, demos, posters, workshops, and a doctoral consortium. There were three separate submission rounds for full papers: the first in July, the second in October, and the third in February. Other contributions had submission dates in March and April. Authors of all accepted contributions are invited to present their work at the conference in June. While full papers are published as journal articles in the Proceedings of the ACM on Human-Computer Interaction (PACM HCI, EICS series), these proceedings include the contributions received as late-breaking results and workshops. The EICS 2025 technical program features keynotes by Simone Stumpf (University of Glasgow, UK) on “Engineering responsible AI” and Jürgen Ziegler (University of Duisburg-Essen, Germany) on “Engineering Interactive Systems in the Age of Uncertainty”.
We would like to express our sincere gratitude to everyone who supported EICS 2025 in a range of ways, starting with the EICS Steering Committee for accepting our proposal to host EICS in Trier, Germany. Many thanks to the members of the program committee, the reviewers, and the chairs for their work in selecting and organising this year’s contributions. The conference could not have taken place without the hard work and commitment of the organising committee, to whom we express our sincere thanks.
Many AI technologies are now being integrated into everyday life. However, how can we ensure that this AI is ‘responsible’? In this keynote, I will review current efforts at developing responsible AI, focusing on explanations, fairness, and accountability, and offer suggestions on how we can improve engineering approaches in this area.
The rapidly growing use of intelligent technologies in the development of interactive systems can create new useful functionality for users but also increases uncertainty in their use. Uncertainty can significantly lower user experience and trust in the system and may create harmful effects at an individual or societal level. In this talk, we will discuss sources of uncertainty and questions of coping with it from a user-centric perspective. Taking the example of recommender systems, principles for bridging the gulf of uncertainty in user-system interaction will be proposed. Specifically, the talk will address approaches for fostering user understanding of the decision space, for enabling user control and exploration, and for providing user-centric explanations. Furthermore, we will discuss LLM-based simulation as a promising approach for studying phenomena in complex social systems.
The creation of interactive systems is a deeply collaborative process that requires a shared understanding of both the system under consideration and the design process. External design representations play an important role in developing and considering different perspectives on the design problem and possible solutions. This paper examines user interface design activities through the lens of such mediating design artifacts. It presents an exploratory study conducted in two companies. The results from interviews and artifact analysis suggest that factors such as the actual distribution of work in a team, the participants’ backgrounds, and their history of collaboration influence the design artifacts that are created and how effectively they are used. The meaning that project members assign to design artifacts also shapes their perception of the problem and solution space. Implications of the study for HCI engineering and education are discussed.
Large-scale User Interface (UI) data is essential for developing Artificial Intelligence (AI)-driven tools that can support designers in creating interfaces. However, many publicly available datasets are either manually annotated, a time-consuming and costly process that limits their scale, or lack crucial structural information, such as semantic labels and hierarchical relationships, necessary for effective design assistance. Moreover, no existing dataset offers a standard format designed for seamless integration of AI models into real-world design tools. In this work, we introduce a pipeline that automatically converts any HTML content into structured, Figma-compatible representations. To validate our pipeline, we apply it to WebUI, a large-scale HTML-based dataset, and conduct a comparative evaluation by training five state-of-the-art layout generation models on our data and on the manually annotated Rico dataset. Experimental results demonstrate that the models achieve comparable performance across both datasets and suggest that our pipeline can effectively produce high-quality data suitable for training AI models that can be integrated into design workflows.
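To illustrate the kind of structured output such a pipeline produces, the following Python sketch walks an HTML fragment and emits a nested, Figma-like tree of FRAME and TEXT nodes. It is a simplified assumption for illustration only; the authors' actual pipeline, node schema, and Figma-compatible format are not described in this abstract.

# Hypothetical sketch: convert an HTML fragment into a nested, Figma-like
# node tree (FRAME/TEXT nodes with children). The node schema is invented
# for illustration and is not the paper's format.
from html.parser import HTMLParser

class HtmlToNodes(HTMLParser):
    def __init__(self):
        super().__init__()
        self.root = {"type": "FRAME", "name": "root", "children": []}
        self.stack = [self.root]

    def handle_starttag(self, tag, attrs):
        node = {"type": "FRAME", "name": tag, "attrs": dict(attrs), "children": []}
        self.stack[-1]["children"].append(node)
        self.stack.append(node)

    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.stack[-1]["children"].append({"type": "TEXT", "characters": text})

parser = HtmlToNodes()
parser.feed("<div><h1>Title</h1><button>OK</button></div>")
print(parser.root)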
Personalization is a well-known need in human-computer interaction. The concept of a meta-UI has been proposed to allow end-users to adapt their systems. However, implementations of this concept have so far been specific to a single application that was developed together with its meta-UI. To ease the spread of meta-UIs, this paper proposes adding a meta-UI to a type of application that workers use in their daily work: web-based Product Lifecycle Management (PLM) applications. The proposed meta-UI takes advantage of the structural similarity of web-based PLM applications to be woven into the PLM. It was designed with classic human-computer interaction design techniques, taking the professional PLM context of use into account. A proof-of-concept has been implemented in JavaScript on top of one PLM system using an injection mechanism.
Interactive systems that incorporate 3D interaction techniques, such as virtual or augmented reality, lack appropriate formalisation methods for verifying and validating their properties. In safety-critical contexts, such as virtual environments used for training users in medical settings, ensuring that systems behave as expected is crucial to enabling positive user experiences. In this paper, we propose a formalisation that allows us to express 3D virtual environments, end users, and their interactions. This work represents a first step towards creating formalisation methods for virtual reality applications, enabling model checking techniques to ensure that systems behave as expected, leading to improved reliability, safety, and user experience in virtual environments.
With the rapid expansion of extended reality (XR) technologies in vehicular contexts, there is a growing need for innovative interaction methods that enhance user experience without compromising safety. In a research project, we explored the potential of a thigh-worn wearable device that allows passengers to interact with the onboard system and operate a roof-mounted surveillance camera. This includes actions such as navigating menus, selecting items, controlling media playback, and adjusting camera orientation, zoom, or recording. We conducted a Gesture Elicitation Study (GES) in a simulated vehicular XR environment with 24 referents, in which 21 participants interacted with our prototype, ThighTouchI. This empirical study identified 79 unique gestures from 504 proposals, revealing that users favored ThighTouchI as a pad for selection tasks, using single-finger gestures for simple actions and multi-finger gestures for complex tasks. Additionally, a consensus gesture set was derived, and mapping of the tactile surface showed that 88.7% of the gestures were proposed in the central area. Participants utilized different zones of the touch surface depending on the referent, underscoring its ergonomic importance.
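The abstract does not state how the consensus gesture set was derived; a common measure in gesture elicitation studies is the agreement rate of Vatavu and Wobbrock (2015). The Python sketch below illustrates that measure under this assumption, with made-up example proposals; it is not the authors' procedure.

# Hedged sketch (not necessarily the authors' method): agreement rate for one
# referent, AR(r) = sum_i |P_i|*(|P_i|-1) / (|P|*(|P|-1)), where P is the set
# of proposals for referent r and P_i are groups of identical proposals.
from collections import Counter

def agreement_rate(proposals):
    """proposals: list of gesture labels elicited for one referent."""
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals).values()
    return sum(k * (k - 1) for k in groups) / (n * (n - 1))

# Invented example: 21 participants proposing gestures for a "select item" referent.
print(agreement_rate(["tap"] * 12 + ["double-tap"] * 5 + ["swipe"] * 4))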
The literature is replete with studies that have explored, analyzed, and tested different techniques for ensuring the adaptivity of a menu in graphical user interfaces, mainly by maintaining the format of a menu bar, its pull-down menus, and its sub-menus, while varying its presentation. Instead of restricting adaptivity to this format, adaptivity could be ensured by exploiting the entire display space available in full-screen or intermediate-screen mode. Inspired by fractal geometry, we motivate and define the concept of a fractal adaptive menu, a graphical adaptive menu that benefits from three properties useful for adaptivity: self-similarity, recursive navigation, and hierarchical menu organization. We explain how to engineer this menu for graphical user interfaces and validate this approach with a case study in machine control for a steel mill. In addition to being adaptive to the screen resolution, a fractal adaptive menu is particularly beneficial when menu navigation is repetitive or frequent in a deep hierarchical organization. Proprioceptive memory can make it easier to remember which menu areas to select during a long traversal of the hierarchy.
Although deformable objects are not typically designed for digital interaction, they offer a largely unexplored potential: any such object could be repurposed as a medium for controlling digital content. While existing approaches embed sensors into deformable objects to enable interaction, this limits the scalability and practicality of such systems. An alternative is to perform gesture recognition on deformable objects using a wrist-worn radar sensor. However, when analysing reflected radar signals, it is difficult to separate reflections originating from the continuous deformations of the object shape from those originating from the user’s hand and fingers. Additionally, the continuous shape changes of deformable objects introduce changes in radar cross-section, affecting signal variability. Furthermore, user ergonomics, such as variations in hand size, finger dexterity, and strength, are likely to influence the degree of object deformation during interaction. In this paper, we explore whether radar sensing can be used for robust gesture detection on deformable objects, focusing on how well such a system generalizes to previously unseen users and what can be done to improve this generalizability. In pursuit of this goal, we record a dataset of 4.3k labelled gestures with a Google Soli millimeter-wave radar sensor on a plush toy and demonstrate robust classification performance, achieving an accuracy of up to 90% on a five-gesture set. Furthermore, we investigate model generalizability and show that transfer learning improves recognition for previously unseen users, yielding performance gains of up to 20%. These findings highlight the potential of radar-based sensing for spontaneous and practical interaction with deformable objects.
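As a hedged illustration of the transfer-learning step described above (not the authors' implementation; the architecture, input shape, and hyperparameters below are placeholders), a model pretrained on seen users can be adapted to a new user by freezing the shared feature extractor and fine-tuning only the classifier head on a few of that user's labelled gestures:

# Hedged sketch, not the paper's code: per-user transfer learning for radar
# gesture classification.
import torch
import torch.nn as nn

NUM_GESTURES = 5  # five-gesture set, as in the abstract

# Placeholder architecture; the real radar model and input shape will differ.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(32 * 64, 128), nn.ReLU())
classifier = nn.Linear(128, NUM_GESTURES)
model = nn.Sequential(feature_extractor, classifier)

# Assume 'model' was already trained on data from previously seen users.
for p in feature_extractor.parameters():
    p.requires_grad = False  # keep the shared radar representation fixed

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A few calibration gestures from the new user (random stand-ins here).
x_new_user = torch.randn(16, 32, 64)   # 16 examples of radar frames
y_new_user = torch.randint(0, NUM_GESTURES, (16,))

for _ in range(20):  # short fine-tuning loop on the classifier head only
    optimizer.zero_grad()
    loss = loss_fn(model(x_new_user), y_new_user)
    loss.backward()
    optimizer.step()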
Advancements in generative Artificial Intelligence (AI) hold great promise for automating radiology workflows, yet challenges in interpretability and reliability hinder clinical adoption. This paper presents an automated radiology report generation framework that combines Concept Bottleneck Models (CBMs) with a Multi-Agent Retrieval-Augmented Generation (RAG) system to bridge AI performance with clinical explainability. CBMs map chest X-ray features to human-understandable clinical concepts, enabling transparent disease classification. Meanwhile, the RAG system integrates multi-agent collaboration and external knowledge to produce contextually rich, evidence-based reports. Our demonstration showcases the system’s ability to deliver interpretable predictions, mitigate hallucinations, and generate high-quality, tailored reports with an interactive interface addressing accuracy, trust, and usability challenges. This framework provides a pathway to improving diagnostic consistency and empowering radiologists with actionable insights.
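For readers unfamiliar with Concept Bottleneck Models, the following minimal sketch shows the core idea: the disease classifier only sees human-readable concept scores predicted from the image, which keeps the prediction path inspectable. The concept list, dimensions, and architecture are invented for the example and are not the authors' model.

# Hedged illustration (not the paper's code): a minimal Concept Bottleneck Model.
import torch
import torch.nn as nn

CONCEPTS = ["cardiomegaly", "pleural_effusion", "lung_opacity"]  # illustrative
NUM_DISEASES = 4  # illustrative label space

class ConceptBottleneck(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        # Stand-in image encoder; a real system would use a CNN over chest X-rays.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, feat_dim), nn.ReLU())
        self.concept_head = nn.Linear(feat_dim, len(CONCEPTS))    # image -> concepts
        self.label_head = nn.Linear(len(CONCEPTS), NUM_DISEASES)  # concepts -> label

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_head(self.encoder(x)))
        return concepts, self.label_head(concepts)

model = ConceptBottleneck()
concepts, logits = model(torch.randn(1, 64, 64))  # stand-in X-ray image
print(dict(zip(CONCEPTS, concepts[0].tolist())), logits.shape)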
Sensory substitution and augmentation technologies redefine the boundaries of human perception by enabling new sensory experiences. Traditional sensory substitution devices (SSDs) transform information from one sensory modality into another, facilitating cross-modal perception and neuroplastic adaptation. Beyond rehabilitation applications, emerging sensorimotor devices leverage these mechanisms for sensory augmentation, extending human perception beyond biological constraints. The ThermalSense system suggests a method for representing thermal information through visual-to-auditory sensory substitution. By doing so, it extends users’ visual experience to the range of infrared frequencies using sound. In this interactive demonstration, participants will be presented with images alongside matching thermal soundscapes in a designated protocol, giving the opportunity to train on the system and experience thermal perception through audition in scenes lacking visual thermal information. This first-hand experience in using ThermalSense demonstrates the system’s potential to broaden human perception and extend sensory experience.
This paper presents InFL-UX, an interactive, proof-of-concept browser-based Federated Learning (FL) toolkit designed to integrate user contributions into the machine learning (ML) workflow. InFL-UX enables users across multiple devices to upload datasets, define classes, and collaboratively train classification models directly in the browser using modern web technologies. Unlike traditional FL toolkits, which often focus on backend simulations, InFL-UX provides a simple user interface for researchers to explore how users interact with and contribute to FL systems in real-world, interactive settings. InFL-UX bridges the gap between FL and interactive ML by prioritising usability and decentralised model training, empowering non-technical users to actively participate in ML classification tasks.
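As background, the aggregation step underlying most FL systems is federated averaging (FedAvg). InFL-UX itself runs in the browser using web technologies, so the Python sketch below is purely illustrative of the concept and is not part of the toolkit.

# Hedged sketch of federated averaging: combine client models, weighted by
# each client's local dataset size.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Average client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Three users' locally trained models (each a list of weight arrays).
clients = [[np.ones((2, 2)) * k, np.ones(2) * k] for k in (1.0, 2.0, 3.0)]
sizes = [10, 30, 60]  # number of local training examples per user
global_model = fed_avg(clients, sizes)
print(global_model[0])  # weighted average: 0.1*1 + 0.3*2 + 0.6*3 = 2.5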
To understand and quantify the quality of mixed-presence collaboration around wall-sized displays, robust evaluation methodologies are needed that are adapted to a room-sized experience and are not perceived as obtrusive. In this paper, we propose an approach for measuring joint attention based on head-gaze data. We describe how it was implemented for a user study on mixed-presence collaboration with two wall-sized displays and report on the insights gained so far from its implementation, with a preliminary focus on the data from one particular session.
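One plausible way to operationalise such a measure (an assumption for illustration, not necessarily the authors' implementation) is the fraction of time during which two collaborators' head-gaze hit points on the display are closer than a threshold:

# Hedged sketch: joint attention as the share of timestamps where two users'
# head-gaze hit points on the wall display fall within a distance threshold.
import numpy as np

def joint_attention_ratio(gaze_a, gaze_b, threshold_m=0.5):
    """gaze_a, gaze_b: (T, 2) arrays of head-gaze hit points on the display, in metres."""
    distances = np.linalg.norm(gaze_a - gaze_b, axis=1)
    return float(np.mean(distances < threshold_m))

# Two synthetic traces sampled at the same timestamps.
t = np.linspace(0, 1, 100)
gaze_a = np.column_stack([t * 4.0, np.full_like(t, 1.2)])
gaze_b = np.column_stack([t * 4.0 + 0.2, np.full_like(t, 1.3)])
print(joint_attention_ratio(gaze_a, gaze_b))  # ~1.0: both look at nearby points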
This workshop is the third edition of a series organised at EICS 2023 and EICS 2024. This edition aims to bring together researchers and practitioners interested in the engineering of interactive systems that embed AI technologies (for instance, AI-based recommender systems) or that use AI during the engineering lifecycle. The overall objective is to identify, from experience reported by participants, methods, techniques, and tools to support the use and inclusion of AI technologies throughout the engineering lifecycle of interactive systems. A specific focus is on guaranteeing that user-relevant properties such as usability and user experience are accounted for. Contributions are also expected to address system-related properties, including resilience, dependability, reliability, and performance. Another focus is on the identification and definition of software architectures supporting this integration.
The rapid evolution of augmented and mixed reality (AR/MR) technologies, coupled with the integration of multimedia experiences, is reshaping how users interact with their environments. While traditional handheld displays like smartphones and tablets have democratized access to AR/MR applications, new technologies such as Apple’s Vision Pro, smart wearables, and tactile interfaces are setting the stage for enhanced multimodal interactions. These advancements come with their own set of challenges, including developing intuitive interaction techniques, creating seamless cross-device user experiences, and ensuring the effective integration of multimedia elements like sound, visuals, and haptics. Addressing these challenges, and following our successful first workshop at EICS 2024, this second edition of our workshop “Experience 2.0 and Beyond” provides a platform for researchers, developers, and practitioners to explore innovative solutions for engineering AR/MR applications that span devices and modalities from visuals to multimedia, fostering a future where immersive, interactive, and multimedia experiences are accessible to all.
Integrating Artificial Intelligence (AI) into preventive healthcare can fundamentally transform how individuals interact with their own health by shifting attention from reactive treatments toward proactive, personalized, and engaging health management. AI systems leverage continuous interaction with users - through wearable technologies, ambient sensing, and adaptive interfaces - to provide timely, tailored health insights and recommendations. Such interactive AI solutions hold the promise of empowering individuals to actively manage and improve their health behaviors, potentially reducing healthcare burdens. However, successful adoption depends significantly on how users perceive and interact with these systems. Techniques such as gamification and multisensory interaction have emerged as compelling approaches for enhancing user engagement. Yet, to ensure sustainable and equitable outcomes, it is critical to address ethical dimensions, including user privacy, transparency of AI decision-making processes, and potential biases in personalized recommendations. This workshop at EICS 2025 will explore interactive AI for preventive health, focusing on proactive monitoring, adaptive personalization, gamification, multisensory interaction, social AI, ethics, and evaluation methods. By bringing together researchers and practitioners from human-computer interaction, AI, healthcare, and behavioral science, we aim to identify challenges, design principles, and evaluation frameworks that enhance engagement and trust in AI-driven health systems. The workshop will feature interactive discussions and expert talks to foster cross-disciplinary collaboration.