June 30th, Morning
Interactive systems increasingly operate within ecological contexts involving both human and non-human actors. Recent discussions in HCI and design research explore approaches that move beyond strictly human-centred design, emphasising multispecies and more-than-human perspectives. However, the community still lacks practical tools to translate these theoretical discussions into design practice. This tutorial introduces the BACT framework (Beings, Activities, Context, Technologies) [2], an extension of Benyon’s PACT framework [1], as a structured lens for analysing multispecies interaction situations. Participants will apply the framework in a guided design activity where they analyse a multispecies interaction scenario and generate an initial design concept.
The tutorial concludes with a discussion reflecting on how frameworks such as BACT can support designers in recognising non-human actors and reasoning about complex interaction ecologies.
[1] David Benyon, Phil Turner, and Susan Turner. 2005. Designing Interactive Systems: People, Activities, Contexts, Technologies. Pearson Education.
[2] Theodora Chamaidi and Modestos Stavrakis. 2024. A Multispecies Interaction Design Approach: Introducing the Beings, Activities, Context, Technologies (BACT) Framework. Multimodal Technologies and Interaction 8, 9: 77. https://doi.org/10.3390/mti8090077
June 30th, Morning
This tutorial introduces newcomers to the field to the main aspects of designing and engineering gesture user interfaces. It also targets experienced practitioners, designers, and researchers who wish to deepen their knowledge and stay up to date with the most recent developments in gesture recognition, analysis, and design approaches and methodologies. To this end, the tutorial is structured as a suite of three complementary modules:
June 30th, Afternoon
With the rapid advancement of microgesture research, a growing number of recognition devices have been proposed, differing in sensor modalities (e.g., force-sensitive sensors, inertial measurement units), form factors (e.g., gloves, rings), and recognition algorithms (e.g., ad hoc approaches, random forests). As this ecosystem expands, enabling device interoperability and flexible adaptation of microgesture vocabularies becomes increasingly important for researchers and developers.
In this tutorial, we introduce μPoly, the first microgesture recognition toolkit built upon the μGlyph microgesture notation. μPoly takes as input μGlyph descriptions of elementary microgestures and composes them to recognize more complex microgestures defined using the same notation. By relying on a unified representation, μPoly provides a standardized event abstraction for microgesture recognition, enabling consistent interpretation across heterogeneous sensing devices.
We demonstrate how μPoly enables seamless switching across heterogeneous recognition devices. In addition, μPoly integrates with external applications through WebSocket communication, facilitating rapid prototyping of microgesture-based interfaces. Participants will also have the opportunity to connect μPoly to their own applications, exploring practical workflows for integrating microgesture recognition into interactive systems.
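As an illustration of the kind of client-side integration the WebSocket channel supports, the sketch below decodes one incoming microgesture event into a readable summary. The message schema, field names, and event values are assumptions for illustration only, not the toolkit's actual API; in a real client, such messages would arrive over the WebSocket connection exposed by μPoly.

```python
import json

# Hypothetical event format -- the actual wire schema used by the toolkit
# is not specified in this abstract; the field names below are assumptions.
def handle_event(raw: str) -> str:
    """Decode one JSON microgesture event and return a readable summary."""
    event = json.loads(raw)
    return f"{event['gesture']} on {event['finger']} at t={event['timestamp']}"

# Example message an application might receive from the recognizer.
sample = '{"gesture": "tap", "finger": "index", "timestamp": 1042}'
print(handle_event(sample))  # tap on index at t=1042
```

An application would typically register such a handler as the callback for incoming WebSocket messages and map the decoded events onto its own interaction logic.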
June 30th, Afternoon
User interaction data such as keystrokes, mouse movements, and touch events are increasingly used as input for machine learning (ML) and artificial intelligence (AI) approaches in areas such as user modeling, stress detection, emotion recognition, and behavioral biometrics. However, before ML algorithms can be applied, interaction data must be recorded, processed, segmented, and transformed into meaningful metrics. These preparation steps are rarely standardized and often insufficiently reported, even though they strongly influence the resulting data quality and the validity of downstream analyses.
This tutorial addresses the methodological foundations of working with interaction data. It introduces participants to best practices for recording interaction events, preprocessing raw interaction logs, slicing data into meaningful segments, and calculating reliable interaction metrics. Particular attention is given to subtle effects that occur during preprocessing, such as the influence of slicing strategies on derived metrics.
Participants will learn how different data preparation choices affect interaction metrics and why transparent reporting of these decisions is essential for reproducibility and validity in ML-based research. The tutorial provides conceptual guidance, practical examples, and recommendations for reporting interaction data processing pipelines. It equips researchers and practitioners with the methodological baseline required to prepare interaction datasets that can reliably serve as input for ML and AI analysis.
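The influence of slicing strategies on derived metrics can be made concrete with a toy example (a sketch with invented data, not the tutorial's own code): the same one-dimensional cursor trace yields different "mean speed" values depending on whether speeds are averaged globally or per segment.

```python
def speeds(trace):
    """Point-to-point speeds for a list of (t, x) samples (1-D for brevity)."""
    return [abs(x1 - x0) / (t1 - t0)
            for (t0, x0), (t1, x1) in zip(trace, trace[1:])]

# A fast burst followed by a slow drift (invented sample data).
trace = [(0, 0), (1, 10), (2, 20), (3, 21), (4, 22), (5, 23)]

# Strategy A: one global segment -> mean over all point-to-point speeds.
global_mean = sum(speeds(trace)) / len(speeds(trace))

# Strategy B: slice into two segments and average the per-segment means --
# the short fast segment now weighs as much as the longer slow one.
seg_a, seg_b = trace[:3], trace[2:]
per_segment = [sum(speeds(s)) / len(speeds(s)) for s in (seg_a, seg_b)]
sliced_mean = sum(per_segment) / len(per_segment)

print(global_mean, sliced_mean)  # 4.6 vs. 5.5: the strategies disagree
```

Neither value is wrong in itself; the point is that the choice must be reported, because downstream ML models trained on the two variants see systematically different inputs.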
Should you have any questions about the tutorials, please contact the chairs (tutorials2026@eics.acm.org).