EICS '22 Companion: Companion of the 2022 ACM SIGCHI Symposium on Engineering Interactive Computing Systems

SESSION: Keynote Talks

Building Virtual and Augmented Reality Passenger Experiences

Virtual and Augmented Reality (together XR) headsets enable the rendering of virtual content intermixed with reality. They have the capacity to let passengers break free from small physical displays in constrained environments such as cars, trains and planes, allowing them to escape to new experiences. They can help passengers make better use of their time by making travel more productive and enjoyable, supporting both privacy and immersion. This is of particular note given the predicted adoption of autonomous vehicles. The ViAjeRo project (www.viajero-project.org) is conducting breakthrough research in HCI and neuroscience to enable passenger use of XR headsets, with the underlying goal of making more effective, comfortable and productive use of travel time. This paper sets out the limitations of the current state of the art and how ViAjeRo is opening new possibilities for passenger XR experiences.

Engineering Interactive Geospatial Visualizations for Cluster-Driven Ultra-high-resolution Wall Displays

Ultra-high-resolution wall-sized displays feature a very high pixel density over a large physical surface, typically covering a few square meters. They provide effective support for collaborative work sessions that involve the visualization of large, heterogeneous datasets. But the development of interactive visualizations for ultra-high-resolution wall displays raises significant challenges, ranging from the design of input techniques adapted to such surfaces to the design of visualizations that effectively leverage their extreme display capacity. Challenges lie not only in the design but also in the technical realization of these visualizations, as they run on computer clusters and thus require dedicated software frameworks for the distribution and synchronization of data and graphics. In this talk, I will focus primarily on challenges related to the engineering of interactive visualizations for cluster-driven wall displays, discussing different approaches that we have explored over the last fourteen years to create geospatial visualizations and the associated multi-scale interaction techniques.

SESSION: Late-breaking Results

(Semi-)Automatic Computation of User Interface Consistency

Many measures exist to (semi-)automatically compute the quality of a graphical user interface, such as aesthetic metrics, visual metrics, and performance metrics. These measures are mostly individual, as they apply to a single graphical user interface at a time. Unlike these measures, consistency requires evaluating a number of screens within the same application (intra-application consistency) or across applications (inter-application consistency). This paper presents a formula, a method, and supporting software for computing this consistency and its counterpart, inconsistency, either fully automatically, when the interface segmentation is performed by the software, or semi-automatically, when the segmentation is performed manually by the end-user.
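
As an illustration only (not the formula proposed by the authors), a naive intra-application consistency score could be computed as the mean pairwise attribute agreement between segmented components across screens; the component attributes and the aggregation below are hypothetical.

```python
# Illustrative sketch only: a naive intra-application consistency score.
# The component attributes and the aggregation are hypothetical and do not
# reproduce the formula presented in the paper.
from itertools import combinations

def component_similarity(a: dict, b: dict) -> float:
    """Fraction of attribute values shared by two segmented UI components."""
    keys = set(a) | set(b)
    if not keys:
        return 1.0
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def consistency(screens: list[list[dict]]) -> float:
    """Mean pairwise component similarity over all pairs of screens."""
    scores = [
        component_similarity(c1, c2)
        for s1, s2 in combinations(screens, 2)
        for c1 in s1
        for c2 in s2
    ]
    return sum(scores) / len(scores) if scores else 1.0

screens = [
    [{"font": "Arial", "color": "#333333", "align": "left"}],
    [{"font": "Arial", "color": "#000000", "align": "left"}],
]
print("consistency:", consistency(screens))       # about 0.67 for this toy example
print("inconsistency:", 1 - consistency(screens))
```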

Design and Evaluation of AR-Assisted End-User Robot Path Planning Strategies

Robots play an increasingly important role today but usually still have to be programmed by highly skilled professionals. End-user solutions that support users in solving (simple) robot programming tasks without expert knowledge are therefore a promising research field. One possibility is to include Augmented Reality (AR), enabling users to work in the robot's space and reducing the number of mentally taxing coordinate-space conversions. Existing approaches mainly rely on waypoint-based robot path programming. To explore an alternative, we propose an AR-assisted approach with different path-planning strategies, such as drawing paths or selecting single waypoints directly in the real world. This enables end-users without a programming background to program paths for a wheeled mobile robot. They can also see and edit their programmed paths in a Blockly-like representation. Furthermore, we offer in-place AR program simulation and direct deployment of finished programs to the real robot. We evaluated the usability of our approach against existing non-AR end-user software and compared the two path-planning strategies to each other. The evaluation showed that our approach is more usable and faster than the conventional method, while the differences between the path-planning methods are more nuanced, with each version showing its own strengths.

Exploring Needs and Design Opportunities for Virtual Reality-based Contour Delineations of Medical Structures

Contour delineation, a critical phase in radiotherapy planning, refers to the process of identifying and segmenting malignant tumors and/or healthy organs in medical images. Today's contouring software requires oncologists to inspect and contour targets of interest by analyzing a stack of planar medical images, which is lengthy, tedious and sometimes error-prone. This design also contrasts with the stereoscopic nature of medical images. It is therefore natural to consider bringing contouring into an immersive Virtual Reality (VR) space. We present an exploratory study that uses iterative design to understand needs and opportunities for bringing contour delineation into an immersive 3D space, such as the one enabled by today's head-mounted VR displays. We report on interactions with three medical professionals and three engineering and design experts, and demonstrate the potential of VR-based 3D contouring while studying the benefits of using 3D immersive spaces to augment the process of contouring in 2D. Through needs-finding interviews and co-design workshops, we evaluated our initial iterations and proof-of-concept prototypes. We believe that our preliminary findings will benefit researchers and practitioners who are attempting to bring today's contour delineation processes into an immersive 3D space.

Gestural-Vocal Coordinated Interaction on Large Displays

On large displays, keyboard and mouse input is challenging because small mouse movements do not scale well with the size of the display and of individual on-screen elements. We present the “Large User Interface” (LUI), which coordinates gestural and vocal interaction to increase the range and dynamic surface area of interactions possible on large displays. The interface leverages real-time continuous feedback from free-handed gestures and voice to control a set of applications such as photos, videos, 3D models, maps, and a gesture keyboard. Using a single stereo camera and a voice assistant, LUI requires neither calibration nor many sensors to operate, and it can be easily installed and deployed. We report results from user studies in which participants found LUI efficient and learnable with minimal instruction, and preferred it to point-and-click interfaces.

Interactive Story Box for Children with Cerebral Palsy

Children with cerebral palsy (CP) tend to have difficulty with speech communication, geometric cognition, and motion control. They need intensive rehabilitation exercises to develop and enhance their capabilities for daily living. The training aids and tools currently used fail to attract such children to participate and persist in long-term exercise. This study aims to develop an Interactive Story Box that facilitates rehabilitation exercises for speech interaction, geometric cognition, and upper-limb motion control in a playful and joyful rehabilitation environment. The box offers diverse geometric shape matching using puzzles with cartoon characters, and a series of stories and speech interactions is then generated based on the specific matching results. The design combines multi-shape graphic puzzles, cartoon characters, intelligent voice synthesis, and audio-visual feedback. Preliminary user testing at the Suzhou BenQ Medical Center suggests that the box is warmly welcomed and easy to follow, manipulate and interact with, and that children are better motivated to participate in the rehabilitation exercise.

Sans Tracas: A Cross-platform Tool for Online EEG Experiments

Driven by a desire to democratize electroencephalography (EEG) research, we created Sans Tracas, a cross-platform web application for running EEG experiments online. A collaborative effort between cognitive neuroscientists and HCI researchers, the platform is designed through a multidisciplinary lens to be easy to use by researchers and study participants alike. For researchers, the platform makes it possible to augment any behavioural study deployed on online platforms with EEG recordings from the commercially available InteraXon Muse EEG device. For end-users who have access to the Muse, the platform focuses on letting them perform entire EEG studies on their own from the comfort of their homes. We conducted a pilot study to test the usability of the platform. The results suggest that participants found the platform easy to use and useful, had fun participating in EEG experiments independently, and are now more interested in EEG and Brain-Computer Interface (BCI) research than before. We contribute to HCI by presenting the design, development, and preliminary evaluation of a cross-platform application that allows users to conduct EEG experiments online using low-cost, commercially available devices. This platform is a first step towards enabling greater access to research involving electroencephalography around the world.

Style-Aware Sketch-to-Code Conversion for the Web

While sketching a graphical user interface (GUI) is a necessary step in the creation of a Web application, transforming the sketch into a coded GUI with the proper styles is still a tedious and time-consuming task that the designer must perform. Recently, Machine Learning techniques have been applied to automatically generate code from sketches to ease this part of the design process. These techniques effectively convert sketches into a skeleton structure of the GUI but are not designed to consider the styles to be applied to the generated HTML page. Moreover, the possibility to explore different styles starting from a sketch might further support designers in their work. In this paper, we take the first steps towards this opportunity by proposing a method that allows the designer to input the sketch of a Web-based GUI and select a reference style to be applied. Our method automatically injects the reference styles into the sketch components and then uses an automatic code-generation technique to obtain the final code. Preliminary experiments carried out with the navigation bar component show the effectiveness of the proposed method.
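
A minimal sketch of the style-injection idea, assuming components detected from the sketch are represented as dictionaries; the component types, reference style values, and HTML emitter below are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of the style-injection step; not the authors' pipeline.
# Component types, style attributes, and the code generator are illustrative.

REFERENCE_STYLE = {
    "navbar": {"background": "#1f2937", "color": "#f9fafb", "padding": "0.75rem"},
}

def inject_styles(components: list[dict], style: dict) -> list[dict]:
    """Attach reference style properties to each detected sketch component."""
    return [{**comp, "style": style.get(comp["type"], {})} for comp in components]

def to_html(components: list[dict]) -> str:
    """Very simplified stand-in for the automatic code-generation step."""
    rendered = []
    for comp in components:
        css = "; ".join(f"{k}: {v}" for k, v in comp["style"].items())
        rendered.append(f'<nav style="{css}">{comp.get("label", "")}</nav>')
    return "\n".join(rendered)

detected = [{"type": "navbar", "label": "Home | About | Contact"}]
print(to_html(inject_styles(detected, REFERENCE_STYLE)))
```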

Towards a Domain-Specific Language to Specify Interaction Scenarios for Web-Based Graphical User Interfaces

The communication gap between software developers and subject-matter experts is one of the foremost long-standing problems in software development. The level of formality of the user requirements specification has a strong impact on the ability of these two groups to communicate effectively. Domain-Specific Languages (DSLs) are seen as one of the potential solutions to address this issue by raising the abstraction level of the software specification while keeping the necessary formalism to allow for software analysis, design, and verification. This paper discusses the ongoing development of a high-level DSL and its rich editing environment to allow the specification of consistent and testable interaction scenarios as user requirements for web-based graphical user interfaces. The language grammar has been developed based on the Gherkin syntax that supports Behaviour-Driven Development (BDD). Results of a preliminary evaluation regarding the consistency of actions and states of interaction elements specified for web user interfaces showed that the grammar is able to support a consistent specification of BDD scenarios as user requirements at the interaction level.

SESSION: Panel

Engineering Awareness in Interfaces: Focus on Automation and Visualization

SESSION: Tutorials

Automated Usability Smell Detection in VR Application with AutoQUEST

The quality of software products is not measured only by their functionality and features; the User Experience (UX) and the quality of an application's User Interface (UI) are becoming more and more important. To measure and improve the usability of UIs, it is often necessary to perform user tests with potential future users. Performing these tests, taking notes during them, and analyzing all the collected data to derive meaningful results can be time-consuming. An automated approach to evaluating the usability of UIs may save time and help improve the UX. In this tutorial, we present AutoQUEST, a set of tools for automatically recording user interaction data and performing automated usability evaluation. The evaluation works by detecting common interaction structures and assessing them with respect to patterns representing known UX issues. We will demonstrate this usability evaluation technique on an existing study of one of our own Virtual Reality (VR) applications.

Creating Virtual Prototypes of Technical Devices using Vivifly

When developing technical devices, such as home appliances, their user interfaces must be evaluated with respect to usability and user experience. For this, companies create expensive real-world prototypes of these devices, ask users to interact with them, and record any issues the users encounter. Virtual Prototypes (VPs) provided in eXtended Reality (XR) may serve the same purpose, with the advantage of being cheaper and more widely available. Unfortunately, creating VPs, especially for multi-platform XR, is currently challenging and requires game-programming skills. In this tutorial, we present Vivifly and Vivian, two tools for configuring and running simple VPs in different variants of XR. With them, companies can easily create VPs for their devices under development and test them with users in mobile Augmented Reality (AR), in Virtual Reality (VR), or in Mixed Reality (MR) using a single configuration.

SESSION: Workshops

Engineering Interactive Computing Systems for People with Disabilities

The advances in the area of interactive systems are unquestionable. New multi-modal, multi-user, multi-device/screen interaction techniques, new development methods and processes to improve the development of interactive systems, and so on have been widely proposed by the community. Using these approaches in the development of interactive systems for people with disabilities can be challenging and requires adapting, customizing, evolving and even defining new approaches. This is even more evident when advocating user-centered design. This workshop aims to present and discuss the design, development, implementation, verification and validation of interactive systems for users with disabilities, whether permanent (visual, hearing, mobility impairments, ...), evolving (as in degenerative diseases such as Alzheimer's and Parkinson's) or temporary (situationally impaired people).

Methods, Tools and Techniques for Trustworthy Autonomous Systems (TAS) Design and Development

This workshop focuses on methods, tools, and techniques to design and develop Trustworthy Autonomous Systems (TAS). TAS is an emerging area of interactive systems that is expanding the scope and remit of engineering. At every scale, making autonomous systems trustworthy is a collective task that requires a multidisciplinary team working together to understand trust design requirements and provide effective and creative solutions. TAS introduce unique challenges in the design and development of interactive systems because they may have the capacity to learn and evolve, they may need to make decisions or take actions independently with little or no human oversight, and they will be deployed in quite different cultural and regulatory environments. TAS engineers need robust design methods, tools, and techniques to meet diverse TAS requirements and objectives. Our prior research argued for TAS engineers to develop skills in three core areas: soft, strategic, and technical [1]. However, little has been done to flesh out the specific methods, tools, and techniques that TAS engineers should draw on. This workshop invites interactive systems experts to contribute promising design methods, tools, and techniques, particularly in the area of user/actor and design-requirements modelling. The workshop aims to present innovative modelling techniques, test these approaches through discussion, reflect on the main challenges, refine the skills required of TAS engineers, and steer the overarching strategy in this new field for the future.