Welcome to EICS, the Engineering Interactive Computing Systems community, PACMHCI/EICS journal, and annual conference! In this short article, we introduce newcomers to the field and to our community with an overview of what EICS is and how it is positioned with respect to other venues in Human-Computer Interaction, such as CHI, UIST, and IUI, highlighting its legacy and paying homage to the past scientific events from which EICS emerged. We also take this opportunity to enumerate and exemplify scientific contributions to the field of Engineering Interactive Computing Systems, which we hope will guide researchers and practitioners towards making their future PACMHCI/EICS submissions successful and impactful in the EICS community.
Prior research has demonstrated that users are increasingly employing multiple devices during daily work. Currently, devices such as keyboards, cell phones, and tablets remain largely unaware of their role within a user's workflow. As a result, transitioning between devices is tedious, often to the degree that users are discouraged from taking full advantage of the devices they have within reach. This work explores the device ecologies used in desk-centric environments and compiles the insights observed into SMAC, a simplified model of attention and capture that emphasizes the role of user-device proxemics, as mediated by hand placement, gaze, and relative body orientation, as well as inter-device proxemics. SMAC illustrates the potential of harnessing the rich, proxemic diversity that exists between users and their device ecologies, while also helping to organize and synthesize the growing body of literature on distributed user interfaces. An evaluation study using SMAC demonstrated that users could easily understand the tenets of user- and inter-device proxemics and found them to be valuable within their workflows.
Eye-tracking has strong potential as an input modality in human-computer interaction (HCI), particularly in mobile situations. However, it lacks convenient methods for triggering actions. In our research, we investigate the combination of eye-tracking and fixed-gaze head movement, which allows users to trigger various commands without using their hands or changing gaze direction. We propose a new algorithm for fixed-gaze head movement detection that uses only images captured by the scene camera mounted on the front of the head-mounted eye-tracker, in order to save computation time. To test the performance of our fixed-gaze head movement detection algorithm and the acceptance of triggering commands by these movements when the user's hands are occupied by another task, we designed and developed an experimental application called EyeMusic. EyeMusic is a music reading system that can play the notes of a measure in a music score that the user does not understand. By making a voluntary head movement while fixing his/her gaze on the same point of a music score, the user obtains the desired audio feedback. The design, development, and usability testing of the first prototype of this application are presented in this paper. The usability of our application is confirmed by the experimental results, as 85% of participants were able to use all the head movements we implemented in the prototype. The average success rate of the application is 70%, which is partly influenced by the performance of the eye-tracker we used. The performance of our fixed-gaze head movement detection algorithm is 85%, and there were no significant differences between the performances of the individual head movements.
This paper presents the iMPAcT tool, which tests recurring common behavior in Android mobile applications. The process it follows combines exploration, reverse engineering, and testing to automatically test Android mobile applications. The tool automatically explores the app by firing UI events. After each event is fired, the tool checks whether any UI patterns are present using a reverse engineering process. If a UI pattern is present, the tool runs the corresponding testing strategy (Test Pattern). During reverse engineering, the tool uses a catalog of UI Patterns that describes the recurring behaviors (UI Patterns) to test and the corresponding test strategies (Test Patterns). This catalog may be extended in the future as needed (e.g., to deal with new interaction trends). This paper describes the implementation details of the iMPAcT tool, the catalog of patterns used, the outputs produced by the tool, and the results of experiments performed to evaluate the overall testing approach. These results show that the approach is capable of finding failures in existing Android mobile applications.
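To make the pattern-based approach above more concrete, the following minimal Python sketch illustrates the explore/reverse-engineer/test loop. All names (UiPattern, explore_and_test, app.pick_next_event, etc.) are hypothetical illustrations, not the iMPAcT tool's actual API, which operates on real Android UIs.

```python
# Minimal sketch of the explore/reverse-engineer/test loop described above.
# Names (UiPattern, explore_and_test, app.*) are hypothetical; the actual
# iMPAcT tool drives real Android UIs through its own infrastructure.

class UiPattern:
    def __init__(self, name, is_present, test_pattern):
        self.name = name
        self.is_present = is_present        # predicate over the current UI state
        self.test_pattern = test_pattern    # corresponding test strategy

def explore_and_test(app, catalog, max_events=100):
    failures = []
    for _ in range(max_events):
        event = app.pick_next_event()       # exploration: choose a UI event to fire
        if event is None:
            break
        app.fire(event)
        ui_state = app.current_ui()         # reverse engineering of the current screen
        for pattern in catalog:             # catalog of UI Patterns / Test Patterns
            if pattern.is_present(ui_state):
                failures += pattern.test_pattern(app, ui_state)
    return failures
```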
Gesto is a system that enables task automation for Android apps using gestures and voice commands. Using Gesto, a user can record a UI action sequence for an app, choose a gesture or a voice command to activate it, and later trigger the UI action sequence with the corresponding gesture or voice command. Gesto enables this for existing Android apps without requiring their source code or any help from their developers. To make this capability possible, Gesto combines bytecode instrumentation and UI action record-and-replay. To show the applicability of Gesto, we develop four use cases using real apps downloaded from Google Play: Bing, Yelp, AVG Cleaner, and Spotify. For each of these apps, we map a gesture or a voice command to a sequence of UI actions. According to our measurements, Gesto incurs modest overhead for these apps in terms of memory usage, energy usage, and code size increase. We evaluate our instrumentation capability and overhead using 1,000 popular apps downloaded from Google Play. Our results show that Gesto is able to instrument 94.9% of the apps without any significant overhead. In addition, since our prototype currently supports 6 main UI elements of Android, we evaluate our coverage and measure what percentage of UI element uses we can cover. Our results show that these 6 UI elements cover 96.4% of all statically-declared UI element uses in the 1,000 Google Play apps.
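The core idea of mapping a gesture or voice command to a recorded UI action sequence can be sketched as follows. This is a hedged illustration with hypothetical names (recorded_macros, app.replay); the real system injects comparable logic into existing Android apps through bytecode instrumentation rather than running as a separate script.

```python
# Hedged sketch of the gesture/voice-to-UI-action mapping idea behind Gesto.
# All names are hypothetical and only illustrate the record-and-replay concept.

recorded_macros = {}   # trigger (gesture or voice command) -> recorded UI action sequence

def record(trigger, ui_actions):
    """Store a recorded sequence of UI actions (e.g., taps, text entry) under a trigger."""
    recorded_macros[trigger] = list(ui_actions)

def on_trigger(trigger, app):
    """Replay the recorded UI action sequence when its gesture/voice command fires."""
    for action in recorded_macros.get(trigger, []):
        app.replay(action)   # e.g., dispatch a click on a button or fill a text field
```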
A gesture elicitation study, as originally defined, consists of gathering a sample of participants in a room, instructing them to produce the gestures they would use for a particular set of tasks, materialized through a representation called a referent, and asking them to fill in a series of tests, questionnaires, and feedback forms. Until now, this procedure has been conducted manually in a single, physical, and synchronous setup. To relax the constraints imposed by this manual procedure and to support stakeholders in defining and conducting such studies in multiple contexts of use, this paper presents Gelicit, a cloud computing platform that supports gesture elicitation studies distributed in time and space, structured into six stages: (1) define a study: a designer defines a set of tasks with their referents for eliciting gestures and specifies an experimental protocol by parameterizing its settings; (2) conduct a study: any participant receiving the invitation to join the study conducts the experiment anywhere, anytime, anyhow, by eliciting gestures and filling in forms; (3) classify gestures: an experimenter classifies the elicited gestures according to selected criteria and a vocabulary; (4) measure gestures: an experimenter computes gesture measures, such as agreement and frequency, to understand their configuration; (5) discuss gestures: a designer discusses the resulting gestures with the participants to reach a consensus; (6) export gestures: the consensus set of gestures resulting from the discussion is exported to be used with a gesture recognizer. The paper discusses Gelicit's advantages and limitations with respect to its three main contributions: a conceptual model for gesture management, a method for distributed gesture elicitation based on this model, and a cloud computing platform supporting this distributed elicitation. We illustrate Gelicit through a study eliciting 2D gestures for executing Internet of Things tasks on a smartphone.
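As an illustration of stage (4), the sketch below computes a commonly used agreement rate over the gesture labels proposed for one referent, assuming identical proposals have already been grouped under the same label; Gelicit's exact measures may differ.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate for one referent, given the gesture labels proposed by
    participants (identical proposals share a label). Returns a value in [0, 1].
    This follows a commonly used formulation for elicitation studies; it is not
    necessarily the exact measure implemented in Gelicit."""
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals).values()
    return sum(k * (k - 1) for k in groups) / (n * (n - 1))

# Example: 4 participants propose "swipe-left" and 2 propose "shake" for one referent.
print(agreement_rate(["swipe-left"] * 4 + ["shake"] * 2))  # ~0.467
```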
Mainstream presentation tools such as Microsoft PowerPoint were originally built to mimic physical media like photographic slides and still exhibit the same characteristics. However, the state of the art in presentation tools shows that more recent solutions are starting to go beyond the classic presentation paradigms. For instance, presentations are becoming increasingly non-linear, content is quickly evolving beyond simple text and images, and the way we author our presentations is becoming more collaborative. Nevertheless, existing presentation content models are often based on assumptions that no longer apply to the current state of presentations, making them incompatible with some use cases and limiting the potential of end-user presentation solutions. In order to support state-of-the-art presentation functionality, we rethink the concept of a presentation and introduce a conceptual framework for presentation content. We then present a new content model for presentation solutions based on the Resource-Selector-Link (RSL) hypermedia metamodel. We further discuss an implementation of our model and show some example use cases. We conclude by outlining how design choices in the model address currently unmet needs with regard to extensibility, content reuse, collaboration, semantics, user access management, non-linearity, and context awareness, resulting in better support for the corresponding end-user functionality in presentation tools.
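For readers unfamiliar with RSL, the following sketch shows, in simplified form, the three core concepts the metamodel builds on: resources, selectors that address parts of resources, and links between them. The class and field names are our own simplification, not the paper's model.

```python
# Simplified illustration of the Resource-Selector-Link concepts mentioned above.
# Not the paper's actual model; names and fields are our own approximation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:            # a piece of presentation content (text, image, video, ...)
    uri: str

@dataclass
class Selector:            # addresses a part of a resource, e.g. a region or time range
    resource: Resource
    fragment: str          # e.g. "page=3" or "t=10,25"

@dataclass
class Link:                # structural or navigational link between entities
    sources: List[object] = field(default_factory=list)   # Resources or Selectors
    targets: List[object] = field(default_factory=list)

# A non-linear presentation can then be modelled as a graph of such links
# rather than a fixed, linear sequence of slides.
```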
This paper introduces a new Graphical User Interface (GUI) and interaction framework based on the Entity-Component-System (ECS) model. In this model, interactive elements (Entities) are characterized only by their data (Components). Behaviors are managed by continuously running processes (Systems), which select entities by the Components they possess. This model facilitates the handling of behaviors and promotes their reuse. It provides developers with a simple yet powerful composition pattern for building new interactive elements out of Components. It materializes interaction devices as Entities and interaction techniques as sequences of Systems operating on them. We present Polyphony, an experimental toolkit implementing this approach, and discuss our interpretation of the ECS model in the context of GUI programming.
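The sketch below illustrates the ECS idea in the GUI context described above: entities are plain bags of data (components), and systems are processes that operate on every entity possessing the components they require. Polyphony's actual API differs; all names here are illustrative.

```python
# Minimal, generic ECS sketch applied to a GUI. Illustrative only; not Polyphony's API.

entities = []  # an entity is just a dict of components (pure data)

def add_entity(**components):
    entities.append(dict(components))
    return entities[-1]

def system(*required):
    """A system processes every entity that has the required components."""
    def decorate(fn):
        def run():
            for e in entities:
                if all(c in e for c in required):
                    fn(e)
        return run
    return decorate

@system("bounds", "hovered")
def highlight_system(e):
    # behavior expressed as data transformation, reusable by any hoverable entity
    e["color"] = "lightblue" if e["hovered"] else "white"

button = add_entity(bounds=(0, 0, 80, 24), hovered=True, label="OK")
highlight_system()
print(button["color"])   # -> "lightblue"
```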
Scientific Workflow Management Systems (SWfMSs) have become popular for accelerating the specification, execution, visualization, and monitoring of data-intensive scientific experiments. Unfortunately, to the best of our knowledge, no existing SWfMS directly supports collaboration. Data is increasing in complexity, dimensionality, and volume, and its efficient analysis often goes beyond what an individual can accomplish, requiring collaboration with multiple researchers from varying domains. In this paper, we propose a groupware system architecture for data analysis that, in addition to supporting collaboration, also incorporates features from SWfMSs to support modern data analysis processes. As a proof of concept for the proposed architecture, we developed SciWorCS, a groupware system for scientific data analysis. We present two real-world use cases: collaborative software repository analysis and bioinformatics data analysis. The results of the experiments evaluating the proposed system are promising. Our bioinformatics user study demonstrates that SciWorCS can support real-world data analysis tasks by enabling real-time collaboration among users.
Progressive Web Apps (PWAs) are a new approach to the development of mobile applications (apps), proposed by Google in 2015, that combines technology resources of both web and native apps. Meta-design is an End-User Development (EUD) approach in which end-users participate actively in a system's design process. PWAs are a recent technology, however, and the impact of combining EUD and PWAs has been little explored. As the traditional PWA approach is limited with regard to users acting as co-designers, we propose PWA-EU, an extension of the traditional PWA architecture that incorporates EUD concepts. PWA-EU contributes in two ways. First, the approach is designed to be used by developers at design/development time. Second, an app developed with the PWA-EU approach allows end-users to select design preferences, making them participants in the app's design. This active participation of end-users in the design is possible due to the meta-design concepts embedded in the PWA-EU approach. In this article, we present the PWA-EU approach and its evaluation from the perspective of developers/designers. For the evaluation, we grouped participants according to their professional background. The results indicate that novice developers achieved a reasonable performance with only one hour of training. We conclude that even novice developers could achieve better performance in a real-life environment, in which they would have more time.
We present SAPIENS, a software architecture designed to support engineering of interactive systems featuring peripheral interaction in the context of smart environments. SAPIENS introduces dedicated components for user and device tracking, attention detection, priority management for devices, tasks, and notifications, context-awareness inference, user interruptibility prediction, and device interchangeability, all of which can be instantiated at will according to the needs of the application. To implement these components effectively, SAPIENS employs event-based processing by reusing the core engine of a recently introduced software architecture, Euphoria (Schipor et al., 2019), which was specifically designed for engineering interactions in smart environments with heterogeneous I/O devices, and relies entirely on web standards, protocols, and open data-interchange formats, such as JavaScript, WebSockets, HTTP, and JSON. This inheritance makes SAPIENS flexible and adaptable to support implementation of diverse application scenarios for peripheral interaction and for a wide variety of smart environments, devices, platforms, data formats, and contexts of use. We present our design criteria for SAPIENS regarding (1) event handling techniques, (2) quality, (3) contextual, and (4) attention-related properties, and describe the components and dataflows that make SAPIENS a specialized software architecture for peripheral interaction scenarios. We also demonstrate SAPIENS with a practical application, inspired and adapted from Bakker's (2013) classic example for peripheral interaction, for which we provide an online simulation tool that researchers and practitioners can readily use to consult actual JavaScript code implementing the inner logic of selected components of our architecture as well as to observe live JSON messages exchanged by the various components of SAPIENS.
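As a hedged illustration of the event-based, JSON-over-WebSockets style of communication mentioned above, the snippet below composes and serializes a hypothetical event message; the field names are our own and do not reflect SAPIENS's actual schema.

```python
import json
import time

# Hypothetical example of the kind of JSON event message that components of an
# event-based architecture like SAPIENS might exchange over WebSockets.
# The field names are illustrative, not SAPIENS's actual message schema.

event = {
    "source": "presence-sensor-kitchen",
    "type": "user-attention-changed",
    "payload": {"user": "u1", "attention": "peripheral", "confidence": 0.82},
    "timestamp": time.time(),
}

message = json.dumps(event)   # serialized and published to subscribed components
print(message)
```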
When task descriptions are precise, they can be analysed to yield a variety of insights about interaction, such as the quantity of actions performed, the amount of information that must be perceived, and the cognitive workload involved. Task modelling notations and associated tools provide support for precise task description, but they generally provide a fixed set of constructs, which can limit their ability to model new and evolving application domains and technologies. This article describes the challenges involved in using fixed notations for describing tasks. We use examples of recognized task analysis processes and their phases to show the need for customization of task notations, and through a series of illustrative examples, we demonstrate the benefits of using our extensible task notation and tool (HAMSTERS-XL).
Process mining is a well-known technique that is frequently applied to software development processes but has largely been neglected in Human-Computer Interaction (HCI) recommendation applications. Organizations usually train employees to interact with the required IT systems. Often, employees, or users in general, develop their own strategies for solving repetitive tasks and processes. However, organizations find it hard to detect whether employees interact efficiently with IT systems or not. Hence, we have developed a method that detects inefficient behavior, assuming that at least one optimal HCI strategy is known. The method provides recommendations to gradually adapt users' behavior towards the optimal way of interaction while taking user satisfaction into account. Based on users' behavior logs tracked by a Java application suitable for multi-application and multi-instance environments, we demonstrate the applicability of the method for a specific task in a common Windows environment using realistically simulated user behaviors.
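To illustrate the general idea (not the paper's algorithm), the sketch below flags inefficient behavior by comparing a user's logged interaction trace against a known optimal trace, using plain edit distance as a stand-in for a process-mining conformance measure.

```python
# Illustrative stand-in for conformance checking: compare an observed interaction
# trace with a known optimal trace for the same task. Not the paper's method.

def edit_distance(a, b):
    # single-row dynamic programming for Levenshtein distance between two traces
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete x
                                     dp[j - 1] + 1,    # insert y
                                     prev + (x != y))  # substitute / match
    return dp[-1]

optimal = ["open_form", "paste_id", "submit"]
observed = ["open_form", "type_id", "check_id", "submit"]
if edit_distance(observed, optimal) > 0:
    print("deviation detected: recommend the optimal interaction strategy")
```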
Biological network analysis has become a systematic, large-scale endeavor. Biological systems are often difficult to interpret due to the complexity of their relationships and structural features. Moreover, existing interfaces for biological network analysis, which are primarily web-based, often have limitations in usability as well as in supporting high-level reasoning and collaboration. Interactive surfaces coupled with tangible interactions offer opportunities to improve the comparison and analysis of large biological networks, which can aid researchers in making hypotheses and forming insights. We present Tangible BioNets, an active tangible and multi-surface system that allows users with diverse expertise to explore and understand the structural and functional aspects of biological organisms, individually or collaboratively. The system was designed through an iterative co-design process and facilitates the exploration of biological network topology, catalyzing the generation of new insights. We describe a first informal evaluation with expert users and discuss considerations for designing tangible and multi-surface systems for large biological datasets.
We introduce "Life-Tags," a wearable, smartglasses-based system for abstracting life in the form of clouds of tags and concepts automatically extracted from snapshots of the visual reality recorded by wearable video cameras. Life-Tags summarizes users' life experiences using word clouds, highlighting the "executive summary" of what the visual experience felt like for the smartglasses user during some period of time, such as a specific day, week, month, or the last hour. In this paper, we focus on (i) design criteria and principles of operation for Life-Tags, such as its first-person, eye-level perspective for recording life, passive logging mode, and privacy-oriented operation, as well as on (ii) technical and engineering aspects for implementing Life-Tags, such as the block architecture diagram highlighting devices, software modules, third-party services, and dataflows. We also conduct a technical evaluation of Life-Tags and report results from a controlled experiment that generated 21,600 full HD snapshots from six indoor and outdoor scenarios, representative of everyday life activities, such as walking, eating, traveling, etc., with a total of 180 minutes of recorded life to abstract with tag clouds. Our experimental results and Life-Tags prototype inform design and engineering of future life abstracting systems based on smartglasses and wearable video cameras to ensure effective generation of rich clouds of concepts, reflective of the visual experience of the smartglasses user.
Many scientific publications report computational results based on code and data, but even when the code and data are published, the main text is usually provided in a separate, traditional format such as PDF. Since code, data, and text are not linked on a deep level, it is difficult for readers and reviewers to understand and retrace how the authors achieved a specific result that is reported in the main text, e.g., a figure, table, or number. In addition, considerable effort is required to make use of the new opportunities afforded by data and code availability, such as re-running analyses with changed parameters. In order to overcome this issue and to enable more interactive publications that support scientists in exploring the reported results more deeply, we present the concept, implementation, and initial evaluation of bindings. A binding describes which data subsets, code lines, and parameters produce a specific result that is reported in the main text (e.g., a figure or number). Based on a prototypical implementation of these bindings, we propose a toolkit for authors to easily create interactive figures by connecting specific UI widgets (e.g., a slider) to parameters. In addition to inspecting code and data, readers can then manipulate the parameter and see how the result changes. We evaluated the approach by applying it to a set of existing articles. The results provide initial evidence that the concept is feasible and applicable to many papers with moderate effort.
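The binding concept can be sketched as follows: a named parameter is linked to the code that produces a result, so that changing the parameter through a UI widget re-computes and re-renders that result. The names below are illustrative, not the toolkit's actual API.

```python
# Minimal sketch of the 'binding' idea: a parameter is linked to the code and
# output that produce one reported result. Illustrative names only.

class Binding:
    def __init__(self, parameter, default, compute, render):
        self.parameter, self.value = parameter, default
        self.compute, self.render = compute, render   # code + figure/table output

    def on_widget_change(self, new_value):
        """Called when the reader moves the bound UI widget (e.g., a slider)."""
        self.value = new_value
        self.render(self.compute(**{self.parameter: new_value}))

smoothing = Binding("bandwidth", 0.5,
                    compute=lambda bandwidth: f"density estimate (bw={bandwidth})",
                    render=print)
smoothing.on_widget_change(1.0)   # reader drags the slider -> result is re-rendered
```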
This paper presents MoCaDiX, a method for designing cross-device graphical user interfaces of an information system based on its UML class diagram, structured as a four-step process: (1) a UML class diagram of the information system is created in a model editor, (2) how the classes, attributes, methods, and relationships of this class diagram are presented across devices is then decided based on user interface patterns with their own parametrization, (3) based on these parameters, a Concrete User Interface model is generated in QuiXML, a lightweight fit-to-purpose User Interface Description Language, and (4) based on this model, HTML5 cross-device user interfaces are semi-automatically generated for four configurations: single/multiple-device and single/multiple-display, on a smartphone, a tablet, and a desktop. From the practitioners' viewpoint, a first experiment investigates effectiveness, efficiency, and subjective satisfaction of three intermediate and three expert designers, using MoCaDiX on a representative class diagram. From the end user's viewpoint, a second experiment compares subjective satisfaction and preference of twenty end users assessing layout strategies for interfaces generated on two devices.
We introduce AB4Web, a web-based engine that implements a balanced, randomized version of multivariate A/B testing, specifically designed for practitioners to readily compare end-users' preferences for user interface alternatives, such as menu layouts, widgets, controls, forms, or visual input commands. AB4Web automatically generates a balanced set of randomized pairs from a pool of user interface design alternatives, presents them to participants, collects their preferences, and reports results from the perspective of four quantitative measures: the number of presentations, the preference percentage, the latent preference score, and the matrix of preferences. In this paper, we exemplify the AB4Web tester with a user study in which N=108 participants expressed their preferences regarding the visual design of 49 distinct graphical adaptive menus, for a total of 5,400 preference votes. We compare the results obtained from our quantitative measures with four alternative methods: Condorcet, the de Borda count starting at one and at zero, and the Dowdall scoring system. We plan to release AB4Web as a public tool for practitioners to create their own A/B testing experiments.
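The balanced, randomized pairing step could be implemented along the following lines; AB4Web's actual balancing scheme may differ. In this sketch, every alternative appears in the same number of pairs, and left/right presentation order is randomized to counter position bias.

```python
import itertools
import random

# Sketch of generating a balanced set of randomized pairs from a pool of UI
# design alternatives. Illustrative only; not AB4Web's actual algorithm.

def balanced_random_pairs(alternatives, seed=None):
    rng = random.Random(seed)
    pairs = list(itertools.combinations(alternatives, 2))   # each pair exactly once
    rng.shuffle(pairs)                                       # randomize trial order
    return [pair if rng.random() < 0.5 else pair[::-1] for pair in pairs]

menus = [f"menu-{i}" for i in range(1, 8)]
for left, right in balanced_random_pairs(menus, seed=42):
    pass  # present 'left' vs 'right' to the participant and record the preference
```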
Modern User Interfaces (UIs) are increasingly expected to be plastic, in the sense that they retain a constant level of usability even when subjected to context changes (platform, user, and environment) at runtime. Adaptive UIs have been promoted as a solution to context variability due to their ability to automatically adapt to the context-of-use at runtime. However, evaluating end-user satisfaction with adaptive UIs is a challenging task, because the UI and the context-of-use are both constantly changing. Thus, an acceptance analysis of UI adaptation features should consider the context-of-use at the moment adaptations are triggered. Classical usability evaluation methods such as usability tests mostly focus on a posteriori analysis techniques and do not fully exploit the potential of collecting implicit and explicit user feedback at runtime. To address this challenge, we present an on-the-fly usability testing solution that combines continuous context monitoring with the collection of instant user feedback to assess end-user satisfaction with UI adaptation features. The solution was applied to a mobile Android mail application, which served as the basis for a usability study with 23 participants. A data-driven end-user satisfaction analysis based on the collected context information and user feedback was conducted. The main results show that most of the triggered UI adaptation features were rated positively.
Feedback is commonly used to explain what happened in an interface. "What if" questions, on the other hand, remain mostly unanswered. In this paper, we present the concept of enhanced widgets capable of visualizing their future state, which helps users understand what will happen without committing to an action. We describe two approaches to extend GUI toolkits to support widget-level feedforward, and illustrate the usefulness of widget-level feedforward in a standardized interface to control the weather radar in commercial aircraft. In our evaluation, we found that users required fewer clicks to achieve tasks and were more confident about their actions when feedforward information was available. These findings suggest that widget-level feedforward is highly suitable in applications with which the user is unfamiliar, or when high confidence is desirable.
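A conceptual sketch of widget-level feedforward is shown below: before committing to an action, the widget can render the state it would have after that action. The real toolkit extensions operate inside existing GUI frameworks; this is a plain-Python illustration with hypothetical names.

```python
# Conceptual sketch of widget-level feedforward with hypothetical names.
# A real implementation would live inside a GUI toolkit's widget classes.

class FeedforwardToggle:
    def __init__(self, label, state=False):
        self.label, self.state = label, state

    def preview(self):
        """Show the future state (e.g., as a ghosted overlay) without applying it."""
        return f"{self.label} would become {'ON' if not self.state else 'OFF'}"

    def activate(self):
        self.state = not self.state

radar = FeedforwardToggle("Weather radar tilt hold")
print(radar.preview())   # user hovers: "Weather radar tilt hold would become ON"
radar.activate()         # user commits to the action
```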
Operating power tools over extended periods of time can pose significant risks to humans, due to the strong forces and vibrations they impart to the limbs. Telemanipulation systems can be employed to minimize these risks, but they may impede effective task performance due to the reduced sensory cues they typically convey. To address this shortcoming, we explore the benefits of augmenting these cues with audition, vibration, and force feedback, and evaluate their effect on users' performance in a VR mechanical assembly task employing a simulated impact wrench. Our research focuses on the utility of vibrotactile feedback, rendered as a simplified and attenuated version of the vibrations experienced while operating an actual impact wrench. We investigate whether such feedback can enhance the operator's awareness of the state of the tool and serve as a proxy for the forces experienced during collisions and coupling while operating the tool. Results from our user study comparing feedback modalities confirm that introducing vibrotactile feedback in addition to auditory feedback can significantly improve user performance as assessed by completion time. However, the addition of force feedback to these two modalities did not further improve performance.
Describing gestures in detail has various advantages for project teams: communication is simplified, interaction concepts are documented, and technical decisions are supported. Common gesture notations focus on textual or graphical elements only, but we argue that hybrid approaches have various advantages, especially because some gesture traits are easier to describe with text and others with arrows or icons. We present GestureCards, a hybrid gesture notation mixing graphical and textual elements that we developed to describe multi-touch gestures. To evaluate our approach, we compared how users perceive and are affected by different notations. First, we compared GestureCards with a textual notation and observed advantages in terms of speed, correctness, and confidence. Second, we asked participants to compare and rate GestureCards, a textual notation, and a graphical notation. The results indicate that the participants' perception depends on the gesture, but GestureCards received consistently good ratings. Third, we monitored several participants working with GestureCards on practical development tasks for gesture-based applications; they felt well supported by GestureCards.