Situationally induced impairments and disabilities (SIIDs) can compromise people's use of mobile devices. Factors like walking, divided attention, cold temperatures, low light levels, glare, inebriation, fear, loud noises, or rainwater can make using a device in off-desktop environments challenging and even unsafe. Unfortunately, today's mobile devices know almost nothing about their users' situations, contexts, or environments, instead employing many of the same interaction concepts found on desktop systems from the 1980s. This article presents a decade's worth of work, from 2008 to 2018, on making mobile devices more situationally aware and capable of improving interaction for users experiencing SIIDs. Also presented are a categorized list of factors that can cause SIIDs and a two-dimensional space for characterizing impairments. Seven specific research projects are summarized, which variously address walking, hand grips, divided attention, distraction, inebriation, and rainwater interference. A "sense-model-adapt" design pattern emerges from many of these projects for addressing SIIDs. Taken together, these projects demonstrate how mobile devices can be made more situationally aware and better able to support users' interactions on the go.
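As a purely illustrative sketch (not the authors' implementation), the "sense-model-adapt" pattern can be read as a loop in which raw sensor readings are turned into a coarse model of the user's situation, which then drives an interface adaptation. All names, thresholds, and adaptations below are hypothetical.

```python
# Hypothetical sketch of a sense-model-adapt loop for handling SIIDs.
# Names, thresholds, and adaptations are invented for illustration.
from dataclasses import dataclass

@dataclass
class Situation:
    walking: bool
    ambient_light: float  # assumed to be in lux

def sense(accel_magnitude: float, light_lux: float) -> Situation:
    """Sense: derive a coarse situation model from raw sensor readings."""
    return Situation(walking=accel_magnitude > 1.5, ambient_light=light_lux)

def adapt(ui: dict, situation: Situation) -> dict:
    """Adapt: enlarge targets while walking, raise contrast in low light."""
    if situation.walking:
        ui["button_scale"] = 1.4
    if situation.ambient_light < 50:
        ui["high_contrast"] = True
    return ui

ui_state = {"button_scale": 1.0, "high_contrast": False}
ui_state = adapt(ui_state, sense(accel_magnitude=2.1, light_lux=20))
print(ui_state)  # {'button_scale': 1.4, 'high_contrast': True}
```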
Accessibility is usually the last feature taken into account when designing interactive systems, if it is considered at all [4]. The most important barriers to accessibility are frequently embedded in the very structure of the system and cannot be removed without a painful reengineering process [1]. Too frequently, designers decide to skip deep changes, arguing that accessibility is expensive, time consuming, and only sporadically necessary [2]. If the objective is to produce accessible interactive systems, using design methods that take accessibility into account from the conceptualization of the system can save time and money [5]. In this talk I will present arguments for embracing accessibility as an important feature of the design [3], illustrated with examples of good and bad practices in designing for accessibility.
Quid is a web modeling tool that aims to provide a complete modeling environment for prototyping Web Components in the browser. The approach introduces a concise DSL based on minimal syntax and indentation to express containment relationships. A WYSIWYG editor is also provided to let users explore the design being constructed in real time. Model-driven techniques and code generation are used to explore different implementation choices with different Web Components frameworks.
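The abstract does not give the DSL's concrete syntax; the following sketch only illustrates how an indentation-based notation could encode containment relationships, using invented element names and a minimal Python parser that turns indentation levels into a containment tree.

```python
# Hypothetical indentation-based notation for Web Component containment.
# The element names and the syntax are invented for illustration only.
source = """\
app-shell
    nav-bar
        nav-item
    content-panel
"""

def parse(text, indent=4):
    """Build (name, children) trees from indentation levels."""
    root = ("root", [])
    stack = [(-1, root)]
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip())) // indent
        node = (line.strip(), [])
        while stack and stack[-1][0] >= depth:
            stack.pop()
        stack[-1][1][1].append(node)  # attach to current parent's children
        stack.append((depth, node))
    return root

print(parse(source))
# ('root', [('app-shell', [('nav-bar', [('nav-item', [])]), ('content-panel', [])])])
```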
Notifications have become a core component of the smartphone as our ubiquitous companion. Many of them require only minimal interaction, for which the smartwatch is a helpful companion device. However, its design and placement are influenced by its traditional ancestors. For applications where the user is constrained by a specific usage situation, or performs tasks with both hands simultaneously, interaction with the smartwatch can be cumbersome. In this paper, we propose a wearable armstrap for minimal interaction in long-lived tasks. Placed around the elbow, it lies outside the hands' proximal working space, which reduces interference. Its flexible e-ink display provides screen space for overview information at minimal energy consumption, allowing longer uptime. We designed the wearable for a professional use case, meaning that it can easily be placed over protective clothing, as its flexible round shape adjusts to various diameters. Capacitive touch sensing allows gesture input even under rough conditions, e.g., with gloves.
The IVY workbench is a model-based tool that supports the formal verification of interactive computing systems. It adopts a plugin-based architecture to support a flexible development model. Over the years, the chosen architectural solution revealed a number of limitations, resulting both from the technological deprecation of some of the adopted solutions and from a better understanding of the verification process to be supported. This paper presents the redesign and implementation of the original plugin infrastructure, giving rise to a new version of the tool: IVY 2. It describes the limitations of the original solutions and the new architecture, which resorts to the Java module system to address them.
Simple username/password logins are widely used on the web, but are susceptible to multiple security issues, such as database leaks, phishing, and password re-use. Two-factor authentication is one way to mitigate these issues, but suffers from low user acceptance due to (perceived) additional effort.
We introduce SecuriCast, a method that provides two-factor authentication using WebBluetooth as a secondary channel between an unmodified web browser and the user's smartphone. Depending on the usage scenario and the desired level of security, no device switch and only minimal additional interaction are required from the user. We analyse SecuriCast based on the framework by Bonneau et al., briefly report results from a user study with 30 participants demonstrating the performance and perceived usability of SecuriCast, and discuss possible attack scenarios and extensions.
We introduce GestMan, a cloud-based GESTure MANagement tool to support the acquisition, design, and management of stroke-gesture datasets for interactive applications. GestMan stores stroke-gestures at multiple levels of representation, from individual samples to classes, clusters, and vocabularies, and enables practitioners to process, analyze, classify, compile, and reconfigure sets of gesture commands according to the specific requirements of their applications, prototypes, and interactive systems. Our online tool enables acquisition of 2-D stroke-gestures via an HTML5-based user interface as well as 3-D touch+air and webcam-based gestures via dedicated mappers. GestMan implements five software quality characteristics of the ISO-25010 standard and employs a new mathematical formalization of stroke-gestures as vectors to support efficient computation of various gesture features.
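The paper's mathematical formalization is not reproduced here; as a hedged sketch, a 2-D stroke-gesture can be viewed as a sequence of (x, y) samples flattened into a single vector, from which simple features such as path length or bounding-box diagonal follow directly. Function names below are illustrative, not GestMan's API.

```python
# Sketch: a 2-D stroke-gesture as a flat vector, plus two simple features.
# Function names are illustrative and not GestMan's actual API.
import math

def to_vector(points):
    """Flatten a stroke of (x, y) samples into a single feature vector."""
    return [coord for point in points for coord in point]

def path_length(points):
    return sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))

def bounding_box_diagonal(points):
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    return math.hypot(max(xs) - min(xs), max(ys) - min(ys))

stroke = [(0, 0), (10, 0), (10, 10)]            # raw touch samples of one gesture
print(to_vector(stroke))                        # [0, 0, 10, 0, 10, 10]
print(path_length(stroke))                      # 20.0
print(round(bounding_box_diagonal(stroke), 2))  # 14.14
```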
Production lines are increasingly defined by smaller lot sizes that require workers to memorize frequent changes of assembly instructions. Previous research reports positive results from assistive systems that compensate for increases in workload by providing "just-in-time" instructions. However, there is little evidence of the degree to which workload is alleviated by using assistive technologies. This work explores the potential of electrodermal activity (EDA) as a real-time monitoring tool for the workload imposed by two different assistive systems during manual assembly. In a preliminary user study (N=18), participants were subjected to temporal and mental workload while conducting an assembly task with two different assistive systems: paper instructions and in-situ projections. Our preliminary findings indicate that EDA measures and working performance correlate with workload levels when using both assembly systems. Based on our results, we discuss future research in the area of smart factories that implicitly evaluates workload through EDA in real time to individually adapt assistive technologies at workplaces during manual assembly.
Connected cars can create, store, and share a wide variety of data reported by in-vehicle sensors and systems, but also by mobile and wearable devices, such as smartphones, smartwatches, and smartglasses, operated by the vehicle occupants. This wide variety of driving- and journey-related data creates ideal conditions for vehicular lifelogs, with applications ranging from driving assistance to monitoring driving performance and generating content for lifelogging enthusiasts. In this paper, we introduce a design space for vehicular lifelogging consisting of five dimensions: (1) nature and (2) source of the data, (3) actors, (4) locality, and (5) representation. We use our design space to characterize existing vehicular lifelogging systems, but also to inform the features of a new prototype for the creation of digital content in connected cars using a smartphone and a pair of smartglasses.
Context-oriented programming languages allow programmers to develop context-aware systems that can adapt their behaviour dynamically upon changing contexts. Due to the highly dynamic nature of such systems and the many possible combinations of contexts to which they may adapt, developing such systems is hard. Feature-based context-oriented programming helps tackle part of this complexity by modelling the possible contexts, and the different behavioural adaptations they can trigger, as separate feature models. Tools can further help developers address the underlying complexity of this approach. This paper presents a visualisation tool that is intricately related to the underlying architecture of a feature-based context-oriented programming language and the context and feature models it uses. The visualisation juxtaposes two hierarchical models (a context model and a feature model) and highlights the dependencies between them. An initial user study of the visualisation tool was performed to assess its usefulness and usability.
This paper presents ongoing research on developing a protocol framework for human motion recognition that transforms complex, continuous 3D motion data into a more intuitive 2D trajectory representation based on quaternion visualization. Quaternions are very compact and free from gimbal lock when representing orientations and rotations of objects in 3D space. In this study, the focus is only on arm orientation and not position. In a pilot experimental evaluation, we examine our approach by visually recognizing several biceps curls from quaternion data collected with wireless inertial sensors attached to the human arm. The results of the analysis indicate that the proposed framework makes it possible to represent 3D motion data as a 2D trajectory for continuous motion patterns.
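One plausible reading of such a mapping (not necessarily the paper's exact pipeline) is to rotate a fixed reference vector for the forearm by each quaternion sample and keep only two of the resulting coordinates, yielding a 2-D trajectory over time. The sketch below uses this assumption; the reference vector and sample values are invented.

```python
# Sketch: rotate a reference forearm vector by each quaternion sample and
# keep its (x, y) projection, producing a 2-D trajectory over time.

def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    qv = (0.0, *v)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

# Invented orientation samples from a wrist-worn IMU:
# identity, then a 90-degree rotation about the z axis.
samples = [(1.0, 0.0, 0.0, 0.0), (0.7071, 0.0, 0.0, 0.7071)]
forearm = (1.0, 0.0, 0.0)  # assumed reference direction of the forearm
trajectory_2d = [rotate(q, forearm)[:2] for q in samples]
print(trajectory_2d)  # [(1.0, 0.0), (~0.0, ~1.0)]
```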
The Internet of Things (IoT) is a technology paradigm that enables the interaction of devices and communication technologies with embedded software, integrating different areas and disciplines. It is built from smart objects that rely on interaction and information exchange among things, which can lead to development challenges. This paper presents research towards the definition of a framework to support the engineering of IoT software systems. From a literature review, we introduce six IoT facets representing knowledge areas and topics to consider while engineering IoT software systems. The proposed framework uses them to provide a multifaceted perspective on the IoT problem domain. The framework comprises three steps, going from project characterization to a strategy to support decision-making during development. A real case scenario of a shrimp farm is presented to illustrate its use.
Modern User Interfaces (UIs) are increasingly expected to be plastic, in the sense that they retain a constant level of usability even when subjected to context changes at runtime. Adaptive UIs (AUIs) have been promoted as a solution for context variability due to their ability to automatically adapt to the context-of-use at runtime. However, the development of AUIs is a complex task, as different aspects such as context monitoring and UI adaptation have to be supported. In previous work, model-driven engineering approaches were proposed to support the development of AUIs in a systematic and efficient manner. However, existing model-driven development approaches for AUIs face challenges regarding flexibility, reusability, and compatibility with de facto standard UI frameworks like Angular, which hinder their industry-wide usage and adoption in practice. To address this problem and explore an alternative approach, we propose a component-based development framework for AUIs (CoBAUI). CoBAUI defines a modular framework for supporting the development of AUIs and consists of various components covering aspects like context monitoring and UI adaptation at the widget level. The CoBAUI framework was implemented based on Angular and aims to support the development of AUIs through highly reusable and flexible components. We demonstrate the benefit of our CoBAUI framework with a case study of an AUI for a library web application.
We present ComPat, an open-source visual editor enabling end users to compose graphical user interfaces based on the composite pattern, a common software engineering design pattern: any widget or group of widgets is treated the same way as a single instance of the same type of widget. ComPat exploits a grid of rows and columns, where each cell, regulated by layout constraints, is populated either by direct import of widgets from a palette or by pattern application. To compose graphical user interfaces, any portion can be cut, copied, pasted, and treated as a single object thanks to the composite pattern, thus facilitating reusability. Any portion becomes a pattern that can be applied either by direct instantiation or by rewriting. ComPat automatically generates a Java Swing graphical user interface corresponding to the composition and stores its definition in a User Interface Description Language based on an XML Schema.
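ComPat itself generates Java Swing code; the following language-agnostic sketch merely illustrates the composite pattern it builds on: a group of widgets exposes the same interface as a single widget, so any portion of the interface can be cut, copied, pasted, and rendered as one object.

```python
# Minimal illustration of the composite pattern for UI composition
# (not ComPat's generated code, which targets Java Swing).

class Widget:
    def __init__(self, name):
        self.name = name
    def render(self, indent=0):
        print(" " * indent + self.name)

class Group(Widget):
    """A group of widgets is itself a Widget, so both are treated uniformly."""
    def __init__(self, name, children=()):
        super().__init__(name)
        self.children = list(children)
    def render(self, indent=0):
        print(" " * indent + self.name)
        for child in self.children:
            child.render(indent + 2)

form = Group("login-form", [Widget("username-field"),
                            Widget("password-field"),
                            Widget("submit-button")])
window = Group("window", [form])  # the whole form is reused as one object
window.render()
```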
Software systems engineering involves many engineers, often from different engineering disciplines. Efficient collaboration among these engineers is a vital necessity. Tool support for such collaboration is often lacking, especially with regard to consistency between different engineering artifacts (e.g., between model and code or between requirements and specifications). Current collaboration tools, such as version control systems, are not able to address these cross-artifact consistency concerns. The consequence is unnecessarily complex consistency maintenance during engineering. This paper explores the consistent handling of engineering artifacts during collaborative engineering. The work presumes that all engineers collaborate through a joint, cloud-based engineering environment and that engineering artifacts are continuously synchronized with this environment. The artifacts can be read and modified both by engineers and by analysis mechanisms such as a consistency checker. The paper enumerates different consistency checking scenarios that arise during such collaboration.
Performing diagnosis or exploratory analysis during the training of deep learning models is challenging but often necessary for making a sequence of decisions guided by incremental observations. Currently available systems for this purpose are limited to monitoring only logged data that must be specified before the training process starts. Each time new information is desired, a stop-change-restart cycle of the training process is required. These limitations make interactive exploration and diagnosis tasks difficult, imposing long, tedious iterations during model development. We present a new system that enables users to perform interactive queries on live processes, generating real-time information that can be rendered in multiple formats on multiple surfaces as several desired visualizations simultaneously. To achieve this, we model various exploratory inspection and diagnostic tasks for deep learning training processes as specifications for streams, using a map-reduce paradigm with which many data scientists are already familiar. Our design achieves generality and extensibility by defining composable primitives, a fundamentally different approach from that used by currently available systems.
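As a rough illustration of such composable stream primitives (the names are invented, not the system's actual API), a query over live training events can be expressed as a chain of map, filter, and reduce steps, here computing the mean validation loss from a small in-memory stand-in for the training stream.

```python
# Sketch of composable map/reduce primitives over a stream of training
# events; names are illustrative, not the system's actual API.
from functools import reduce

def source(events):
    yield from events          # stand-in for a live training process

def map_stream(fn, stream):
    return (fn(e) for e in stream)

def filter_stream(pred, stream):
    return (e for e in stream if pred(e))

def reduce_stream(fn, stream, init):
    return reduce(fn, stream, init)

# Example query: mean of the loss over validation steps only.
events = [{"step": 1, "phase": "val", "loss": 0.9},
          {"step": 2, "phase": "train", "loss": 0.8},
          {"step": 3, "phase": "val", "loss": 0.7}]
val_losses = map_stream(lambda e: e["loss"],
                        filter_stream(lambda e: e["phase"] == "val", source(events)))
total, count = reduce_stream(lambda acc, x: (acc[0] + x, acc[1] + 1), val_losses, (0.0, 0))
print(round(total / count, 3))  # 0.8
```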
Body-based gestures, such as those acquired by a Kinect sensor, today benefit from efficient tools for their recognition and development, but less so for automated reasoning. To facilitate this activity, an ontology for structuring body-based gestures, based on the user, the body and body parts, gestures, and the environment, is designed and encoded in the Web Ontology Language (OWL) as modelling triples (subject, predicate, object). As a proof of concept, and to feed this ontology, a gesture elicitation study collected 24 participants × 19 referents for IoT tasks = 456 elicited body-based gestures, which were classified and expressed according to the ontology.
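The ontology's actual vocabulary is not reproduced here; as a small sketch under an invented namespace, a single elicited gesture can be stored as (subject, predicate, object) triples, for instance with the rdflib library.

```python
# Sketch: expressing one elicited body-based gesture as RDF triples.
# The namespace and property names are invented for illustration;
# they are not the ontology's actual vocabulary. Requires rdflib.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/gesture#")
g = Graph()

g.add((EX.gesture42, EX.performedBy, EX.participant07))
g.add((EX.gesture42, EX.usesBodyPart, EX.rightHand))
g.add((EX.gesture42, EX.mapsToReferent, Literal("turn on the lights")))

print(g.serialize(format="turtle"))
```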
Promotion and demotion form a typical adaptive navigation technique that makes a page or a link easier or harder to select by emphasizing or de-emphasizing it depending on its popularity. This technique, which was successfully applied to adaptive web sites, is now generalized to mainstream graphical user interfaces by introducing bimotion user interfaces, which constantly and dynamically perform adaptivity by promoting the most predicted widgets and demoting the least predicted ones, either in context or in a separate prediction window. Promoted widgets that are used less frequently become demoted, and demoted widgets that are used more frequently become promoted.
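A minimal sketch of such frequency-driven promotion and demotion (the ranking rule, threshold, and names are assumptions, not the paper's algorithm) could track widget usage counts and split the widget set into promoted and demoted groups:

```python
# Sketch: frequency-driven promotion and demotion of widgets.
# The ranking rule and threshold are illustrative assumptions.
from collections import Counter

usage = Counter()

def record_use(widget):
    usage[widget] += 1

def classify(widgets, promote_top=2):
    """Promote the most-used widgets, demote the rest."""
    ranked = [w for w, _ in usage.most_common()] + [w for w in widgets if w not in usage]
    promoted = ranked[:promote_top]
    demoted = [w for w in widgets if w not in promoted]
    return promoted, demoted

widgets = ["save", "print", "export", "settings"]
for w in ["save", "save", "print", "export"]:
    record_use(w)

print(classify(widgets))  # (['save', 'print'], ['export', 'settings'])
```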
Smartwatches are gaining popularity on the market, offering a set of features comparable to smartphones in a wearable device. This nascent technology brings new interaction paradigms and challenges for blind users, who have difficulties dealing with touchscreens. Among the variety of tasks that must be studied, text entry is analyzed here, considering that existing solutions may be unsatisfactory (such as voice input) or even unfeasible (such as tiny QWERTY keyboards) for a blind user. More specifically, this paper presents a study of possible solutions for composing a Braille cell on smartwatches. Five prototypes were developed and different feedback features were proposed. These were evaluated with seven specialists in a study that resulted in a qualitative analysis of which strategies can be most useful for blind users in Braille text entry.
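For context, a Braille cell consists of up to six raised dots, and composing one on a touchscreen amounts to selecting a subset of those dots; the sketch below decodes such a subset into a character using a small excerpt of the standard Braille alphabet. It illustrates the target encoding only, not any of the five prototypes' input handling.

```python
# Sketch: decoding a composed Braille cell (a set of raised dots 1-6) to a
# character. Only a few letters of the standard alphabet are listed.
BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode(raised_dots):
    return BRAILLE.get(frozenset(raised_dots), "?")

print(decode({1, 4}))     # 'c'
print(decode({2, 3, 6}))  # '?' (not in this small excerpt)
```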
The Internet of Things (IoT) represents a promising paradigm for the integration of communication devices and technologies, leading to a shift from the classical view of development. The engineering of such systems presents challenges, since it involves different kinds of system interaction and connectivity among things. Therefore, it is necessary to revisit our way of engineering software systems and begin to consider the particularities required by these new types of software systems. The goal of our research is to build evidence-based software technologies to support multidisciplinary decision-making in the engineering of IoT applications.
The Business Process Model and Notation (BPMN) focuses on functional processes, so the design of the user interface generally depends on the subjective experience of the analyst. This thesis proposes a new method to generate interfaces from BPMN models. The idea is to identify rules that map BPMN elements to interfaces in existing real projects. We have analyzed seven Bizagi projects, considering five BPMN patterns, to generalize a list of rules. Apart from BPMN primitives, some rules depend on elements of Class Diagrams to determine how to generate the interfaces. When the rules offer several alternatives for generating the interfaces, we need unambiguous semantics to specify which alternative to use. We propose extending the BPMN model with new stereotypes that specify when to use each alternative. Which alternatives could improve usability, among all the possibilities, is also a target of study in the thesis.
Workshops are a great opportunity for identifying innovative topics of research that might require discussion and maturation. This paper summarizes the outcomes of the workshops track of the 11th Engineering Interactive Computing Systems conference (EICS 2019), held in Valencia, Spain, on 18-21 June 2019. The track featured three workshops, one half-day, one full-day, and one two-day, each focused on specific topics of ongoing research in engineering usable and effective interactive computing systems. In particular, the discussed topics include novel forms of interaction and emerging themes in HCI related to new application domains, more efficient and enjoyable interaction possibilities associated with smart objects and smart environments, and challenges faced in designing, developing, and using interactive systems involving multiple stakeholders.