Jacob O. Wobbrock, University of Washington
Situationally Aware Mobile Devices for Overcoming Situational Impairments
The computer user of today operates in situations very unlike those of the computer user from the 1980s, when PCs sat atop desks in staid office environments that provided ample lighting, comfortable seating, controlled temperatures, and minimal noise or distraction. Computer users of today, by contrast, are likely to use a touch screen device, perhaps while on the go, perhaps while outside, perhaps while surrounded by attention-grabbers like people, traffic lights, curbs, and signs. Users might be trying to interact while carrying luggage, wearing gloves, squinting in bright sunlight, or wiping rainwater from their screens. Unfortunately, today's mobile devices know almost nothing about these challenging situations, and offer even less to users by way of help or support for interaction. A useful perspective is to view these challenges through the lens of ability, disability, and accessibility, given that these notions involve the interplay of personal, environmental, and social factors. In this view, people can be "situationally impaired," as their abilities and resources for action are diminished by context. In this talk, I present the conceptual and historical foundations for situationally induced impairments and disabilities, including the rightly controversial aspects of this notion. I define a space of impairments that broadens accessibility to include everyone, not just people with disabilities. Having established the foundations for this perspective, I present four projects in which mobile devices are given enhanced situation- and user-awareness (without adding custom sensors), resulting in new capabilities and improved interactions. I demonstrate how, by increasing devices' situation awareness, interfaces can better support users in mobile contexts.
By the end of my talk, I hope to have convincingly motivated the need for our mobile devices to become more situationally aware, while acknowledging the privacy and ethical challenges that such awareness raises.
Jacob O. Wobbrock is a Professor of human-computer interaction (HCI) in the Information School and an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington in Seattle, WA, USA. His work seeks to scientifically understand people's interactions with computers and information, and to improve those interactions through design and engineering, especially for people with disabilities. His specific research topics include input & interaction techniques, human performance measurement & modeling, HCI research & design methods, mobile computing, and accessible computing. Prof. Wobbrock has co-authored over 140 peer-reviewed publications and received 23 paper awards, including 7 best papers and 8 honorable mentions from ACM CHI. For his work in accessible computing, especially his development of Ability-Based Design, he received the 2017 SIGCHI Social Impact Award. He will also be inducted into the ACM CHI Academy at CHI 2019 in Glasgow, Scotland. His work has been covered in The New York Times, The Seattle Times, The Huffington Post, M.I.T. Technology Review, USA Today, and other outlets. He is the recipient of an NSF CAREER award and 7 other National Science Foundation grants. He serves on the editorial board of ACM Transactions on Computer-Human Interaction. His doctoral advisees have been hired at Harvard, Cornell, Colorado, Washington, Brown, Simon Fraser, and elsewhere. Prof. Wobbrock is also an entrepreneur—he was the venture-backed founding CEO of AnswerDash for nearly three years (www.answerdash.com). Prof. Wobbrock received his B.S. with Honors in Symbolic Systems and his M.S. in Computer Science from Stanford University in 1998 and 2000, respectively. He received his Ph.D. in Human-Computer Interaction from Carnegie Mellon University in 2006. Upon graduation, he received CMU’s School of Computer Science Distinguished Dissertation Award.
Julio Abascal, University of the Basque Country
Engineering Inaccessible Computing Systems
Accessibility is usually the last feature taken into account when designing interactive systems, if it is considered at all. The most important barriers to accessibility are frequently embedded in the very structure of the system and cannot be removed without a painful reengineering process. Too frequently, designers decide to skip deep changes, arguing that accessibility is expensive, time-consuming, and only sporadically necessary.
If the objective is to produce accessible interactive systems, using design methods that take accessibility into account from the conceptualization of the system can save time and money. In this talk I will present arguments for embracing accessibility as an important feature of the design, illustrated with examples of good and bad practices in designing for accessibility.
Julio Abascal has a BSc in Physics (Universidad de Navarra, 1978) and a PhD in Informatics (Universidad del País Vasco-Euskal Herriko Unibertsitatea, 1987). He is a Professor in the Computer Architecture and Technology Department of the University of the Basque Country (Spain), where he has worked since 1981. In 1985 he co-founded the Egokituz Laboratory of Human-Computer Interaction for Special Needs.
His research activity focuses on the application of Human-Computer Interaction methods and techniques to Assistive Technology, including the design of ubiquitous, adaptive, and accessible user interfaces. He is interested in Assistive Human-Robot Interaction for Alternative and Augmentative Mobility and Manipulation. He also leads a research group that develops methods and tools to enhance sensory, physical, and cognitive accessibility to the web.
Since 1991 he has been the Spanish representative on IFIP Technical Committee 13 on HCI, and he was the founding chairman (1993–99) of IFIP WG 13.3 “HCI and Disability”. He served as a member of the Management Committee of COST 219 ter, “Accessibility for All to Services and Terminals for Next Generation Networks”, and previously of COST 219 bis, “Telecommunications: Access for Disabled and Elderly People”. Since 1990 he has served as an advisor, reviewer, and evaluator for various EU research frameworks (TIDE, TAP, IST, etc.).
Pedro J. Molina, Metadev
Modeling and producing User Interfaces with Web Components in Quid
In the last 30 years, many tools have been created for building UIs. Desktop, Web, mobile, IoT devices, and Augmented Reality demand different approaches to prototyping and construction. Models are a natural way to describe UIs. On the other hand, many UI architectures have been explored in commercial products. Web Components are an emerging browser standard led by the W3C.
Device fragmentation is a big problem nowadays, forcing extra development costs. Multi-channel and omni-channel experiences allow users to complete their day-to-day tasks using different devices, jumping from one to another until their tasks are done. In this context, Quid will be presented as a DSL for prototyping abstract User Interfaces.
Pedro J. Molina is the founder of Metadev S.L., a Seville-based company devoted to creating tools for developers using DSLs and code-generation techniques. He holds a PhD in Computer Science, specializing in Conceptual Modeling and Code Generation for User Interfaces (Technical University of Valencia, 2003). He has published more than 20 research publications in the field and 2 books, and holds 3 patents in the USPTO. With 20 years working in software, he has experience as a CTO, in research & development, as a software architect, and as a developer for companies such as Icinetic and Capgemini.