Bristol Interaction and Graphics

Recent Seminars

Reality-Based Interaction, Next Generation User Interfaces, and Brain-Computer Interfaces

I will begin with the notion of Reality-Based Interaction (RBI) as a
unifying concept that ties together a large subset of the emerging
generation of non-WIMP user interfaces.  RBI attempts to connect
current paths of research in HCI and to provide a framework that can
be used to understand, compare, and relate these new developments.
Viewing them through the lens of RBI can provide insights for
designers and allow us to find gaps or opportunities for future
development.  I will then discuss work in my research group on a
variety of next generation interfaces such as tangible interfaces, eye
movement-based interaction techniques, and, particularly, our current
work on brain-computer interfaces.


When Neurotechnologies meet Human-Computer Interaction

Brain-Computer Interfaces (BCI) are a type of neurotechnology that estimates a user's mental state from a measure of their brain activity, typically ElectroEncephaloGraphy (EEG). BCI first appeared as a new type of human-computer interface (HCI) that enabled users to interact with computers by means of brain activity alone. This is particularly promising for severely paralyzed users, to provide them with improved communication and control options, but also for healthy users, e.g., for BCI-based gaming or virtual reality control. More recently, BCI have been shown to be a promising companion to HCI in two other directions. First, they can be used as a new tool to objectively assess the ergonomic quality of a given HCI, identifying where and when the interface succeeds or fails based on the user's mental state during interaction. Second, HCI and BCI can be combined to design new tools that visualize brain activity in real time and help people understand more about how the brain works. This presentation will cover these different aspects of the encounter between neurotechnologies and HCI, illustrated with some of our work, and highlight some promising research directions.
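To give a flavour of the kind of mental-state estimation described above, the following is a minimal illustrative sketch, not the speaker's actual pipeline: it runs on synthetic data and classifies a trace as "relaxed" when alpha-band (8-12 Hz) power dominates beta-band (13-30 Hz) power, a common simplification in EEG work.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average power of `signal` within a frequency band, via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic 2-second EEG-like trace sampled at 256 Hz:
fs = 256
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
# A "relaxed" trace dominated by a 10 Hz alpha rhythm plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, (8, 12))    # alpha band: 8-12 Hz
beta = band_power(eeg, fs, (13, 30))    # beta band: 13-30 Hz
state = "relaxed" if alpha > beta else "engaged"
```

Real BCI pipelines add artifact rejection, spatial filtering, and trained classifiers, but the core idea of mapping band power to mental state is the same.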


Allocating Hand Function for Mobile Input

Mobile interaction introduces various physical object manipulation tasks that affect mobile input, because tradeoffs must be made in the motor resources of the user’s hands. Daily activities such as carrying a shopping bag, holding a coffee mug, handling a wallet, and even simply carrying the mobile device impose constraints on hand function for input. In this talk I will present two studies in mobile input examining the effects of 1) gripping the mobile device and 2) manipulating other physical objects related to the user’s daily activities. The studies systematically vary these constraints on hand function to examine the importance of the hand’s motor resources for mobile input. The first study presents a model of the functional area of the thumb that predicts which interface elements are reachable on a mobile touchscreen, and the second presents a manual multitasking test that can explain why input performance differs across seemingly similar interface designs because of their different physical demands on the hands.
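A functional-area model of the kind mentioned above can be caricatured as simple geometry: treat the thumb as sweeping an annulus around the grip point and test whether a target falls inside it. This is an illustrative sketch only; the pivot position and the extension limits below are made-up values, not results from the study.

```python
import math

def reachable(target, pivot=(0.0, 0.0), r_min=20.0, r_max=75.0):
    """Return True if `target` (x, y in mm) lies within the annular sweep
    of the thumb around the grip `pivot`. r_min/r_max are hypothetical
    thumb flexion/extension limits chosen for illustration."""
    d = math.dist(target, pivot)
    return r_min <= d <= r_max

# A mid-screen button is within the modeled sweep; a far corner is not.
near_button = reachable((50.0, 30.0))
far_corner = reachable((150.0, 150.0))
```

An actual model would account for grip posture, device size, and per-user variation rather than a fixed annulus.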


Managing Attention in Ubiquitous Computing Environments

Within the Ubiquitous Computing paradigm the management of attention plays an important role: users have to interact with potentially many devices, which may compete for their attention. Designing for the capabilities and limitations of human attention seems crucial for effective and pleasant interfaces. Surprisingly, not much work so far has addressed this problem specifically. In my talk, I will report on our ongoing and growing (and thus not yet mature) activities to include issues of human attention in mobile and ubiquitous systems. I will provide examples from our research on "interruptions on mobiles", "interaction with large screens and media facades", and "managing attention in situated interaction".


InfiniFace and Largible: End-User Configuration of Distributed Gaming Interfaces and Large Tangible Interfaces

In this talk, two research ideas are presented. INFINIFACE is an application for creating custom gaming systems by interconnecting several phones, tablets, gamepads, or laptops. Games designed to run in Infiniface define different roles; each role has associated inputs and outputs. For instance, steering, pointing, or buttons are inputs, whereas views or listeners are outputs. Players can select their roles and distribute inputs and outputs across the available devices; the resulting custom console is greater than the sum of its parts. LARGIBLES, or large tangible interfaces, employ an entire room as the interaction space. Furthermore, actuated levitating pieces are simulated with flying balls. Advantages of this approach are support for dozens of tangibles, collaboration among many users, and the use of spatial memory.
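The role/input/output decomposition behind an Infiniface-style system can be sketched as a small data model. All role and device names below are hypothetical, chosen only to mirror the examples in the abstract (steering and buttons as inputs, views and listeners as outputs).

```python
# Hypothetical role definitions for a game: each role declares the inputs
# it consumes and the outputs it produces.
roles = {
    "driver": {"inputs": ["steering", "buttons"], "outputs": ["view"]},
    "navigator": {"inputs": ["pointing"], "outputs": ["view", "listener"]},
}

# Available devices start with no capabilities bound to them.
devices = {"phone-1": [], "tablet-1": [], "laptop-1": []}

def assign(role, capability, device):
    """Bind one input or output of a role to a concrete device,
    building up the distributed 'custom console'."""
    caps = roles[role]["inputs"] + roles[role]["outputs"]
    if capability not in caps:
        raise ValueError(f"{role} has no capability {capability!r}")
    devices[device].append((role, capability))

# One possible end-user configuration:
assign("driver", "steering", "phone-1")   # phone becomes the wheel
assign("driver", "view", "laptop-1")      # laptop shows the driver's view
assign("navigator", "pointing", "tablet-1")
```

The point of the sketch is only the separation of concerns: games declare roles, while players decide at run time which physical device realizes each input or output.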