Bristol Interaction and Graphics

Recent Seminars

Peer-to-peer finance: Design rhetorics and the limits of financialization

While information and communication technologies (ICT) have from their earliest days been applied to banking and financial processes, the convergence of emerging ICT innovations and social media data-sharing practices is yielding a transformation of financial processes at the individual and local levels. The prevalence of mobile systems, advanced peer-to-peer cryptographic tools, and innovations in technological and financial literacy practices are enabling experimentation driven by a broad range of ideologies and business models. Citizens taking up social media's tools and practices may be on the brink of transforming the financial system by demanding greater transparency, enabling peer-to-peer assessments of risk and return, and challenging top-down corporate financial information flows; or they may simply be generating data for the benefit of globalized financial institutions.

Recently, peer-to-peer ("p2p") lending systems have emerged as a popular vehicle for unsecured consumer and small-business lending. Based on a study of Zopa Limited, a leading UK p2p lending firm, we demonstrate how design rhetorics and user experience (UX) structures created to appeal to technologically sophisticated early adopters were abandoned in order to attract a larger, more mainstream, less technologically literate user base. This study suggests that successful alternative financial ventures will likely forgo systems that demand significant technological and financial literacy, along with design rhetorics intended to convey messages of user empowerment, and will instead create assemblages and messages stressing stability, trustworthiness, and ease of use by reducing transparency and streamlining the UX.

We argue that this transformation of design rhetorics and business practices, while deeply implicated in processes of financialization, reflects limitations on those processes within the contemporary UK context. Zopa's shift away from transparency in its users' toolkit, and its reintermediation of the firm as a locus of expertise, may be seen as lessening the impact of financialization on its middle-class lending base by using business processes and the UX to shift knowledge burdens from the individual investor back to the firm. This analysis suggests that the "financialization of daily life" is neither uni-directional nor uniform, as firms must shape their user experience, design, and marketing to reflect local levels of trust and competence with both technological and financial innovations.


Spherical Displays for Public Spaces

Spherical touch-sensitive displays create new opportunities for social interaction in public spaces. The shape of a spherical display allows users to face each other during interaction. There is no intrinsically defined front or centre of the display, and displays can be placed in the centre of a flow of pedestrian traffic. 

This talk will explore the possibilities for spherical displays in two divergent contexts: as an information display and as a digital interactive art installation. Through these two public deployments, the talk will discuss the motivation for bringing spherical displays to public spaces, the challenges of evaluating displays in public spaces, methods for combining qualitative and quantitative data, and the exciting implications of non-planar surfaces for social interaction.


Reality-Based Interaction, Next Generation User Interfaces, and Brain-Computer Interfaces

I will begin with the notion of Reality-Based Interaction (RBI) as a unifying concept that ties together a large subset of the emerging generation of new, non-WIMP user interfaces. It attempts to connect current paths of research in HCI and to provide a framework that can be used to understand, compare, and relate these new developments. Viewing them through the lens of RBI can provide insights for designers and allow us to find gaps or opportunities for future development. I will then discuss work in my research group on a variety of next generation interfaces such as tangible interfaces, eye movement-based interaction techniques, and, particularly, our current work on brain-computer interfaces.


When Neurotechnologies meet Human-Computer Interaction

Brain-Computer Interfaces (BCI) are a type of neurotechnology that enables a user's mental state to be estimated from a measure of their brain activity, typically ElectroEncephaloGraphy (EEG). BCI first appeared as a new type of human-computer interface (HCI) that enabled users to interact with computers by means of brain activity alone. This is notably very promising for severely paralyzed users, to provide them with improved communication and control options, but also for healthy users, e.g., for BCI-based gaming or virtual reality control. More recently, BCI have been shown to be a promising companion to HCI in two other directions. First, they can be used as a new tool to objectively assess the ergonomic quality of a given HCI, identifying where and when the interface succeeds or fails based on the user's mental state during interaction. Second, HCI and BCI can be combined to design new tools that visualize brain activity in real time and enable people to understand more about how brain activity works. This presentation will cover these different aspects of the encounter between neurotechnologies and HCI, illustrated by some of our work, and highlight some promising research directions.


Allocating Hand Function for Mobile Input

Mobile interaction introduces various physical object manipulation tasks that affect mobile input, because tradeoffs must be made in the motor resources of the user's hands. Daily activities such as carrying a shopping bag, holding a coffee mug, handling a wallet, and even simply carrying the mobile device impose constraints on hand function for input. In this talk I will present two studies in mobile input examining the effects of 1) gripping the mobile device and 2) manipulating other physical objects related to the user's daily activities. The studies systematically vary these constraints on hand function to examine the importance of the hand's motor resources for mobile input. The first study presents a model of the functional area of the thumb that predicts which interface elements are reachable on a mobile touchscreen; the second presents a manual multitasking test that can explain why input performance differs across seemingly similar interface designs owing to their different physical demands on the hands.