BIG

Bristol Interaction and Graphics

Projects

Mistable, Changibles, Sensabubble, Brain-Computer Interaction, Chronotape, Morphees, MUSTARD, PiVOT, Ultrahaptics
About


Bristol Interaction and Graphics is united by a common interest in creative interdisciplinarity. We act as a hub for collaboration between social scientists, artists, scientists and engineers to combine efficient and aesthetic design. We are particularly interested in areas which couple the design of devices with deployment and evaluation in public settings. Members of the group have expertise in research areas spanning human-computer interaction, visual and tactile perception, imaging, visualisation and computer-supported collaboration.

Recent News


1st Prize award for “Yo” in the SPHERE Dress-Sense competition


BIG researcher Themis Omirou was recently part of the team that won first prize in the SPHERE Dress-Sense competition. The winning project was “Yo”, a system to support and enable users to self-manage symptoms of mental illness and modify behaviours, during and beyond a course of Cognitive Behavioural Therapy. Yo devices promote self-awareness to help users break negative thought cycles and behaviour patterns, resulting in a positive change of mood. The concept storyboard illustrated how the Yo devices encourage self-reflection, human interaction and incremental changes to the user’s daily activities.

Yo comprises the Yo-band and the Yo-bot. The Yo-band continuously collects data about a person’s daily activity, recording when they are still and when they are moving. The wearer can also tap the band when they experience a negative thought. The number of taps and the activity levels are relayed via Bluetooth to the Yo-bot when in range, where the data is combined and reflected in the Yo-bot’s well-being via a graphic image of a sunrise. In effect, the Yo-bot mirrors the well-being of the individual and, depending on their state, suggests activities for them.
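The combination step on the Yo-bot side can be pictured with a short sketch. The function names, weights and thresholds below are purely illustrative assumptions for exposition, not the team’s actual implementation; they only show how a day’s tap count and activity level might be folded into the sunrise graphic.

# Hypothetical sketch of how the Yo-bot might combine data from the Yo-band.
# All names, weights and thresholds are illustrative assumptions.

def wellbeing_score(negative_taps: int, active_minutes: int) -> float:
    """Map a day's tap count and activity level to a 0..1 well-being score."""
    activity_component = min(active_minutes / 60.0, 1.0)   # cap at one active hour
    negativity_component = min(negative_taps / 20.0, 1.0)  # cap at twenty taps
    return max(0.0, min(1.0, 0.5 + 0.5 * activity_component - 0.5 * negativity_component))

def sunrise_stage(score: float) -> str:
    """Translate the score into the sunrise graphic shown on the Yo-bot."""
    if score < 0.33:
        return "pre-dawn"
    if score < 0.66:
        return "sunrise"
    return "full sun"

if __name__ == "__main__":
    print(sunrise_stage(wellbeing_score(negative_taps=5, active_minutes=40)))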

The team comprised seven members: Themis Omirou (BIG), Annie Lywood (UWE), Antonis Vafeas (UOB), Egho Ireo (Bath), Michal Kozlowski (UOB), Kimberly Higgins and Olivia Tiley (Red Maids’ School). The £5,000 prize was presented by the Mayor of Bristol, George Ferguson, at a ceremony held at the Watershed.

 


3D Haptic Shapes at SIGGRAPH Asia

 
Dr Benjamin Long today presented a paper at SIGGRAPH Asia on creating 3D haptic shapes that can be felt in mid-air. The paper will be published in ACM Transactions on Graphics.
 
The method uses ultrasound focussed onto the hands above the device, where it can be felt. By focussing complex patterns of ultrasound, the air disturbances form floating 3D shapes. Visually, the ultrasound patterns have been demonstrated by directing the device at a thin layer of oil, so that the depressions in the surface show up as spots when lit by a lamp.
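The focusing itself follows standard phased-array principles: each transducer is driven with a phase offset that compensates for its distance to the desired focal point, so all the waves arrive in phase there. The sketch below illustrates that general idea only; the array layout, frequency and phase convention are assumptions, and this is not the Bristol implementation.

# Minimal sketch of phased-array focusing, the basic idea behind steering
# ultrasound to a focal point in mid-air. Array layout, frequency and phase
# convention are illustrative assumptions.

import numpy as np

SPEED_OF_SOUND = 343.0      # m/s in air
FREQUENCY = 40_000.0        # 40 kHz, a common airborne-ultrasound frequency
WAVENUMBER = 2 * np.pi * FREQUENCY / SPEED_OF_SOUND

def focus_phases(transducer_positions: np.ndarray, focal_point: np.ndarray) -> np.ndarray:
    """Phase offsets (radians) so every transducer's wave arrives in phase at the focal point."""
    distances = np.linalg.norm(transducer_positions - focal_point, axis=1)
    return (-WAVENUMBER * distances) % (2 * np.pi)

# A 4 x 4 grid of transducers on the z = 0 plane, focusing 20 cm above the centre.
xs, ys = np.meshgrid(np.linspace(-0.03, 0.03, 4), np.linspace(-0.03, 0.03, 4))
positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
print(focus_phases(positions, np.array([0.0, 0.0, 0.2])))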
 
Dr Ben Long, Research Assistant from the Bristol Interaction and Graphics (BIG) group in the Department of Computer Science, said: “Touchable holograms, immersive virtual reality that you can feel and complex touchable controls in free space, are all possible ways of using this system.
 
“In the future, people could feel holograms of objects that would not otherwise be touchable, such as feeling the differences between materials in a CT scan or understanding the shapes of artefacts in a museum.”

YouTube video

Link to paper

VideoHandles receives honorable mention at SUI 2014

YouTube: VideoHandles

Jarrod Knibbe, Sue Ann Seah and Mike Fraser presented their work on VideoHandles at SUI 2014 and received an honorable mention for best short paper.

VideoHandles is a novel interaction technique for searching through action-camera (e.g. GoPro) video collections. Action-cameras are designed to be mounted, switched on and then ignored as they record the entirety of the wearer’s chosen activity. This results in a large amount of footage that may only include a small number of interesting moments. When reviewing the footage later, these interesting moments are hard to locate.

VideoHandles presents a novel solution to this problem, enabling the wearer to search through the footage by replaying actions they performed during the initial capture. For example, a diver goes for a long dive and communicates with their buddy throughout using a series of hand gestures. On one occasion, the diver sees a puffer fish and gestures ‘puffer fish’ to their buddy (a fish swimming motion, followed by a mimicked inflation) so that they can see it too. Later, using VideoHandles, the diver repeats the puffer fish gesture in front of the camera in order to locate that exact moment in their footage. Alongside returning the puffer fish footage, VideoHandles also returns all other moments including the ‘fish swimming’ gesture – a key component of all fish gestures in scuba diving.
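One way to picture the search step is as matching the replayed gesture’s feature sequence against the features extracted from the recorded footage. The sliding-window comparison below is a deliberately simplified assumption used for illustration, not the matching method described in the VideoHandles paper.

# Illustrative sketch of gesture-replay search: slide a query feature sequence
# over the footage's feature sequence and return the best-matching offsets.
# This is an expository assumption, not the VideoHandles algorithm itself.

import numpy as np

def find_gesture(footage_features: np.ndarray, query_features: np.ndarray, top_k: int = 3):
    """Return the frame offsets whose windows best match the query (smaller distance is better)."""
    q_len = len(query_features)
    scores = []
    for start in range(len(footage_features) - q_len + 1):
        window = footage_features[start:start + q_len]
        scores.append((np.mean((window - query_features) ** 2), start))
    scores.sort()
    return [start for _, start in scores[:top_k]]

# Toy example: one motion feature per frame; the "gesture" is a short bump.
footage = np.concatenate([np.zeros(50), np.sin(np.linspace(0, np.pi, 20)), np.zeros(50)])
query = np.sin(np.linspace(0, np.pi, 20))
print(find_gesture(footage[:, None], query[:, None]))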

The VideoHandles work explored action-camera use across a large collection of footage and presented a number of interaction styles and usage scenarios, both enthusiast and professional, including biking, windsurfing and archaeological excavation.

Through the combining glass

YouTube: Through the combining glass

Diego Martinez Plasencia, Florent Berhaut and Sriram Subramanian will present their paper on how semi-transparent mirrors blend together the spaces in front of and behind them. The paper investigates this effect and highlights a whole new range of interactive experiences it enables.

In a museum, people in front of a cabinet would see the reflection of their fingers inside the cabinet, overlapping the exact same point behind the glass. By pointing directly at an exhibit with their reflection, instead of pointing at it through the glass, people could easily discuss the features of the exhibits with other visitors. Pop-up windows can also show additional information about the pieces being touched.
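The overlap works because a mirror shows each point at its geometric mirror image: a fingertip 10 cm in front of the glass appears 10 cm behind it, on top of whatever sits there. A minimal sketch of that reflection geometry follows; the plane position and the coordinates are illustrative assumptions.

# Small geometric sketch: a point reflected across the mirror plane appears at
# its mirror image behind the glass, so a fingertip in front of the cabinet
# overlaps the corresponding point inside it. Plane placement is assumed.

import numpy as np

def mirror_image(point: np.ndarray, plane_point: np.ndarray, plane_normal: np.ndarray) -> np.ndarray:
    """Reflect a 3-D point across the plane defined by plane_point and plane_normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_distance = np.dot(point - plane_point, n)
    return point - 2 * signed_distance * n

# Fingertip 10 cm in front of a vertical glass pane at x = 0.
print(mirror_image(np.array([0.10, 0.3, 1.2]), np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))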

Combining this approach with different display technologies offers interesting possibilities for interaction systems. By placing a projector on top of the cabinet, fingertips could work as little lamps to illuminate and explore dark and sensitive objects. When a hand’s reflection cuts through an object, the projections on visitors’ hands could be used to reveal the inside of the object, which would be visible to any user.

We also demonstrated artistic installations that combine this approach with volumetric displays. Musicians record loops on their digital mixers, and these loops appear to float above the mixer. Musicians can then grab these representations to play them or tweak them with different musical effects.


Green Hackathon Win at ICT4S: Hacking for Sustainability

Daniel Schien and Christopher Weeks were part of a group awarded joint first place at the recent Green Hackathon held as part of the ICT4S (ICT for Sustainability) conference in Stockholm. Spending a day underground in a dismantled nuclear reactor at KTH, the teams competed to develop a project around the theme of “food”. Britons throw away the equivalent of six meals a week, amounting to over 7.2 million tonnes of household food waste a year.

The winning project was “Eat Exchange”, an app to allow people to share their not-quite-past-date food with others. Just about to go on holiday but have a nearly full container of milk in the fridge? Or maybe you stocked up on a 2-for-1 offer last week, but it’s now about to go out of date? The app allows you to offer the item to a network of trusted friends, family and neighbours, and to get text notifications in return when something is being offered.

Eat Exchange is currently in the design phase, but watch this space – perhaps a fully functioning prototype will make its way to ICT4S 2015!

Paper accepted for Frontiers in Human Neuroscience

Congratulations to Hannah Limerick, whose first paper as a PhD student has been accepted for publication in the journal Frontiers in Human Neuroscience:

  • Limerick, H., Coyle, D. & Moore, J.W. (2014). The Experience of Agency in Human-Computer Interactions: A Review. Front. Hum. Neurosci. 8:643. doi: 10.3389/fnhum.2014.00643

Hannah has also given presentations on agency in speech interfaces at two international conferences on cognitive science: ASSC 18 and ICON 2014.

University of Bristol Wins The Best Artwork Award at ISMB 2014

ISMB 2014 has just announced that this year’s Best Artwork Award goes to ‘supraHex’ by Dr Hai Fang and Prof. Julian Gough from the Department of Computer Science, University of Bristol. Full details of the winning entry are available here.

Intelligent Systems for Molecular Biology (ISMB) is the world’s largest bioinformatics/computational biology conference. ISMB 2014 was held in Boston, attracting top computational biology researchers from around the world. As part of this annual conference, the Art & Science Exhibition displays images and videos (called ‘artworks’) that are the results of creative efforts involving scientific concepts or tools. The exhibition aims to open our eyes and minds, both scientifically and aesthetically.


Based on real-world genome-wide expression data, the artwork ‘supraHex’ is automatically produced by an open-source R/Bioconductor package of the same name. The artwork is inspired by the prevalence of hexagonal structures in nature, such as a honeycomb or the columns of the Giant’s Causeway, and captures the mechanistic character of these objects: formation in a broadly self-organising manner.

Apart from the artwork itself, the package offers more: i) a supra-hexagonal map trained via a self-organising learning algorithm; ii) visualisations at and across nodes of the map; iii) partitioning of the map into gene meta-clusters; iv) sample correlation on a 2D sample landscape; and v) overlaying of additional data onto the trained map for exploring relationships between the input and the additional data. It is freely available at http://supfam.org/supraHex.
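For readers unfamiliar with self-organising maps, the sketch below shows one generic training step of the kind of algorithm supraHex builds on. The square grid, decay schedules and toy data are simplifying assumptions; the snippet is not taken from the package itself, which is written in R.

# Generic sketch of one self-organising-map training step. The hexagonal
# topology and internals of supraHex are simplified here to a square grid.

import numpy as np

def som_step(codebook: np.ndarray, grid: np.ndarray, sample: np.ndarray,
             learning_rate: float, radius: float) -> np.ndarray:
    """Pull the best-matching node and its grid neighbours towards one input sample."""
    bmu = np.argmin(np.linalg.norm(codebook - sample, axis=1))      # best-matching unit
    grid_dist = np.linalg.norm(grid - grid[bmu], axis=1)            # distance on the map grid
    influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))       # neighbourhood kernel
    return codebook + learning_rate * influence[:, None] * (sample - codebook)

# Toy run: a 5 x 5 square grid organising random 3-D expression-like vectors.
rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
codebook = rng.random((25, 3))
for t, x in enumerate(rng.random((200, 3))):
    codebook = som_step(codebook, grid, x, learning_rate=0.5 * 0.99 ** t, radius=2.0 * 0.99 ** t)
print(codebook.round(2))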

BIG at the Royal Society

BIG lab members proudly demonstrated Ultrahaptics at the prestigious Royal Society Summer Science Exhibition as part of ‘The hidden world of ultrasonic waves’. The exhibit is a collaboration with the Department of Mechanical Engineering at the University of Bristol and the Electromechanical Research Group at the University of Southampton.

The Royal Society Summer Science Exhibition is an annual display of the most exciting cutting-edge science and technology in the UK. This year’s exhibition, which ran from 1 to 6 July, attracted over 10,000 members of the public and 2,000 school students. The team also presented at the Royal Society’s Evening Soirée, an invite-only black-tie event for VIPs and distinguished Fellows of the Royal Society.


 

BIG Lab goes to Founders Forum


The BIG lab was in esteemed company as Ultrahaptics and Sensabubble were demonstrated at Founders Forum in London.

Founders Forum is a community for the best global entrepreneurs, select inspiring CEOs and key investors in media and technology. Their invite-only forums bring together over 3,000 of the world’s best, brightest and most dynamic digital entrepreneurs, who engage in brainstorming, discussion and hands-on demos of futuristic technology.

This year’s guest list included Eric Schmidt of Google, HRH The Duke of York and many more pioneers in their fields. The BIG demos certainly impressed the audience and fitted in well amongst the fast cars, quadcopters and telepresence robots.

Digital Sustainability: IEEE Computer features our work with The Guardian newspaper

This months IEEE Computer magazine green column features our work on sustainability of digital services that we carried out with The Guardian. It summarises our modelling work of the end-to-end energy consumption of digital services carried out in the (Sympact) project. These models are used to estimate the carbon footprint of servers, networks and user devices for text or video services and have served The Guardian to become the first organisation to use such findings in its annual sustainability report.