Bristol Interaction and Graphics

Future Seminars

An overview of Sentiment Analysis: from text to videos

The emergence of online social networks has made available an enormous amount of data containing users’ opinions on the most varied subjects (e.g., products, services, political parties), and in varied forms, ranging from simple text snippets, as on Twitter, to images and even video (e.g., YouTube). In this scenario, opinion mining/sentiment analysis emerges as a means to determine the polarity of users’ opinions, classifying them as positive or negative. As an outcome, it is possible to determine the average approval or rejection of an entity (a product, a service, a company or even an election candidate) based on the opinions of many users. Note that this analysis can also focus on users’ moods, feelings or emotions, and on how people influence each other in online communities.

Sentiment Analysis (SA) has traditionally been based on text. In recent years, however, the area has expanded to multimodal SA (text, images and/or videos), which remains a very challenging problem. In this talk I will introduce basic concepts and applications of textual SA (my research area) and briefly discuss some work on multimodal SA.
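As a minimal illustration of the polarity classification described above, consider a toy lexicon-based scorer. The word lists here are illustrative assumptions, not a real sentiment lexicon or any system discussed in the talk:

```python
# Toy lexicon-based sentiment polarity classifier.
# The word sets below are made up for illustration only.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def polarity(text: str) -> str:
    """Classify a text snippet as 'positive', 'negative' or 'neutral'
    by counting matches against the two word sets."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this great product"))  # -> positive
```

Real textual SA systems go far beyond this sketch (handling negation, intensifiers, sarcasm and context), but the input/output contract — text in, polarity label out — is the same.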


Recent Seminars

Design for Peripheral Interaction

Interactive devices such as mobile phones play an important, but often needlessly obtrusive, role in everyday life. This could be prevented if people were able to interact with these devices without focused attention. This talk will address ‘peripheral interaction design’: interaction design that can be used effortlessly as part of people’s everyday routines without inappropriately attracting attention. I will present a number of peripheral interaction design examples developed at the Industrial Design department of the Eindhoven University of Technology in the Netherlands.


Displays in our world are but a canvas to our imagination

With apologies to Henry Thoreau, the world is seeing new uses of displays all around us. These displays are on and around our body, fixed and mobile, bleeding into the very fabric of our day-to-day lives. Displays come in many forms, from smart watches, head-mounted displays and tablets to fixed, mobile, ambient and public displays. However, we know more about the displays connected to our devices than they know about us. Displays, and the devices they are connected to, are largely ignorant of the context in which they sit, including physiological, environmental and computational state. They don’t know about the physiological differences between people, the environments they are being used in, or whether they are being used by one person or several.

In this talk we review a number of aspects of displays in terms of how we can model, measure, predict and adapt how people use displays in a myriad of settings. With modeling, we seek to represent the physiological differences between people and use these models to adapt and personalize designs and user interfaces. With measurement and prediction, we seek to employ computer vision and depth-sensing techniques to better understand how displays are used. And with adaptation, we aim to explore subtle techniques to support the diverging input and output fidelities of display devices. The talk draws on a number of studies from recent UMAP, IUI, AVI and CHI papers.

Our ubicomp user interface is complex and constantly changing, affording us an ever-changing computational and contextual edifice. As part of this, displays need to be better understood as an adaptive display ecosystem blending with our world, rather than simply as pixels.


Motion synthesis by spatial relationship descriptors

In computer animation and robotics, synthesizing movements such as tangling limbs, passing through constrained environments, or wrapping and winding cloth or ropes around objects is considered a difficult problem. Making use of descriptors based on spatial relationships is essential for synthesizing such movements. In this talk, I will describe the research done in our group on synthesizing complex movements using spatial relationship descriptors. This includes synthesizing winding and knotting movements using Gauss linking numbers, synthesizing wrapping movements using electrostatic flux, retargeting movements using Laplacian coordinates, and classifying and recognizing scenes using the medial axis. I will then describe the future work I am planning, including motion planning and physical simulation based on these descriptors, and physically simulating movements that involve close interactions.
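The Gauss linking number mentioned above can be approximated numerically for a pair of closed curves. The sketch below is a standard midpoint-rule discretisation of the double Gauss integral over two polylines — an illustration of the descriptor, not the group’s implementation:

```python
import math

def linking_number(curve_a, curve_b):
    """Approximate the Gauss linking number of two closed 3D polylines.

    Each curve is a list of (x, y, z) vertices; the last vertex is
    implicitly joined back to the first. Uses a midpoint-rule
    discretisation of the Gauss double integral."""
    def segments(curve):
        n = len(curve)
        for i in range(n):
            p, q = curve[i], curve[(i + 1) % n]
            mid = tuple((p[k] + q[k]) / 2 for k in range(3))
            d = tuple(q[k] - p[k] for k in range(3))
            yield mid, d

    total = 0.0
    for m1, d1 in segments(curve_a):
        for m2, d2 in segments(curve_b):
            r = (m1[0] - m2[0], m1[1] - m2[1], m1[2] - m2[2])
            cross = (d1[1] * d2[2] - d1[2] * d2[1],
                     d1[2] * d2[0] - d1[0] * d2[2],
                     d1[0] * d2[1] - d1[1] * d2[0])
            dist3 = (r[0] ** 2 + r[1] ** 2 + r[2] ** 2) ** 1.5
            total += (r[0] * cross[0] + r[1] * cross[1] + r[2] * cross[2]) / dist3
    return total / (4 * math.pi)

# Two interlinked unit circles (a Hopf link): |linking number| = 1.
n = 200
circle_a = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n), 0.0)
            for i in range(n)]
circle_b = [(1.0 + math.cos(2 * math.pi * i / n), 0.0, math.sin(2 * math.pi * i / n))
            for i in range(n)]
print(round(abs(linking_number(circle_a, circle_b))))  # -> 1
```

Because the linking number is a topological invariant, it is insensitive to how the curves deform as long as they do not pass through each other — which is what makes it useful as a descriptor for winding and knotting motions.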


Making senses: an enactive approach to interaction

The enactive movement in the philosophy of mind hinges on the idea that our understanding of the world is based on our bodily interaction with it. This idea - that we are embodied actors making sense of our surroundings directly through our actions and experiences - moves away from more traditional views of cognition. In enactivism, knowledge is constructed through sensorimotor interactions: sensorimotor skills must be acquired through active, self-initiated exploration.

In what way do sensorimotor contingencies shape our interactions, particularly when interacting with the virtual rather than the real? Can we learn new sensorimotor skills that can replace or even augment our existing senses? What is the role of neuroplasticity in interaction? And can this be harnessed in order to create more useful and meaningful interfaces? This talk will explore the current evidence and suggest avenues for exploration.


Stochastic Sampling for Signal Reconstruction, Integration and Representation

I will summarise my understanding and analysis of two classes of problems whose manifestations are encountered across various engineering disciplines, including branches of computer graphics and vision. The first of these, which I will refer to as "the reconstruction problem", involves hypothesising functions over multi-dimensional domains given limited sets of observations (or samples) of those functions. I will motivate a 'gray-box' analysis technique [1] for light transport simulation, which exploits knowledge of the underlying physical (optics) processes.

The second class of problem is numerical integration using evaluations of the signal (integrand) at stochastically determined locations. I will present results of my 'black-box' analyses [2,4] of numerical integration for potentially high-dimensional, non-stationary, discontinuous integrands. These analyses make no special assumptions about the processes from which the integrand stems.
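The variance behaviour that such analyses characterise can be seen in a minimal sketch. The integrand, sample counts and trial counts below are assumptions for illustration, not taken from the papers [2,4]: a stratified ("jittered") sampler typically has much lower variance than plain uniform sampling for a smooth 1D integrand, while both remain unbiased.

```python
import random
import statistics

def mc_uniform(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_jittered(f, n, rng):
    """Stratified (jittered) estimate: one random sample per stratum."""
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

f = lambda x: x * x  # smooth test integrand; true integral over [0, 1] is 1/3
rng = random.Random(0)
trials, n = 500, 16

uni = [mc_uniform(f, n, rng) for _ in range(trials)]
jit = [mc_jittered(f, n, rng) for _ in range(trials)]
print(statistics.mean(jit))                                   # close to 1/3
print(statistics.pvariance(jit) < statistics.pvariance(uni))  # True
```

For a smooth integrand, stratification drives the error down at a much faster rate in the sample count than plain uniform sampling; for discontinuous or non-stationary integrands — the setting the talk addresses — the picture is considerably more subtle.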

In addition to the above, I will briefly present some work [3] on using stochastic sampling to build alternative representations of images that lend themselves to targeted applications, such as user-assisted soft selection in images.


  1. Belcour, L., Soler, C., Subr, K., Holzschuch, N., & Durand, F. (2013). 5D covariance tracing for efficient defocus and motion blur. ACM Transactions on Graphics (TOG), 32(3), 31.
  2. Subr, K., & Kautz, J. (2013). Fourier analysis of stochastic sampling strategies for assessing bias and variance in integration. ACM Transactions on Graphics (TOG), 32(4).
  3. Subr, K., Paris, S., Soler, C., & Kautz, J. (2013). Accurate binary image selection from inaccurate user input. Computer Graphics Forum, 32(2pt1), 41-50.
  4. Subr, K., Nowrouzezahrai, D., Jarosz, W., Kautz, J., & Mitchell, K. (2014). Error analysis of estimators that use combinations of stochastic sampling strategies for direct illumination. Computer Graphics Forum, 33(4), 93-102.