- Friday, May 29, 2015, 5-8pm
- 2024 Elings Hall, UC Santa Barbara
- Information and Directions at http://ilab.cs.ucsb.edu


At the "Four Eyes" Lab, directed by Matthew Turk and Tobias Höllerer, we pursue research in the four I's of Imaging, Interaction, and Innovative Interfaces.


Our open house this year is fully integrated with the Media Arts and Technology End of the Year Show, "Open Sources".


As part of this show, members of the Four Eyes Lab will be describing and demonstrating several ongoing research projects. Feel free to drop by any time between 5 and 8pm to have a look at any projects that interest you, talk to the lab's faculty, students, and visitors, and partake of some refreshments.



List of Presented Projects and Presenters:

Surround-View Augmented Stereo Panoramas and AlloSphere Unity Integration

Donghao Ren, Tibor Goldschwendt, Tobias Höllerer, Matthew Wright, JoAnn Kuchera-Morin

Work by Four Eyes students Donghao Ren and Tibor Goldschwendt will be featured as part of AlloSphere demo sessions starting every half hour. Come early to claim tickets for a particular time slot! Four Eyes work to be shown in the AlloSphere includes the integration of an interactive information visualization tool, iVisDesigner, interactive surround scenes rendered by the Unity game engine, panoramic video support, and volume rendering demonstrations.


Mobile Augmented Reality

Chris Sweeney, Benjamin Nuernberger

We will demonstrate simple mobile augmented reality running on Android devices such as a Tango tablet or Google Glass. With our algorithms, a moving device can determine its precise location in a room, allowing the user to interact with virtual objects that appear world-stabilized on the display.

Diner's Dilemma

James Schaffer and John O'Donovan

This demo is the latest in a series of studies about the abstract trust game "Diner's Dilemma". Visitors can play the live version of the game and see if they can gain the trust of their co-diners, without losing too many points.
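For visitors unfamiliar with the game: each diner privately orders a cheap or an expensive dish, and the bill is split evenly, so ordering expensive shifts cost onto the group. A minimal sketch of this payoff structure (with assumed, purely illustrative prices and utilities; the demo's actual parameters may differ) shows why the dilemma arises:

```python
# Minimal sketch of the n-player "Diner's Dilemma" payoff structure.
# Prices and enjoyment values below are illustrative assumptions only.

CHEAP_COST, CHEAP_VALUE = 10.0, 12.0   # cheap dish: worth more than it costs
EXP_COST, EXP_VALUE = 20.0, 18.0       # expensive dish: tastier, but not worth its price

def payoffs(orders):
    """orders: list of 'cheap'/'expensive' choices, one per diner.
    Returns each diner's utility = enjoyment - equal share of the bill."""
    n = len(orders)
    bill = sum(EXP_COST if o == "expensive" else CHEAP_COST for o in orders)
    share = bill / n
    return [(EXP_VALUE if o == "expensive" else CHEAP_VALUE) - share
            for o in orders]

# A table of cooperators beats a table of defectors ...
all_cheap = payoffs(["cheap"] * 4)
all_expensive = payoffs(["expensive"] * 4)
# ... yet a lone defector outscores their cooperating co-diners.
mixed = payoffs(["expensive", "cheap", "cheap", "cheap"])
```

With these numbers, everyone ordering cheap leaves each diner better off than everyone ordering expensive, yet a lone defector does better still at their co-diners' expense, which is the tension the live game's trust-building plays on.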

StreamScope: Social Stream Analyzer

Byungkyu (Jay) Kang, Tobias Höllerer and John O'Donovan

Modern Internet technology has triggered a new paradigm in human communication: nowadays, many people converse online, including with strangers, as much as they do offline. StreamScope is an interactive web-based visualization that represents a landscape of topic-specific conversations and real-world trending events on microblog streams. The framework is designed to help data analysts gain insights into social media data through visual analytics. Its key contribution is a time-variant summarization of a real-time message stream that users can explore through interactive user interfaces.

Botivizm: Using Online Bots for Social Good

Saiph Savage

"Are you trying to fight racism, misogyny, or corruption, but just don't have enough people to help you?" We present Botivizm, an online platform that helps you assemble crowds of volunteers to fight the social problem of your choice! Come use Botivizm and transform the world!

Participatory Stoves: Designing Renewable Energy Technologies for the Rural Sector

Saiph Savage

Wood is a widely available form of renewable energy. In rural Mexico it is the main source of energy, used not only for cooking but also to heat houses and provide lighting. Most Mexican villagers use wood-fired stoves, but most of these appliances pose health hazards and are harmful to the environment. Modern stoves can improve combustion, reduce smoke, and transfer heat more efficiently. However, the social adoption of efficient stoves is non-trivial. Our study shows that with appropriate technology we can: 1) understand the cultural and social aspects of how villagers use renewable energy; and 2) design technology that, by considering a region's traditions, stays in use long term.

iVisDesigner: Expressive and Interactive Design of Information Visualizations

Donghao Ren, Tobias Höllerer

iVisDesigner (Information Visualization Designer) is a platform in which users can interactively create customized information visualization designs for a wide range of data, without resorting to textual programming.

Multi-level Image Segmentation

Xiaoli Zhao, Matthew Turk

Multilevel image segmentation divides an image into multiple objects and the background. To improve the effectiveness and efficiency of multilevel thresholding segmentation, we propose an algorithm based on 2D K-L divergence combined with Particle Swarm Optimization.
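As a rough illustration of the search component, the sketch below runs a plain Particle Swarm Optimization over candidate threshold vectors. For brevity the fitness is Otsu's between-class variance rather than the 2D K-L divergence criterion this project actually uses, and every name and parameter value is an assumption, not the project's code.

```python
# Illustrative sketch: PSO searching for k histogram thresholds.
# Fitness here is Otsu's between-class variance (a stand-in for the
# project's 2D K-L divergence criterion); swarm parameters are assumed.
import random

def otsu_fitness(hist, thresholds):
    """Between-class variance of the histogram split at the given thresholds."""
    bounds = [0] + sorted(thresholds) + [len(hist)]
    total = sum(hist)
    mean = sum(i * h for i, h in enumerate(hist)) / total
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(hist[lo:hi])
        if w == 0:
            continue
        mu = sum(i * hist[i] for i in range(lo, hi)) / w
        var += (w / total) * (mu - mean) ** 2
    return var

def pso_thresholds(hist, k=2, particles=20, iters=60, seed=0):
    """Search for k thresholds (histogram bin indices) maximizing the fitness."""
    rng = random.Random(seed)
    n = len(hist)
    pos = [[rng.uniform(1, n - 1) for _ in range(k)] for _ in range(particles)]
    vel = [[0.0] * k for _ in range(particles)]
    pbest = [p[:] for p in pos]

    def fit(p):  # evaluate a particle at integer bin indices
        return otsu_fitness(hist, [min(n - 1, max(1, int(t))) for t in p])

    gbest = max(pbest, key=fit)[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(k):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] = min(n - 1, max(1, pos[i][d] + vel[i][d]))
            if fit(pos[i]) > fit(pbest[i]):
                pbest[i] = pos[i][:]
                if fit(pbest[i]) > fit(gbest):
                    gbest = pbest[i][:]
    return sorted(min(n - 1, max(1, int(t))) for t in gbest)
```

On a histogram with well-separated intensity clusters, the swarm converges on thresholds that split the clusters apart; the appeal of PSO here is that the fitness need not be differentiable, so criteria like K-L divergence can be dropped in directly.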

Facial expression synthesis

Lei Xiong

We developed an algorithm to synthesize facial expressions. Our method can specify many more kinds of expression than the commonly used six basic expressions, and it also accounts for each person's individual characteristics.

A New Human-Readability Infrastructure

Christopher Hall

We present an upgrade for text that allows abstract structural relationships to be typed in directly alongside standard characters. This infrastructure can be used to bring human-readability to software data structures, to bring deconstructability to user interfaces, and to absorb syntactic portions of formal (programming) language design.

Seam Carving

Kuo-Chin Lien

The major objective of image retargeting algorithms is to preserve a viewer's perception of a depicted scene while adjusting the size or aspect ratio of an image. An ideal retargeting algorithm must therefore preserve high-level semantics while avoiding low-level image distortion. Stereoscopic image retargeting is more challenging still, in that the 3D perception has to be preserved as well. We present a stereo pair retargeting algorithm that simultaneously guarantees 2D and 3D quality.
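The single-image core of this family of methods (classic seam carving) can be sketched as follows: compute a per-pixel energy, then find the minimum-energy vertical seam by dynamic programming and remove it. This plain-Python sketch is illustrative only and is not the stereo algorithm presented here.

```python
# Illustrative sketch of classic 2D seam carving (not the stereo extension):
# gradient-based energy map + dynamic programming over 8-connected seams.

def energy(img):
    """Simple gradient-magnitude energy for a 2D list of intensities."""
    h, w = len(img), len(img[0])
    e = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            dy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
            e[y][x] = abs(dx) + abs(dy)
    return e

def min_vertical_seam(img):
    """Return the seam's x-coordinate in each row, top to bottom."""
    e = energy(img)
    h, w = len(e), len(e[0])
    cost = [row[:] for row in e]  # cost[y][x]: cheapest seam ending at (y, x)
    for y in range(1, h):
        for x in range(w):
            cost[y][x] += min(cost[y - 1][max(x - 1, 0):min(x + 2, w)])
    # Backtrack from the cheapest bottom-row pixel.
    x = min(range(w), key=lambda i: cost[h - 1][i])
    seam = [x]
    for y in range(h - 1, 0, -1):
        lo = max(x - 1, 0)
        x = min(range(lo, min(x + 2, w)), key=lambda i: cost[y - 1][i])
        seam.append(x)
    return seam[::-1]

def remove_seam(img, seam):
    """Drop one pixel per row, shrinking the image width by one."""
    return [row[:s] + row[s + 1:] for row, s in zip(img, seam)]
```

Because the seam snakes through low-energy (visually unimportant) regions, repeated removal changes the aspect ratio while sparing salient content; the stereo setting additionally has to keep left/right seams consistent so that disparity, and hence 3D perception, survives.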

Disambiguating 2D Gesture Sketching for 3D Scenes

Benjamin Nuernberger

We present a system that is able to disambiguate 2D annotations viewed from arbitrary angles in a 3D space. Due to the inherent differences between 2D and 3D spaces, drawing in 2D and subsequent rendering in 3D is ill-defined and thus creates ambiguities. Here, we present our novel approach that begins to address this problem. Our solution is especially useful for augmented reality enhanced remote and asynchronous collaboration.

Face Interfaces

Julia Kuosmanen, Oleg Spakov, Veikko Surakka, Outi Tuisku, Matthew Turk

Vision-based perceptual user interfaces (VBIs) leverage the ability of computers to "see" people and the surrounding environment. VBIs utilize computer vision to actively and unobtrusively analyze the user's facial expressions, gaze direction, eye movements, head position and orientation as well as hand and body movements and gestures. The focus of the current work is to investigate usability characteristics of VBIs and develop guidelines for user-centered interface design of future generation VBIs. Two example applications are presented: hands-free text entry and video gaming. In the applications, real-time face tracking is used to point at the graphical elements (aka camera mouse) and facial expressions are utilized as activation commands (e.g. mouse single-click).

Interactive Music Recommender

Ivana Andjelkovic, John O'Donovan

We will show a live demo of our cool new tool that gives you personalized music recommendations, tailored to whatever mood you happen to be in. Explore a huge space of musical artists in a mood-based layout. Use navigation trails to fine-tune your recommendations.

Eye Gaze Correction

Yalun Qin, Kuo-Chin Lien

In traditional video conferencing systems, users cannot make eye contact while looking at the conversation partner's face on the screen, due to the disparity between the locations of the camera and the screen. We present a gaze correction system that automatically maintains eye contact using computer vision and computer graphics techniques. Our system requires no hardware beyond a web camera and supports arbitrary camera positions relative to the screen.

Attention-aware User Interfaces

Mathieu Rodrigue

Attention-aware user interfaces detect attention states in users during learning tasks. Eye tracking and electroencephalography (EEG) monitor the user during reading and provide a classifier with data to decide the user's attention state. The multimodal data informs the system where the user was distracted spatially in the user interface and when the user was distracted in time, allowing the option for future real-time systems to facilitate learning when appropriate.

Mixed Reality Simulation

Mathieu Rodrigue, Tibor Goldschwendt, Donghao Ren, Drew Waranis, Tim Wood, Tobias Höllerer

We use a large surround virtual reality display to simulate augmented reality (AR) applications and devices. This facilitates user studies -- evaluating the interaction methods of the AR applications -- as well as the exploration of fundamental AR characteristics. It even enables the simulation, and thus the investigation, of innovative AR applications that cannot yet be realized.

Novel View Generation / User-perspective Magic Lens

Domagoj Baričević

Generating novel views of a live scene (i.e. views that have not been directly observed) is important to many augmented reality scenarios. This work explores this problem in general, and presents a specific example of the user-perspective magic lens.


SideStories

Carrie Segal and John O'Donovan

SideStories is a story illustration algorithm which places annectent images alongside the authored pages of text. Colors and adjective-noun pairs are used to find words representing things, and a one-of-a-kind handmade website is queried to find photographs. The completed illustration is a digital book page with a border of images and colors.

Choice Explorer

Nicholas Davis, John O'Donovan

We present a general, interactive, web-based visualization that allows a user to explore connections and recommendations produced by collaborative filtering.


Detecting Inflamed Joints with Infrared Imaging

Nataly Moreno, Mohit Hingorani

A mobile application for the iPhone 5 that uses the FLIR infrared camera to aid sports medicine by detecting inflamed joints. Inflamed joints tend to be about one degree hotter than uninjured joints. The application guides the user in photographing the patient and compares symmetrical body parts to detect inflammation.

Motivating Crowds to Volunteer Neighborhood Data

Nataly Moreno, Saiph Savage

We present two web interfaces using Google Maps and Foursquare's APIs. The interfaces were designed to encourage crowds to contribute geographic data through different motivators. Each interface was designed around a different motivator; we explored whether crowds were more motivated by games or by helping others in the community.

Simple Facial Expression Detection

Jungah Son

We show initial results on a BCI (Brain-computer interface) system that detects facial expression change. Different types of detected facial expressions will be converted to different images on the screen.