Four Eyes Lab Open House

Monday, February 28, 2011, 4-7pm

Location: 2024 Elings Hall, UCSB Campus

(Second Floor of Elings Hall, next to the AlloSphere)

Information and Directions at http://ilab.cs.ucsb.edu

At the "Four Eyes" Lab, directed by Matthew Turk and Tobias Höllerer, we pursue research in the four I's of Imaging, Interaction, and Innovative Interfaces. During the open house, we will be describing and demonstrating several ongoing research projects. Feel free to drop by any time between 4 and 7pm to have a look at any projects that interest you, talk to the lab's faculty, students, and visitors, and partake of some refreshments.

List of Presented Projects and Presenters:

The Mixed Reality Simulator Project:

Cha Lee

It is extremely challenging to run controlled studies comparing multiple Augmented Reality (AR) systems. We use an AR simulation approach, in which a high-fidelity Virtual Reality (VR) system is used to simulate multiple, lower-fidelity AR systems. The current focus of this project is to investigate the validity of results derived from experiments run in simulation. To validate the approach empirically, we replicate a small set of experiments from the literature and show that the results are comparable, and we directly compare our own experiments run in simulation and in the real world.

Evaluating Visual Tracking and Other Enabling Technologies for Augmented Reality

Steffen Gauglitz, Jonathan Ventura

We present our recent efforts in evaluating vision-based enabling technologies for augmented reality: we designed an evaluation framework and dataset for visual tracking algorithms and conducted an extensive evaluation of the state of the art in detector-descriptor-based visual tracking. We also designed the "City of Sights", a 3D model useful to a wide spectrum of augmented reality research: the first of its kind, it exists both virtually (as a digital model) and physically (as a paper model) and can easily be replicated and customized by researchers around the world.
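
As a rough illustration of the kind of measurement such an evaluation framework performs, the sketch below scores one detector-descriptor pair on a single frame pair against a ground-truth homography. It uses OpenCV's ORB as a stand-in; the detectors, descriptors, and error thresholds actually evaluated in our study may differ.

```python
import cv2
import numpy as np

def match_precision(img1, img2, H_gt, max_px_error=3.0):
    """Fraction of descriptor matches consistent with the ground truth.

    H_gt is the ground-truth homography mapping img1 pixels into img2.
    """
    orb = cv2.ORB_create()  # any detector-descriptor pair can be swapped in
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if not matches:
        return 0.0

    # Project matched keypoints from frame 1 into frame 2 ...
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    proj = cv2.perspectiveTransform(pts1, H_gt)

    # ... and count matches that land within the pixel-error threshold.
    err = np.linalg.norm(proj - pts2, axis=2).ravel()
    return float(np.mean(err < max_px_error))
```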
 


"I'm feeling LoCo": A Location-Based Context-Aware Recommendation System

Norma Saiph Savage

Current search tools for finding new places to visit are generally not personalized, while systems that are personalized require the user to fill out extended surveys, which is cumbersome and imposes a cognitive burden. Our solution is a system that automatically learns a user's preferences and recommends meaningful places to visit. The information used for recommendation is the history of places a user has visited, his or her current location, mood, and mode of transportation. The history of places is retrieved from the user's Foursquare profile. The mode of transportation is inferred with a decision tree combined with a discrete hidden Markov model whose features are GPS speed and accelerometer variance. The mood is obtained directly from the user interface. The system is implemented on the Nokia N900 smartphone.
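
As a hedged sketch of how such a transportation-mode inference stage could be structured, the snippet below trains a decision tree on per-window features and smooths its outputs with a discrete HMM via Viterbi decoding. The feature values, labels, and transition probabilities are illustrative assumptions, not the project's actual parameters.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

MODES = ["walking", "cycling", "driving"]  # illustrative label set

# Per-window features: [mean GPS speed (m/s), accelerometer variance]
X_train = np.array([[1.2, 0.8], [4.5, 1.5], [13.0, 0.3]])
y_train = np.array([0, 1, 2])
tree = DecisionTreeClassifier().fit(X_train, y_train)

# Assumed HMM transition matrix: transportation modes tend to persist.
trans = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])

def infer_modes(windows):
    """Viterbi-smooth the per-window decision tree predictions."""
    emit = tree.predict_proba(windows)  # emission scores per window
    n, k = emit.shape
    log_v = np.log(emit[0] + 1e-9)
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        scores = log_v[:, None] + np.log(trans)
        back[t] = scores.argmax(axis=0)
        log_v = scores.max(axis=0) + np.log(emit[t] + 1e-9)
    path = [int(log_v.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [MODES[i] for i in reversed(path)]

print(infer_modes(np.array([[1.0, 0.7], [1.4, 0.9], [12.5, 0.4]])))
```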
 


Graph Visualization on Stereo Displays

Basak Alper

This work explores the utility of stereo imaging for graph visualization. Stereo displays are advancing rapidly and becoming ubiquitous. The extra depth dimension they provide can be exploited to reduce edge overlaps, a significant challenge in graph visualization. Interactive highlighting of visual query results on node-link diagrams has proven effective, and in this work we propose using stereo depth to highlight portions of a graph by bringing a subgraph to the front. We compare depth as a highlighting method to static visual cues, and we present an interactive software tool with visual querying methods that exploit depth on stereo displays.
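
The core idea can be sketched in a few lines: give highlighted nodes a nearer depth and derive per-eye horizontal offsets from it. The parallel-camera disparity model and all parameter values below are illustrative assumptions, not the system's actual rendering code.

```python
import numpy as np

def stereo_positions(xy, highlighted, eye_sep=0.06, near_z=0.4, far_z=1.0):
    """Offset node x-coordinates per eye so a highlighted subgraph pops out.

    xy: (n, 2) array of 2D layout positions.
    highlighted: boolean mask marking the subgraph to bring to the front.
    Returns (left_xy, right_xy) node positions for the two stereo views.
    """
    z = np.where(highlighted, near_z, far_z)  # nearer depth for the subgraph
    disparity = eye_sep / (2.0 * z)           # simple parallel-camera model
    left, right = xy.copy(), xy.copy()
    left[:, 0] -= disparity
    right[:, 0] += disparity
    return left, right
```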


TranslatAR

Victor Fragoso, Steffen Gauglitz, Shane Zamora, Jim Kleban

We present TranslatAR, a mobile augmented reality (AR) translation system that uses a smartphone's camera and touchscreen: the user simply taps on the word of interest to produce a translation, presented as an AR overlay. The translation seamlessly replaces the original text in the live camera stream, matching background and foreground colors estimated from the source images. For this purpose, we developed an efficient algorithm for accurately detecting the location and orientation of text in a live camera stream that is robust to perspective distortion, and we combine it with OCR and a text-to-text translation engine.
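
A loose end-to-end sketch of the pipeline for a single tap is below. The threshold-based text detection and pytesseract OCR are simple stand-ins for the system's more robust components, and translate is an assumed text-to-text translation function; none of this is the actual TranslatAR implementation.

```python
import cv2
import numpy as np
import pytesseract  # OCR binding; the actual system's OCR engine may differ

def translate_tap(frame, tap_xy, translate):
    """Replace the tapped word in `frame` with its translation."""
    # 1. Estimate the text region around the tap (Otsu threshold plus
    #    connected components stand in for the robust text detector).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    _, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    lbl = labels[tap_xy[1], tap_xy[0]]
    x, y, w, h = stats[lbl, :4]

    # 2. OCR the region and translate the recognized word.
    word = pytesseract.image_to_string(gray[y:y+h, x:x+w]).strip()
    translated = translate(word)

    # 3. Estimate foreground/background colors, erase, and repaint.
    region = frame[y:y+h, x:x+w]
    text_px = mask[y:y+h, x:x+w] > 0
    fg = region[text_px].mean(axis=0)
    bg = region[~text_px].mean(axis=0)
    frame[y:y+h, x:x+w] = bg
    cv2.putText(frame, translated, (x, y + h), cv2.FONT_HERSHEY_SIMPLEX,
                h / 30.0, fg.tolist(), 2)
    return frame
```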
 


Generalized Autofocus

Daniel Vaquero

We present a method for efficient capture and generation of all-in-focus images on computational cameras: multiple images focused at different distances are captured and fused, and the method minimizes the number of captured images in a scene-adaptive fashion. We have implemented the technique on a mobile computational photography platform, using a Nokia N900 smartphone and the Frankencamera architecture.
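
A minimal sketch of the fusion step appears below, assuming a focal stack has already been captured: each output pixel is taken from the frame with the highest local sharpness. This is a common fusion heuristic standing in for the method's actual fusion; the scene-adaptive capture policy that minimizes the number of shots is not shown.

```python
import cv2
import numpy as np

def all_in_focus(stack):
    """Fuse a focal stack (list of aligned color images) into one image."""
    sharp = []
    for img in stack:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))  # local sharpness
        # Smooth the sharpness maps so the per-pixel choice is stable.
        sharp.append(cv2.GaussianBlur(lap, (0, 0), sigmaX=5))
    best = np.argmax(np.stack(sharp), axis=0)  # winning frame per pixel
    rows = np.arange(best.shape[0])[:, None]
    cols = np.arange(best.shape[1])[None, :]
    return np.stack(stack)[best, rows, cols]
```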
 


WiGis: Web-based Interactive Graph Interfaces

B. Gretarsson, S. Bostandjiev, J. O'Donovan, C. Hall: http://www.wigis.net/

The WiGis project centers on the visualization of large-scale, highly interactive graphs in the web browser. Our software runs natively in the browser and requires no plug-ins or add-ons. Our method produces clean, smooth animation through asynchronous data transfer (AJAX) and provides access to rich server-side resources without the need for technologies such as Flash, Java applets, Flex, or Silverlight. We believe these techniques have broad potential across the web.


WiGipedia: Eliciting Semantic Feedback through Visual Analysis of Context in Wikipedia

Svetlin (Alex) Bostandjiev, John O'Donovan, Chris Hall: http://wigipedia-online.com

Large numbers of Wikipedia users are working together to produce more structured information in the online encyclopedia, for example the information found in tables, categories, and infoboxes. Infoboxes contain key-value pairs, manually appended to articles based on the unstructured text therein. WiGipedia is a web-based interactive visualization tool designed to simplify the elicitation of semantically structured information from the average Wikipedia user and to boost the consistency of structured Wikipedia information. By leveraging structured data in DBpedia, we generate an interface, embedded on every Wikipedia article, that presents an interactive graph visualization of a collection of entities with typed connections between them. The interface supports single-click editing of structured information in Wikipedia and dynamic infobox attribute suggestions from a range of sources.
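
For a feel of the underlying data, the snippet below pulls typed connections for one entity from DBpedia's public SPARQL endpoint. The specific query, entity, and filter are illustrative; WiGipedia's actual queries and graph construction are more involved.

```python
import requests

QUERY = """
SELECT ?p ?o WHERE {
  <http://dbpedia.org/resource/Santa_Barbara,_California> ?p ?o .
  FILTER(STRSTARTS(STR(?o), "http://dbpedia.org/resource/"))
} LIMIT 25
"""

resp = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": QUERY, "format": "application/sparql-results+json"},
    timeout=30,
)
# Each binding is one typed edge from the entity to another entity.
for row in resp.json()["results"]["bindings"]:
    print(row["p"]["value"], "->", row["o"]["value"])
```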


TopicNets: Interactive Topic-based Data Exploration

B. Gretarsson, J. O'Donovan, S. Bostandjiev, C. Hall, Laura Devendorf: http://www.wigis.net/wigi/index.php/topicnets

In collaboration with the Center for Machine Learning and Intelligent Systems at UC Irvine, we have developed TopicNets, an application of the core WiGis framework to the task of information discovery across large document sets. TopicNets works by extracting "topics" (sets of associated terms with probability and confidence values) from large documents or document sets. An interactive graph is then generated, showing document-topic (and/or section-topic) relationships across the data set.
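
A toy version of that pipeline, using scikit-learn's LDA as a stand-in for the topic modeling used in TopicNets: learn topics over a document set, then keep document-topic edges whose weight passes an assumed threshold.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["graphs and interactive visualization in the browser",
        "topic models assign probabilities to words",
        "visualizing topic models as interactive graphs"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

edges = []
for d, weights in enumerate(lda.transform(counts)):  # document-topic mixture
    for t, w in enumerate(weights):
        if w > 0.25:  # assumed edge threshold
            edges.append((f"doc{d}", f"topic{t}", round(w, 2)))
print(edges)  # edge list for the interactive document-topic graph
```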


SmallWorlds: Visual Recommender for Facebook

B. Gretarsson, J. O'Donovan, S. Bostandjiev, C. Hall, Byungkyu Kang, Sujay Sundaram: http://apps.facebook.com/smallworlds/

Deployed as a Facebook application, SmallWorlds is a visual, interactive, graph-based interface that allows users to specify, refine, and build item-preference profiles in a variety of domains. The interface facilitates expressions of taste through simple graph interactions, and these preferences are used to compute personalized, fully transparent item recommendations for a target user. Predictions are based on a collaborative analysis of preference data from a user's direct friend group on a social network. We find that in addition to receiving transparent and accurate item recommendations, users also learn a wealth of information about the preferences of their friends through interaction with our visualization. Such information is not easily discoverable in traditional text-based interfaces.
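
A minimal sketch of friend-based collaborative filtering in this spirit is below. The similarity measure, rating scale, and data are invented for illustration; the deployed application's scoring differs.

```python
def recommend(target, ratings, friends, top_n=5):
    """Score unseen items by how similar friends rated them.

    ratings: {user: {item: score in [0, 1]}}; friends: target's friend list.
    """
    def similarity(u, v):
        shared = set(ratings[u]) & set(ratings[v])
        if not shared:
            return 0.0
        # Mean agreement on co-rated items (1.0 = identical tastes).
        return sum(1.0 - abs(ratings[u][i] - ratings[v][i]) for i in shared) / len(shared)

    scores = {}
    for f in friends:
        sim = similarity(target, f)
        for item, r in ratings[f].items():
            if item not in ratings[target]:  # recommend only unseen items
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

ratings = {"me":  {"jazz club": 0.9, "taqueria": 0.8},
           "ana": {"jazz club": 1.0, "art museum": 0.9},
           "ben": {"taqueria": 0.7, "beach cafe": 0.6}}
print(recommend("me", ratings, ["ana", "ben"]))
```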


Interactive Visualization of Uncertain Data in a Mouse's Retinal Astrocyte Image

Mock Suwannatat: http://cs.ucsb.edu/~mock/retinaproject/

This visualization system is designed to help biology researchers visually and interactively explore the astrocyte network of a mouse retina. Techniques include focus-and-context visualization, graph diagrams, animation, glyphs, and probability density function visualization. CREDITS: Raw biological images were collected by Gabe Luna in Professor Steve Fisher's laboratory. Brian Ruttenburg computed cell segmentations and a Markov Random Field spatial model of the cell area distribution. Synthetic biological images were produced by Rama Hoetzlein. Contributions were also made by Rotem Raviv.


Virtual Keyframes for Environment Map Capturing

Sehwan Kim, Christopher Coffin

The acquisition of surround-view panoramas using a single hand-held or head-worn camera relies on robust real-time camera orientation tracking and relocalization. We present a robust methodology for camera orientation relocalization, using virtual keyframes for online environment map construction. Instead of solely using real keyframes from the incoming video, our approach employs virtual keyframes that are distributed strategically within completed portions of the environment map.
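
The sketch below renders one such virtual keyframe: a pinhole view synthesized from an equirectangular environment map at a chosen orientation. Relocalization can then match the live frame against these renders rather than only against real keyframes. The projection model, angles, and sizes are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

def virtual_keyframe(pano, yaw, pitch, fov=np.radians(60), size=128):
    """Render a pinhole view of an equirectangular environment map.

    pano: (H, W, 3) panorama; yaw/pitch: keyframe orientation in radians.
    """
    H, W = pano.shape[:2]
    f = (size / 2) / np.tan(fov / 2)  # pinhole focal length in pixels
    u, v = np.meshgrid(np.arange(size) - size / 2,
                       np.arange(size) - size / 2)
    rays = np.stack([u, v, np.full(u.shape, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays by the keyframe orientation (yaw, then pitch).
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    d = rays @ (Ry @ Rx).T

    # Convert ray directions to equirectangular pixel coordinates.
    lon = np.arctan2(d[..., 0], d[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))  # [-pi/2, pi/2]
    px = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    py = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return pano[py, px]
```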


Evaluating Re-Localization Resolution in Augmented Reality Search Tasks

Chris Coffin, Cha Lee

Natural feature tracking systems for augmented reality are highly accurate but can lose tracking. When tracking is lost, or when a mobile device arrives in a new location, re-localization or recovery must occur. We demonstrate how the density of usable localization points influences the time it takes for users to recover their pose, and we evaluate re-localization and recovery both with and without model information. Our results can aid the design and evaluation of future recovery solutions.

ARToolKit and Interaction on a Mobile Phone

Wendy Chun

Augmented Reality enables interactive applications in business, medicine, education, and entertainment, increasingly on smartphone platforms. We demonstrate marker-based (ARToolKit) tracking on the Nokia N900. To interact with the content displayed on an ARToolKit marker, one would normally have to use a second marker for tracking. In this ongoing project, implemented on the Nokia N900 mobile phone, we aim to support real-time interaction with virtual objects without resorting to a second marker.
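
One plausible marker-free interaction scheme, sketched below, casts a touchscreen tap onto the tracked marker's plane to pick virtual content. The math is standard pinhole geometry, but the scheme itself is our illustrative assumption, not the project's implementation.

```python
import numpy as np

def touch_to_marker_plane(tap_px, K, R, t):
    """Map a touchscreen tap to 2D coordinates on a tracked marker.

    K: 3x3 camera intrinsics; (R, t): marker-to-camera pose from the
    tracker (e.g. ARToolKit). Intersects the tap's viewing ray with the
    marker plane z = 0 in marker coordinates.
    """
    ray_cam = np.linalg.inv(K) @ np.array([tap_px[0], tap_px[1], 1.0])
    origin = -R.T @ t            # camera center in marker coordinates
    direction = R.T @ ray_cam    # viewing ray in marker coordinates
    s = -origin[2] / direction[2]
    hit = origin + s * direction
    return hit[:2]               # (x, y) on the marker plane
```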
 

NSRealm: Cybersecurity Situational Awareness

Shane Zamora, Arvin Faruque, Nichole Stockman

Visualization is a powerful tool for problem solving and decision making. Network Security Realm (NSRealm) is a visualization tool for cybersecurity situational awareness, currently being developed for use in the UCSB AlloSphere, a three-story-tall immersive spherical display. NSRealm displays a three-dimensional representation of networks, and developers can write plug-ins that annotate and augment these networks. Plug-ins currently in development visualize SSH requests and game-theoretic network security problems.
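
A hedged sketch of what such a plug-in interface might look like is below; the actual NSRealm plug-in API is under development and may differ.

```python
from abc import ABC, abstractmethod

class NetworkPlugin(ABC):
    """Annotates and augments NSRealm's 3D network representation."""

    @abstractmethod
    def on_event(self, event: dict) -> None:
        """Consume one network event (e.g., a parsed log record)."""

    @abstractmethod
    def annotations(self) -> list:
        """Return (node_id, label, color) tuples to render."""

class SSHRequestPlugin(NetworkPlugin):
    """Example plug-in: highlight hosts receiving SSH connection attempts."""

    def __init__(self):
        self.counts = {}

    def on_event(self, event):
        if event.get("dst_port") == 22:  # SSH
            self.counts[event["dst"]] = self.counts.get(event["dst"], 0) + 1

    def annotations(self):
        # Flag hosts red past a made-up attempt threshold.
        return [(host, f"{n} SSH attempts", "red" if n > 10 else "yellow")
                for host, n in self.counts.items()]
```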