Four Eyes Lab Open House

Tuesday, May 29, 2012, 6-9pm

Location: 2024 Elings Hall, UCSB Campus

(Second Floor of Elings Hall, next to the Allosphere)

Information and Directions at http://ilab.cs.ucsb.edu


At the "Four Eyes" Lab, directed by Matthew Turk and Tobias Höllerer, we pursue research in the four I's of Imaging, Interaction, and Innovative Interfaces. During the open house, we will be describing and demonstrating several ongoing research projects. Feel free to drop by any time between 6-9pm and have a look at any projects that might interest you, talk to the lab's faculty, students, and visitors, and partake of some refreshments. See also our handout.


List of Presented Projects and Presenters:


The Mixed Reality Simulator Project

Cha Lee

Running controlled studies that compare multiple Augmented Reality (AR) systems is extremely challenging. We use an AR simulation approach, in which a high-fidelity Virtual Reality (VR) system is used to simulate lower-fidelity AR systems. The current focus of this project is investigating the validity of results derived from experiments run in simulation. To empirically validate this approach, we replicate a small set of experiments from the literature and show that the results are comparable, and we directly compare our own experiments run in simulation and in the real world.

Interactive Verification for Directed Social Queries

Saiph Savage and Angus Forbes

The list of "friends" of many social network users can be extremely large. This creates several challenges when users seek to direct certain social interactions to friends who share a particular interest, for example: "Show my cat pictures to cat lovers only." We present a novel system for the classification and verification of social network data. We introduce an algorithm for modeling friends' interests, and we present an interactive visualization that exposes the results of our model and enables a human-in-the-loop approach to result analysis and verification.
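The core idea of scoring friends' interests for a directed query can be illustrated with a minimal sketch. This is not the presenters' algorithm; the keyword-counting score, function names, and threshold below are all illustrative assumptions.

```python
# Hypothetical sketch: score each friend's affinity for a topic from
# keyword mentions in their posts, then keep friends above a threshold.
def interest_score(posts, topic_keywords):
    """Fraction of a friend's posts mentioning any topic keyword."""
    if not posts:
        return 0.0
    hits = sum(any(k in p.lower() for k in topic_keywords) for p in posts)
    return hits / len(posts)

def audience_for(friends_posts, topic_keywords, threshold=0.3):
    """Return the names of friends whose interest score passes the threshold."""
    return [name for name, posts in friends_posts.items()
            if interest_score(posts, topic_keywords) >= threshold]

friends = {
    "alice": ["My cat did a thing", "more cat pics", "lunch"],
    "bob": ["sports", "game tonight"],
}
print(audience_for(friends, {"cat"}))  # → ['alice']
```

A human-in-the-loop interface would then let the user inspect and correct these scores before the query is executed.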


Improving Image-feature Matching

Victor Fragoso

We present several techniques that provide useful information for improving the performance of image-feature matchers. These matchers are key components in applications such as visual tracking and image stitching.


Magic Lens with User-Perspective Rendering

Domagoj Baričević

Current augmented reality magic lens implementations render the augmented scene from the point of view of the camera on the hand-held device. The perspective of that camera is very different from the perspective of the user, so what the user sees does not align with the real world. A true magic lens would show the scene from the point of view of the user, not the device. The technology needed to make this possible has only recently become available. We have now developed the first prototype of a hand-held AR magic lens with user-perspective rendering.


Visual Interaction for Remote Collaboration

Steffen Gauglitz

Current telecommunication/teleconference systems are largely successful when dealing with verbal communication and digital data (such as presentation slides), but they hit severe limitations when real-world objects are involved. We present a paradigm and prototype that aim to significantly increase the interactivity of remote collaboration systems, and thus their applicability, by leveraging novel computer vision and augmented reality techniques.


TasteWeights: Interactive Music Recommendation

Svetlin Bostandjiev

TasteWeights is a visual interactive hybrid recommender system designed to personalize information flow from multiple social and semantic web resources such as Wikipedia, Facebook, and Twitter. The system employs hybrid techniques from traditional recommender system literature, in addition to a novel interactive interface which serves to explain the recommendation process and elicit hybridization preferences from the end user.


Visualizing Mouse Retinal Cells

Panuakdet Suwannatat

Astrocytes are an important type of glial cell in the retina that play a key role in diseases and injuries. We present an interactive and scalable system that visualizes a whole retina imaged with a laser scanning confocal microscope. The visual analysis tools are designed to help neuroscientists discover new knowledge about these cells.


Composition Context Photography

Daniel Vaquero

A "composition context camera" uses contextual information collected while the user frames a photograph (viewfinder images, their capture parameters, and inertial sensor data) to compute a wide variety of interesting and compelling photo choices to present to the photographer, such as panoramas, high dynamic range images, and compositions using moving objects. We expect this capability to expand the photographic possibilities for casual and amateur users, who often rely on automatic camera modes.


Modeling Credibility in Twitter

Byungkyu Kang and John O'Donovan

We present an evaluation of three computational models for recommending credible topic-specific information on Twitter. In this study, we crawled and analyzed 8 different sets of Twitter data. By carefully observing feature candidates obtained from the data crawler, 5 social features and 19 content features were chosen for the exploratory data analysis. Focusing on a particular topic ('#Libya'), we conducted a predictive experiment with a real-world Twitter data set, which showed that the social model performs best, followed by the hybrid and content models. The social model predicted credibility at the tweet level with 88.17% accuracy.
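The hybrid model can be pictured as a blend of the social and content models' outputs. The sketch below is a generic weighted combination, not the presenters' actual model; the function name, the weighting scheme, and the example scores are illustrative assumptions.

```python
# Hypothetical sketch of a hybrid credibility model: a weighted blend of
# a social score (e.g. derived from follower/retweet features) and a
# content score (e.g. derived from text features) for a single tweet.
def hybrid_credibility(social_score, content_score, alpha=0.7):
    """Blend social and content credibility; alpha weights the social model."""
    return alpha * social_score + (1 - alpha) * content_score

tweet = {"social": 0.9, "content": 0.4}
score = hybrid_credibility(tweet["social"], tweet["content"])
print(round(score, 2))  # → 0.75
```

In practice, the weight and the underlying per-feature scores would be fit to labeled data rather than chosen by hand.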


Light source responsive object rendering

Byungkyu Kang

Augmented reality displays often suffer from unrealistic composition of the rendered image and the real camera image. For example, a crisp 3D or 2D object overlaid on a blurry camera image, or a brightly lit augmented object in a dark surrounding, are among the most common unrealistic compositions. We provide a lighting mechanism that responds to real background light sources within the OpenGL rendering loop.
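One simple way to make rendered objects respond to the real scene is to estimate the camera frame's brightness and scale the virtual light accordingly. The sketch below is an illustrative assumption, not the presenters' implementation; in a real system the computed intensity would feed into the OpenGL lighting state each frame, while here we only compute the value.

```python
# Hypothetical sketch: estimate scene brightness from a camera frame and
# derive a virtual light intensity, so augmented objects do not appear
# unrealistically bright against a dark background.
def mean_luminance(frame):
    """Average Rec. 709 luminance of a frame given as rows of (r, g, b) in [0, 255]."""
    pixels = [px for row in frame for px in row]
    luma = [0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in pixels]
    return sum(luma) / (255 * len(luma))  # normalized to [0, 1]

def light_intensity(frame, ambient_floor=0.1):
    """Match virtual light intensity to scene brightness, with a small ambient floor."""
    return max(ambient_floor, mean_luminance(frame))

dark_frame = [[(10, 10, 10)] * 4] * 4  # a mostly dark 4x4 test frame
print(round(light_intensity(dark_frame), 2))  # → 0.1
```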


SIGMA and Inter graph

Greg Meyer and John O'Donovan

Sigma, which stands for "Statistical Interface for Graph Manipulation and Analysis," presents a statistical approach for gaining a deep understanding of a graph visualization. The approach follows Shneiderman's vision that "visualizations simplify the statistical results, facilitating sense-making and discovery of features such as distributions, patterns, trends, gaps and outliers." Thus, the addition of statistical metrics within a graph visualization tool improves exploratory data analysis and allows analysts to discover new, interesting relationships between entities. In addition, we believe that a statistical interface can serve as a navigation control for a large graph visualization.


Cubica

Yun Teng and Theodore Kim

Animating realistic characters requires a tremendous amount of manual input from artists. In order to reduce this effort, physically-based techniques have been proposed that automatically generate realistic deformations. The Cubica toolkit generates such deformations by performing efficient finite element simulations that contain both geometric and material non-linearities. Its main feature is its use of subspace methods, also known as dimensional model reduction or reduced order methods, which accelerate simulations by several orders of magnitude and achieve interactive time-stepping rates.


Visualization Tool for Cybersecurity Situation Awareness

Nichole Stockman

Pictured is a visualization tool developed to aid security professionals in maintaining situational awareness during the execution of various cybersecurity missions. Developed using Java and Processing, it allows users to explore and monitor the state of their missions, the status of any resources required to complete those missions, and any attacks that have targeted those resources. The dataset being visualized comes from the 2011 International Capture the Flag competition hosted by the UCSB Security Lab.


Modeling and Localization for Mobile Augmented Reality

Jonathan Ventura

We propose a system for modeling an outdoor environment using an omnidirectional camera and then continuously estimating a camera phone's position with respect to the model. Our system evaluation shows that minimal user effort is required to initialize a camera tracking session in an unprepared urban environment.