- Thursday, May 23rd, 2013, 5:00 - 8:00 pm
- Location: 2024 Elings Hall, UCSB Campus
(Second Floor of Elings Hall, next to the AlloSphere)
- Information and Directions at http://ilab.cs.ucsb.edu

 

At the "Four Eyes" Lab, directed by Matthew Turk and Tobias Höllerer, we pursue research in the four I's of Imaging, Interaction, and Innovative Interfaces. During the open house, we will be describing and demonstrating several ongoing research projects. Feel free to drop by any time between 6-9pm and have a look at any projects that might interest you, talk to the lab's faculty, students, and visitors, and partake of some refreshments. See also our handout.

 

List of Presented Projects and Presenters:

Capitalizing from Directed Social Interactions

Saiph Savage

The friend lists of many social network users can be extremely large. This creates several challenges when users seek to channel social interactions to friends who share a particular interest, or to friends whose social capital could benefit the user. We present a novel online system for precisely targeting the online audiences that cover a user's different social and information needs.
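
As a rough illustration of the underlying selection problem (not the actual system), audience targeting can be framed as a set cover: greedily pick friends until their combined interests cover the user's needs. A minimal Python sketch with made-up data:

    # Hypothetical sketch, not the actual system: greedily pick the
    # smallest set of friends whose combined interests cover the needs.
    def target_audience(needs, friend_interests):
        """needs: set of topics; friend_interests: dict friend -> set of topics."""
        uncovered = set(needs)
        audience = []
        while uncovered:
            # Pick the friend covering the most still-uncovered needs.
            best = max(friend_interests,
                       key=lambda f: len(friend_interests[f] & uncovered))
            gain = friend_interests[best] & uncovered
            if not gain:
                break  # remaining needs cannot be covered by any friend
            audience.append(best)
            uncovered -= gain
        return audience

    friends = {"ana": {"jobs", "music"}, "bo": {"research"}, "cy": {"music"}}
    print(target_audience({"jobs", "research"}, friends))  # e.g. ['ana', 'bo']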
 


Data OS

Christopher Hall

An architecture for metadata-friendly personal computing environments, acting simultaneously as a text editor, command-line interface, programming IDE, interactive interpreter, and runtime GUI for software with or without any UI code. Data OS aims to bring serialized data out of the '60s, using a universal binary-syntax-based meta-format with all the recursion, binary-encoding, and human-readability benefits that XML never delivered. By fostering an explicitly structure-aware ecosystem, 'files' go from being passive, opaque monoliths of bytes to active, transparent data structures, with automatic management of the inlining and factoring of dependencies as pieces migrate across systems or contexts.
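
To make the meta-format idea concrete, here is a hypothetical sketch of a recursive type-length-value (TLV) binary encoding. It illustrates the kind of self-describing, recursive binary syntax Data OS pursues; it is not the actual Data OS format:

    import struct

    # Tags: 0x01 = bytes (leaf), 0x02 = list of nested nodes.
    def encode(node):
        if isinstance(node, bytes):
            return struct.pack(">BI", 0x01, len(node)) + node
        payload = b"".join(encode(child) for child in node)
        return struct.pack(">BI", 0x02, len(payload)) + payload

    def decode(buf, pos=0):
        tag, length = struct.unpack_from(">BI", buf, pos)
        pos += 5
        if tag == 0x01:
            return buf[pos:pos + length], pos + length
        end, children = pos + length, []
        while pos < end:
            child, pos = decode(buf, pos)
            children.append(child)
        return children, pos

    blob = encode([b"config", [b"nested", b"data"]])
    print(decode(blob)[0])  # [b'config', [b'nested', b'data']]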
 


Diner’s Dilemma

John O’Donovan, James Schaffer

To help us understand how trust in information content is formed, and to understand the factors that affect it, we have developed an interactive version of the classic three-person Diner's Dilemma game. This game has been well studied in cognitive science and psychology. We are particularly interested in how the user interface can influence trust decisions in the abstract game and, further, in how those effects carry over to real-world decision-making scenarios.

The game itself is simple: you will repeatedly dine with two others over the course of a year, always splitting the bill at the end of each meal, and at each meal you choose between a low-quality, low-cost item such as a hot dog and a high-quality, expensive item such as a lobster. When we look in detail at dining behavior over time in this simple cooperate-or-defect scenario, interesting things happen!
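
For concreteness, here is an illustrative payoff calculation for a single round (the dollar values are made up): each diner's payoff is their enjoyment of the dish minus their equal share of the bill.

    # Illustrative payoffs for one round of the three-person Diner's
    # Dilemma; the prices and enjoyment values below are invented.
    COST = {"hotdog": 4.0, "lobster": 20.0}
    JOY = {"hotdog": 5.0, "lobster": 14.0}   # enjoyment of each dish

    def payoffs(orders):
        """orders: list of three dish names; the bill is split evenly."""
        share = sum(COST[d] for d in orders) / len(orders)
        return [JOY[d] - share for d in orders]

    print(payoffs(["hotdog", "hotdog", "hotdog"]))   # [1.0, 1.0, 1.0]
    print(payoffs(["lobster", "hotdog", "hotdog"]))  # defector gains, others lose

With these numbers, a lone defector ordering lobster comes out ahead of the cooperators, yet if all three defect, everyone does worse than under mutual cooperation; that tension is the dilemma.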


Fluo

James Schaffer

Finding reliable information on the web is becoming both more difficult and more important as the amount of user-provided content grows. For example, in the aftermath of the tragic events at the 2013 Boston Marathon, a large number of tweets mistakenly identified the perpetrators as having roots in the Czech Republic, confusing it with Chechnya, and, worse still, concluded that the perpetrators were therefore “Republicans”. The confusion was so extensive that it prompted the Ambassador of the Czech Republic to post a corrective note on the embassy’s official web page. As another example, the Twitter account of the Associated Press was recently hacked. The hackers posted a fake tweet about an attack on the White House in which President Obama was injured. The resulting pandemonium in social media was so great that it caused a short but sharp drop in the US stock markets.

Fluo is a tool developed at the Four Eyes Lab to help address these problems by allowing information analysts to identify credible information quickly. The tool combines inputs from a variety of credibility-mining algorithms that leverage content-based and dynamic network-based analytics to determine credibility of a message or a message author. Results are presented in an interactive user interface that provides explanation and control of the underlying algorithms to the end user.
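
As a simplified illustration of the combination step (the analyzer names and weights below are hypothetical, not Fluo's actual algorithms), per-message signals can be merged into a single weighted credibility score, with the weights exposed to the analyst for control:

    # Hypothetical sketch of combining per-message credibility signals;
    # the feature names and weights are illustrative only.
    def credibility(signals, weights):
        """signals, weights: dicts keyed by analyzer name; scores in [0, 1]."""
        total = sum(weights.values())
        return sum(weights[k] * signals[k] for k in signals) / total

    message_signals = {
        "content_language": 0.4,   # content-based score (e.g. hedging words)
        "author_history": 0.9,     # network/behavior score for the author
        "retweet_network": 0.6,    # propagation-pattern score
    }
    # The UI could expose these weights as sliders for the analyst.
    weights = {"content_language": 1.0, "author_history": 2.0, "retweet_network": 1.0}
    print(round(credibility(message_signals, weights), 2))  # 0.7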


Interactive Remote Collaboration using Augmented Reality

Steffen Gauglitz

Current telecommunication and teleconference systems are largely successful for verbal communication and digital data, but they hit severe limitations when the physical environment is involved. We present a paradigm and various research efforts that aim to significantly increase the interactivity of remote collaboration systems, and thus their applicability, by leveraging novel computer vision and augmented reality techniques. Specifically, rather than forcing the remote user to passively watch a video stream, our paradigm allows them to control their viewpoint and place live annotations into the scene.
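
A minimal sketch of the geometry that makes world-anchored annotations possible: an annotation fixed at a 3D scene point is reprojected into each new camera view with a pinhole model (the matrices below are illustrative, not the system's actual calibration):

    import numpy as np

    def project(point_world, R, t, K):
        """R, t: camera pose (world -> camera); K: 3x3 intrinsics."""
        p_cam = R @ point_world + t
        p_img = K @ p_cam
        return p_img[:2] / p_img[2]          # pixel coordinates (u, v)

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    annotation = np.array([0.1, 0.0, 2.0])   # point 2 m in front of origin
    R, t = np.eye(3), np.zeros(3)            # identity pose for this frame
    print(project(annotation, R, t, K))      # stays scene-locked as R, t change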


LinkedVis: A Virtual Career Guidance Tool

John O’Donovan

You just might find the job of your dreams using our LinkedVis recommendation tool. It analyzes the information in your LinkedIn network to recommend companies and roles.

LinkedVis works by scanning your network for people whose career path, skill set, and qualifications resemble yours, then ranking them in order of similarity to you. Finally, it uses their current positions and companies to suggest career paths for you.
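
As a simplified illustration of the ranking step (skill overlap only; the real tool also weighs career path and qualifications), connections can be scored with Jaccard similarity over skill sets:

    # Illustrative sketch of the ranking step; names and data invented.
    def jaccard(a, b):
        return len(a & b) / len(a | b)

    my_skills = {"python", "vision", "hci"}
    network = {
        "alice": ({"python", "vision", "ml"}, "Research Engineer at AcmeAI"),
        "bob": ({"sales", "crm"}, "Account Manager at BizCo"),
    }
    ranked = sorted(network.items(),
                    key=lambda kv: jaccard(my_skills, kv[1][0]), reverse=True)
    for name, (skills, role) in ranked:
        print(f"{name}: {jaccard(my_skills, skills):.2f} -> {role}")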

You can also ask the software to explore “what if” scenarios – for example, how getting a PhD or learning a new language would affect your prospects.


Mixed Reality Simulation for Outdoor Environments

Mathieu Rodrigue

Mixed Reality simulation makes it possible to evaluate Augmented Reality systems that could not otherwise be implemented, whether because of funding limitations or technological hurdles. Researchers in the Four Eyes Lab are extending this project to the AlloSphere in order to simulate outdoor Augmented Reality with mobile devices. This demo will showcase preliminary results of outdoor Mixed Reality simulation with a single mobile device and illustrate our plans to deploy the simulator in the AlloSphere.


Non-Visual Navigation with Audio and Haptic Feedback

Emily Fujimoto

One common use for smartphones is pedestrian navigation. However, most navigation applications rely on visual information, distracting the user from obstacles and hazards that may be in their path. There have been numerous attempts to alleviate this problem through either audio or haptic feedback, but relatively little work has been done on combining the two modalities. The work that does exist either requires custom-made hardware or is outdated, using very simplistic combinations. For this project, I am looking into combining the simplest navigational information, direction and distance, to see if a multimodal interface can outperform a unimodal one.
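
One possible mapping, purely for illustration (the thresholds and units below are hypothetical, not the project's final design): heading error drives stereo audio panning while distance drives vibration tempo:

    # Hypothetical split of navigation cues across the two modalities.
    def feedback(heading_error_deg, distance_m):
        # Audio: pan a tone toward the turn direction, stronger when far off.
        pan = max(-1.0, min(1.0, heading_error_deg / 90.0))  # -1 left .. +1 right
        # Haptics: vibrate faster as the next waypoint gets closer.
        pulse_hz = 0.5 if distance_m > 20 else 1.0 if distance_m > 5 else 4.0
        return {"audio_pan": pan, "vibration_hz": pulse_hz}

    print(feedback(heading_error_deg=45, distance_m=3))
    # {'audio_pan': 0.5, 'vibration_hz': 4.0}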


Scalable Interactive Analysis of Retinal Astrocyte Networks

Panuakdet Suwannatat

Retinal astrocytes are one of two types of glial cells found in the mammalian retina. In mice, these highly planar cells are located in the innermost retinal layer, termed the nerve fiber layer. We are developing an in-depth visual analysis of astrocytes across the entire retina. Using laser scanning confocal microscopy, whole-retina datasets were captured at high resolution and assembled into seamless montages. Astrocytes are then segmented using a Random Walker method and visualized in our system, where interactive visual analysis can be performed on relevant data such as cell sizes, density, distribution, and network formation.
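
For illustration, the segmentation step can be sketched with scikit-image's random_walker on a synthetic image (a stand-in for the lab's actual pipeline and microscopy data):

    import numpy as np
    from skimage.segmentation import random_walker

    # Synthetic stand-in image: one bright "cell" plus noise.
    rng = np.random.default_rng(0)
    image = np.zeros((64, 64))
    image[20:40, 20:40] = 1.0
    image += 0.2 * rng.standard_normal(image.shape)

    # Seed labels: 1 = astrocyte, 2 = background, 0 = to be decided.
    seeds = np.zeros_like(image, dtype=np.int32)
    seeds[30, 30] = 1
    seeds[5, 5] = 2

    labels = random_walker(image, seeds, beta=100)
    print((labels == 1).sum(), "pixels assigned to the cell")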


SNS Skyscrapers: A Social Network Information Visualization

Byungkyu (Jay) Kang

As the impact of social media such as Facebook and Twitter on our daily lives grows, they have become crucial tools for a variety of purposes. For instance, many major companies use them to drive aggressive marketing strategies, and Twitter has recently been widely used as a tactical medium for political campaigns in many countries. In this project, we present how interactive information visualization can help people understand massive amounts of user-generated content in various contexts.


Wide Area Augmented Reality

Chris Sweeney

In an increasingly mobile world, augmented reality in outdoor settings has become a growing area of interest. Answering seemingly basic questions such as "where am I?" and "what am I looking at?" is exceedingly difficult as our world becomes more and more crowded with new information, buildings, and people. We will give an overview of the current state of wide-area AR and showcase the Four Eyes Lab's current research projects and goals in this realm.
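
At its core, answering "where am I?" typically reduces to matching image features against a 3D model of the scene and solving a perspective-n-point (PnP) problem for the camera pose. A sketch with OpenCV and synthetic correspondences (the points below are invented, not real matches):

    import numpy as np
    import cv2

    # Four known 3D points on a planar landmark and their image locations.
    object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                             dtype=np.float64)
    image_points = np.array([[120, 40], [520, 40], [520, 440], [120, 440]],
                            dtype=np.float64)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, np.zeros(4))
    print("pose found:", ok)
    print("camera translation:", tvec.ravel())  # ~[-0.5, -0.5, 2.0] here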


Light Estimation from Arbitrary Geometry for Home Shopping

Lukas Gruber (Graz University of Technology)

A major objective in augmented reality is the coherent integration of virtual content into real-world content (the video image). This demands the correct application of real-world lighting to virtual objects, which in turn requires estimating the real-world lighting itself. This demo showcases recent advances in light estimation for AR.

The novelty of this technology is a non-invasive approach that estimates light from arbitrary scene geometry, without artificially inserted light probes such as reflective mirror balls. We demonstrate this technology in a home-shopping scenario.
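
As a much simplified sketch of the principle (the actual system estimates richer lighting than a single direction): under a Lambertian model, observed intensities on surfaces with known normals constrain the light, which can be recovered by least squares:

    import numpy as np

    # Lambertian model I = albedo * max(n . L, 0); assuming constant albedo
    # and lit points, the light direction L follows from least squares.
    # The normals and intensities below are synthetic.
    normals = np.array([[0, 0, 1], [0.7, 0, 0.714], [0, 0.7, 0.714]], dtype=float)
    true_light = np.array([0.3, 0.2, 0.93])
    intensities = normals @ true_light          # synthetic observations

    light, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    print(light / np.linalg.norm(light))        # recovered light direction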