Hand gesture input for Battuta


Mathias Kölsch, Matthew Turk

Manipulating virtual objects with hand gestures - the hardware.


Novel computer vision methods let us recognize hand gestures quickly and robustly. The live recognition data drives user interfaces, in particular for manipulating objects in Virtual and Augmented Reality applications.


Project Battuta is an interdisciplinary research initiative investigating how emerging technologies and geospatial information resources can bring new functionality to mobile field data collection. At the UCSB CS department, we research novel user interfaces for wearable computing environments, in particular vision-based hand gesture input. A head-mounted display (HMD) provides a screen for data output, and a head-mounted camera (HMC) allows for data input via hand gestures performed in front of the wearer's body. Figure 1 is a snapshot of the geographic display interface, and this video (15MB, Windows Media File) gives an idea of our intended contribution.

Figure 1. Snapshot of the geographic display interface.
Figure 2. Gesture recognition amid varied backgrounds and fast-moving objects.

The computer vision (CV) methods that we employ are specifically tailored to enable low-latency interaction and to work robustly, independently of the user, and in mobile settings. After a high-confidence detection of the hand within an activation area, the hand is tracked at high frame rates despite changing backgrounds, brief occlusions, and hand posture changes. The recognizer outputs the most likely posture for a given view, along with the image coordinates of the occurrence. Figure 2 demonstrates that the CV algorithms deliver reliable results despite varied backgrounds and fast-moving objects.
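The detect-then-track control flow described above can be sketched as a small state machine. This is a hypothetical illustration, not the actual recognizer: the confidence values, posture labels, activation-area bounds, and thresholds below are all assumed stand-ins; only the switching between a detection state and a tracking state mirrors the text.

```python
# Hypothetical sketch of the detect-then-track pipeline. All numbers
# (threshold, activation area) and posture labels are invented for
# illustration; a real system would plug in actual CV detectors/trackers.

DETECTING, TRACKING = "detecting", "tracking"
DETECT_THRESHOLD = 0.9  # assumed high-confidence cutoff for detection

def in_activation_area(x, y):
    """Assume a fixed rectangular activation area in image coordinates."""
    return 100 <= x <= 540 and 100 <= y <= 380

def process(frames):
    """Run the state machine over per-frame observations.

    Each observation is (confidence, posture, x, y). Returns the
    recognizer outputs: one (posture, x, y) tuple per tracked frame.
    """
    state = DETECTING
    outputs = []
    for conf, posture, x, y in frames:
        if state == DETECTING:
            # Switch to tracking only after a high-confidence
            # detection inside the activation area.
            if conf >= DETECT_THRESHOLD and in_activation_area(x, y):
                state = TRACKING
                outputs.append((posture, x, y))
        else:
            # While tracking, tolerate lower-confidence frames (e.g.
            # brief occlusions); fall back to detection only when the
            # hand is lost entirely (conf == 0.0 here, for illustration).
            if conf == 0.0:
                state = DETECTING
            else:
                outputs.append((posture, x, y))
    return outputs

# Example: low-confidence frame ignored, then tracking begins, survives
# a posture change, and resets after the hand is lost.
frames = [
    (0.5, "open", 50, 50),     # ignored: low confidence
    (0.95, "open", 200, 200),  # detection -> start tracking
    (0.6, "point", 210, 205),  # tracked despite posture change
    (0.0, "none", 0, 0),       # hand lost -> back to detecting
    (0.7, "open", 300, 300),   # ignored: below detection threshold
]
print(process(frames))
```

The design point mirrored here is that detection and tracking use different confidence regimes: entering tracking demands high confidence inside the activation area, while staying in tracking is deliberately more permissive, which is what lets the system ride out brief occlusions and posture changes.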