GroundCam: A Tracking Modality for Mobile Mixed Reality

Stephen DiVerdi, Tobias Höllerer

[Image: The GroundCam hardware setup.]

Overview

The GroundCam is part of the Four Eyes Lab's Anywhere Augmentation project. A standard FireWire camera, attached to the user's torso and pointed at the ground, operates in a fashion similar to an optical mouse: the ground's texture features are tracked across the camera's field of view, and the resulting motion is computed and integrated to form a 2DOF dead-reckoning person tracker.
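As a rough sketch of this dead-reckoning loop (our illustration, not the released source), the Python snippet below uses OpenCV's phase correlation as a simple stand-in for the feature tracking described above. The track function and the METERS_PER_PIXEL scale are hypothetical names; in practice the pixel-to-meter scale would come from the calibrated camera height and focal length.

    # Illustrative dead-reckoning loop in the spirit of the GroundCam
    # (a sketch, not the released source). Assumes a downward-facing
    # camera at a roughly constant, known height above the ground.
    import cv2
    import numpy as np

    METERS_PER_PIXEL = 0.002  # hypothetical scale from camera calibration

    def track(capture):
        x, y = 0.0, 0.0  # integrated 2DOF position (dead reckoning)
        ok, frame = capture.read()
        prev = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            curr = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            # Phase correlation recovers the dominant (dx, dy) image shift
            # between frames, much as an optical mouse senses surface motion.
            (dx, dy), _ = cv2.phaseCorrelate(prev, curr)
            x += dx * METERS_PER_PIXEL
            y += dy * METERS_PER_PIXEL
            prev = curr
        return x, y

In the actual system, the orientation tracker would be used to rotate this camera-frame motion into the world frame before integration.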

The GroundCam source code is available here (v1.0); see the included README for details. Our camera calibration program (v1.0) is also available for download.

Details

An overview of commonly available tracking technologies is presented in Table 1. It is apparent that no single tracking solution exists for the interesting and increasingly common case of wide-area, high-resolution applications, such as outdoor architectural visualization.
technology            range (m)   setup (hr)   resolution (mm)   time (s)   environ
magnetic                   1           1              1             inf      in/out
ultrasound                10           1             10             inf      in
inertial                   1           0              1              10      in/out
pedometer               1000           0            100            1000      in/out
optical (beacons)         10           1              1             inf      in
optical (passive)         10          10             10             inf      in
optical (markerless)      10           0             10             inf      in/out
optical (hybrid)          10          10              1             inf      in
GPS                      inf           0           1000             inf      out
beacons                  100          10           1000             inf      in/out
WiFi                     100          10           1000             inf      in/out
GroundCam                 10           0              1            1000      in/out
Table 1: A brief comparison of tracking technologies for typical setups. range: size of the region within which tracking is possible. setup: time required for instrumentation and calibration. resolution: granularity of a single output position. time: duration for which useful tracking data is returned (before drift grows too large). environ: where the tracker can be used, indoors and/or outdoors. All values are approximate, to the nearest order of magnitude.

In this work, we introduce the GroundCam (consisting of a camera and an orientation tracker - see Figure 4), a local tracking technology for both indoor and outdoor applications. We use the optical flow of video of the ground to determine velocity, inspired by the workings of an optical mouse. This is related to visual odometry work in the robotics community, but here we apply it to the much less constrained problem of tracking a person. By itself, the GroundCam provides high-resolution relative position information, but it drifts as error accumulates through integration. From Table 1, it is clear that the GroundCam most closely resembles an inertial tracker, which measures acceleration and integrates twice to obtain position. The GroundCam is a significant improvement over inertial tracking because its single integration accumulates error much more slowly, maintaining comparable small-scale accuracy over a longer period of time.
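A back-of-the-envelope calculation (our notation, not from the paper) makes this concrete: a constant velocity bias b_v in the GroundCam integrates once into linear drift, while a constant accelerometer bias b_a in an inertial tracker integrates twice into quadratic drift:

    % Position error under a constant sensor bias (illustrative notation)
    e_{GC}(t)  = \int_0^t b_v \, d\tau = b_v t
    e_{IMU}(t) = \int_0^t \int_0^\tau b_a \, d\sigma \, d\tau = \tfrac{1}{2} b_a t^2

For comparable bias magnitudes, the quadratic term dominates quickly, which is consistent with the inertial entry in Table 1 listing a useful tracking time of only about 10 s.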

To address the GroundCam's long-term drift, we use a complementary Kalman filter to combine the GroundCam with a wide-area sensor such as a GPS receiver, providing better accuracy over large environments. For wide-area indoor operation, we simulate the signal from a beacon-based tracker such as the Cricket or Locust Swarm to demonstrate the hybrid's performance. These wide-area trackers provide periodic, stable corrections that compensate for the GroundCam's drift while preserving its fast, high-resolution output.
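Below is a minimal sketch of such a fusion scheme under our own simplifying assumptions (identity state transition, position-only corrections); the GroundCamFusion class and the noise values are hypothetical and not the paper's exact filter.

    # Minimal complementary-filter sketch: high-rate GroundCam velocity
    # drives the prediction; occasional absolute fixes (GPS outdoors,
    # simulated beacons indoors) correct the accumulated drift.
    import numpy as np

    class GroundCamFusion:
        def __init__(self, q=1e-3, r=4.0):
            self.p = np.zeros(2)        # position estimate (m)
            self.P = np.eye(2) * 100.0  # estimate covariance
            self.Q = np.eye(2) * q      # process noise: GroundCam drift per second
            self.R = np.eye(2) * r      # measurement noise: e.g. ~2 m GPS sigma

        def predict(self, velocity, dt):
            """High-rate update from GroundCam velocity (dead reckoning)."""
            self.p += velocity * dt
            self.P += self.Q * dt

        def correct(self, fix):
            """Low-rate absolute fix from GPS or a beacon tracker."""
            K = self.P @ np.linalg.inv(self.P + self.R)  # Kalman gain
            self.p += K @ (fix - self.p)
            self.P = (np.eye(2) - K) @ self.P

Here predict() would run at the GroundCam's frame rate, while correct() fires only when a GPS fix or beacon reading arrives, pulling the drifting dead-reckoning estimate back toward the absolute reference.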

Publications

S. DiVerdi, T. Höllerer
GroundCam: A Tracking Modality for Mobile Mixed Reality.
In IEEE Virtual Reality, Mar. 2007.
(PDF, Slides)

Related Projects

Anywhere Augmentation