The Role of Latency in the Validity of AR Simulation

Cha Lee, Scott Bonebrake, Tobias Höllerer, Doug A. Bowman

Overview

It is extremely challenging to run controlled studies comparing multiple Augmented Reality (AR) systems. We use an AR simulation approach, in which a Virtual Reality (VR) system is used to simulate multiple AR systems. To investigate the validity of this approach, in our first experiment we carefully replicated a well-known study by Ellis et al. using our simulator and obtained comparable results; we also discuss general issues we encountered in replicating a prior study. In our second experiment, which further explores the validity of AR simulation, we investigated the effects of simulator latency on the results of experiments conducted in an AR simulator. We found that simulator latency had a significant effect on 3D tracing performance; however, there was no interaction between simulator latency and artificial latency. Based on the results of these two experiments, we conclude that simulator latency is not inconsequential in determining task performance: simulating visual registration alone is not sufficient to reproduce the overall perception of registration errors in an AR system, and simulator latency must also be kept to a minimum. We discuss the impact of these results on the use of the AR simulation approach.

Experiment 1

The goal of Experiment 1 was to replicate an established AR study within our simulator as a step toward validating AR simulation. We chose to replicate the second experiment of Ellis et al., which showed that high-precision path tracing is most sensitive to increasing latency. The experimental design in the published work was described in great detail, which made this particular study well suited to our purposes. While our simulator and Ellis et al.'s AR system did differ, we attempted to match the system performance reported in the published work as closely as possible, with the aid of Dr. Ellis. Our hypothesis was that we could successfully replicate the results of the original experiment if we were able to replicate the level of immersion present in the original authors' system.

Given the nature of the experiment, we considered the visual fidelity of the display and the end-to-end latency to be the two most important immersion components. In the original experiment, participants were shown a very simple 3D path and ring, rendered in grayscale through a head-mounted display (HMD), and were asked to trace the path with the ring. It was important to restrict the field of view and the resolution to those of the original hardware in order to fully replicate what participants saw (with respect to the virtual scene). Although the real-world scenery was also shown at this low resolution in our study (unlike in the original), we did not consider this an important factor, since the objects of interest were all virtual. By restricting the field of view, we also ensured that users saw the same amount of the virtual path. Finally, by carefully controlling our own system's end-to-end latency, we could replicate the same overall latency on the virtual objects studied in the original work.
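
As a concrete illustration of the display restriction, the following is a minimal Vizard sketch (not the code used in the study); the resolution and field-of-view values are placeholders rather than the parameters of the original HMD.

    # Minimal sketch: restrict resolution and field of view to approximate a target HMD.
    # The numeric values are placeholders, not those of the original hardware.
    import viz

    TARGET_WIDTH, TARGET_HEIGHT = 640, 480   # placeholder HMD resolution (pixels)
    TARGET_VERTICAL_FOV = 30.0               # placeholder vertical field of view (degrees)

    viz.setDisplayMode(TARGET_WIDTH, TARGET_HEIGHT)   # render at the HMD's native resolution
    viz.go()

    # Constrain the rendered field of view so participants see no more of the
    # virtual path than they would have seen through the original display.
    aspect = float(TARGET_WIDTH) / TARGET_HEIGHT
    viz.MainWindow.fov(TARGET_VERTICAL_FOV, aspect)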

Large Ring Results

Small Ring Results

Experiment 2

In Experiment 1, we replicated a prior study as closely as possible and obtained similar, although not identical, results. We discussed reasons for the differences between our results and those of Ellis et al., and concluded that the study supports the validity of AR simulation but does not constitute a proof of validity. In general, we want to know in which situations an experiment using an AR simulator can be trusted to provide valid results applicable to real AR systems, and what characteristics of the simulator itself might have an unintended effect on the results. For Experiment 1, there was an additional latency effect that was not present in Ellis et al.'s experiment: the latency applied to the simulated real world. What effect might this latency have had on the results of our experiment? We examined this question in Experiment 2.

To investigate this effect, we separated the end-to-end latency of Experiment 1 into two components: simulator latency and artificial latency. It is simulator latency that makes AR simulators inherently different from real AR systems, and this subtle but important point must be well understood. A real optical see-through AR system would not exhibit any latency on the real-world scenery. A video see-through AR system would have a small but nonzero latency on the real-world scenery due to the video delay. In an AR simulator, all parts of the scene, including the simulated real world, are subject to the base latency of the simulator.

We define this base latency as simulator latency; it consists of tracker latency, computation time, rendering time, and display time. In our simulator, this amounted to 50 ms of latency. Since we wanted to see how this could have affected our results from Experiment 1, we needed to be able to vary this value in order to evaluate multiple simulator latencies. As shown in Figure 5, we achieved this by simply adding an additional delay on top of the base end-to-end latency of our simulator. All simulated real objects would then incur a delay equal to the new simulator latency sum. Increasing the simulator latency would cause the simulated real world and the simulated real hand to lag and swim, and would also add to the delay of the virtual objects.
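
One simple way to realize such a delay, sketched below in plain Python (independent of Vizard and of any particular tracker), is to buffer time-stamped tracker samples and render each simulated object using the sample from the desired amount of time in the past. The class and names are illustrative, not taken from our experiment code.

    import time
    from collections import deque

    class DelayedPose(object):
        """Replays tracker samples with a fixed delay, emulating added simulator latency."""

        def __init__(self, delay_seconds):
            self.delay = delay_seconds
            self.samples = deque()   # (timestamp, pose) pairs, oldest first

        def push(self, pose, timestamp=None):
            """Record the newest tracker sample."""
            self.samples.append((time.time() if timestamp is None else timestamp, pose))

        def get(self, now=None):
            """Return the newest sample that is at least `delay` seconds old."""
            now = time.time() if now is None else now
            target = now - self.delay
            # Drop samples we no longer need, but keep at least one so the last
            # valid (sufficiently old) pose can still be returned on later calls.
            while len(self.samples) >= 2 and self.samples[1][0] <= target:
                self.samples.popleft()
            if self.samples and self.samples[0][0] <= target:
                return self.samples[0][1]
            return None   # not enough history accumulated yet

Each frame, the current head and hand samples are pushed, and every simulated-real object is drawn with the pose returned by a buffer configured to the desired simulator latency.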

Artificial latency is the latency difference between the virtual objects and the (simulated) real world. An AR simulator uses artificial latency to simulate the end-to-end latency of the real AR system being modeled: in that real system, virtual objects would incur a latency equal to the base end-to-end latency of that particular system. In this experiment, virtual objects incur a latency equal to the sum of the artificial latency (which is nonzero) and the simulator latency (which is, at minimum, the base system latency). We hypothesized that simulator latency would have a smaller effect than artificial latency on the task from Experiment 1, because we felt that the visual mis-registration between the simulated real hand and the virtual ring (caused by artificial latency) was the main factor influencing performance.
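
The bookkeeping is therefore additive. The sketch below illustrates it with example numbers; only the 50 ms base latency comes from our system, while the added simulator delay and artificial latency values are hypothetical.

    # Illustrative latency bookkeeping for one condition (all values in milliseconds).
    BASE_SIMULATOR_LATENCY = 50    # measured base end-to-end latency of our simulator
    ADDED_SIMULATOR_DELAY = 100    # extra delay injected for a higher-latency condition (example)
    ARTIFICIAL_LATENCY = 80        # end-to-end latency of the real AR system being simulated (example)

    # Everything the simulator renders, including the simulated real world and the
    # simulated real hand, lags the user's true motion by the simulator latency.
    simulator_latency = BASE_SIMULATOR_LATENCY + ADDED_SIMULATOR_DELAY

    # Virtual objects (the path and the ring) additionally lag by the artificial latency.
    virtual_object_latency = simulator_latency + ARTIFICIAL_LATENCY

    print("simulated real world lag: %d ms" % simulator_latency)      # 150 ms
    print("virtual object lag: %d ms" % virtual_object_latency)       # 230 ms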

Results

Source Code

The source code can be found here. The code is written for the Vizard development environment and is primarily Python. The code for the two experiments is placed in separate folders. To run this code, you will need a full license for Vizard 3.0 or above. You will also need to set your display settings to horizontal span.

Models

There were five different objects used in this experiment. Two of them, the virtual hand and the lab model, are proprietary and could not be included here. The two tori (rings) and the paths we generated can be found here.

Raw Results

The raw results from both Experiment 1 and Experiment 2 can be found here. The files are tab-delimited, with the first row containing the column headers; the meaning of each column should be clear from its header.
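
For example, the files can be read with Python's standard csv module; the file name below is a placeholder for whichever results file is downloaded.

    import csv

    # Placeholder file name; substitute the actual downloaded results file.
    with open("experiment1_results.txt", "r") as f:
        reader = csv.DictReader(f, delimiter="\t")   # the first row supplies the column headers
        rows = list(reader)

    print("%d trials" % len(rows))
    print(rows[0])   # each row maps a column header to its value (as a string)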

Publications

C. Lee, S. Bonebrake, T. Höllerer, and D. A. Bowman. A Replication Study Testing the Validity of AR Simulation in VR for Controlled Experiments. Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), 2009.

C. Lee, S. Bonebrake, T. Höllerer, and D. A. Bowman. The Role of Latency in the Validity of AR Simulation. Proceedings of IEEE Virtual Reality (VR), 2010.