Stephen DiVerdi, Sehwan Kim, Taehee Lee, Jonathan Ventura, Jason Wither, Tobias Höllerer
We are conducting many different projects with the goal of Anywhere Augmentation. We introduced this term for the concept of building an AR system that works in arbitrary environments with no prior preparation. The main goal of this work is to lower the barrier to broad acceptance of augmented reality by expanding beyond research prototypes that only work in prepared, controlled environments. To this end we are building a framework that unites the resources commonly available to any Anywhere Augmentation user (rough global position tracking, local sensors, globally available GIS data, and user input) to address the two main problem areas hindering widespread applicability of this technology: generality and robustness. We are working in several areas toward our goal of Anywhere Augmentation, including tracking in unprepared environments, interface and interaction design, and application development.
Motivation and Details
Our approach to Anywhere Augmentation is based on combining wearable computing and augmented reality (AR). Instead of embedding computing and display equipment in the environment as in the case of ubiquitous computing, graphical annotations are overlaid on top of the environment by means of the user's own equipment. AR can be shown via optical see-through glasses or video overlay, which works with near-eye displays or hand-held devices such as smartphones.
Mobile and wearable computing technologies have found their way into mainstream industrial and governmental service applications over the past decade. They are now commonplace in the shipping and hospitality industries, as well as in mobile law enforcement, to highlight a few successful examples. However, current mobile computing solutions outside of research laboratories neither sense and adapt to the user's environment nor link the services they provide with the physical world.
The big difference between the successful demonstrations of AR in research prototypes and general use is that the prototypes operate in at least partially controlled environments. To obtain reliable and accurate registration between the physical world and the augmentations, one either needs a model of the environment, or the environment needs to be instrumented, at least passively, with registration markers. Both of these preconditions severely constrain the applicability of AR. Instrumentation of environments on a global scale is exceedingly unlikely to take place, and detailed 3D city and landscape models are very cumbersome to create. In addition, even if detailed 3D models of target environments existed on a broad scale, keeping them up to date would be a major challenge, and they would still not account for dynamic changes.
The problem with these high initial costs is that they form a barrier to entry for AR applications. Potential users are frequently turned away when it becomes apparent that using an AR system will require a few days of building, modeling, instrumenting, calibrating, and measuring, assuming all the required pieces of hardware are on hand. To create an active community of potential developers, it is important to foster experimentation with AR technologies by making them as simple as possible to use. Before widespread adoption can be expected, augmented reality must become a casual technology that people who are not experts in the field can try out themselves.
The research we are undertaking does not rely on any equipment in the environment (other than the satellites in the GPS constellation). Its focus is on the incremental generation of the necessary world models on the fly, in real time, while the system is in operation. This means that we need a simple initialization step, for which we take advantage of efficient computer-assisted interaction techniques to help establish the link and registration with the physical world.
Our concept of Anywhere Augmentation precludes us from relying on data that is not likely to become readily available worldwide, or at least nationwide, in the near future. We do propose to utilize several sources of GIS data for which there are already data repositories with nationwide coverage (e.g. aerial photography, elevation, land use, street maps, and the NGA names database). The approach does not depend on the existence of any of these data sources, but we will consider them when available, and their existence will improve the user experience by providing more information and stronger constraints for user interaction.
Ideally, an AR system developed with the goal of Anywhere Augmentation in mind could be used "out of the box" in a new environment with no setup necessary. However, it is reasonable to expect some small amount of initial effort, so long as it does not detract from the overall experience. Thus, we focus on systems whose preparation time is on the order of seconds, or at most a few minutes: enough time for quick calibration or semi-automatic acquisition of environment data, but far from enough for the careful measurement and setup work required by high-accuracy AR systems today.
We propose the development of efficient, intelligently constrained semi-automatic interaction tools that allow a mobile AR user to easily establish and correct registration between the physical world and virtual augmentations. Furthermore, we propose techniques to automatically model the user's environment on the fly, geometrically and radiometrically, and we will leverage these models for improved AR interactions: matching the appearance of virtual geometry with the natural lighting conditions and controlling the layout and placement of 2D and 3D annotations.
Using Aerial Photographs for Improved Mobile AR Annotation
We present a mobile augmented reality system for outdoor annotation of the real world. To reduce user burden, we use aerial photographs in addition to the wearable system's usual data sources (position, orientation, camera, and user input). This allows the user to accurately annotate 3D features with only a few simple interactions from a single position, by aligning features in both their first-person viewpoint and in the aerial view. We examine three types of aerial photograph features (corners, edges, and regions) that are suitable for a wide variety of useful mobile augmented reality applications and are easily visible on aerial photographs. By using aerial photographs in combination with wearable augmented reality, we are able to achieve much more accurate 3D annotation positions than was previously possible from a single user location.
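The geometric core of this single-position technique can be sketched as follows: the user's GPS position and compass bearing define a ray in the aerial photograph's ground plane, and intersecting that ray with an edge feature marked on the photo yields a world position for the annotation. The sketch below illustrates only that intersection step; the function name, the flat-ground assumption, and the local metric coordinates are ours for illustration, not the system's actual implementation.

```python
import math

def ray_segment_intersection(origin, bearing_deg, seg_a, seg_b):
    """Intersect a 2D ground-plane ray with an aerial-photo edge segment.

    origin      -- the user's position (x east, y north) in local metric
                   coordinates, e.g. derived from GPS
    bearing_deg -- compass bearing in degrees, clockwise from north
    seg_a/seg_b -- endpoints of the edge marked on the aerial photograph

    Returns the intersection point as (x, y), or None if the ray
    misses the segment.
    """
    # Compass bearing to direction vector: 0 deg = +y (north), 90 deg = +x (east).
    theta = math.radians(bearing_deg)
    d = (math.sin(theta), math.cos(theta))
    # Edge direction, and vector from the user to the edge's first endpoint.
    e = (seg_b[0] - seg_a[0], seg_b[1] - seg_a[1])
    w = (seg_a[0] - origin[0], seg_a[1] - origin[1])
    denom = d[0] * e[1] - d[1] * e[0]
    if abs(denom) < 1e-12:
        return None  # ray is parallel to the edge
    # Solve origin + t*d = seg_a + s*e for t (along ray) and s (along edge).
    t = (w[0] * e[1] - w[1] * e[0]) / denom
    s = (w[0] * d[1] - w[1] * d[0]) / denom
    if t < 0.0 or not (0.0 <= s <= 1.0):
        return None  # intersection behind the user, or beyond the edge's ends
    return (origin[0] + t * d[0], origin[1] + t * d[1])
```

A corner feature needs no such intersection (the marked corner already has ground coordinates in the georeferenced photo), while a region feature can reuse the same routine once per boundary edge.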
Publications
J. Wither, S. DiVerdi, and T. Höllerer. Annotation in Outdoor Augmented Reality. Computers & Graphics, in press (available online 17 June 2009).
S. DiVerdi, J. Wither, and T. Höllerer. All Around the Map: Online Spherical Panorama Construction. Computers & Graphics, 33(1), February 2009, pp. 73-84.
T. Lee and T. Höllerer. Multithreaded Hybrid Feature Tracking for Markerless Augmented Reality. IEEE Transactions on Visualization and Computer Graphics, 15(3), May 2009, pp. 355-368.
J. Wither, C. Coffin, J. Ventura, and T. Höllerer. Fast Annotation and Modeling with a Single-Point Laser Range Finder. Proc. ACM/IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Sept. 15-18, 2008, pp. 65-68.
S. DiVerdi and T. Höllerer. Heads Up and Camera Down: A Vision-Based Tracking Modality for Mobile Mixed Reality. IEEE Transactions on Visualization and Computer Graphics, 14(3), May/June 2008, pp. 500-512.
S. DiVerdi, J. Wither, and T. Höllerer. Envisor: Online Environment Map Construction for Mixed Reality. Proc. IEEE VR 2008 (10th Int'l Conference on Virtual Reality), Reno, NV, March 8-12, 2008, pp. 19-26. Best Paper Honorable Mention.
T. Lee and T. Höllerer. Hybrid Feature Tracking and User Interaction for Markerless Augmented Reality. Proc. IEEE VR 2008 (10th Int'l Conference on Virtual Reality), Reno, NV, March 8-12, 2008, pp. 145-152. Best Paper Nominee.
S. DiVerdi. Towards Anywhere Augmentation. Doctoral Thesis, University of California, Santa Barbara, 2007.
T. Lee and T. Höllerer. Handy AR: Markerless Inspection of Augmented Reality Objects Using Fingertip Tracking. Proc. IEEE International Symposium on Wearable Computers (ISWC), Boston, MA, Oct. 2007.
T. Lee and T. Höllerer. Initializing Markerless Tracking Using a Simple Hand Gesture. Proc. IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR), Nara, Japan, Nov. 2007.
J. Wither, S. DiVerdi, and T. Höllerer. Evaluating Display Types for AR Selection and Annotation. Proc. International Symposium on Mixed and Augmented Reality (ISMAR), Nara, Japan, Nov. 2007.
J. Wither, S. DiVerdi, and T. Höllerer. Using Aerial Photographs for Improved Mobile AR Annotation. Proc. International Symposium on Mixed and Augmented Reality (ISMAR), Santa Barbara, CA, Oct. 22-25, 2006.
J. Wither and T. Höllerer. Pictorial Depth Cues for Outdoor Augmented Reality. Proc. International Symposium on Wearable Computers (ISWC), Oct. 2005. Best Paper Nominee.
J. Wither. Interaction and Annotation at a Distance in Outdoor Augmented Reality. Master's Thesis, Sept. 2005.
J. Wither and T. Höllerer. Evaluating Techniques for Interaction at a Distance. Proc. International Symposium on Wearable Computers (ISWC), Nov. 2004.