Technical Commission IV (B4)

VERIFICATION OF IMAGE BASED AUGMENTED REALITY 
FOR URBAN VISUALIZATION 
Takashi Fuse *, Shoya Nishikawa, Yuki Fukunishi *
* Dept. of Civil Engineering, University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo, 113-8656, Japan
fuse@civil.t.u-tokyo.ac.jp
Commission IV, WG IV/4 
KEY WORDS: Augmented Reality, Visualization, Urban, Close Range, Robotics, Navigation 
ABSTRACT: 
Recently, visualization of urban scenes together with various kinds of information has attracted attention. Virtual reality has been widely used for the transmission of urban scenes, but since it requires comprehensive and detailed three-dimensional models, the manual modelling involved takes a great deal of time and effort. Alternatively, various data can be superimposed on the scene that the user is viewing at the moment, instead of modelling the environment comprehensively; this technique is well known as augmented reality (AR). Simultaneous localization and mapping (SLAM) using simple video cameras has been attempted for AR. This method estimates the exterior orientation parameters of the camera and the three-dimensional reconstruction of feature points simultaneously. The method, however, has been applied only to small indoor spaces. This paper investigates the applicability of a popular SLAM method to wide outdoor spaces and improves the stability of the method. In the outdoor application, the number of successfully tracked feature points is greatly reduced compared with indoor environments. Based on this experimental result, simple markers or GPS are introduced as auxiliary information: the markers stabilize the optimization, and GPS gives real scale to the AR space. Additionally, the feature point tracking method is modified by taking into account the amplitude of displacement and the depth. The effects of the markers and GPS are confirmed, while some limitations of the method are also identified. As a result, more impressive visualization can be accomplished.
1. INTRODUCTION 
Recently, visualization of urban scenes together with various kinds of information has attracted attention from the perspective of landscape simulation, robot navigation and so on. Virtual reality has been widely used for the transmission of urban scenes. Since virtual reality requires comprehensive and detailed three-dimensional models, the manual modelling involved takes a great deal of time and effort. Alternatively, attempts have been made to superimpose various data on the scene that the user is viewing at the moment, instead of modelling the environment comprehensively. This technique is well known as augmented reality (AR).
AR uses sequential images taken from the user's own viewpoint as the environmental scene, so the reality of the visualization increases compared with virtual reality. So far, a popular application of AR has been the superimposition of tags on sequential images based on GPS and an electronic compass. Such an application cannot superimpose three-dimensional models such as CAD or CG models, because the exterior orientation parameters of the platform are not accurate enough. To employ such three-dimensional models in AR, expensive magnetic field sensors must be installed in the environment; the resulting system is large in scale, and its applicability is therefore restricted.
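The sensitivity of superimposition to pose accuracy can be illustrated with a minimal pinhole-projection sketch (the focal length, tag position, and 1-degree compass error below are illustrative assumptions, not values from this paper): even a small heading error displaces a superimposed tag by many pixels, which may be tolerable for a text tag but prevents accurate registration of a 3D model.

```python
import numpy as np

def project(point_world, R, t, f, cx, cy):
    """Collinearity-style projection: world point -> pixel coordinates,
    given exterior orientation (R, t) and intrinsics (f, cx, cy)."""
    p = R @ (point_world - t)          # transform into the camera frame
    return np.array([f * p[0] / p[2] + cx,
                     f * p[1] / p[2] + cy])

tag = np.array([10.0, 2.0, 50.0])      # a geo-tag 50 m ahead of the camera
R, t = np.eye(3), np.zeros(3)          # true pose: camera at the origin
u_true = project(tag, R, t, 1000.0, 640.0, 480.0)

# A 1-degree heading error, typical of an electronic compass:
th = np.deg2rad(1.0)
R_err = np.array([[np.cos(th), 0.0, -np.sin(th)],
                  [0.0,        1.0,  0.0],
                  [np.sin(th), 0.0,  np.cos(th)]])
u_err = project(tag, R_err, t, 1000.0, 640.0, 480.0)
print(np.linalg.norm(u_true - u_err))  # ~18 px of drift at this range
```

An image-based method that estimates the exterior orientation from the images themselves avoids this dependence on coarse compass and GPS readings.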
To address this problem, simultaneous localization and mapping (SLAM) has been developed using simple video cameras. This method estimates the exterior orientation parameters of the camera and the three-dimensional reconstruction of feature points simultaneously. The method, however, has been applied only to small indoor spaces.
This paper investigates the applicability of the method to wide outdoor spaces and improves the stability of the method.
2. SIMULTANEOUS LOCALIZATION AND MAPPING 
SLAM was initially developed in the field of robotics. The SLAM problem arises when the robot has access neither to a map of the environment nor to its own pose. In SLAM, the robot therefore acquires a map of its environment while simultaneously localizing itself relative to this map (Thrun et al., 2006).
There are two main forms of SLAM. One is known as online SLAM: it involves estimating the posterior probability over the momentary pose along with the map. Many algorithms for online SLAM are incremental; specifically, they discard past measurements once they have been processed. The other is known as full SLAM. In full SLAM, we seek to calculate the posterior probability over the entire path along with the map, instead of just the current pose. Assuming the probability distribution is normal, the estimation of the posterior probability reduces to a least squares method. In the sense of bundle adjustment in photogrammetry, online and full SLAM correspond to recursive (or local) and global bundle adjustment, respectively. In the field of robotics, real-time processing is required, so in practice online SLAM has been widely used.
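The correspondence between full SLAM and global bundle adjustment can be sketched as a nonlinear least squares problem. The toy 2D scene, the bearing-style measurement model, and the use of scipy.optimize.least_squares below are illustrative assumptions: the point is that all camera poses and all landmark coordinates are refined jointly by minimizing measurement residuals over the entire path.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
cams_true = np.array([[0.0, -10.0], [2.0, -10.0], [4.0, -10.0]])  # 3 poses
lms_true = rng.uniform(-5.0, 5.0, size=(6, 2))                    # 6 landmarks

def observe(cams, lms):
    """One 'image' measurement per camera-landmark pair:
    lateral offset divided by depth (a normalized image coordinate)."""
    d = lms[None, :, :] - cams[:, None, :]
    return (d[..., 0] / d[..., 1]).ravel()

# Simulated noisy measurements of every landmark from every pose
z = observe(cams_true, lms_true) + 1e-3 * rng.standard_normal(3 * 6)

def residuals(x):
    # Unpack all poses and all landmarks: full SLAM estimates the whole path.
    return observe(x[:6].reshape(3, 2), x[6:].reshape(6, 2)) - z

x0 = np.concatenate([cams_true.ravel(), lms_true.ravel()])
x0 += 0.1 * rng.standard_normal(x0.size)   # perturbed initial guess
res = least_squares(residuals, x0)         # global (batch) adjustment
print(np.sum(residuals(res.x) ** 2))       # residuals near the noise floor
```

Online SLAM would instead update only the latest pose and discard old measurements, as noted above. Without external control information the solution is determined only up to a datum (gauge) transformation, which is one reason auxiliary markers or GPS are useful.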
There are two popular techniques for online SLAM: EKF 
SLAM and FastSLAM. The EKF SLAM algorithm is based on 