RECONSTRUCTING TEXTURED CAD MODEL OF URBAN ENVIRONMENT USING 
VEHICLE-BORNE LASER RANGE SCANNER AND LINE CAMERAS 
Huijing Zhao, Ryosuke Shibasaki 
Center for Spatial Information Science, Univ. of Tokyo 
{chou,shiba}@skl.iis.u-tokyo.ac.jp
Working Group V 
KEY WORDS: Laser Scanning, Mobile, Reconstruction, Urban, CAD, Model 
ABSTRACT 
In this paper, a novel method is presented to generate a textured CAD model of an outdoor urban environment using a vehicle-borne sensor system. For data measurement, three single-row laser range scanners and six line cameras are mounted on a measurement vehicle equipped with a GPS/INS/Odometer-based navigation system. Laser range data and line images are measured as the vehicle moves. They are synchronized with the navigation system, so that they can be geo-referenced to a world coordinate system. Generation of the CAD model is conducted in two steps. A geometric model is first generated from the geo-referenced laser range data, where urban features such as buildings, the ground surface, and trees are extracted in a hierarchical way. Different urban features are represented with different geometric primitives such as planar faces, TINs, and triangles. Textures of the urban features are generated by projecting and re-sampling the line images onto the geometric model. An outdoor experiment was conducted, and a textured CAD model of a real urban environment was reconstructed in a fully automatic mode.
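As a purely illustrative aid to the geo-referencing step summarized above, the following Python sketch shows one way a single range sample from a vehicle-borne single-row scanner could be transformed into a world coordinate system using a navigation pose synchronized to the scan time. The frame conventions, mounting calibration values, and function names are assumptions for the sketch, not the authors' implementation.

```python
# Hedged sketch (not the authors' implementation): geo-reference one sample of a
# single-row laser range scan: scanner frame -> vehicle frame -> world frame.
import math
import numpy as np


def rotation_zyx(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Rotation matrix from yaw/pitch/roll (radians), Z-Y-X convention (assumed)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx


def scan_point_to_world(range_m, beam_angle, vehicle_pose,
                        scanner_offset, scanner_rotation):
    """Transform a (range, beam angle) sample into world coordinates."""
    # Point in the scanner frame: a single-row scanner sweeps one plane.
    p_scanner = np.array([range_m * math.cos(beam_angle),
                          range_m * math.sin(beam_angle),
                          0.0])
    # Scanner-to-vehicle mounting calibration (assumed known).
    p_vehicle = scanner_rotation @ p_scanner + scanner_offset
    # Vehicle-to-world pose from the GPS/INS/Odometer navigation solution,
    # interpolated to the time stamp of this scan line.
    r_world = rotation_zyx(vehicle_pose["yaw"], vehicle_pose["pitch"],
                           vehicle_pose["roll"])
    t_world = np.array([vehicle_pose["x"], vehicle_pose["y"], vehicle_pose["z"]])
    return r_world @ p_vehicle + t_world


if __name__ == "__main__":
    pose = {"x": 1000.0, "y": 2000.0, "z": 30.0,
            "yaw": math.radians(45.0), "pitch": 0.0, "roll": 0.0}
    mount_offset = np.array([0.5, 0.0, 2.0])   # hypothetical mounting offset
    mount_rotation = np.eye(3)                 # axes assumed aligned for brevity
    p = scan_point_to_world(12.3, math.radians(30.0), pose,
                            mount_offset, mount_rotation)
    print("world coordinates:", p)
```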
1 INTRODUCTION 
Up to now, many research groups in the photogrammetry community have devoted themselves to the analysis of aerial imagery for the reconstruction of 3D urban objects (e.g. Collins et al. 1994, Gruen 1998). Aerial surveys can cover relatively wide areas, but fail to capture details of urban objects such as the side walls (facades) of buildings. On the other hand, most of the existing systems in the computer vision field have been demonstrated at small scale, using simple objects, under controlled lighting conditions (e.g. Chen and Medioni 1992, Higuchi et al. 1995, Shum et al. 1994). With the development of automobile navigation systems, 3D GIS, and applications using virtual and augmented reality, details of urban outdoor objects have become important, as user viewpoints are located on the ground, not in the air. An efficient reconstruction method is therefore required that exploits ground-based survey techniques at large scale, for complicated and unpredictable object geometries, under uncontrolled lighting conditions.
1.1 Related Work
Several systems aiming at generating 3D models of the real world have been developed during the last few years. According to the major data source used for reconstructing object geometry, these systems can be broadly divided into two groups: image-based approaches and range-based approaches.

In the first group, the 3D model of an urban scene is reconstructed from still or moving images. Image-based approaches are also called indirect approaches, since object geometry has to be extracted automatically or with human assistance using stereo or motion techniques. Debevec et al. presented an interactive method for modeling and rendering architectural scenes from sparse sets of still photographs, where a large architectural environment can be modeled with far fewer photographs than with other fully automated image-based approaches. Bosse et al. 2000 developed a prototype system that automatically reconstructs a textured geometric CAD model of an urban environment from spherical mosaic images, where the position and orientation of the camera for each spherical image are first initialized using positioning sensors and then refined through image matching. The geometric representation is extracted either from feature correspondences or by identifying vertical facades. Uehara and Zen 2000 proposed a method for creating a textured 3D map from an existing 2D map using motion techniques, where a video camera is mounted on a calibrated vehicle and the captured image streams are geo-referenced to the existing 2D map using GPS data. These research efforts demonstrate that image-based approaches can be used to reconstruct 3D models of urban outdoor environments. However, the difficulties of reliable stereo matching, distortion caused by limited resolution, and the unstable geometry of CCD cameras remain major obstacles to reconstructing a 3D model of a complicated environment with the necessary accuracy and robustness.

In the second group, the 3D model of an urban scene is reconstructed from range images. Range-based approaches are also called direct approaches, since object geometry can be measured directly with range scanners. In recent years, with the development of eye-safe laser range scanners, reconstructing relatively large objects in urban environments from range data has become technically feasible. Sequeira et al. 1999 and El-Hakim et al. 1998 developed systems for reconstructing indoor environments of rather large scale. Stamos and Allen 2000 and Zhao and Shibasaki 2000 aimed at generating 3D models of urban outdoor objects. In these systems, the range scanners are mounted on stationary platforms (stationary systems). The range images produced by such systems are typically rectangular grids of range distances (or of 3D coordinates after conversion) from the sensor to the objects being scanned. Objects are measured from a number of viewpoints to reduce occlusions, where the locations and directions of the viewpoints are unknown or only roughly obtained using GPS, gyro sensors and/or other navigation systems. Range data