In some cases, photographs were taken from a slanted angle. All of the photographs were taken as single images, not stereo pairs, since we had only one DCS Pro Back.
4. DATA PROCESSING
4.1 3D Models from Point Clouds
To align and merge the multiple 3D images into 3D polygonal models, Polyworks from InnovMetric was used. First, the images were transformed from the local coordinate system into a global coordinate system, based on the coordinates of the reference points included in the images, using a function provided by LPMSCAN (Riegl's software for controlling the scanner). The coordinates of the reference points were obtained from a conventional topographic survey. Secondly, the transformed images were imported into Polyworks and merged into one polygonal model. The polygonal model of Church II consisted of about 2,560,000 polygons in the POL format, with a file size of approximately 70 MB. The file would be even larger if the model were saved in VRML format.
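The geo-referencing step can be illustrated with a short sketch. The following is a minimal example (not the LPMSCAN or Polyworks implementation) of estimating the rigid rotation and translation that maps the reference points measured in the scanner's local frame onto their surveyed global coordinates, and then applying that transform to a whole scan. The file names and array layouts are assumptions for illustration.

```python
# Minimal sketch: fit a rigid transform from >=3 corresponding reference
# points and apply it to a scan. Not the LPMSCAN/Polyworks implementation.
import numpy as np

def rigid_transform(local_refs: np.ndarray, global_refs: np.ndarray):
    """Least-squares fit of R, t so that R @ local + t ~= global.
    Both inputs are (N, 3) arrays of corresponding reference points."""
    c_local = local_refs.mean(axis=0)
    c_global = global_refs.mean(axis=0)
    H = (local_refs - c_local).T @ (global_refs - c_global)      # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = c_global - R @ c_local
    return R, t

# Hypothetical inputs: surveyed targets and the same targets found in the scan.
local_refs = np.loadtxt("refs_local.txt")    # (N, 3), scanner coordinates
global_refs = np.loadtxt("refs_global.txt")  # (N, 3), survey coordinates
scan = np.loadtxt("scan_points.txt")         # (M, 3), point cloud to transform

R, t = rigid_transform(local_refs, global_refs)
scan_global = scan @ R.T + t                 # scan expressed in the global frame
np.savetxt("scan_global.txt", scan_global)
```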
Since the size of the point-cloud data for the 3D model of
Church III was huge, we first partitioned the church into several
sections, i.e., the apse, the south wall, and other areas, and then
created a 3D model corresponding to each section. These
separate models were then merged into one. Some section
models were based on the original point-cloud data and others
used the reduced data. The total size of the 3D polygonal model
of Church III was approximately 6,300,000 polygons in the
POL format. The file was about 174 MB in size.
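As an illustration of this partition-then-reduce strategy, the sketch below splits a large cloud into axis-aligned sections and subsamples one of them before meshing. The bounding boxes and reduction factor are hypothetical, not the values actually used in the Polyworks workflow.

```python
# Minimal sketch of partitioning a large cloud into sections, with optional
# subsampling where the full density is not needed. Values are illustrative.
import numpy as np

def crop(points: np.ndarray, lo, hi) -> np.ndarray:
    """Keep the points inside an axis-aligned bounding box [lo, hi]."""
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

def decimate(points: np.ndarray, keep_every: int = 4) -> np.ndarray:
    """Crude uniform subsampling for sections modelled from reduced data."""
    return points[::keep_every]

cloud = np.loadtxt("church3_global.txt")     # merged, geo-referenced cloud

sections = {
    "apse":       crop(cloud, lo=(0, 0, 0),  hi=(12, 10, 15)),   # hypothetical boxes
    "south_wall": crop(cloud, lo=(0, 10, 0), hi=(40, 14, 15)),
}
sections["south_wall"] = decimate(sections["south_wall"])        # reduced data

# Each section would then be meshed separately and the meshes merged.
for name, pts in sections.items():
    np.savetxt(f"section_{name}.txt", pts)
```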
Figure 3. 3D model of Church II: close-up of apse
Figures 3 and 4 show 3D models of Church II and Church III,
respectively. Note that these models have no texture, but color
information is attached to the vertices. This appearance comes from the per-vertex color captured by the LPM-25HA scanner, which differs considerably from the original color.
Figure 4. 3D model of Church III: bird's-eye view (above) and close-up of west wall (below)
4.2 Point Cloud with Color Information from Photographs
RGB images captured by laser scanners are usually not of the quality required by many applications, and some scanners cannot acquire color images at all. Therefore, other methods of enhancing the representation of a scanning target's appearance are needed. One popular technique is texture mapping; another is the use of point clouds with a color per vertex.
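As a concrete illustration of the per-vertex color representation, the sketch below writes a point cloud with an RGB value attached to every vertex to a standard ASCII PLY file; the input arrays are assumed to come from earlier processing.

```python
# Minimal sketch: each point carries its own RGB value, so no texture image
# or UV mapping is needed. The input arrays are illustrative assumptions.
import numpy as np

def write_colored_ply(path: str, xyz: np.ndarray, rgb: np.ndarray) -> None:
    """Write an (N, 3) float point array and (N, 3) uint8 color array to PLY."""
    header = "\n".join([
        "ply", "format ascii 1.0",
        f"element vertex {len(xyz)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for (x, y, z), (r, g, b) in zip(xyz, rgb.astype(np.uint8)):
            f.write(f"{x:.4f} {y:.4f} {z:.4f} {r} {g} {b}\n")
```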
Point-based rendering and modelling are currently important
research topics in computer graphics (Pauly et al. 2003;
Zwicker et al. 2001; Rusinkiewicz et al. 2000). This is because
laser scanning technology is making it easier to obtain dense
point clouds. Hence, the size of the polygon models being
output has become larger and larger. However, the tools and
infrastructure for handling large 3D polygonal models are not
yet sufficiently developed. Rendering and modeling using point
clouds is attractive since it reduces the data size drastically
compared to polygonal models and would therefore be suitable
for use on the Internet.
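A rough back-of-the-envelope calculation illustrates why: a triangle mesh must store connectivity in addition to the vertices, whereas a colored point cloud stores only the vertices themselves. The vertex count and storage layouts below are assumptions for illustration, not the actual POL or VRML file structures.

```python
# Assumed layouts: xyz as 3 floats per vertex, RGB as 3 bytes per vertex,
# 3 vertex indices (4 bytes each) per triangle for the mesh connectivity.
n_vertices  = 1_300_000                      # hypothetical vertex count
n_triangles = 2_560_000                      # roughly the Church II model

mesh_bytes  = n_vertices * 3 * 4 + n_triangles * 3 * 4   # vertices + connectivity
cloud_bytes = n_vertices * (3 * 4 + 3)                   # colored vertices only

print(f"mesh  ~{mesh_bytes  / 1e6:.0f} MB")   # ~46 MB
print(f"cloud ~{cloud_bytes / 1e6:.0f} MB")   # ~20 MB
```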
We tried to improve the quality of the color information included in the 3D images scanned by the LPM-25HA laser scanner by using the high-resolution images taken by the DCS Pro Back. Each photograph was analyzed together with the corresponding 3D
image. The user specified more than six matching points in both
a photograph and a 3D image so that the photograph was given