Both a photogrammetric and a laser scanner survey of the frescoed room were carried out. Images of the walls were acquired with a multi-megapixel digital camera, including a limited number of shots taken with a fish-eye lens. The Riegl laser scanner was then employed to create the range model: its beam is deflected by rotating mirrors over the instrument's field of view (FOV), acquiring several thousand points per second and recording, besides the X, Y and Z coordinates of each point, the intensity of the reflected signal. The results of the two surveys, the modelling and texturing of the room, and their comparison are presented in the following sections.
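As a rough illustration of how such a scanner's raw observations (a range and two deflection angles) become the delivered X, Y, Z coordinates, the short sketch below performs the polar-to-Cartesian conversion; the angle convention and all names are assumptions made for the example, not taken from the Riegl documentation.

```python
import numpy as np

def polar_to_cartesian(rng_m, horiz_deg, vert_deg):
    """Convert one scanner observation (range plus horizontal and vertical
    deflection angles) into Cartesian coordinates in the scanner frame.

    Assumed convention (illustrative only): the vertical angle is measured
    from the horizontal plane, the horizontal angle from the scanner X axis.
    """
    h = np.radians(horiz_deg)
    v = np.radians(vert_deg)
    x = rng_m * np.cos(v) * np.cos(h)
    y = rng_m * np.cos(v) * np.sin(h)
    z = rng_m * np.sin(v)
    return np.array([x, y, z])

# Example: a point 20 m away, 45 degrees to the left, 10 degrees above the horizon.
print(polar_to_cartesian(20.0, 45.0, 10.0))
```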
Coordinates of ground control points, as common points for the
comparison between the two models, were measured with a
Leica TCR 705 total station, featuring an EDM with two laser
beams: one, invisible, for conventional measurements to a prism,
and a second one, visible, for reflectorless surveys. About 70
control points were measured with the total station with an
accuracy of about 5 mm. In order to compare the geo-
referencing of the laser scanner model using both natural and
artificial targets, some of the control points (50) were chosen as
easily recognizable features on the frescoes, while a further 20
retro-reflective targets were placed mainly at the bottom of the
walls of the room, to avoid any damage to the frescoes, and
were surveyed in reflectorless mode (see figure 1). For an
exhaustive overview of both the Leica total station and the
Riegl laser scanner specifications see Tables 1 and 2.
Table 1. Leica Total Station specifications
MANUFACTURER: Leica
PRODUCT: TCR 705
Angle measurement: 5" (1.5 mgon)
Distance measurement: 3000 m (with reflector), 2 mm + 2 ppm; 170 m (w/o reflector), 3 mm + 2 ppm
Measuring time: <1 s typical (with reflector); 3-6 s (w/o reflector)
Recording: 78000 measurements and coordinates; RS-232 interface for external connection
Magnification: 30×
Plummet: laser, located in the alidade, turning with the instrument; accuracy ±0.8 mm at 1.5 m
Table 2. Riegl laser scanner specifications
MANUFACTURER: Riegl USA
PRODUCT: LMS-Z360
Laser wavelength: 904 nm
Laser power: 1 mW
FDA laser classification: Class 1
Beam diameter at specified distance: 20 mm at 50 m
Measurement technique: LiDAR
Average data acquisition rate: 8,000 pps
Maximum data acquisition rate: 12,000 pps
Distance accuracy at specified distance: 8 mm at 200 m
Position accuracy at specified distance: 8 mm at 100 m
Angular accuracy: 0.002°
Minimum range: 1 m
Maximum range: 300 m
Field of view (vertical): 30°
Field of view (horizontal): 360°
Minimum vertical scan increment: 0.002°
Minimum horizontal scan increment: 0.002°
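As a concrete, simplified illustration of how the laser scanner model can be geo-referenced and compared against the total-station control points described above, the sketch below estimates the rigid-body transformation (rotation and translation) that best maps points identified in the scanner model onto their total-station coordinates, using the standard SVD-based least-squares solution; the function and variable names are illustrative, and a uniform scale factor could be estimated in the same framework if needed.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t so that R @ src_i + t ~= dst_i.

    src, dst: (N, 3) arrays of corresponding points, e.g. control points
    identified in the laser-scanner model and their total-station coordinates.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def residuals(src, dst, R, t):
    """Per-point residuals after applying the estimated transformation."""
    return np.linalg.norm((src @ R.T + t) - dst, axis=1)
```

Run once with the 50 natural points and once with the 20 retro-reflective targets, the residuals at the common points give a simple numerical basis for comparing the two geo-referencing strategies.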
3. IMAGE-BASED MODELING
The modeling approach adopted at this stage is based on a
semi-automatic technique described in [El-Hakim et al., 2003]
and implemented in the ShapeCapture commercial software
[ShapeQuest, 2004]. The bundle adjustment in the software
was done with a free network, i.e. no control points were used at
this stage and the model scales were determined from a small
number of linear measurements (such as window dimensions)
collected while taking the images. The resulting geometric
models are shown in figure 2.
Figure 2: Wire-frame models by image-based method (room interior and entrance).
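Because the adjustment was run as a free network, the model initially has an arbitrary scale that is fixed only by the measured distances; the following sketch, with purely illustrative names (the actual ShapeCapture workflow may handle this internally), shows the scaling step for a single measured length.

```python
import numpy as np

def scale_free_network_model(vertices, model_pt_a, model_pt_b, measured_length_m):
    """Scale a free-network model so that the distance between two of its
    points matches a length measured on site (e.g. a window dimension).

    vertices: (N, 3) array of model coordinates in the arbitrary free-network scale.
    model_pt_a, model_pt_b: the two model points spanning the measured feature.
    measured_length_m: the corresponding real-world distance in metres.
    """
    model_length = np.linalg.norm(np.asarray(model_pt_a) - np.asarray(model_pt_b))
    return np.asarray(vertices) * (measured_length_m / model_length)

# With several measured lengths, the individual scale factors could be averaged
# (or a single scale solved for in a least-squares sense) instead.
```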
4. TEXTURE MAPPING
Achieving a high level of photorealism was another goal of the
project, in order to create extremely realistic visual
experiences. Given the nature of this frescoed room, at this
stage of the project effort has been spent identifying the main
factors affecting the visual quality of the models and the
performance of interactive visualization and to select the most
appropriate technology. Basically, the modelling and rendering
approaches used for this project can be summarized as follows:
1. Create a geometrically accurate model with an image-based
photogrammetric technique.
2. Divide the model into efficiently sized groups of triangles.
3. Select the best image for each group.
4. Compute texture coordinates of the vertices using the internal and
external camera parameters (a projection sketch is given after this list).
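Step 4 of the list above essentially projects each 3D vertex into the image chosen for its triangle group; the sketch below does this with a plain pinhole model, ignoring lens distortion, and all parameter names are assumptions for the example rather than the actual software's interface.

```python
import numpy as np

def texture_coordinates(vertex, R, camera_center, focal_px, principal_point, image_size):
    """Project a 3D vertex into an image and return normalized (u, v)
    texture coordinates, using a simple pinhole model without lens distortion.

    R, camera_center: exterior orientation (rotation matrix and projection centre).
    focal_px, principal_point: interior orientation expressed in pixels.
    image_size: (width, height) of the texture image in pixels.
    """
    # Transform the vertex into the camera coordinate frame.
    p_cam = R @ (np.asarray(vertex, dtype=float) - np.asarray(camera_center, dtype=float))
    # Perspective projection onto the image plane, in pixel coordinates.
    x = focal_px * p_cam[0] / p_cam[2] + principal_point[0]
    y = focal_px * p_cam[1] / p_cam[2] + principal_point[1]
    # Normalize to the [0, 1] texture space expected by most renderers.
    w, h = image_size
    return x / w, y / h
```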