Full text: Proceedings; XXI International Congress for Photogrammetry and Remote Sensing (Part B5-2)

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B5. Beijing 2008 
no difference between a view rendered from the model and a 
photograph taken from the same viewpoint, is generally 
required and obtained with the texture mapping phase. This is 
generally referred to as appearance modeling. Photo-realism 
goes much further than simply projecting a static image over 
the 3D geometry. Due to variations in lighting, surface 
specularity and camera settings, colour and intensity of an area 
shown in images taken from separate positions will not match. 
Measurements of the surface reflection properties (BRDF) and 
photometric measurements of the illumination should also be 
included for better texture modeling. The images are exposed with 
whatever illumination existed at imaging time. This 
illumination may need to be replaced by illumination consistent 
with the rendering point of view and the reflectance properties 
(BRDF) of the object. Also the range of intensity, or dynamic 
range, in the scene can sometimes not be captured in a single 
exposure by current digital cameras. This causes loss of details 
in the dark areas and/or saturation in the bright areas, if both 
coexist in the scene. It is thus important to acquire high 
dynamic range (HDR) images to recover all scene details 
(Reinhard et al., 2005), e.g. by taking multiple images using 
different exposure times. 
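The multiple-exposure idea can be illustrated with a minimal radiance-merging sketch (not the method of the cited reference), assuming a linear sensor response, known exposure times and intensities normalised to [0, 1]:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed images of a static scene into one
    high-dynamic-range radiance map.

    images: list of float arrays in [0, 1] (linearised intensities)
    exposure_times: matching list of exposure times in seconds

    Each pixel's radiance is estimated as a weighted average of
    intensity / exposure_time, where a hat-shaped weight favours
    well-exposed pixels and down-weights under- and over-exposed ones.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # weight: 1 at 0.5, 0 at 0 or 1
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-9)
```

A pixel saturated in the long exposure thus receives zero weight there and is recovered from the shorter exposure, which is exactly the situation with coexisting dark and bright scene areas described above.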
3.4 Visualisation of the 3D results 
The ability to interact easily with a huge 3D model is a 
continuing and growing problem. Indeed, model sizes are 
increasing at a faster rate than computer hardware advances, and 
this limits the possibilities of interactive and real-time 
visualization of the 3D results. The rendering algorithm should 
be capable of delivering images at real-time frame rates (at least 
20 frames per second), even at full resolution for both geometry 
and texture. For large models, a level-of-detail (LOD) approach 
should be used to maintain seamless continuity between adjacent frames. 
Luebke et al. (2002) and Dietrich et al. (2007) give a good 
overview of this problem. 
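A common LOD strategy is to pick, per frame, the coarsest representation whose geometric error projects to less than about one pixel on screen. A minimal sketch of such a selection rule (error values and camera parameters in the example are illustrative, not taken from the cited systems):

```python
import math

def select_lod(geometric_errors, distance, fov_deg, screen_height_px,
               max_pixel_error=1.0):
    """Pick the coarsest level of detail whose object-space geometric
    error, projected onto the screen, stays below a pixel threshold.

    geometric_errors: per-level errors (same units as distance),
    level 0 = finest; errors grow with the level index.
    """
    # object-space size of one pixel at the given viewing distance
    metres_per_px = (2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
                     / screen_height_px)
    chosen = 0
    for level, err in enumerate(geometric_errors):
        if err / metres_per_px <= max_pixel_error:
            chosen = level  # this coarser level is still within tolerance
    return chosen
```

As the viewer moves away, `metres_per_px` grows and coarser levels become acceptable, which is what keeps the frame rate stable for large models.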
4. IMAGE-BASED MODELING OF THE 
ERECHTHEION 
4.1 Image acquisition and orientation 
The image data were acquired with two digital SLR cameras: (i) 
a Canon 5D (12 MPixel) equipped with a 24 mm lens and 8.2 
microns sensor pixel size; (ii) a Mamiya ZD Digital Back (22 
MPixel) equipped with 45 mm lens and 9 microns sensor pixel 
size. The Mamiya was used only at some test sites, with a ground 
sampling distance (GSD) of 0.5 - 0.8 mm. The Canon was used 
for imaging and modeling the majority of the monument with a 
quite varying GSD. Both cameras were pre-calibrated in the lab, 
using the software iWitness (www.photometrix.com.au). The 
calibration of the Mamiya, however, predated the image 
acquisition considerably and was therefore less accurate. The Mamiya images were 
employed mainly for modelling of the whole Acropolis from a 
balloon (will not be covered here) and in this work for research 
purposes, to test the potential of large format (48x36 mm) CCD 
cameras and to compare the image matching results from the 
two cameras with each other and with the results from laser 
scanning. Thus, six test sites were selected for the Mamiya and, 
at each site, five images were taken, of which only the central 
three were used for matching (one frontal and two convergent, with an angle of about 22.5 deg). 
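The quoted GSD values follow from the standard image-scale relation GSD = pixel size × object distance / focal length. A small helper illustrates this; only the sensor and lens figures come from the text, while the object distances in the example are hypothetical:

```python
def gsd_mm(pixel_size_um, focal_length_mm, distance_m):
    """Ground sampling distance (GSD) of a frame camera, in mm.

    GSD = pixel size * image scale, with scale = distance / focal length.
    """
    scale = (distance_m * 1000.0) / focal_length_mm  # object-to-image scale
    return pixel_size_um * 1e-3 * scale              # microns -> mm
```

For the Mamiya (9 micron pixels, 45 mm lens), a 0.5 mm GSD corresponds to an object distance of about 2.5 m; the Canon (8.2 micron pixels, 24 mm lens) reaches the same GSD only at roughly 1.5 m.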
For the Canon images, only a few signalised and geodetically 
measured control points existed, due to restrictions on placing 
targets on a historic monument and the difficulty of accessing 
the highest parts of the monument. These points were used to georeference 
the final 3D model. For the Mamiya images, only a signalised 
scale existed (see Figure 8) and the surface models from these 
images were transformed to the model from laser scanning via 
the procedure described in Section 4.3. For practical reasons 
(mainly for manual modeling and use of few images), most of 
the images were acquired with wide baseline and relatively 
large convergent angles. This resulted in significant occlusions 
and light variations between the images. 
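Registering an image-based surface model to the laser-scanning model amounts, in the simplest case, to estimating a 3D similarity (Helmert) transform from corresponding points. A generic least-squares sketch in the style of Umeyama's closed-form SVD solution follows; it is not the procedure actually used here, which is described in Section 4.3:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 3D similarity (Helmert) transform mapping src
    points onto dst points: dst ~ s * R @ src + t.

    src, dst: (N, 3) arrays of corresponding points, N >= 3,
    not all collinear.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)        # cross-covariance matrix
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt)) # guard against a reflection
    R = U @ D @ Vt                           # rotation
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()  # scale
    t = mu_d - s * R @ mu_s                  # translation
    return s, R, t
```

With the signalised scale bar fixing the scale and enough well-distributed correspondences, such a transform brings the matched surface into the laser-scan reference frame.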
4.2 Image quality and pre-processing 
Figure 3. Top: original image with significant 
pattern noise visible in homogeneous areas and 
unnatural colour. Bottom: pre-processed image with 
noise reduction. The colour saturation was also 
reduced and the brightness increased to generate 
more “natural” images (this, however, does not 
influence matching). The images show a part 
of the scale bar used. 
Typically, before applying the ETH matcher (Section 4.3) a 
Wallis filter (Wallis, 1976) is used to enhance the texture, 
brighten-up the shadow regions and radiometrically equalize 
the images, making matching easier. If the images are noisy, 
before Wallis, the noise is first reduced. While this was not 
necessary for the Canon, the Mamiya had significant pattern noise. 
In modern cameras, sharpening functions are often applied to 
make the image visually more appealing; this, however, increases 
the noise and introduces edge artefacts, both detrimental to 
automated image-based measurements. Thus, for the Mamiya, a 
strong noise smoothing was first applied, but only to the 
lightness component, not to the colour. Figure 3 shows a part of the 
original and pre-processed images. After this processing the R, 
G, B channels were inspected visually, and their histogram 
statistics examined. It turned out that the blue channel had the least 
noise, slightly better contrast and better definition of edges, 
with R being the worst channel. Thus, the B channel was 
selected for further pre-processing and matching.
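A Wallis filter drives the local mean and standard deviation of the image towards chosen target values, which brightens shadow regions and equalises contrast across the image. A minimal sketch follows; the window size, target values and weights are illustrative, not those actually used with the ETH matcher:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(img, win=31, target_mean=127.0, target_std=50.0,
                  brightness=0.6, contrast=0.9):
    """Wallis filter: push the local mean and standard deviation of a
    grey-value image (e.g. 0..255) towards target values, enhancing
    texture in dark and low-contrast regions.

    brightness, contrast: weights in [0, 1] blending the targets with
    the original local statistics.
    """
    img = img.astype(float)
    local_mean = uniform_filter(img, win)
    local_sq = uniform_filter(img ** 2, win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
    # multiplicative gain: large where the local contrast is low
    gain = contrast * target_std / (contrast * local_std
                                    + (1.0 - contrast) * target_std)
    return ((img - local_mean) * gain
            + brightness * target_mean
            + (1.0 - brightness) * local_mean)
```

Because the gain grows where the local standard deviation is small, weakly textured marble surfaces gain contrast and become easier to match.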
	        