2.2 Image visibility and occlusion 
Turning now to image visibility, all 3D triangles are centrally projected, via the corresponding orientation data, onto all images involved. For every planimetric XY value of the orthoimage and its attributed Z-value, the corresponding image xy coordinates on all images are calculated. A technique similar to the preceding one is then followed: among the model triangles intersected by a particular image ray, the one closest to the projective centre is the triangle actually recorded on the image. If the ID number of this triangle is not identical to that already assigned to the orthoimage pixel in the previous phase, it is established that the model point corresponding to this particular ortho pixel is occluded on the examined image. If, on the contrary, the two triangle IDs coincide, the model point is visible on the particular image, and its RGB values are stored.
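To make the test concrete, the following C sketch outlines the ID comparison; since the actual implementation is not published, the types Camera and Mesh and the two helper routines are hypothetical placeholders:

```c
/* Sketch of the per-image visibility test; Camera, Mesh and the two
 * helper routines below are hypothetical placeholders. */
typedef struct Camera Camera;
typedef struct Mesh   Mesh;

/* Collinearity projection of a model point onto one image. */
void project_to_image(const Camera *cam, double X, double Y, double Z,
                      double *x, double *y);
/* ID of the mesh triangle intersected by the ray through (x, y) that
 * lies closest to the projective centre. */
int nearest_hit_id(const Mesh *mesh, const Camera *cam,
                   double x, double y);

/* ortho_id: triangle ID assigned to this orthoimage pixel in the
 * preceding orthoprojection phase. */
int visible_on_image(double X, double Y, double Z, int ortho_id,
                     const Camera *cam, const Mesh *mesh)
{
    double x, y;
    project_to_image(cam, X, Y, Z, &x, &y);
    /* The point is visible only if the nearest triangle along its
     * image ray is the pixel's own triangle. */
    return nearest_hit_id(mesh, cam, x, y) == ortho_id;
}
```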
Despite the computational burden, colour values are interpolated in the present implementation by bicubic convolution, since it provides a noticeably smoother result. It is evident, however, that adjacent pixels do not necessarily correspond to adjacent model points. Although no discernible effects emerged in the applications, checks could be considered to omit such pixels.
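For reference, a minimal sketch of bicubic convolution resampling for one colour channel; the paper does not state its kernel parameters, so the common Keys kernel with a = -0.5 is assumed here:

```c
#include <math.h>

/* Keys cubic convolution kernel with a = -0.5 (an assumption; the
 * paper does not specify its kernel parameters). */
static double cubic(double t)
{
    const double a = -0.5;
    t = fabs(t);
    if (t < 1.0) return (a + 2.0) * t * t * t - (a + 3.0) * t * t + 1.0;
    if (t < 2.0) return a * (t * t * t - 5.0 * t * t + 8.0 * t - 4.0);
    return 0.0;
}

/* Resample one channel of an 8-bit image at the non-integer position
 * (x, y) from its 4x4 pixel neighbourhood, clamping at the borders. */
double bicubic_sample(const unsigned char *img, int w, int h,
                      double x, double y)
{
    int ix = (int)floor(x), iy = (int)floor(y);
    double sum = 0.0;
    for (int m = -1; m <= 2; m++) {
        for (int n = -1; n <= 2; n++) {
            int u = ix + n, v = iy + m;
            if (u < 0) u = 0;
            if (u >= w) u = w - 1;
            if (v < 0) v = 0;
            if (v >= h) v = h - 1;
            sum += img[v * w + u]
                 * cubic(x - (ix + n)) * cubic(y - (iy + m));
        }
    }
    return sum;
}
```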
2.3 Texture interpolation 
The result of the preceding step is, for all orthoimage pixels, a set of colour values from several images (unless, of course, the corresponding model point is occluded on all images). In this latter case, a specific colour value marks the particular orthoimage pixels as undefined. For such regions, ‘hole-filling’ processes can extract colour values from the surrounding model areas (Debevec et al., 1998; Poulin et al., 1998); this has not been examined here. If asked, however, the algorithm can create a map which displays all orthoimage areas visible on 0, 1, 2 and >2 source images, as sketched below. In this way, additional images, if available, could be introduced into the process to fill the gaps. It is also useful to know which orthoimage areas are visible on more than 2 images, as this allows a test to detect and exclude outliers.
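Such a map amounts to a simple per-pixel count over the per-image visibility masks of the previous step. A minimal sketch, with a data layout assumed for illustration rather than taken from the paper:

```c
/* Build the visibility map: for each orthoimage pixel, count on how
 * many of the nimg source images its model point is visible.
 * visible[k][p] is nonzero if pixel p is visible on image k. */
void visibility_map(const unsigned char *const *visible,
                    int nimg, int npix, unsigned char *count)
{
    for (int p = 0; p < npix; p++) {
        unsigned char c = 0;
        for (int k = 0; k < nimg; k++)
            c += visible[k][p] ? 1 : 0;
        count[p] = c;   /* 0 marks a hole; >2 enables the outlier test */
    }
}
```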
Indeed, in order to assign a final colour value to the orthoimage pixels, outlying values must first be excluded. Generally, these could originate not only from model faults, but also from view-dependent features such as specular highlights, transparencies, mirrors, refractions, obstacles etc. (Poulin et al., 1998; Rocchini et al., 2001). More significant for photogrammetry, however, is probably the case when one approaches model parts not seen by a camera, i.e. borders of occlusion (Neugebauer & Klein, 1999; Buehler et al., 2001). In these instances artifacts might appear, since even very small orientation, registration or modeling errors can lead to colour mistakenly derived from an occluding or, respectively, an occluded model point (Fig. 1 shows such an example; see also Figs. 4 and 5). One might evaluate the ‘occlusion risk’ of pixels, for instance by comparing the imaging distance of each pixel with those of adjacent pixels from their own visible surface points. This is a topic of future study.
Here, a basic statistical test was adopted, provided that a sufficient number (>2) of colour values is available for a particular orthoimage pixel. The mean (μ) and standard deviation (σ) of the colour values are computed each time; individual colour values falling outside the range μ ± β·σ are excluded. It is estimated that the value of the factor β should be around 1 (indeed, in the test presented in the next section using 7 images, it was set to β = 1). After this procedure, the valid contributing colour values from all images are used to generate the final texture of each orthoimage pixel.
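As an illustration, a minimal C sketch of this μ ± β·σ screening for one colour channel of one orthoimage pixel (the function name and data layout are assumptions, not the authors' code):

```c
#include <math.h>

/* Flag as invalid the colour values lying outside mean +/- beta*sigma;
 * returns the number of values kept. Requires n > 2 (see text). */
int screen_outliers(const double *vals, int n, double beta, int *keep)
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++)
        mean += vals[i];
    mean /= n;
    for (int i = 0; i < n; i++) {
        double d = vals[i] - mean;
        var += d * d;
    }
    double sigma = sqrt(var / n);
    int kept = 0;
    for (int i = 0; i < n; i++) {
        keep[i] = fabs(vals[i] - mean) <= beta * sigma;
        kept += keep[i];
    }
    return kept;
}
```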
A weighted mean of all contributing colour values is finally used for texturing each particular orthoimage pixel. In view-independent
texture mapping, the main factors influencing colour quality are the scale of the source image (i.e. imaging distance and camera constant); its viewing angle (i.e. the angle between the image ray and the model triangle); and image resolution. In fact, these factors all combine to yield the size (in pixels) of the 2D triangle on each image, which is regarded as a good indication of the quality of the extracted colour. Hence, as suggested by Poulin et al. (1998), the contributions of all participating colour values are weighted here in proportion to the corresponding 2D triangle areas (this weighting scheme has also been used by Grün et al., 2001).
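A minimal sketch of this area-based weighting, assuming the projected 2D vertices of the pixel's triangle are already available on each image (all names hypothetical):

```c
#include <math.h>

/* Weight of one contributing image: area (in pixels^2) of the model
 * triangle as projected onto that image, via the cross product of two
 * edge vectors. */
double tri_area_2d(double x1, double y1, double x2, double y2,
                   double x3, double y3)
{
    return 0.5 * fabs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1));
}

/* Weighted mean of the valid colour values from all contributing
 * images; weights[i] is the projected 2D triangle area on image i. */
double weighted_colour(const double *vals, const double *weights, int n)
{
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; i++) {
        num += weights[i] * vals[i];
        den += weights[i];
    }
    return den > 0.0 ? num / den : 0.0;   /* no valid value: undefined */
}
```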
Figure 1. Due to small modeling, calibration and orientation 
errors, the texture of point B on image 1 may be assigned to A. 
The algorithm was initially developed in MatLab and was finally implemented in C to increase speed. In order to verify its performance, tests were carried out with synthetic data, using images extracted from an existing photo-textured 3D model of a building (Kokkinos, 2004).
3. APPLICATION OF THE ALGORITHM 
The object of the experimental application was the entrance of 
the 11th century church of ‘Kapnikarea’, an important Byzantine
monument in the centre of Athens. Mainly due to its columns, 
the object is sufficiently complex for the task. 
3.1 Scanning and modeling 
For surface recording, the Mensi GS200 laser scanner was used. 
The device scans at a rate of 5000 points/sec, having a 60° ver- 
tical field of view. Three separate scans were carried out from a 
distance of about 5 m, for which a typical standard deviation of ±1.4 mm is given (the resolution is 4 mm at a distance
of 10 m). For registration, 6 well distributed target spheres were 
employed, also measured geodetically. The Real Works Survey 
4.1.2 software was used for a target-based registration of scans. 
The precision of geo-referencing was about ±2.5 mm. In total, 7
million surface points were obtained. Subsequently, these were 
spatially down-sampled to provide a final 3D mesh, which con- 
sisted of about 3 million triangles. Special features of the soft- 
ware (smoothing and peak removal) were used to improve the 
3D model. A grayscale intensity map was also obtained (in Fig.
2 the intensity map of the central scan is seen). 
3.2 Bundle adjustment 
The object has been recorded employing different analogue and 
digital cameras, which will be used in future tests. Here, results 
are given for images (2592x1944) from a Sony 5 MegaPixel camera. A total of 7 overlapping images were selected, taken with
fixed focusing, to keep interior orientation invariant. All images 
used are seen in Fig. 3. 
Thank you.