4. PROCESSING, MODELLING & TEXTURE

Processing can be summarized in the following broad categories: scanner acquisition, 3D modelling with or without texture mapping (Soucy et al., 1996 present a description of the complete processing pipeline: align, merge, edit, compress), geo-referencing, inspection/verification of lengths, angles, radii, volumes and barycentre (Callieri et al., 2004; Beraldin et al., 1997), CAD drawings (cross-sections, pipe center line) and transformation into derived products (VR representations, ortho-photo). This list can be expanded further, but we will restrict our discussion to a few examples.
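To make the inspection/verification step concrete, the sketch below computes the volume and barycentre of a closed, consistently oriented triangle mesh with the standard signed-tetrahedron method; it is a generic Python illustration, not code from any of the packages cited above.

```python
import numpy as np

def mesh_volume_barycentre(vertices, faces):
    """Volume and barycentre of a closed, consistently oriented triangle
    mesh, summing signed tetrahedra formed with the origin."""
    v = np.asarray(vertices, dtype=float)
    tri = v[np.asarray(faces)]                      # (n_faces, 3, 3)
    a, b, c = tri[:, 0], tri[:, 1], tri[:, 2]
    signed = np.einsum('ij,ij->i', a, np.cross(b, c)) / 6.0
    volume = signed.sum()
    # each tetrahedron's centroid is (origin + a + b + c) / 4
    barycentre = (signed[:, None] * (a + b + c) / 4.0).sum(axis=0) / volume
    return volume, barycentre
```

Lengths, angles and radii can be verified in a similar way once the relevant features have been segmented from the mesh.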
4.1 Scanner acquisition

As an example, two scans were performed on a mostly specular surface (bronze); see Figure 7a. The scanner software used could estimate the surface shape, but it did not flag to the user that the uncertainty and spatial resolution were off target or that saturation occurred in the 3D image (Figure 7b). The user is left with the time-consuming task of deciding whether to proceed with a low-resolution 3D scan or to manually remove the saturated zone (Figure 7c). This situation is typical of the interface software supplied with scanners. User intervention becomes critical in order to achieve the quality goals stated for a project. For a novice user, this situation can become quite challenging, not to mention costly in terms of time and money.

Figure 7. Scanning on a highly reflective surface with a triangulation-based system: a) bronze sculpture of David by Donatello, b) low-resolution scan performed at 30 cm standoff, c) scan at a closer distance, 15 cm, showing better resolution but a saturation zone (upper right corner, on the thigh).
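A check of this kind is easy to automate when the scanner exposes an intensity (reflectance) channel alongside range. The sketch below is a minimal illustration under that assumption; the 12-bit saturation level and the 1% warning threshold are invented for the example.

```python
import numpy as np

def mask_saturated(range_image, intensity_image, saturation_level=4095):
    """Invalidate range samples acquired at detector saturation and
    warn the user instead of silently returning a corrupted 3D image."""
    saturated = intensity_image >= saturation_level
    fraction = saturated.mean()
    if fraction > 0.01:                      # illustrative threshold
        print(f"warning: {fraction:.1%} of samples saturated -- "
              "increase standoff or reduce laser power and rescan")
    return np.where(saturated, np.nan, range_image), saturated
```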
4.2 Model building

Like in many fields of endeavour, expertise is hard to acquire. Three-dimensional acquisition and modelling certainly do not escape from this fact. We can propose three main classes of users, i.e. novice, skilled and expert. There is no standard in this field defining these classes. The world of CMM has a way to do just this; for 3D, well, maybe in the future. Figure 8 shows an example taken from a project on which we are currently working: we are re-creating Temple C of Selinunte in Sicily using a suite of expertise (scientific, technical and historical). In one of the tasks, a metope was scanned and the 3D modelling was performed by two different users, one that we consider skilled and the other an expert. Figure 8a shows the result after alignment, merging and compression in the case of the skilled user. This mesh representation contains about 18 000 polygons. The expert user produced the result shown in Figure 8b starting from the same 3D images and the same software package. This mesh contains only 10 000 polygons. From this simple experience, one might be tempted to conclude that the scanner could be of low resolution or that the modelling software is of poor quality (or both). It is only through a proper understanding of the complete measuring and processing chain (user included) that one can take full advantage of seemingly different but complementary technologies.
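The compression step that distinguishes the two results is typically a mesh decimation driven by a target polygon budget. As a modern, hedged illustration (not the software used in this project), quadric-error decimation to the expert user's 10 000-polygon budget might look like this with the Open3D library; the file names are hypothetical.

```python
import open3d as o3d

# Decimate the merged metope mesh to a fixed polygon budget using
# quadric error metrics (file names are placeholders).
mesh = o3d.io.read_triangle_mesh("metope_merged.ply")
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=10000)
print(f"{len(mesh.triangles)} triangles after decimation")
o3d.io.write_triangle_mesh("metope_10k.ply", mesh)
```

The point of the comparison in Figure 8 is precisely that the budget alone does not determine quality; where the polygons are spent matters.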
It is interesting to note that many authors are now including 3D camera error models in the processing pipeline. For instance, Okatani et al., 2002 developed a method for fine registration of multiple-view range images considering the camera measurement error properties. Johnson et al., 2002 describe a technique for adaptive resolution surface generation based on probabilistic 3D fusion from multiple sensors.

Figure 8. 3D modelling and compression as a function of user skills. Same data and modelling software (the figures are not inverted!): a) 18 000-polygon model prepared by a skilled user, b) 10 000-polygon model prepared by an expert user.
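To give the flavour of error-aware processing, the sketch below performs one weighted least-squares step of rigid alignment between corresponding 3D points, down-weighting samples with high range variance. It is a textbook weighted Kabsch/Horn solution used as a stand-in, not the algorithm of Okatani et al. or Johnson et al.

```python
import numpy as np

def weighted_rigid_align(src, dst, range_variance):
    """Rigid transform (R, t) minimizing the variance-weighted squared
    distance between corresponding points: noisier samples count less."""
    w = 1.0 / np.asarray(range_variance)
    w /= w.sum()
    mu_s = w @ src                                  # weighted centroids
    mu_d = w @ dst
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # guard against reflection
    t = mu_d - R @ mu_s
    return R, t
```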
4.3 Appearance modelling and visual acuity

Appearance modelling includes methods like image perspective techniques (IPT) and reflectance modelling. The true appearance of an object is the result of the interaction of light with material. Many mathematical models to describe this phenomenon have been proposed in the literature. Knowledge of such a model is important in order to reproduce hypothetical lighting conditions under varying observation points. Techniques that map real-scene images onto the geometric model, also known as IPT, have gained a lot of interest. High-resolution colour images can be precisely mapped onto the geometric model provided that the camera position and orientation are known in the coordinate system of the geometric model. The challenges include computing lens distortions accurately, estimating the 2D-camera-to-3D-model pose, and dealing with hidden surfaces, incomplete views and poor lighting conditions for external scenes.
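Once the pose and lens model are known, mapping colour onto geometry reduces to projecting each model point into the image. The sketch below assumes a pinhole camera with two radial distortion terms, a common but by no means universal choice; all parameter names are illustrative, and hidden-surface handling (e.g. a depth buffer) is left out.

```python
import numpy as np

def project_to_image(points_model, R, t, f, cx, cy, k1, k2):
    """Project 3D model points into pixel coordinates through a pinhole
    camera with a two-term radial distortion model."""
    pc = points_model @ R.T + t                 # model -> camera frame
    x = pc[:, 0] / pc[:, 2]                     # normalized image coords
    y = pc[:, 1] / pc[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2            # radial distortion factor
    return np.column_stack([f * d * x + cx, f * d * y + cy])
```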
In choosing the level of detail required for a VR representation, one can consider the following approaches when measuring objects and sites: use instruments that can record details according to some 3D sampling criterion, or use instruments to their best performance even though the surface details cannot all be captured, and present the results by taking into account the human visual system. Using the design equations of Section 3.2, one can prepare a VR show that optimizes the information content from heterogeneous sources: 2D texture from cameras, 3D model quality (resolution + uncertainty), system constraints and human visual acuity.
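The design equations of Section 3.2 are not repeated here, but the role of visual acuity is easy to illustrate: assuming the standard figure of about one arc-minute, the smallest detail a viewer can resolve grows linearly with viewing distance, and finer geometry or texture adds no perceptible information.

```python
import math

def smallest_resolvable_detail(viewing_distance_m, acuity_arcmin=1.0):
    """Smallest detail (in metres) resolvable at a given viewing
    distance, for a viewer with ~1 arc-minute visual acuity."""
    return viewing_distance_m * math.tan(math.radians(acuity_arcmin / 60.0))

# At a 2 m virtual viewing distance, details below ~0.6 mm are wasted:
print(f"{smallest_resolvable_detail(2.0) * 1000:.2f} mm")
```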
 
	        