The geometry of the first dataset was acquired with the Breuckmann OptoTOP-SE fringe projection system, resulting in 18 point clouds. The subsequent registration was carried out with the software LS3D (Gruen, 2005). The final geometric model used for texture mapping was generated in PolyWorks and consists of 600k triangles. The acquisition and modelling are described in detail in (Akca, 2007).
For the second dataset, no additional geometric information was generated. The few object points needed were acquired from the oriented images by manual measurements in the software package PhotoModeler Pro 5 (Eos Systems).
The texture of the Khmer head was acquired with a standard consumer still camera (Sony DSC-W30, 6 megapixels) in a circle around the object. A professional illumination system consisting of two diffuse lights was used to reduce the radiometric differences between the images and shadow effects at the complex parts and object silhouettes. The interior and exterior orientations were computed using a photogrammetric bundle adjustment with self-calibration (Akca, 2007).
The images of the Globe dataset were acquired with a standard consumer still camera (Sony F828) at a focal length of about 7 mm. A semi-professional lighting system was used for illumination. Nevertheless, because of otherwise poor acquisition conditions, the images were very inhomogeneous; this dataset is therefore well suited to demonstrate the performance of the brightness correction procedure. To cover the object, which measures 70 × 25 cm, four overlapping images are necessary; for our tests only two of them were used. The orientation and camera calibration were done in PhotoModeler using manual measurements.
3. ALGORITHM 
3.1 General workflow 
Figure 1 shows the complete workflow. Part I gives an overview of the required data. Part II covers the developed algorithms, which are described in detail below.
Figure 1. Texture mapping workflow 
3.2 Visibility analysis 
The visibility algorithm was designed to work with unsorted, unclosed and "un-oriented" triangle meshes. This means that the elevation or 3D model may be provided in any unsorted form: information about connected triangles is not needed, and holes in the mesh do not affect the procedure. "Un-oriented" means that the usual ordering of the vertices of each triangle to distinguish front and back side (counter-clockwise or clockwise) cannot be assumed. Furthermore, the algorithm is generic in the sense that it can be applied to satellite, aerial and terrestrial images. These minimal prerequisites enable the handling of automatically generated datasets with a minimum of manual post-processing. The only constraint is that the triangles must not intersect each other.
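As an illustration of these minimal prerequisites (not the authors' implementation), such a mesh can be represented as nothing more than a plain array of vertex triples, with no adjacency information and no guaranteed vertex ordering:

```python
import numpy as np

# Minimal mesh representation assumed by the visibility analysis:
# an unsorted array of n triangles, each given by three 3D vertices.
# No connectivity information, no consistent (clockwise or
# counter-clockwise) vertex ordering and no closed surface are required.
triangles = np.array([
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
    [[2.0, 0.0, 1.0], [3.0, 0.0, 1.0], [2.0, 1.0, 1.0]],
])  # shape (n, 3, 3)
```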
The result of the visibility analysis consists of two lists: one contains only fully visible triangles, the other only fully occluded triangles. To use the result of this algorithm in other applications and to minimise the effort in further processing steps, partly occluded triangles have to be eliminated.
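A possible driver producing these two lists for one camera station is sketched below; the function and parameter names (classify_visibility, is_occluded, project, centre) are illustrative placeholders, and the subdivision of partially occluded triangles described later is omitted here.

```python
# Hypothetical driver: classify every triangle of an unsorted mesh as
# fully visible or fully occluded for one camera station.
def classify_visibility(triangles, project, centre, is_occluded):
    visible, occluded = [], []
    for i, tri in enumerate(triangles):
        # A triangle is fully occluded if any other triangle hides it.
        hidden = any(
            j != i and is_occluded(tri, other, project, centre)
            for j, other in enumerate(triangles)
        )
        (occluded if hidden else visible).append(i)
    return visible, occluded  # the two result lists
```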
After importing the elevation data and the exterior and interior orientation parameters of every camera position, the algorithm performs the visibility analysis without manual interaction, using the following procedure.
Each triangle is compared with every other triangle to find potentially occluded triangles. To this end, the vertices of both triangles are projected into image space, and the distances between the triangles and the projection centre are calculated. The processing always treats the triangle with the longer distance to the projection centre as the possibly occluded one (in the following called A) and the triangle with the shorter distance as the possibly occluding one (in the following called B). If all vertices of A lie inside the projection of B, triangle A is fully occluded by triangle B. Conversely, if triangle A is never occluded by any other triangle during the whole processing, A is fully visible. Besides these two cases, the third possibility, the partial occlusion of a triangle, has to be handled. As mentioned before, the results of the visibility analysis should contain only fully visible or fully occluded triangles, so partially occluded triangles have to be subdivided into these two categories. To do so, possible intersections of the triangle boundaries in image space are determined. After calculating these intersection points in image space and the corresponding object points on A in object space, a re-triangulation is performed.
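A minimal sketch of the pairwise test described above is given below, assuming a pinhole projection function project() that maps an object point to image coordinates and using the triangle centroids for the distance comparison; it covers only the full-occlusion check, not the re-triangulation of partially occluded triangles, and the names are assumptions rather than the authors' implementation.

```python
import numpy as np

def point_in_triangle(p, tri2d):
    """Barycentric test: is the 2D point p inside the triangle tri2d (3x2)?"""
    a, b, c = tri2d
    v0, v1, v2 = c - a, b - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    if abs(denom) < 1e-12:
        return False  # degenerate triangle in image space
    u = (d11 * d20 - d01 * d21) / denom
    v = (d00 * d21 - d01 * d20) / denom
    return u >= 0 and v >= 0 and u + v <= 1

def fully_occluded(tri_a, tri_b, project, centre):
    """True if triangle A is fully hidden behind triangle B as seen from
    the projection centre; project() maps a 3D point to 2D image
    coordinates (illustrative names)."""
    # Only the triangle farther from the projection centre can be occluded.
    dist_a = np.linalg.norm(tri_a.mean(axis=0) - centre)
    dist_b = np.linalg.norm(tri_b.mean(axis=0) - centre)
    if dist_a <= dist_b:
        return False
    a_img = np.array([project(v) for v in tri_a])
    b_img = np.array([project(v) for v in tri_b])
    # A is fully occluded if all its projected vertices fall inside B.
    return all(point_in_triangle(p, b_img) for p in a_img)
```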