Full text: XIXth congress (Part B3,2)

  
Antonio Ruiz 
  
3 DESIGN 
The TIN model is more complex than the grid model. We must store the geometric properties on one side and the combinatorial properties on the other: we need a list of vertices with three coordinates each, and a list of pointers that describes explicitly how the vertices join to form the edges and how the edges join to form the triangles. Many structures have been used to store triangulations, and among them we have chosen the Doubly Connected Edge List (DCEL) introduced by Muller and Preparata in 1978 (Preparata and Shamos, 1985).
Figure 1: DCEL data structure (an oriented edge with its onext, dnext, left and right pointers)
Edges in the DCEL are oriented, but the orientation is arbitrary. Each edge has six pointers: one to the origin vertex and another to the destination, one pointer to the next edge found rotating counterclockwise around the origin and another to the next around the destination. Two more pointers are usually reserved for the left and right triangles. The space required to store the triangulation is |P| = 6|E| ≈ 18|V|, where we have taken the Euler relation into account and |P| is the number of pointers, |E| the number of edges and |V| the number of vertices.
  
We can find in constant time which vertices belong to a triangle and which are its neighbouring triangles. With the DCEL it is possible to represent any planar graph, and in particular we can represent the intermediate states of the triangulation that appear while updating it. We can also store edge attributes without redundancy.
By default our surface will be the polyhedral surface formed by the flat triangles, and we will not store the triangles explicitly. In the general case the triangles will not hold any information, and they can be recovered from their boundary edges in constant time. For that reason our pointers to triangles will usually be null pointers, but we reserve these pointers for the extension of the polyhedral surface model, as will be shown later in this report.
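The edge record described above can be transcribed as follows. This is only an illustrative sketch: the six pointer names follow the text, but the classes themselves and the example triangle are our own, not the paper's implementation.

```python
class Vertex:
    """Geometric data: one terrain point with its three coordinates."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

class Edge:
    """Combinatorial data: one oriented DCEL edge with its six pointers."""
    def __init__(self, origin, destination):
        self.origin = origin            # origin vertex
        self.destination = destination  # destination vertex
        self.onext = None  # next edge counterclockwise around the origin
        self.dnext = None  # next edge counterclockwise around the destination
        self.left = None   # left triangle (usually null, see text)
        self.right = None  # right triangle (usually null, see text)

# A single triangle a-b-c represented by three oriented edges:
a, b, c = Vertex(0, 0, 10), Vertex(1, 0, 12), Vertex(0, 1, 11)
ab, bc, ca = Edge(a, b), Edge(b, c), Edge(c, a)
ab.onext, ab.dnext = ca, bc   # the only other edge at a is ca, at b is bc
bc.onext, bc.dnext = ab, ca
ca.onext, ca.dnext = bc, ab

# Six pointers per edge; since |E| = 3|V| - 6 in a full triangulation,
# the total is |P| = 6|E| = 18|V| - 36, roughly 18 pointers per vertex.
```

Walking the onext pointers enumerates the edges around a vertex; with a single triangle that walk simply cycles through the three edges.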
3.1 Triangulation in external memory 
There are many algorithms to construct the constrained Delaunay triangulation (CDT). The sweep line algorithm for the CDT (Seidel, 1988) is a generalization of Fortune's algorithm for the DT (Fortune, 1987). There is also a divide and conquer algorithm for the CDT (Paul Chew, 1989) that generalizes the algorithm by Lee and Schachter (Lee and Schachter, 1980). Those algorithms are optimal within the real RAM model of computation (Preparata and Shamos, 1985), but they are not dynamic: they require sorting all the points prior to triangulating them because they work on ordered sets of points. For our application we prefer the incremental construction algorithm because it is fully dynamic, i.e. we can insert and delete vertices and required edges at any time. This method is O(n²) in the worst case, but the average cost is low for the data distributions found in practice. Mehlhorn et al. showed in 1990 that randomized insertion of points is O(n log n) on average, with the advantage of being fully dynamic.
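The incremental algorithm decides which triangles an inserted point invalidates with the standard in-circle predicate. The following sketch (our illustration, not the paper's code) evaluates it as the sign of the classical 3×3 determinant of lifted coordinates:

```python
def in_circle(a, b, c, d):
    """True if d lies strictly inside the circle through a, b, c,
    where a, b, c are given in counterclockwise order.  Computed as
    the sign of the 3x3 determinant of the coordinates lifted onto
    the paraboloid z = x^2 + y^2, after translating d to the origin."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - by * cx)
          - (bx * bx + by * by) * (ax * cy - ay * cx)
          + (cx * cx + cy * cy) * (ax * by - ay * bx)) > 0

# The centre of the unit circle through (1,0), (0,1), (-1,0) is inside it;
# the point (2,0) is outside; (0,-1) lies exactly on it (determinant zero).
```

A naive floating-point version such as this one is fragile near degenerate configurations; robust implementations evaluate the same determinant with exact or adaptive-precision arithmetic.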
The real RAM computational model is not appropriate for our application. A better approximation for our case is the two-level memory model (Goodrich et al., 1993). In this model of computation two kinds of memory are considered: a very fast one that represents the computer's RAM and a very slow one that represents the disk storage. Operations in the fast memory can be considered instantaneous (≈ 10 ns) in comparison to the external memory operations (≈ 10 ms), which are a million times slower. We are aware of the optimal external memory algorithm for the computation of the convex hull in 3D, and this problem is equivalent to Delaunay triangulation, but that algorithm is not dynamic and is very complex. We have not gone further in this direction. Instead we have followed a more heuristic approach.
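To make the factor concrete, a back-of-the-envelope cost function for the two-level model; the constants 10 ns and 10 ms are the ones quoted above, while the function itself is only an illustration:

```python
def two_level_cost(ram_ops, disk_ops, t_ram=10e-9, t_disk=10e-3):
    """Total time in seconds for a mix of fast-memory operations
    (~10 ns each) and external-memory operations (~10 ms each)."""
    return ram_ops * t_ram + disk_ops * t_disk

# A single disk access costs about as much as a million RAM operations:
# two_level_cost(1_000_000, 0) and two_level_cost(0, 1) are both ~0.01 s.
```

This is why counting I/O operations, rather than RAM operations, is the right measure for the algorithms discussed next.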
The TIN data is irregular. If our data set is very large it will not fit in core memory, and when the program follows pointers from one edge to another many page faults can take place. It is not affordable for each pointer access to give rise to one I/O operation. To increase the performance of our algorithms we must minimize the number of I/O operations and reduce page faults. The technique available to do that is bucketing. Points and edges that are close on the terrain should also be stored close together on disk, because most of the operations performed on the TIN are local. Usually we are going to retrieve or update the information of a region corresponding to a photogrammetric model or to a map sheet. We are not interested in the insertion of points distributed at random anywhere on the domain. If the information for this region spans only a few pages on disk, a small number of I/O operations can be enough and the operation will be faster.
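A minimal sketch of the bucketing idea, assuming a fixed square grid over the terrain; the 1000 m cell size and the dictionary of lists stand in for real disk pages, and neither is taken from the paper:

```python
import math
from collections import defaultdict

def bucket_key(x, y, cell=1000.0):
    """Grid cell containing (x, y); nearby points share the same key."""
    return (math.floor(x / cell), math.floor(y / cell))

def bucketize(points, cell=1000.0):
    """Group (x, y, z) points by grid cell, so that each group can be
    written to a contiguous run of disk pages and a local query over
    one region touches only the few pages of its cells."""
    buckets = defaultdict(list)
    for x, y, z in points:
        buckets[bucket_key(x, y, cell)].append((x, y, z))
    return buckets
```

With this layout, loading the region of one photogrammetric model or map sheet means reading only the buckets whose cells intersect that region, instead of scanning pages scattered over the whole file.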
  
786 International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B3. Amsterdam 2000.
  
3.2 