• When the growth of one region stops, we simply choose another seed point which does not yet belong to any region and start again.
• This whole process is continued until all points belong to some region (a sketch of this outer loop is given below).
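As an illustration only, a minimal Python sketch of this outer loop; `grow_region` is a hypothetical callable standing in for the aggregation step (one possible shape for it is sketched in the next subsection):

```python
import numpy as np

def segment(points, grow_region):
    """Outer loop of the region growing segmentation."""
    labels = np.full(len(points), -1)      # -1 = point not yet in any region
    region = 0
    for seed in range(len(points)):
        if labels[seed] != -1:             # seed must not belong to any region
            continue
        grow_region(seed, labels, region)  # grow until no more points aggregate
        region += 1                        # growth stopped: choose the next seed
    return labels                          # ends when all points are labelled
```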
Memory usage and running time
Programming a tree structure is very simple if it is not necessary to keep all levels in memory. In fact, to aggregate level n we only need to know level n-1. These two levels can be placed in a two-dimensional array:
$$\begin{array}{l|cccc}
\text{Level}(n-1) & a_1 & a_2 & \cdots & a_m \\
\text{Level}(n)   & b_1 & b_2 & \cdots & b_q
\end{array}$$
When the aggregation of level n stops, row 1 of the array is set equal to row 2, and row 2 is emptied. Then the aggregation continues from the points in row 1.
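A minimal sketch of this two-row scheme, assuming hypothetical `neighbours` and `can_aggregate` helpers as stand-ins for the spatial search and the aggregation criteria of the paper:

```python
def grow_region(seed, labels, region, neighbours, can_aggregate):
    """Grow one region while keeping only two levels in memory:
    row 1 holds level n-1, row 2 collects level n."""
    rows = [[seed], []]                 # row 1 = level n-1, row 2 = level n
    labels[seed] = region
    while rows[0]:                      # stop when no point was aggregated
        for p in rows[0]:               # build level n from level n-1
            for q in neighbours(p):
                if labels[q] == -1 and can_aggregate(p, q):
                    labels[q] = region
                    rows[1].append(q)
        rows[0], rows[1] = rows[1], []  # row 1 := row 2, row 2 := empty
```

Each point enters the rows at most once, which is where the running time bound below comes from.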
Note that using this logic the algorithm needs little memory, even if the data set is very large. Moreover, each point is processed once, except the points near discontinuities, so the running time is $O(n)$.
Comments 
Region growing can have several undesirable effects. The current region dominates the growth process, and ambiguities around edges of adjacent regions may not be resolved correctly.
Different choices of seeds may give different segmentation 
results, and problems can occur if the (arbitrarily chosen) seed 
point lies on a discontinuity. 
To counter the above problems, simultaneous region growing techniques have been developed, but they are not necessary if the aggregation criteria are invariant with respect to the direction of growing. A simple example of an invariant criterion uses the Euclidean distance: point $p_m$ is aggregated to point $p_n$ only if the distance $d_{mn} = \lVert p_m - p_n \rVert$ is less than a threshold value $r_{\max}$. It is clear that $d_{mn} = d_{nm}$, and this criterion can be defined as invariant.
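As a sketch, the criterion reads directly as a symmetric predicate; `r_max` is the threshold above:

```python
import numpy as np

def can_aggregate(p_m, p_n, r_max):
    """Invariant criterion: d(p_m, p_n) == d(p_n, p_m),
    so the result does not depend on which point grows first."""
    return np.linalg.norm(p_m - p_n) < r_max
```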
The proposed region growing algorithm performs a region-based segmentation. It may be interesting to use the same aggregation criteria in an edge-based segmentation algorithm and compare the results. The difference between the two approaches is that region-based segmentation directly constructs the regions, while edge-based segmentation first detects the borders between regions. Segmentations resulting from edge-based methods and region growing methods are not usually exactly the same.
PRINCIPAL COMPONENT ANALYSIS 
Principal component analysis (PCA) can be used to analyze the 
structure of a data set. PCA finds basis vectors for a (sub)space 
which: 
• maximise the variance retained in the projected data
• or (equivalently) give uncorrelated projected distributions (a minimal sketch follows).
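For concreteness, a minimal PCA sketch (not from the paper) that finds these basis vectors via the eigendecomposition of the covariance matrix:

```python
import numpy as np

def pca(points):
    """Eigenvalues (descending) and basis vectors that maximise
    the variance retained in the projected data."""
    centred = points - points.mean(axis=0)   # remove the centroid
    cov = centred.T @ centred / len(points)  # covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)     # symmetric -> real eigensystem
    order = np.argsort(eigval)[::-1]         # sort by decreasing variance
    return eigval[order], eigvec[:, order]   # columns = principal directions
```

Projecting the centred data onto these vectors gives uncorrelated coordinates, which is the equivalent formulation in the second bullet above.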
This description will not go into much detail on PCA, as there is a large body of literature on it. More attention will be paid to the aspects involved in the region growing logic. Formulas and examples often refer to the three-dimensional case, but it is easy to extend them to the n-dimensional case.
Moments and tensor of inertia 
Properties of objects are described by discrete measures in many real cases. Every measure can be seen as a point in an n-dimensional space, and the distribution of these points (measures) is related to the properties of the real object. We can think of the geometrical properties of an object in a 2D or 3D space, but this logic can be extended to other properties (physical, for example) in an nD space. A point (measure) in the 3D space can assume one of the three following roles:
• surface patch or volume element
• discontinuity (surface, curve or point junctions)
• outlier
Let's consider the two extreme cases: a point on a smooth surface with a well defined surface (normal) orientation, and a point on a (curve or point) junction whose absolute orientation is uncertain. This whole continuum can be abstracted as a second order symmetric 3D tensor, which can be visualized geometrically as an ellipsoid. Such an ellipsoid can be fully described by the corresponding eigensystem with its three unit eigenvectors $\hat{e}_{\max}$, $\hat{e}_{\mathrm{mid}}$ and $\hat{e}_{\min}$, and the three corresponding eigenvalues $\lambda_{\max} \ge \lambda_{\mathrm{mid}} \ge \lambda_{\min}$.
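As a rough illustration of this continuum (not an algorithm from the paper), the local shape can be read from the eigenvalue ratios; the threshold values below are arbitrary assumptions:

```python
def classify_point(lam_max, lam_mid, lam_min, flat=0.05, round_=0.5):
    """Rough shape label from lam_max >= lam_mid >= lam_min.

    A flat ellipsoid (lam_min << lam_mid) marks a smooth surface patch
    with normal e_min; a round ellipsoid (all eigenvalues comparable)
    marks a junction with uncertain orientation."""
    if lam_min < flat * lam_mid:
        return "surface patch"        # well defined normal orientation
    if lam_min > round_ * lam_max:
        return "point junction"       # no preferred orientation
    return "curve junction"           # elongated ellipsoid in between
```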
The second order 3D tensor is the symmetric matrix:
$$I = \begin{bmatrix} I_{xx} & I_{xy} & I_{xz} \\ I_{yx} & I_{yy} & I_{yz} \\ I_{zx} & I_{zy} & I_{zz} \end{bmatrix}$$
This tensor is called the inertia tensor if a mass is associated to each point of the data set. The elements of $I$ are the second order moments, for example
$$I_{yz} = \sum_i m_i \,(y_i - y_G)(z_i - z_G)$$
where $m_i$ is the mass associated to point $i$; $x_i$, $y_i$ and $z_i$ are the coordinates of point $i$; and $x_G$, $y_G$ and $z_G$ are the coordinates of the centre of mass of the point set.
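A sketch of this construction with numpy, building all the second order moments at once, following the element formula above (names are illustrative):

```python
import numpy as np

def inertia_tensor(points, masses):
    """Second order 3D tensor with elements
    I_jk = sum_i m_i (c_i,j - c_G,j)(c_i,k - c_G,k)."""
    points = np.asarray(points, dtype=float)  # (n, 3) coordinates
    masses = np.asarray(masses, dtype=float)  # mass of each point
    centre = masses @ points / masses.sum()   # centre of mass (x_G, y_G, z_G)
    d = points - centre                       # centred coordinates
    return (masses[:, None] * d).T @ d        # symmetric 3x3 matrix I
```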
The symmetry of the tensor guarantees that all of I's 
eigenvalues are real and that there is an orthonormal basis of 
eigenvectors (principal system), that is: 
$$V = \begin{bmatrix} v_{1x} & v_{2x} & v_{3x} \\ v_{1y} & v_{2y} & v_{3y} \\ v_{1z} & v_{2z} & v_{3z} \end{bmatrix}$$
where $v_1$, $v_2$ and $v_3$ are the eigenvectors of $I$. $V$ is also the transition matrix between the canonical and the principal system.
The coordinates in the principal system are: 
$$\begin{bmatrix} \xi \\ \eta \\ \zeta \end{bmatrix} = V^{T} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$
The principal moments of inertia $I_1$, $I_2$ and $I_3$ are the three real eigenvalues of $I$.
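A short sketch tying the pieces together, reusing the `inertia_tensor` helper above; centring the coordinates at the centre of mass is an assumption about the origin of the principal system:

```python
import numpy as np

def principal_system(points, masses):
    """Principal moments I1 >= I2 >= I3 and coordinates
    (xi, eta, zeta) = V^T (x, y, z) in the principal system."""
    I = inertia_tensor(points, masses)       # tensor from the sketch above
    eigval, V = np.linalg.eigh(I)            # real eigensystem (I is symmetric)
    order = np.argsort(eigval)[::-1]         # sort so that I1 >= I2 >= I3
    eigval, V = eigval[order], V[:, order]   # columns of V are v1, v2, v3
    masses = np.asarray(masses, dtype=float)
    points = np.asarray(points, dtype=float)
    centre = masses @ points / masses.sum()  # assumed origin: centre of mass
    coords = (points - centre) @ V           # row-wise application of V^T
    return eigval, V, coords
```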