The curve voting field has its main direction along the e-axis and is rotationally symmetric around this axis. The strength of the vote decays with increasing distance and curvature, which can be written as (5), where d is the distance, ρ is the curvature and a is a scale factor that has to be chosen with regard to the input data. The constants a and b have to be chosen to define the field. The orientation of the voting field is chosen such that the total curvature along the path from the voting site to the receiving site is minimized.
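As a hedged illustration only, a decay that falls off with both distance and curvature could be modelled as follows; the Gaussian form and the way d, ρ, a and b enter it are assumptions for the sketch, not the formula given in (5):

import numpy as np

def vote_decay(d, rho, a=1.0, b=1.0):
    # Hypothetical Gaussian-type decay: the vote strength falls off
    # with the squared distance d and the squared curvature rho,
    # weighted by the constants a and b (assumed form, not eq. 5).
    return np.exp(-(a * d**2 + b * rho**2))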
The surface and point voting fields can be derived from the curve voting field by rotating the field first around the e1-axis (surface) and then around the e3-axis (point).
All three fields have in common that they do not propagate infinitely but decay with distance, so that they have no influence beyond a given radius.
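A minimal sketch of this construction, assuming the curve field is available as a function returning a 3x3 tensor for a receiver position relative to the voter: the rotated fields are approximated by numerically averaging rotated copies of the curve field (the helper names and the number of angular samples are assumptions):

import numpy as np

def rot(axis, angle):
    # Rotation matrix about one of the coordinate axes (0, 1 or 2).
    c, s = np.cos(angle), np.sin(angle)
    R = np.eye(3)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R[i, i], R[i, j], R[j, i], R[j, j] = c, -s, s, c
    return R

def rotated_field(curve_field, point, axis, samples=36):
    # Average the curve voting field over rotations around one axis.
    # Rotating once gives the surface field; rotating the result once
    # more around a second axis gives the point field. A tensor field
    # transforms under rotation as T'(x) = R T(R^T x) R^T.
    T = np.zeros((3, 3))
    for angle in np.linspace(0.0, 2 * np.pi, samples, endpoint=False):
        R = rot(axis, angle)
        T += R @ curve_field(R.T @ point) @ R.T
    return T / samples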
Figure 4: The point (a), surface (b) and curve (c) voting fields. The voting site is centered. On the left the view is along the e2-axis, on the right the fields are rotated.
2.4 Sparse and dense voting process
To apply the tensor voting to LIDAR data, we have to encode the data points as described in 2.2. The values for the rotation and the dimensions of the initial ellipsoid have to be chosen, e.g. scaled and oriented by the confidence ellipse of the measured data, but other geometrical information can be encoded this way as well. With no additional knowledge, we can assign a simple ball field without orientation. In a first step, the voting is carried out among the group of the initial data points, i.e. each of the points is influenced by its neighbor points. In this step the data points do not lie close to each other and the number of points is small compared to the number of points necessary to describe the entire object. Because of that, the first step is called sparse voting.
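A rough sketch of this sparse step, assuming the points carry no prior orientation (unit ball tensors) and assuming a helper vote_tensor(voter, receiver, tensor) that returns the tensor cast from a voter to a receiver:

import numpy as np

def sparse_voting(points, vote_tensor, radius):
    # points: (N, 3) array of LIDAR points, initially encoded as unit
    # ball tensors (no orientation). Each point collects the votes of
    # all neighbor points within the given radius.
    n = len(points)
    tensors = [np.eye(3) for _ in range(n)]
    collected = [np.zeros((3, 3)) for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(points[i] - points[j]) <= radius:
                collected[i] += vote_tensor(points[j], points[i], tensors[j])
    return collected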
To obtain an approximation of the continuous tensor field, we have to interpolate the field between the given initial locations. To do this, we sample the space of the point cloud on a grid. At every grid point we calculate the tensor field value by letting the neighboring points of the initial point cloud vote. With this procedure we obtain the discrete tensor field.
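Under the same assumptions, the dense step could be sketched as follows, sampling a regular grid over the bounding box of the point cloud; the grid spacing and the neighborhood radius are free parameters of the sketch:

import numpy as np

def dense_voting(points, tensors, vote_tensor, spacing, radius):
    # Sample the tensor field on a regular grid over the bounding box
    # of the point cloud; every grid point receives the votes of the
    # initial points within its neighborhood.
    lo, hi = points.min(axis=0), points.max(axis=0)
    axes = [np.arange(lo[k], hi[k] + spacing, spacing) for k in range(3)]
    field = {}
    for x in axes[0]:
        for y in axes[1]:
            for z in axes[2]:
                q = np.array([x, y, z])
                T = np.zeros((3, 3))
                for p, Tp in zip(points, tensors):
                    if np.linalg.norm(q - p) <= radius:
                        T += vote_tensor(p, q, Tp)
                field[(x, y, z)] = T
    return field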
3 SEGMENTATION
After the tensor voting step we have a dense grid of data points, each defining a tensor at a location in space. To obtain connected surfaces and lines in 3D space, we have to search for the most likely locations. We handle this with a feature extraction. Afterwards we search for similar parts and merge them into geometric objects, which we handle in a segmentation task.
3.1 Feature extraction
To find the desired meaningful features, we decompose the tensor field into its point, surface and curve fields and examine them separately. For each field we can extract the features by searching for the most likely locations, i.e. for maxima in the fields.
In the point field we can simply search for maxima of its strength (λ3). In the other two fields we also have to take the orientation into account. Therefore we build the gradient of the strength (6) and project it onto the orientation vector (7). In the curve field this orientation is the tangent vector, in the surface field it is the normal vector of the surface. Then we can search for zero crossings of b.
g = \nabla s = \left( \frac{\partial s}{\partial x}, \frac{\partial s}{\partial y}, \frac{\partial s}{\partial z} \right)^{T}    (6)

b = n \cdot g    (7)
The resulting points are the most likely locations to be part of curves or surfaces, or to be single points.
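As an illustration, and assuming the common decomposition of a tensor into the saliencies λ1-λ2 (surface), λ2-λ3 (curve) and λ3 (point) with the associated eigenvectors as orientations, the quantities used above could be computed roughly as follows (the eigenvector assignments are an assumption where the text is ambiguous):

import numpy as np

def saliencies(T):
    # Decompose a tensor into surface, curve and point saliency with
    # the associated orientations; eigenvalues sorted in descending
    # order (lambda1 >= lambda2 >= lambda3).
    w, v = np.linalg.eigh(T)
    order = np.argsort(w)[::-1]
    w, v = w[order], v[:, order]
    surface = (w[0] - w[1], v[:, 0])  # saliency and surface normal
    curve = (w[1] - w[2], v[:, 2])    # saliency and curve tangent
    point = w[2]                      # unoriented point saliency
    return surface, curve, point

def directional_derivative(strength, orientation, q, h=0.5):
    # b = n . grad(s): estimate the gradient of the strength field by
    # central differences and project it onto the orientation vector;
    # zero crossings of b mark the most likely feature locations.
    g = np.array([(strength(q + h * e) - strength(q - h * e)) / (2 * h)
                  for e in np.eye(3)])
    return orientation @ g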
3.2 Segmentation of planes
For the segmentation of planes there exist many different approaches in the literature. These approaches use different types of procedures (Taylor et al., 1989, Boyer et al., 1994) as well as different homogeneity criteria (Besl, 1986, Wani and Batchelor, 1994). Every approach has its special application where it performs best (Hoover et al., 1996). We use a region growing algorithm with a minimum-description-length criterion as homogeneity criterion. Here we give a quick overview of the algorithm; it is reviewed in detail in (Schuster, 2004).
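Purely as a skeleton, and without the actual description-length terms, which are derived in (Schuster, 2004), a region-growing loop of this kind could be organized as follows; seeds, neighbours and description_length are placeholders for the real inputs and criterion:

def region_growing(seeds, neighbours, description_length):
    # Grow planar regions from seed locations: a candidate neighbour is
    # added as long as the total description length of the region does
    # not increase (placeholder for the MDL criterion of the paper).
    regions, assigned = [], set()
    for seed in seeds:
        if seed in assigned:
            continue
        region = {seed}
        frontier = set(neighbours(seed))
        while frontier:
            candidate = frontier.pop()
            if candidate in assigned or candidate in region:
                continue
            if description_length(region | {candidate}) <= description_length(region):
                region.add(candidate)
                frontier |= set(neighbours(candidate))
        assigned |= region
        regions.append(region)
    return regions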
The mathematical model we try to find is the plane; that is why we take every location from the output of the feature extraction as input for the segmentation. We encode the locations as small plates of unit size and orientation of