Figure 2: Data structures for quadtree segmentation of multi-spectral images (input image with 3 spectral bands in raster format; raster-to-quadtree conversion; feature vector quadtree; object quadtree; attribute table).
3 Image Segmentation
The implementation presented here uses a multi-band image as input and gives one segmentation as output: one set of objects, where each object has multi-spectral properties (mean vector and variance-covariance matrix). Moreover, topological (object adjacency) information can be retrieved, as well as, of course, object locations, sizes and perimeters.
The presented method performs segmentation by recursively combining (merging) pairs of pixels, leaves and regions. It uses data from multiple (currently two or three) input bands, which are first combined into a single feature vector quadtree. Currently, the criterion for merging is very simple: with a user-selected threshold T, the Euclidean distance between the feature vectors of two candidates may not be larger than 2 × T, and none of the variances and covariances after merging may exceed T². Note that the above-mentioned connected component labelling is a special case: only one band and T = 0.
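As an illustration only, a minimal C sketch of such a merge test is given below. It assumes the two candidates are described by their mean feature vectors and by the variance-covariance matrix the merged object would have; the function name may_merge and the use of absolute values for the covariances are assumptions of this sketch, not details of the original program.

#include <math.h>
#include <stdbool.h>

#define NBANDS 3   /* two or three bands are supported; three assumed here */

/* Merge test with a user-selected threshold T:
 *  - the Euclidean distance between the two candidates' mean feature
 *    vectors may not be larger than 2*T, and
 *  - no variance or covariance of the merged object may exceed T*T
 *    (absolute values are used for the covariances in this sketch). */
static bool may_merge(const double mean_a[NBANDS],
                      const double mean_b[NBANDS],
                      const double merged_cov[NBANDS][NBANDS],
                      double T)
{
    double d2 = 0.0;
    for (int i = 0; i < NBANDS; i++) {
        double d = mean_a[i] - mean_b[i];
        d2 += d * d;
    }
    if (sqrt(d2) > 2.0 * T)                      /* spectral distance test */
        return false;
    for (int i = 0; i < NBANDS; i++)
        for (int j = 0; j < NBANDS; j++)
            if (fabs(merged_cov[i][j]) > T * T)  /* homogeneity test */
                return false;
    return true;
}

With one band and T = 0 this test indeed reduces to requiring identical pixel values, i.e. connected component labelling.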
As in the other programs in the package, the quadtree is scanned sequentially, which implies a single traversal through the image in Z-scan order (Figure 1). The process is therefore recursive and works bottom-up: it first tries to combine individual pixels (within quadrants), and later looks at possibilities to combine objects in larger quadrants.
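For readers unfamiliar with the Z-scan order, the following C fragment sketches how such an order can be generated by interleaving the bits of the column and row coordinates (a Morton code); it is illustrative only and assumes a square 2^n × 2^n image, not the paper's actual traversal code.

#include <stdint.h>
#include <stdio.h>

/* Interleave the bits of the column (x) and row (y) coordinates to
 * obtain the Morton (Z-order) index of a pixel.  Visiting pixels by
 * increasing Morton index gives a Z-scan traversal of the image. */
static uint32_t morton_index(uint16_t x, uint16_t y)
{
    uint32_t z = 0;
    for (int b = 0; b < 16; b++) {
        z |= (uint32_t)((x >> b) & 1u) << (2 * b);      /* x bits -> even positions */
        z |= (uint32_t)((y >> b) & 1u) << (2 * b + 1);  /* y bits -> odd positions  */
    }
    return z;
}

int main(void)
{
    /* Print the Z-scan position of every pixel in a 4 x 4 image. */
    for (unsigned y = 0; y < 4; y++)
        for (unsigned x = 0; x < 4; x++)
            printf("pixel (%u,%u) -> position %u\n",
                   x, y, (unsigned)morton_index((uint16_t)x, (uint16_t)y));
    return 0;
}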
The program relies on a highly dynamic data structure consisting of an index table and an object table. The object table has one record for each (intermediate) object, in which the object size and spectral attributes are stored. In the case of three spectral bands, these attributes are: the sums of the pixel values in bands 1, 2 and 3 over the entire object (S1, S2, S3), the sums of squares (S11, S22, S33) and the sums of the cross-products (S12, S13, S23). These are used in the calculation of the mean values and the covariance matrix of the object.
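The derivation of these statistics from the stored sums can be sketched as follows (a C fragment, illustrative only; the record layout and names are assumptions, with the sums of squares and cross-products stored as a symmetric 3 × 3 matrix).

/* One record of the object table for a three-band image (layout assumed). */
typedef struct {
    long   size;        /* number of pixels in the object                    */
    double s[3];        /* band sums S1, S2, S3                              */
    double ss[3][3];    /* ss[i][j]: sum of band_i * band_j over the object
                         * (diagonal: S11, S22, S33; off-diagonal: S12,
                         *  S13, S23, stored symmetrically)                  */
} ObjectRecord;

/* Mean vector and variance-covariance matrix derived from the sums. */
static void object_statistics(const ObjectRecord *o,
                              double mean[3], double cov[3][3])
{
    for (int i = 0; i < 3; i++)
        mean[i] = o->s[i] / (double)o->size;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            cov[i][j] = o->ss[i][j] / (double)o->size - mean[i] * mean[j];
}

Whether the normalization divides by n or by n - 1 is not stated in the text; the population form (division by the object size) is used in this sketch.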
An object is entered in the table when a “new” leaf from
the input is read. A new entry in the index table points
to the object. When processing a quadrant, the values to
either side of the boundaries between the sub-quadrants
are taken from a stack. Via the index table, the spectral
data are retrieved from the object table and used in the
merge criterion.
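Because all stored attributes are plain sums, merging two objects amounts to adding their accumulators and redirecting the index entries of the absorbed object, after which its record can be released (compare Figure 3). A simplified C sketch follows; it uses fixed arrays and a naive linear redirection where the real program uses a dynamic structure, so names and sizes here are assumptions.

/* index_table[k] gives, for leaf k, the number of the object it
 * currently belongs to; object_used marks which records are in use. */
enum { MAX_LEAVES = 1024, MAX_OBJECTS = 1024 };

static int index_table[MAX_LEAVES];
static int object_used[MAX_OBJECTS];   /* 1 = in use, 0 = released */

/* Enter a new object for a "new" leaf and return its object number. */
static int new_object(int leaf, int *n_objects)
{
    int obj = (*n_objects)++;
    object_used[obj] = 1;
    index_table[leaf] = obj;
    return obj;
}

/* Merge object b into object a: redirect b's index entries and release
 * its record (adding b's spectral sums to a's record is omitted here;
 * see the previous sketch for the accumulators involved). */
static void merge_into(int a, int b, int n_leaves)
{
    for (int k = 0; k < n_leaves; k++)
        if (index_table[k] == b)
            index_table[k] = a;
    object_used[b] = 0;   /* record can be re-used, cf. the "released"
                             entries in Figure 3 */
}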
Figure 3: Index and object table before and after processing fig. 4 (each object record holds the size and the sums s1, s2, s3, s11, s12, s13, s22, s23, s33; some records are marked "released").