The proposed colour morphology is applied to colour edge detection and building roof extraction from remotely sensed images.
The paper is organized as follows. Section 2 is devoted to a
series of background notions in vector ordering, multivariate
data analysis, and colour morphology. In Section 3, a new
reduced ordering based on ordinal first principal component
analysis is introduced. The basic morphological operations such
as dilation, erosion, closing, and opening based on the new
vector ordering are proposed in Section 4. Section 5 is devoted
to the applications of the proposed morphological operators to
colour edge detection and building roof extraction. In Section 6,
preliminary results of building extraction from pan-sharpened Ikonos and QuickBird images and colour aerial imagery are given, followed by discussion and outlook in Section 7.
2. BACKGROUND
2.1 Vector Ordering
A set of multivariate data consisting of $n$ $m$-dimensional random vectors can be modeled as an $n \times m$ data matrix $X$. The rows of the matrix $X$ will be written as $X_1, X_2, \ldots, X_n$, corresponding to the $n$ observations. The columns of the matrix $X$ will be written as $x_1, x_2, \ldots, x_m$, corresponding to the $m$ variables. The element located at row $i$ and column $j$ of the matrix $X$ is $x_{ij}$, representing the $j$th variable of the $i$th observation, i.e.,
$$X = [x_{ij}] = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1m} \\ x_{21} & x_{22} & \cdots & x_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nm} \end{bmatrix} \qquad (1)$$

where $X_i = [x_{i1}, x_{i2}, \ldots, x_{im}]^T$, $(i = 1, 2, \ldots, n)$ (2)

and $x_j = [x_{1j}, x_{2j}, \ldots, x_{nj}]^T$, $(j = 1, 2, \ldots, m)$ (3)
The aim of ordering the multivariate data $X$ is to arrange the observations into the form $X_i \prec X_j \prec \cdots \prec X_k$ according to the variables $x_j = [x_{1j}, x_{2j}, \ldots, x_{nj}]^T$, $j = 1, 2, \ldots, m$, where the symbol $\prec$ means "less preferred to" and the subscripts $i, j, \ldots, k$ range over mutually exclusive and exhaustive subsets of the integers $1, 2, \ldots, n$.
Unfortunately, ordering multivariate data is not straightforward, because there is no notion of a natural ordering in a vector field as there is in the one-dimensional case. Although there is no unambiguous multivariate ordering scheme, much work has been done to order such data. Barnett (1976) proposed the so-called sub-ordering principles to govern the ordering. The sub-ordering principles are classified into four groups: (1) marginal ordering (M-ordering), in which the multivariate data are ordered along each of their $m$ dimensions independently; (2) conditional ordering (C-ordering), in which the multivariate vectors are ordered conditionally on one of the components; thus, one of the components is ranked and the other components of each vector are listed according to the position of their ranked component; (3) partial ordering (P-ordering), in which the multivariate data are partitioned into groups, such that the groups can be distinguished with respect to order, rank, or extremeness (Titterington, 1978); and (4) reduced ordering (R-ordering), which reduces each vector to a scalar value according to some measure criterion. Mardia (1976) further developed a sub-classification of reduced ordering: distance ordering and projection ordering. Distance ordering refers to the use of a specific measure of distance, while projection ordering orders the sample by using the first principal component (PC1) or higher-order components, as illustrated in the sketch below.
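To make R-ordering concrete, the following minimal NumPy sketch (an illustration, not code from the paper; the function names `distance_order` and `projection_order` are our own) ranks a handful of RGB colour vectors under both of Mardia's sub-classes: distance ordering reduces each vector to its distance from the sample mean, while projection ordering reduces it to its PC1 score.

```python
import numpy as np

# Toy multivariate data: n = 6 colour (RGB) observations, m = 3 variables.
X = np.array([[ 10,  20,  30],
              [200, 180, 160],
              [ 90, 100, 110],
              [ 50,  40,  60],
              [220, 210, 230],
              [120, 130, 125]], dtype=float)

def distance_order(X, ref=None):
    """R-ordering by distance: rank vectors by their Euclidean distance
    to a reference point (here the sample mean)."""
    if ref is None:
        ref = X.mean(axis=0)
    d = np.linalg.norm(X - ref, axis=1)   # scalar reduction of each vector
    return np.argsort(d)                  # indices, least to most extreme

def projection_order(X):
    """R-ordering by projection: rank vectors by their score on the
    first principal component (PC1)."""
    Xc = X - X.mean(axis=0)               # centre the data
    S = (Xc.T @ Xc) / len(X)              # variance-covariance matrix, 1/n form
    eigvals, eigvecs = np.linalg.eigh(S)  # eigenvalues in ascending order
    g1 = eigvecs[:, -1]                   # eigenvector of the largest eigenvalue
    y1 = Xc @ g1                          # PC1 score, one scalar per vector
    return np.argsort(y1)

print("distance ordering:  ", distance_order(X))
print("projection ordering:", projection_order(X))
```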
2.2 First Principal Component Analysis
An obvious extension of the univariate notions of mean and variance leads to the following definitions (Mardia et al., 1979). The mean of the $j$th variable, $x_j = [x_{1j}, x_{2j}, \ldots, x_{nj}]^T$, is defined as

$$\bar{x}_j = \frac{1}{n} \sum_{r=1}^{n} x_{rj} \qquad (4)$$

The variance of the $j$th variable, $x_j$, is defined as

$$s_j^2 = \frac{1}{n} \sum_{r=1}^{n} (x_{rj} - \bar{x}_j)^2 = s_{jj}, \qquad j = 1, 2, \ldots, m \qquad (5)$$

The covariance between the $i$th variable, $x_i = [x_{1i}, x_{2i}, \ldots, x_{ni}]^T$, and the $j$th variable, $x_j = [x_{1j}, x_{2j}, \ldots, x_{nj}]^T$, is defined as

$$s_{ij} = \frac{1}{n} \sum_{r=1}^{n} (x_{ri} - \bar{x}_i)(x_{rj} - \bar{x}_j), \qquad i, j = 1, 2, \ldots, m \qquad (6)$$
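As a quick sanity check on definitions (4) to (6), the short NumPy sketch below (illustrative code, not part of the original paper) computes the mean, variance, and covariance directly. Note that the divisor is $1/n$ rather than $1/(n-1)$, so NumPy's `np.cov` matches only when called with `ddof=0`.

```python
import numpy as np

X = np.random.default_rng(0).random((100, 3))  # n = 100 observations, m = 3 variables
n = len(X)

x_bar = X.sum(axis=0) / n                      # equation (4): mean of each variable
s_jj  = ((X - x_bar) ** 2).sum(axis=0) / n     # equation (5): variance of each variable
S     = (X - x_bar).T @ (X - x_bar) / n        # equation (6) for all pairs i, j

# The diagonal of S holds the variances, matching equation (5).
assert np.allclose(np.diag(S), s_jj)
# NumPy's covariance agrees once told to use the 1/n divisor of (5)-(6).
assert np.allclose(S, np.cov(X, rowvar=False, ddof=0))
```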
The vector of means, or the mean of the matrix $X$, is

$$\bar{X} = [\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_m]^T = \frac{1}{n} \sum_{r=1}^{n} X_r = \frac{1}{n} X^T \mathbf{1} \qquad (7)$$

where $\mathbf{1}$ is a column vector of $n$ ones, i.e., $\mathbf{1} = [1, 1, \ldots, 1]^T$. Also, the variance-covariance matrix $S$ of the matrix $X$ is

$$S = [s_{ij}] = \frac{1}{n} \sum_{r=1}^{n} (X_r - \bar{X})(X_r - \bar{X})^T = \frac{1}{n} X^T H X \qquad (8)$$

where $H = I - \frac{1}{n} \mathbf{1} \mathbf{1}^T$ denotes the centering matrix and $I$ denotes the identity matrix.
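The matrix form (8) is easy to verify numerically. The sketch below (again illustrative, not from the paper) builds the centering matrix $H$ and confirms that $\frac{1}{n} X^T H X$ reproduces the elementwise covariance of (6).

```python
import numpy as np

X = np.random.default_rng(1).random((50, 3))   # n = 50 observations, m = 3 variables
n = len(X)

one = np.ones((n, 1))                          # column vector of n ones from (7)
X_bar = (X.T @ one) / n                        # equation (7): mean vector, shape (m, 1)

H = np.eye(n) - (one @ one.T) / n              # centering matrix H = I - (1/n) 1 1^T
S = (X.T @ H @ X) / n                          # equation (8): variance-covariance matrix

# H is idempotent (H @ H == H), and (8) matches the elementwise form (6).
assert np.allclose(H @ H, H)
assert np.allclose(S, np.cov(X, rowvar=False, ddof=0))
```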
It is obvious that $S$ is symmetric and positive semi-definite. By the spectral decomposition theorem, the variance-covariance matrix $S$ may be written in the form

$$S = G L G^T \qquad (9)$$

where $L$ is a diagonal matrix of the eigenvalues of $S$, that is,

$$L = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_m \end{bmatrix} \qquad (10)$$

where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m \geq 0$, and $G$ is an orthogonal matrix whose column vectors $g_i$ $(i = 1, 2, \ldots, m)$ are the standardized eigenvectors corresponding to the eigenvalues $\lambda_i$ $(i = 1, 2, \ldots, m)$ of $S$, i.e.,

$$G = [g_1, g_2, \ldots, g_m] \qquad (11)$$
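A brief NumPy illustration of the decomposition (9) to (11) follows (our own sketch, not the paper's implementation). One detail worth noting: `np.linalg.eigh` returns eigenvalues in ascending order, so both eigenvalues and eigenvectors are reversed to obtain the descending order required by (10).

```python
import numpy as np

X = np.random.default_rng(2).random((50, 3))
S = np.cov(X, rowvar=False, ddof=0)              # variance-covariance matrix, as in (8)

# eigh handles symmetric matrices; reverse so lambda_1 >= ... >= lambda_m as in (10).
eigvals, eigvecs = np.linalg.eigh(S)
L = np.diag(eigvals[::-1])                       # diagonal eigenvalue matrix L of (10)
G = eigvecs[:, ::-1]                             # orthogonal eigenvector matrix G of (11)

assert np.allclose(G @ L @ G.T, S)               # spectral decomposition (9)
assert np.allclose(G.T @ G, np.eye(S.shape[0]))  # G is orthogonal
```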
According to the eigenvectors, PC1 is defined as

$$y_1 = (X - \mathbf{1}\bar{X}^T) g_1 \qquad (12)$$
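The sketch below (illustrative, on randomly generated data) evaluates (12) and checks the standard fact that the variance of the PC1 scores equals the largest eigenvalue $\lambda_1$.

```python
import numpy as np

X = np.random.default_rng(3).random((50, 3))   # n = 50 observations, m = 3 variables
n = len(X)

S = np.cov(X, rowvar=False, ddof=0)
eigvals, eigvecs = np.linalg.eigh(S)
g1 = eigvecs[:, -1]                            # eigenvector of the largest eigenvalue

one = np.ones((n, 1))
X_bar = X.mean(axis=0)[:, None]                # mean vector as a column, shape (m, 1)
y1 = (X - one @ X_bar.T) @ g1                  # equation (12): PC1 scores, shape (n,)

# The (1/n-form) variance of the PC1 scores equals the largest eigenvalue.
assert np.allclose(y1.var(), eigvals[-1])
```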
The eigenvalue $\lambda_1$ of the sample variance-covariance matrix $S$ can be written as