
2.1 PCA Algorithm (James R. Carr, 1998)
Given M intercorrelated image bands, one form of principal components analysis can proceed by assembling a matrix, S, M × M in size and symmetrical, such that S(i, i) is the variance of band i, and S(i, j) = S(j, i) is the covariance between bands i and j. Variance can be computed as
$$\mathrm{VAR}(x) = \frac{1}{N-1}\sum_{i=1}^{N} x_i^2 - \frac{1}{N(N-1)}\left(\sum_{i=1}^{N} x_i\right)^2 \quad \text{(2-1)}$$
in which N is the total number of pixels x_i that collectively represent a particular image band. This equation is analogous to that for the covariance between two image bands, x and y:
$$\mathrm{COV}(x, y) = \frac{1}{N-1}\sum_{i=1}^{N} x_i y_i - \frac{1}{N(N-1)}\sum_{i=1}^{N} x_i \sum_{i=1}^{N} y_i \quad \text{(2-2)}$$
where N is again the total number of pixels, with x_i belonging to one image band and y_i to the other.
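As an illustration (not from the paper), equations (2-1) and (2-2) map directly onto NumPy; here x and y are assumed to be flattened pixel arrays of two bands, and the function names are mine:

```python
import numpy as np

def var_band(x):
    """Variance of one band via Eq. (2-1): sum of squares minus squared sum."""
    n = x.size
    return (x @ x) / (n - 1) - x.sum() ** 2 / (n * (n - 1))

def cov_bands(x, y):
    """Covariance of two bands via Eq. (2-2)."""
    n = x.size
    return (x @ y) / (n - 1) - x.sum() * y.sum() / (n * (n - 1))

# Sanity check against NumPy's unbiased estimators:
# x = np.random.rand(1000); np.isclose(var_band(x), np.var(x, ddof=1))
```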
Suppose a multispectral image of a scene is to be transformed using principal components analysis of seven (M) bands. In this case, a matrix S of size 7 × 7 is computed. The diagonal entries of this matrix consist of the seven band variances. Additionally, there are 21 off-diagonal entries, symmetrical above and below the diagonal, representing the covariances of all possible two-band combinations from the group of 7 bands.
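Assembling S is then a double loop over the covariance helper; a minimal sketch, assuming bands is a list of M flattened band arrays:

```python
def scatter_matrix(bands):
    """Symmetric M x M matrix S: variances on the diagonal, covariances off it."""
    m = len(bands)
    S = np.empty((m, m))
    for i in range(m):
        for j in range(i, m):
            # cov_bands(x, x) reduces to var_band(x), filling the diagonal too
            S[i, j] = S[j, i] = cov_bands(bands[i], bands[j])
    return S

# For M = 7: seven diagonal variances plus 21 distinct off-diagonal covariances.
```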
2.2 Image Transformation
Once the eigenvectors of the matrix S are computed for the PCA transformation, they can be used to transform the imagery. This procedure is straightforward. Let X represent a matrix whose columns are the M eigenvectors. The size of X, in this case, is M × M. Further, let A denote the total collection of multispectral or hyperspectral imagery. In this case, A is of size N × M, where N is the total number of pixels per image plane (for example, N is 262,144 if the image plane is 512 × 512) and M is the number of spectral bands. Then a simple matrix multiplication is performed:
$$A' = AX \quad \text{(2-3)}$$

where the first column of A' is the first principal component image, the second column of A' is the second principal component image, and so on (James R. Carr, 1998).
Suppose the eigenvectors U_1, U_2, …, U_M are sorted in descending order of the eigenvalues of the matrix S, and X = [U_1, U_2, …, U_M]. Then A' = [Y_1, Y_2, …, Y_M] and A = [A_1, A_2, …, A_M].
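Section 2.2 can be sketched in a few lines (my rendering, not code from the paper): eigendecompose S, sort the eigenvectors by descending eigenvalue, and apply equation (2-3):

```python
def pca_transform(A):
    """A: N x M band matrix. Returns A' = AX (Eq. 2-3) and the eigenvector matrix X."""
    S = np.cov(A, rowvar=False)           # M x M variance/covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)  # eigh suits the symmetric S
    order = np.argsort(eigvals)[::-1]     # descending eigenvalue order
    X = eigvecs[:, order]                 # columns U_1, ..., U_M
    return A @ X, X
```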
2.3 Image Fusion

The histogram of the panchromatic image is matched to that of the first principal component image, and the first principal component is replaced by the matched image. The fused image is then obtained by reconstructing the bands through the inverse PCA transformation.
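The whole substitution step might then look as follows; scikit-image's match_histograms is assumed to stand in for the histogram matching, pan is assumed to be a co-registered panchromatic image flattened to length N, and pca_transform is the sketch from Section 2.2:

```python
from skimage.exposure import match_histograms

def pca_fuse(A, pan):
    """PCA pan-sharpening sketch: match pan to PC1, substitute, invert."""
    A_pc, X = pca_transform(A)
    A_pc[:, 0] = match_histograms(pan, A_pc[:, 0])  # pan matched to first PC
    return A_pc @ X.T  # inverse PCA: X is orthogonal, so X^{-1} = X^T
```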
3. 2DPCA-BASED ALGORITHM
3.1 2DPCA
Let X denote an n-dimensional unitary column vector, and let image A, an m × n random matrix, be projected onto X by the following linear transformation:

$$Y = AX \quad \text{(3-1)}$$
Thus, we obtain an m-dimensional projected vector Y, which is called the projected feature vector of image A. The total scatter of the projected samples can be used to measure the discriminatory power of the projection vector X, and it can be characterized by the trace of the covariance matrix of the projected feature vectors:
$$J(X) = \mathrm{tr}(S_x) \quad \text{(3-2)}$$

$$S_x = E(Y - EY)(Y - EY)^T = E[AX - E(AX)][AX - E(AX)]^T = E[(A - EA)X][(A - EA)X]^T$$

$$\mathrm{tr}(S_x) = X^T \left[ E(A - EA)^T (A - EA) \right] X \quad \text{(3-3)}$$
Let

$$C_t = E\left[(A - EA)^T (A - EA)\right] \quad \text{(3-4)}$$
The matrix C_t, which is an n × n nonnegative definite matrix, is called the image covariance (scatter) matrix. Suppose that there are M training image samples in total, that the j-th training image is denoted by an m × n matrix A_j (j = 1, 2, …, M), and that the average image of all training samples is denoted by Ā. Then C_t can be evaluated by
$$C_t = \frac{1}{M}\sum_{j=1}^{M} (A_j - \bar{A})^T (A_j - \bar{A}) \quad \text{(3-5)}$$
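Equation (3-5) is nearly a one-liner in NumPy; a sketch assuming imgs is an M × m × n array stacking the training images (the function name is mine):

```python
def image_covariance(imgs):
    """Image covariance (scatter) matrix C_t of Eq. (3-5); result is n x n."""
    D = imgs - imgs.mean(axis=0)   # center every image on the average image
    # Sum of D_j^T D_j over the M training samples, then average
    return np.einsum('jki,jkl->il', D, D) / len(imgs)
```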
Alternatively, the criterion in (3-2) can be expressed as

$$J(X) = X^T C_t X \quad \text{(3-6)}$$
where X is a unitary column vector. This criterion is called the generalized total scatter criterion. The unitary vector X that maximizes the criterion is called the optimal projection axis. Intuitively, this means that the total scatter of the projected samples is maximized after the projection of an image matrix onto X.
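Since C_t is symmetric and nonnegative definite, maximizing (3-6) over unit vectors reduces to an eigenproblem, as the next paragraph spells out; a sketch building on image_covariance above (the function name optimal_axes is mine):

```python
def optimal_axes(imgs, d):
    """Top-d projection axes: eigenvectors of C_t with the largest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(image_covariance(imgs))
    X = eigvecs[:, np.argsort(eigvals)[::-1][:d]]  # n x d, orthonormal columns
    return X  # features of an image A: Y = A @ X, per Eq. (3-1)
```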
The optimal projection axis X_opt is the unitary vector that maximizes the total scatter, i.e., the eigenvector of C_t corresponding to the largest eigenvalue. In general, a single optimal projection axis is not enough; a set of projection axes usually needs to be selected. Therefore, we select the