As these results show, the intensities of the neighboring pixels are very strongly correlated in the image channels; just a single added band would be required to handle the neighborhood. This possibility is not the topic of the present paper; it is left for future work.
This remarkably strong correlation can also be demonstrated by visualizing the correlation matrix. The 4-neighborhood case is presented in Figure 5a, the 8-neighborhood case in Figure 5b. The periodicity is not difficult to detect in either case.
a) 4-neighborhood
b) 8-neighborhood
Figure 5. Visualization of the correlation coefficients’ absolute
values in all bands with neighborhood extension
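The computation behind Figures 5a and 5b can be illustrated with a short sketch. The following Python code is not the original implementation (the image size, band count and loading are placeholder assumptions); it only shows how a neighborhood-extended feature matrix and its absolute correlation matrix could be built and displayed for the 4-neighborhood case.

import numpy as np
import matplotlib.pyplot as plt

# placeholder for the real multi-band image: (rows, cols, bands)
img = np.random.rand(100, 100, 7)
bands = img.shape[2]

center = img[1:-1, 1:-1, :]      # every interior pixel
up     = img[:-2, 1:-1, :]       # its four neighbors
down   = img[2:,  1:-1, :]
left   = img[1:-1, :-2, :]
right  = img[1:-1, 2:,  :]

# feature matrix: center and neighbor intensities, band by band
X = np.concatenate([a.reshape(-1, bands)
                    for a in (center, up, down, left, right)], axis=1)

R = np.corrcoef(X, rowvar=False)          # linear correlation coefficients

plt.imshow(np.abs(R), cmap='gray')        # cf. Figure 5a (4-neighborhood)
plt.colorbar(label='|correlation coefficient|')
plt.show()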
Figure 6. Plot of the correlation coefficients for a single band (absolute values are shown)
Plotting the absolute values of the correlation coefficients of just a single band (e.g. Band 7) makes this relation more visible (Figure 6). The relation cannot be measured very well with the linear correlation coefficient alone, but it is clearly noticeable. (The periodicity can be demonstrated even better by such plots.)
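As a continuation of the sketch above (it reuses the matrix R and matplotlib already imported there), the single-band coefficient plot of Figure 6 could be produced roughly as follows; the choice of Band 7 is only illustrative.

band = 6                                   # Band 7 (zero-based index)
plt.plot(np.abs(R[band, :]), marker='o')   # one row of the correlation matrix
plt.xlabel('feature index (center + 4 neighbors, band by band)')
plt.ylabel('|correlation with the Band 7 center pixel|')
plt.show()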
The analysis of the test set yields the same results concerning these close relations.
If the visualization of the neighboring pixels is carried out band by band, the relations between the bands become much clearer: see Figure 7 for the 4-neighborhood case.
Figure 7. The relation between the pixels in the 4-neighborhood
(band by band visualization)
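A hedged sketch of such a band-by-band visualization is given below; the placeholder image array and the choice of the lower neighbor are assumptions, not the paper's original code. Each subplot shows the center pixel intensity against the intensity of its neighbor below, one panel per band.

import numpy as np
import matplotlib.pyplot as plt

img = np.random.rand(100, 100, 7)               # placeholder multi-band image
bands = img.shape[2]

center = img[1:-1, 1:-1, :].reshape(-1, bands)  # interior pixels
below  = img[2:,  1:-1, :].reshape(-1, bands)   # the neighbor below each pixel

fig, axes = plt.subplots(1, bands, figsize=(3 * bands, 3))
for k, ax in enumerate(axes):
    ax.plot(center[:, k], below[:, k], '.', markersize=1)
    ax.set_title(f'Band {k + 1}')
    ax.set_xlabel('center pixel')
axes[0].set_ylabel('neighbor pixel')
plt.show()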
The figure above illustrates well that Band 4 and Band 6 are the differing image channels, while the others are strongly correlated. It may be supposed that a Karhunen-Loeve transformation would concentrate the information of the majority of these bands in a few components. (The tools of mathematical factor analysis could answer this question and prove the hypothesis.)
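One simple way to examine this hypothesis, sketched below with a placeholder image array, is to compute the eigenvalue spectrum of the band covariance matrix: if a few principal components explain nearly all the variance, the majority of the bands are indeed redundant.

import numpy as np

img = np.random.rand(100, 100, 7)           # placeholder for the real image
X = img.reshape(-1, img.shape[2])           # pixels as rows, bands as columns

Xc = X - X.mean(axis=0)                     # center each band
cov = np.cov(Xc, rowvar=False)              # band covariance matrix
eigvals = np.linalg.eigvalsh(cov)[::-1]     # eigenvalues, largest first
explained = eigvals / eigvals.sum()         # Karhunen-Loeve variance shares
print('explained variance per principal component:', np.round(explained, 3))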
3. RESULTS
3.1. Neural network for the original image
The presented, efficient Levenberg-Marquardt training method calculated the network parameter updates, so the training process was fast. Only a few epochs (iterations) were necessary to reach the desired error goal. The initial goal was 0.01, which produced 4 errors (0.2 %) over all the training data. Setting the value to 0.005, a completely error-free network was obtained in 17 epochs. The training is shown in Figure 8.
Figure 8. Training the neural network for the original data set
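The training code itself is not given in the paper; the following Python sketch only illustrates the idea on placeholder data, fitting a small one-hidden-layer network by a Levenberg-Marquardt least-squares solver. The network size, training data and targets are assumptions, not the paper's configuration.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.random((200, 7))                   # 200 placeholder training pixels, 7 bands
t = (X[:, 3] > 0.5).astype(float)          # placeholder binary targets

n_in, n_hid = X.shape[1], 5                # assumed network size

def unpack(p):
    # split the flat parameter vector into layer weights and biases
    i = 0
    W1 = p[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b1 = p[i:i + n_hid]; i += n_hid
    W2 = p[i:i + n_hid]; i += n_hid
    b2 = p[i]
    return W1, b1, W2, b2

def residuals(p):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1.T + b1)             # hidden layer
    y = h @ W2 + b2                        # linear output
    return y - t                           # one residual per training pixel

p0 = rng.normal(scale=0.1, size=n_hid * n_in + 2 * n_hid + 1)
sol = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt
print('final sum-squared error:', np.sum(sol.fun ** 2))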