Once the parameters of the model are known we can compute the MAP by applying the equations above. However, even if the parameters are known, computing the MAP exactly is an intractable problem, and some alternative procedures have been proposed [Dubes & Jain, 1989]. Among them we have simulated annealing algorithms, which find MAP estimates. As the computational effort they require is considerable, there are two widely used approximations to the MAP estimate: ICM (iterated conditional modes) and MPM (maximizer of posterior marginals). A review of these methods can be found in [Dubes & Jain, 1989] and the references therein. We will center on ICM [Besag, 1986], which presents an excellent trade-off between the quality of the correction and the required computational effort.
Besides the contextual correction using a MRF model, we have also considered two classical contextual correction methods based on mixed-probability models. Starting from the expression given in the previous section, these corrections follow from the MAP expression by assuming conditional independence within a spatial neighborhood: a) the Welch model and b) the Owen model. We use these two models, together with ICM, in this work.
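As an illustration of the ICM correction step, the following sketch applies a synchronous ICM update with a Potts-type prior over a 4-neighborhood. This is a minimal sketch under our own assumptions (the function name, the neighborhood system, the synchronous update and the parameter values are illustrative; it is not the implementation used in the paper):

    import numpy as np

    def icm_correction(log_lik, init_labels, beta=1.5, n_iter=5):
        # log_lik:     (H, W, K) per-pixel log-likelihoods log p(y_s | class k)
        # init_labels: (H, W) initial spectral classification (integer labels)
        # beta:        Potts smoothing parameter; n_iter: number of ICM sweeps
        labels = init_labels.copy()
        H, W, K = log_lik.shape
        for _ in range(n_iter):
            one_hot = np.eye(K)[labels]              # (H, W, K) label indicators
            neigh = np.zeros((H, W, K))
            neigh[1:, :, :] += one_hot[:-1, :, :]    # northern neighbour votes
            neigh[:-1, :, :] += one_hot[1:, :, :]    # southern neighbour votes
            neigh[:, 1:, :] += one_hot[:, :-1, :]    # western neighbour votes
            neigh[:, :-1, :] += one_hot[:, 1:, :]    # eastern neighbour votes
            # Local MAP update: spectral evidence plus contextual reward.
            labels = np.argmax(log_lik + beta * neigh, axis=2)
        return labels

With beta = 0 the update reduces to the pixelwise spectral classification, so beta controls the strength of the contextual correction.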
The contextual classifiers take as input the classifications obtained with the spectral classifiers described in section 2.1, producing corrected classifications for each of them. The input images of the classifiers are two LANDSAT images of Greenland, Denmark¹: a LANDSAT MSS image of the Igaliko area and a LANDSAT-5 TM image of the Ymer Ø area. Both are 512 x 512 pixels in size. The training areas were selected by expert geologists. In the Igaliko image the classes are, according to their spectral distribution, easy to discriminate, and there is only a slight overlapping between the training samples. In the Ymer Ø image the classes are harder to discriminate, and there is a high overlapping between the training samples. See [Conradsen et al., 1987] for details.
We use test sample estimation to estimate the accuracy of the classifications. The training set T is split into two disjoint sets, T' (learning set) and T'' (test set), by placing randomly 2/3 of the training samples into T'; the remainder are placed into T''. The learning set is used to design the classifier and the test set to estimate its accuracy. Tables 1 and 2 show the learning and test set sizes.
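As a sketch of this splitting procedure (our illustration; the per-class stratification and the fixed seed are assumptions not stated here):

    import numpy as np

    def split_training_set(samples, labels, frac=2.0 / 3.0, seed=0):
        # Randomly place frac of the samples of each class into the learning
        # set T' and the remainder into the test set T''.
        rng = np.random.default_rng(seed)
        learn_idx, test_idx = [], []
        for k in np.unique(labels):
            idx = rng.permutation(np.where(labels == k)[0])
            cut = int(round(frac * len(idx)))
            learn_idx.extend(idx[:cut])
            test_idx.extend(idx[cut:])
        return (samples[learn_idx], labels[learn_idx],   # T' (learning set)
                samples[test_idx], labels[test_idx])     # T'' (test set)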
¹The images used in this work were provided by IMM, The Technical University of Denmark, Lyngby.
Class | T'    | T''   | Sum
------+-------+-------+------
1     | 3806  | 1919  | 5725
2     | 7542  | 3830  | 11372
3     | 5463  | 2768  | 8231
4     | 2796  | 1395  | 4191
5     | 8834  | 4443  | 13277
Total | 28441 | 14355 | 42796
Table 1: Learning and test set size. Igaliko.
Class | T'   | T''  | Sum
------+------+------+------
1     | 2464 | 1234 | 3698
2     | 843  | 392  | 1235
3     | 413  | 194  | 607
4     | 196  | 83   | 279
5     | 480  | 234  | 714
6     | 476  | 233  | 709
7     | 178  | 77   | 255
8     | 344  | 149  | 493
9     | 52   | 21   | 73
10    | 187  | 79   | 266
11    | 94   | 33   | 127
12    | 656  | 313  | 969
13    | 144  | 64   | 208
14    | 369  | 167  | 536
15    | 227  | 96   | 323
16    | 192  | 81   | 273
17    | 274  | 119  | 393
18    | 453  | 220  | 673
19    | 271  | 118  | 389
20    | 247  | 107  | 354
Total | 8560 | 4014 | 12574
Table 2: Learning and test set size. Ymer Ø.
4 EXPERIMENTAL RESULTS
In table 3 we show the accuracy of the classifications performed on the Igaliko image and in table 4 the accuracy of the classifications performed on the Ymer Ø image. The first column shows the name of the spectral classifier used to get the initial map and the second column the accuracy of that classification. The remaining columns show the accuracies of the contextual corrections made over the initial map by using the three models adopted in this paper.
5 DISCUSSION AND CONCLUDING REMARKS
From tables 3 and 4 we note that the accuracy of the spectral classifications can be improved, sometimes drastically, when they are used as input to a contextual classifier, independently of the nature of the spectral classifier. This is true for the three contextual classifiers tested in this work. We can conclude that among the contextual classifiers ICM gives the best results, and its required computational effort is lower than that of the others. As the ICM computational effort is identical for every initial classification, the global computational cost is determined by the spectral classification cost.
We must also note that in both problems the accuracies obtained with the combinations a) CART + ICM and b) 1-NN (T_LVQ-1) + ICM are very high.
Spectral Classifier | Orig. | ICM   | Welch | Owen
--------------------+-------+-------+-------+------
ML                  | 73.51 | 91.33 | 79.93 | 90.21
RDA                 | 78.97 | 89.37 | 85.46 | 85.68
CART                | 80.66 | 92.30 | 86.66 | 86.55
1-NN (T)            | 74.61 | 86.94 | 85.87 | 85.70
1-NN (T_M)          | 77.76 | 83.02 | 84.63 | 84.66
1-NN (T_MC)         | 77.08 | 82.83 | 84.83 | 84.85
1-NN (T_PSM)        | 77.50 | 85.32 | 84.12 | 84.52
1-NN (T_LVQ-1)      | 79.07 | 90.80 | 86.42 | 86.44
Table 3: Accuracy of the classifications. Igaliko.
Spectral Classifier | Orig. | ICM   | Welch | Owen
--------------------+-------+-------+-------+------
ML                  | 61.92 | 91.37 | 85.11 | 85.33
RDA                 | 64.29 | 85.55 | 69.36 | 69.57
CART                | 62.35 | 95.58 | 86.73 | 87.16
1-NN (T)            | 78.50 | 97.98 | 86.50 | 87.08
1-NN (T_M)          | 65.67 | 90.07 | 82.96 | 83.60
1-NN (T_MC)         | 63.23 | 81.09 | 70.12 | 70.35
1-NN (T_PSM)        | 64.55 | 80.97 | 72.66 | 73.22
1-NN (T_LVQ-1)      | 68.18 | 93.64 | 85.55 | 86.41
Table 4: Accuracy of the classifications. Ymer Ø.
The computational effort associated with CART is mainly influenced by the learning step (a function of the training set size), but this is a relatively low-cost step. LVQ-1 learning is a quick process and, as an additional advantage, we can select the training set size and the parameters involved [Kohonen, 1990]. Moreover, the values of the parameters involved in the LVQ-1 learning have been estimated automatically by using two algorithms proposed by the authors.
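For reference, the basic LVQ-1 update rule [Kohonen, 1990] moves the nearest prototype towards a sample of its own class and away from a sample of a different class. The sketch below is our illustration; the linearly decaying learning rate and the parameter values are assumptions, and the authors' own parameter-estimation algorithms are not reproduced here:

    import numpy as np

    def lvq1_train(prototypes, proto_labels, X, y, alpha0=0.05, epochs=5):
        # prototypes:   (P, d) initial codebook vectors
        # proto_labels: (P,) class of each prototype
        # X, y:         training samples and their labels
        codebook = prototypes.copy()
        total = len(X) * epochs
        t = 0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                alpha = alpha0 * (1.0 - t / total)  # decaying learning rate
                j = np.argmin(((codebook - xi) ** 2).sum(axis=1))  # nearest
                if proto_labels[j] == yi:
                    codebook[j] += alpha * (xi - codebook[j])  # attract
                else:
                    codebook[j] -= alpha * (xi - codebook[j])  # repel
                t += 1
        return codebook

In this setting the trained prototypes play the role of the reduced training set T_LVQ-1 over which the 1-NN classifier operates.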
These combinations have also been tested on synthetic very-high-dimensional spectral images [Cortijo, 1995], and the results obtained there extend those shown here.
REFERENCES
[Besag, 1986] Besag, J., 1986. On the Statistical Analysis of
Dirty Pictures. Journal of the Royal Statistical Society. Ser.
B, 48(3), pp. 259-302.
[Breiman et al., 1984] Breiman, L., Friedman, J., Olshen, R.
and Stone, C., 1984. Classification and Regression Trees.
Wadsworth International Group.
[Conradsen et al., 1987] Conradsen, K., Nielsen, A.A.,
Nielsen, B.K., Pedersen, J.L. and Thyrsted, T., 1987. The
Use of Structural and Spectral Enhancement of Remote
Sensing Data in Ore Prospecting - East Greenland Case
Study. Technical Report, IMM, The Technical University
of Denmark, Lyngby, Denmark.
[Cortijo et al., 1995] Cortijo, F.J., Pérez de la Blanca, N.,
Molina, R. and Abad, J., 1995. On the combination of non-
parametric nearest neighbor classification and contextual
correction. In: Pattern Recognition and Image Analysis,