The number of densities p(x|ω_i) will be equal to the number of ground cover classes. This means that, for a pixel at position x in multispectral space, a probability can be computed for each class. The required class probability p(ω_i|x) and the available p(x|ω_i) from the training data are related by Bayes' theorem as follows:
p(ω_i|x) = p(x|ω_i) p(ω_i) / p(x)
where p(ω_i) is the probability that class ω_i occurs in the image. The rule for classifying a pixel at position x will be:
x ∈ ω_i if p(x|ω_i) p(ω_i) > p(x|ω_j) p(ω_j) for all j ≠ i
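This decision rule can be sketched in Python; the one-dimensional Gaussian likelihoods and the class means, variances and priors below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """1-D normal density, used here as an illustrative p(x|w_i)."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def bayes_classify(x, means, variances, priors):
    """Return the class index i maximising p(x|w_i) * p(w_i)."""
    scores = [gaussian_pdf(x, m, v) * p
              for m, v, p in zip(means, variances, priors)]
    return int(np.argmax(scores))

# Two hypothetical ground cover classes: class 0 centred at 0.2, class 1 at 0.8.
means, variances, priors = [0.2, 0.8], [0.01, 0.01], [0.5, 0.5]
print(bayes_classify(0.25, means, variances, priors))  # -> 0
print(bayes_classify(0.75, means, variances, priors))  # -> 1
```

In practice the likelihoods are multivariate densities estimated from the training data, one per ground cover class, but the argmax over p(x|ω_i) p(ω_i) is the same.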
As mentioned above, this method is carried out in two stages (Figure 2). In the first stage, as a result of applying the maximum likelihood classification described above, an index related to the measured spectral content is assigned to each pixel. In the second stage, each pixel in turn is taken together with its coordinates and the spectral index derived in the first stage; this index is passed to the pixel data base and checked against the index of the pixel with the same coordinates there. If the check succeeds, the index derived in the first stage represents the real spectral content of the classified pixel and the classification decision is confirmed. If the check fails, the pixel is a conflicting pixel, which means one of two things: either the characteristics of the conflicting pixel have changed since the data base was constructed, or the criteria established for the classification in the first stage are not as accurate as required. In both cases the final decision on classifying such pixels is left to the user, who can either adopt the classification results even though they disagree with the information in the data base, or change the classification criteria.
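A minimal sketch of the second-stage check, assuming the pixel data base is a dictionary mapping (row, column) coordinates to the spectral index recorded when the base was built; the names and return values are illustrative, not taken from the paper.

```python
def referential_classify(coord, stage1_index, pixel_db):
    """Check the stage-1 spectral index against the data base entry.

    Returns ('confirmed', index) when the indices agree, and
    ('conflict', index) when they differ, leaving the final decision
    on the conflicting pixel to the user as described in the text.
    """
    db_index = pixel_db.get(coord)
    if db_index == stage1_index:
        return ('confirmed', stage1_index)
    return ('conflict', stage1_index)

# Hypothetical data base with two pixels.
pixel_db = {(0, 0): 3, (0, 1): 5}
print(referential_classify((0, 0), 3, pixel_db))  # -> ('confirmed', 3)
print(referential_classify((0, 1), 3, pixel_db))  # -> ('conflict', 3)
```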
Applying this method to the classification of a test-site image produced a clear improvement in image quality and general appearance. A comparison of the accuracy results of the ordinary and referential classifications (Table 1) shows improvements of 0%, 7%, 13%, 16% and 25% for the poplar, chestnut forest, grass, water bodies and sugar beet classes respectively, and the average accuracy over all the studied classes improved by 17.2%.
A further comparison of the accuracies of the two classification procedures in Figure 3 shows that the accuracy of the referential classification is always at least as high as that of the ordinary (supervised) classification. The worst case for the referential classification occurs when the data base contains no pre-collected information about the pixels to be classified; even then the referential classification is just as accurate as the ordinary classification, and never less, as in the poplar class case.
3. CONCLUSIONS
Referential classification of image data is a vital step toward automating the classification process, which is itself an important step in automating the whole image processing and analysis chain. This can be achieved by removing the user from the handling of conflicting pixels: the spectral labels allocated to them in the ordinary classification stage can be adopted, marking their classification as ordinary rather than referential, or the pixels can be rejected and reported as unknown; a multifold process involving three steps that reflect a high level of expert and artificially intelligent behaviour. Reporting rejected and unknown pixels could be a highly advantageous feature of this method, especially when multitemporal images are used: the data base is constructed from an earlier image of an area, and a later image of the same area is then classified referentially.
All pixels rejected in the referential classification could then denote possible change in the area between the time the data base was constructed and the date of classification. The referential classification method does, however, have some drawbacks: constructing a truly indicative data base about individual pixels is not an easy task, though not an impossible one, and the algorithm for performing this method is more complicated and needs more time and more powerful hardware to execute.
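The automated handling of conflicting pixels outlined above might be sketched as follows; the policy names and the shape of the conflict records are assumptions for illustration, not part of the paper.

```python
def resolve_conflict(stage1_index, policy):
    """Automated replacement for the user's decision on a conflicting pixel."""
    if policy == 'adopt':
        # Keep the stage-1 spectral label; the result is then an ordinary,
        # not referential, classification for this pixel.
        return stage1_index
    # Otherwise reject the pixel and report it as unknown.
    return 'unknown'

def report_changes(conflicts):
    """Rejected pixels denote possible change since the data base was built."""
    return [coord for coord, _ in conflicts]

# Hypothetical conflicting pixels as (coordinate, stage-1 index) pairs.
conflicts = [((10, 4), 3), ((11, 4), 7)]
print(resolve_conflict(3, 'adopt'))   # -> 3
print(report_changes(conflicts))      # -> [(10, 4), (11, 4)]
```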