The transformation given in equations (4) and (5) combines the independently trained networks into a single network. The most important feature of this step is that the knowledge of several networks is collected in one network, which then computes all the class memberships.
In contrast to classical set theory, fuzzy set theory allows infinitely many grades of membership, defined for all elements in the range of 0 and 1. The mostly used AND-type connection of two fuzzy sets is the minimum of the memberships:

μ_{A∩B}(x) = min{μ_A(x), μ_B(x)}    (6)

Another AND-type connection of fuzzy sets is the parameterized combination of the algebraic AND and the minimum:

μ_{AND}(x) = (1 - γ)·μ_A(x)·μ_B(x) + γ·min{μ_A(x), μ_B(x)}    (7)
The γ parameter controls how strong the AND-feature is. In the case of γ = 0 the function is identical to the algebraic AND.
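The operator is simple to compute. The sketch below assumes the reconstructed form of equation (7), with γ blending the algebraic product and the minimum:

def fuzzy_and(mu_a: float, mu_b: float, gamma: float = 0.0) -> float:
    # AND-type connection of two memberships, after eq. (7):
    # gamma = 0 gives the algebraic AND (product),
    # gamma = 1 gives the minimum operator.
    return (1.0 - gamma) * mu_a * mu_b + gamma * min(mu_a, mu_b)

print(fuzzy_and(0.9, 0.6, gamma=0.0))  # 0.54 (algebraic AND)
print(fuzzy_and(0.9, 0.6, gamma=1.0))  # 0.6  (minimum)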
In fuzzy decision making there are modifiers called hedges. These hedges can be interpreted linguistically as VERY, NOT SO etc. The simplest way of using them is raising the memberships to an exponent.
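As a minimal sketch, a hedge can be applied as an exponent on a membership value; the exponents chosen here (2 for VERY, 0.5 for NOT SO) are common illustrative choices, not values prescribed above:

def hedge(mu: float, h: float) -> float:
    # Apply a linguistic hedge as an exponent on a membership.
    return mu ** h

print(hedge(0.9, 2.0))   # VERY: 0.81, concentrates the membership
print(hedge(0.49, 0.5))  # NOT SO: 0.7, dilates the membership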
Let’s see the steps of fuzzy decision making! (Figure 2)
Figure 2: Flowchart of the fuzzy decision making (inputs → Rule 1 … Rule n → implication → inference → output)
In the first step the input values must be fuzzified if necessary, and the fuzzy rules must be evaluated. These rules can contain constraints and hedges. In the implication phase the results of the rules are accumulated; the importance of the rules can also be taken into consideration. Inference means the decision, so we obtain the class the input belongs to. In practice the max function is used for the inference.
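A minimal sketch of this flow, with a hypothetical accumulated rule result per class and the max function making the decision:

# Hypothetical accumulated rule results for one pixel, per class
rule_results = {"forest": 0.81, "water": 0.12, "urban": 0.35}

# Inference: the max function selects the class belonging
decision = max(rule_results, key=rule_results.get)
print(decision)  # forest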
3. THEMATIC CLASSIFICATION
3.1. Preparation of the classification
In the thematic classification the first step is the selection
of the training areas. The training areas serve to bring terrain information into the classifier, so the method is supervised.
Neural networks are trained directly with pixel intensities; this is a difference from the traditional statistical methods, where some (statistical) measures are derived from the pixels and integrated into the classifier. The usual measures are the mean vectors and covariance matrices (maximum likelihood classifier) or just a mean vector (minimum distance method).
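To make the contrast concrete, a minimal sketch of the statistical route (NumPy assumed), deriving a mean vector per class and classifying by minimum distance, while a network would consume the raw intensities directly:

import numpy as np

def class_means(train_pixels):
    # train_pixels: {class: (N, bands) array of training pixels}
    return {c: pix.mean(axis=0) for c, pix in train_pixels.items()}

def minimum_distance(pixel, means):
    # Assign the pixel to the nearest class mean vector.
    return min(means, key=lambda c: np.linalg.norm(pixel - means[c]))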
It has already been shown that artificial neural networks can learn directly from the pixels with acceptable accuracy; if they are given only statistics, the result won't be right.
With the usual masking technique the training areas can be marked. After the selection, the training material of the independent networks is prepared: all pixels belonging to the class and a resampled set of the remaining pixels are chosen. This step is done for every thematic class.
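A sketch of this preparation, assuming a labeled mask image; the 1:1 resampling ratio is an assumption, none is specified above:

import numpy as np

def training_set(image, mask, class_id, neg_ratio=1.0, rng=np.random.default_rng(0)):
    # Collect all class-own pixels plus a resampled set of the rest.
    pos = image[mask == class_id]
    rest = image[mask != class_id]
    neg = rest[rng.choice(len(rest), size=int(len(pos) * neg_ratio), replace=False)]
    x = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return x, y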
The training of the neural networks is executed
independently. As is known, the training of a net is iterative. There are two criteria to stop the iteration:
• reaching the desired network accuracy,
• reaching the maximal number of epochs (iterations).
In the second case the necessary network accuracy isn't fulfilled, so the training must be started again. The design of the networks follows some rules; these are the conditions to get a minimal but adequate network. It's possible that the number of neurons in a layer must be increased in order to give more "flexibility" to the network.
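A schematic training loop with the two stopping criteria (train_epoch and accuracy are placeholders for the actual network routines):

def train(net, data, train_epoch, accuracy, target_acc=0.95, max_epochs=1000):
    # Iterate until the desired accuracy or the maximal epoch count is reached.
    for _ in range(max_epochs):
        train_epoch(net, data)                # one weight-update pass
        if accuracy(net, data) >= target_acc:
            return True                       # criterion 1: accuracy reached
    return False                              # criterion 2: max epochs, restart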
When all the independent networks are trained, the previously described transformation is executed. At the end we get a single neural network which contains all the features of the original nets.
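The weight-level details are those of equations (4) and (5); as a purely functional sketch, the merged network behaves as if the per-class networks ran side by side and their outputs were collected into one membership vector:

def combined_network(pixel, class_nets):
    # One membership per thematic class, gathered into a single output vector.
    return [net(pixel) for net in class_nets]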
3.2. The neuro-fuzzy method
Combining the neural network technique with fuzzy decision making, we get the neuro-fuzzy classifier. Let's see how! (Figure 3)
As mentioned in Chapter 2.1, neural networks give an output in the range of 0 to 1. This continuous interval can be interpreted as "neural probabilities". The inputs of the fuzzy decision making have to be fuzzy or must be fuzzified. Fuzzification means that we have to calculate class memberships. In thematic mapping it is fortunate that the class memberships are already computed during the classification. If the network gives an output of 0.9 for a pixel, it can be understood that the pixel belongs to the category with a membership of 90%. That's why these values are fuzzy inputs for the decision making!
Because there are several thematic classes, the transformed neural network produces a membership vector. These memberships are taken into consideration in the fuzzy rules. These rules mirror the human knowledge of the nature. The rules are easily coded in table form; this is the knowledge table (KT). The knowledge table can contain any terrain information: texture, digital elevation data, intensity data etc. We can interpret the knowledge table also as a projection of the input data onto the (output) categories. Having n inputs and m classes, the evaluation of the rules is the following:
IPM_{i,j} = and(o_i^{h_i}, KT_{i,j}),    i = 1, 2, …, n;  j = 1, 2, …, m    (8)
where IPM is the implication matrix, o is the input vector (the output of the neural network, possibly extended with further terrain data), and h is the hedge vector.
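A sketch of equation (8), assuming the AND operator of equation (7) and max inference over the columns of the implication matrix; the knowledge table values are illustrative only:

import numpy as np

def evaluate_rules(o, h, KT, gamma=0.0):
    # IPM[i, j] = and(o_i ** h_i, KT[i, j]) as in equation (8)
    hedged = o ** h                              # apply the hedge vector
    prod = hedged[:, None] * KT                  # algebraic AND part
    mini = np.minimum(hedged[:, None], KT)       # minimum part
    return (1 - gamma) * prod + gamma * mini     # eq. (7) element-wise

o = np.array([0.9, 0.4, 0.7])      # n inputs: network outputs + terrain data
h = np.array([1.0, 2.0, 1.0])      # hedge vector
KT = np.array([[1.0, 0.0],         # n x m knowledge table (illustrative)
               [0.0, 1.0],
               [0.5, 0.5]])

IPM = evaluate_rules(o, h, KT)
print(IPM.max(axis=0).argmax())    # inference: class with maximal membership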