4.1 TRAINING

The training stage presents the algorithm with a set of antigens obtained from the remote sensing image by selecting regions of interest; each region of interest supplies a set of sample points that are applied to train the classifier. The antibody population involves two parts: the memory cells Ab_{m} and the set of remaining Ab's. Training proceeds as follows:

1. Initialize the memory cells Ab_{m} by randomly choosing antibodies from the training set.

2. Select an antigen Ag_i from the training set and present it to all the Ab's.

3. Determine the affinity of each Ab to Ag_i. In the present investigation, the distance between the antibody and the antigen, computed over the image bands, is used as the measure of affinity, as defined in equations (1) and (2).

4. Select the n highest affinity Ab's to compose a new set; the memory cell with the highest affinity to Ag_i is identified as mc_match. The selected Ab's generate clones in proportion to their antigenic affinities, and the total number of clones generated, N_c, is given by equation (3),
where β = a multiplying factor
N = the total number of Ab's
round(·) = the operator that rounds its argument toward the closest integer.
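For illustration, a minimal Python sketch of the cloning in step 4 is given below. The rank-based clone count round(beta * N / rank), the selection of the n best Ab's by sorting, and the function name generate_clones are assumptions in the spirit of clonal selection algorithms, not the exact form of equation (3).

import numpy as np

def generate_clones(antibodies, affinities, n, beta):
    # Clone the n highest-affinity Ab's (step 4).  The rank-based count
    # round(beta * N / rank) is an assumed concrete choice; the paper's
    # equation (3) defines N_c only in terms of beta, N and round(.).
    N = len(antibodies)
    order = np.argsort(affinities)[::-1][:n]      # indices of the n best Ab's
    clones = []
    for rank, idx in enumerate(order, start=1):
        n_clones = int(round(beta * N / rank))    # higher-ranked Ab's give more clones
        clones.extend(antibodies[idx].copy() for _ in range(n_clones))
    return np.array(clones)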
5. Allow each Ab in the clone set C the opportunity to produce mutated offspring C*. The higher the affinity, the smaller the mutation rate. The mutate procedure and the function mutate(x) are defined in Figure 2, where the function Irandom() returns a random value in the range [0,1] and Lrandom() returns a random value in the range [-1,1]. The function Δ(t, y) is defined in equation (4) as follows:
Δ(t, y) = y (1 - r^((1 - t/T)^λ))                                    (4)
where t = the iteration number
T = the maximum iteration number
r = a random value in the range [0,1]
λ = a parameter that decides the degree of non-uniformity
mutate(x)
{
    foreach (x.vi in x.v)           // loop over every feature value of x
    do
        ai = min_vi                 // lower bound of feature i
        bi = max_vi                 // upper bound of feature i
        rd_mr = Irandom()           // random value in [0,1]
        rd_to = Lrandom()           // random value in [-1,1]
        if (rd_mr < mutation_rate)
            if (rd_to >= 0)
                x.vi = x.vi + Δ(t, bi - x.vi)
            else
                x.vi = x.vi - Δ(t, x.vi - ai)
    done
    return x
}
Fig.2. Mutation
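A runnable Python version of the mutation in Figure 2 and of equation (4) might look as follows; the band bounds min_v and max_v, the default mutation_rate and the non-uniformity parameter lam are assumed values (the paper sets the mutation rate inversely to affinity).

import random

def delta(t, y, T, lam):
    # Equation (4): delta(t, y) = y * (1 - r**((1 - t/T)**lam)), with r
    # drawn uniformly from [0, 1]; the step shrinks as t approaches T.
    r = random.random()
    return y * (1.0 - r ** ((1.0 - t / T) ** lam))

def mutate(x, t, T, min_v, max_v, mutation_rate=0.1, lam=2.0):
    # Python version of Figure 2.  x is a list of band values; min_v and
    # max_v give the per-band bounds; mutation_rate and lam are assumed.
    for i in range(len(x)):
        ai, bi = min_v[i], max_v[i]
        rd_mr = random.random()              # Irandom(): value in [0, 1]
        rd_to = random.uniform(-1.0, 1.0)    # Lrandom(): value in [-1, 1]
        if rd_mr < mutation_rate:
            if rd_to >= 0:
                x[i] = x[i] + delta(t, bi - x[i], T, lam)   # move toward upper bound
            else:
                x[i] = x[i] - delta(t, x[i] - ai, T, lam)   # move toward lower bound
    return x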
6. Calculate the affinity aff_j of each matured clone in C* in relation to the antigen Ag_i.

7. Select the clone with the highest affinity from the set C* in relation to Ag_i as the candidate memory cell, mc_candidate, to enter the set of memory antibodies Ab_{m}.
8. Decide whether mc_candidate replaces mc_match, which was identified previously. If mc_candidate has a higher affinity to the training antigen Ag_i, the candidate memory cell is added to the set of memory cells Ab_{m} and replaces mc_match.
9. Replace the d lowest affinity Ab's from the set of remaining Ab's.
10. A stopping criterion is calculated at this point. It is met if the average affinity of the Ab's is above a threshold value. If the stopping criterion is met, then training on this one antigen stops. If the stopping criterion has not been met, repeat, beginning at step 3.

This process continues until all antigens have been presented for training.
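To show how steps 3-10 fit together for a single antigen, a sketch of the refinement loop is given below. The affinity measure 1/(1 + Euclidean distance), the bound of T iterations, and the replacement of the d worst Ab's by random ones are assumptions standing in for equations (1)-(2) and details not stated in the text; generate_clones() and mutate() are the sketches above, and antibodies and memory cells are plain lists of band values.

import numpy as np

def affinity(ab, ag):
    # Assumed affinity measure: 1 / (1 + Euclidean distance) over the image
    # bands; the paper's equations (1)-(2) are not reproduced here.
    return 1.0 / (1.0 + float(np.linalg.norm(np.asarray(ab, dtype=float) -
                                             np.asarray(ag, dtype=float))))

def train_on_antigen(ag, memory_cells, antibodies, n, d, beta, T,
                     min_v, max_v, threshold):
    # One antigen's refinement loop (steps 3-10).  Bounding the loop by T
    # iterations and replacing the d worst Ab's with random ones are
    # assumed details.
    for t in range(1, T + 1):
        affs = [affinity(ab, ag) for ab in antibodies]                   # step 3
        mc_match = max(memory_cells, key=lambda mc: affinity(mc, ag))
        clones = generate_clones(np.array(antibodies, dtype=float),
                                 np.array(affs), n, beta)                # step 4
        matured = [mutate(list(c), t, T, min_v, max_v) for c in clones]  # step 5
        mc_candidate = max(matured, key=lambda c: affinity(c, ag))       # steps 6-7
        if affinity(mc_candidate, ag) > affinity(mc_match, ag):          # step 8
            memory_cells[memory_cells.index(mc_match)] = mc_candidate
        antibodies.sort(key=lambda ab: affinity(ab, ag))                 # step 9: drop the
        antibodies[:d] = [list(np.random.uniform(min_v, max_v))          # d worst Ab's
                          for _ in range(d)]
        if float(np.mean([affinity(ab, ag) for ab in antibodies])) > threshold:  # step 10
            break
    return memory_cells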
4.2 CLASSIFICATION
After training has been completed, the evolved memory cells Ab_{m} are available for use in classification. Each data item is presented to the memory cells; by calculating the affinity between each memory cell and the image data, the pixel is assigned to the class whose memory cell has the maximum affinity.
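A minimal sketch of this decision rule, assuming each memory cell carries a class label and reusing the affinity() helper from the training sketch:

def classify_pixel(pixel, memory_cells):
    # memory_cells: list of (class_label, feature_vector) pairs evolved in
    # training; the pixel takes the label of the cell with maximum affinity.
    best_label, _ = max(memory_cells, key=lambda mc: affinity(mc[1], pixel))
    return best_label

def classify_image(image, memory_cells):
    # image: nested structure of shape (rows, cols, bands); returns the
    # per-pixel class label map.
    return [[classify_pixel(image[r][c], memory_cells)
             for c in range(len(image[r]))]
            for r in range(len(image))]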
5. EXPERIMENTAL RESULTS
5.1 Data
The study area of this research is in Wuhan city, China. The TM images (400x400 pixels) used were acquired on Oct. 26, 1998; Fig.3 shows the image. The classification scheme adopted here consists of five classes: Changjiang River, lake, vegetation, road and building. In the experiment, five regions of interest, one for each of the five classes, were selected as training regions, and every training region contained 100 ground reference sample points.
5.2 Results
In this case, the running parameters were n = 10, d = 5 and β = 10.
Fig.4 illustrates the classification result using the artificial immune classifier. For comparison, Fig.5 illustrates the classification result using the maximum likelihood classifier. Table 1 shows the classification accuracy of the maximum likelihood method and Table 2 shows the accuracy of the AIS method. From Table 2, it is found that the AIS approach produces better classification results than the maximum likelihood method. To examine the results in more detail, the confusion matrices are given in Table 1 and Table 2. As shown in Table 2, the AIS approach improved the overall classification accuracy from 85.0% to 89.8% (a 4.8% improvement). Among the individual classes, vegetation shows the largest improvement, from 59% to 76% (17% improvement), followed by road (5% improvement) and building (4% improvement). The reason for this is that the maximum likelihood approach works well only when its underlying assumptions are satisfied, and poor performance may be obtained if the true probability density functions differ from those assumed by the model, whereas the AIS is a nonlinear model, which makes it flexible in modelling complex real-world relationships.