Yosuke Ito
(a) JC1-2 (L-band, D = 881 days and B⊥ = 225 m) (b) EC2-3 (C-band, D = 898 days and B⊥ = 33 m)
Figure 2: Coherence images with decorrelation caused by the earthquake
[Histograms of coherence (horizontal axis: coherence, 0.0 to 1.0; vertical axis: pixel count) for JC1-2 (JERS-1) and EC2-3 (ERS-1)]
Figure 3: Histograms of JC1-2 and EC2-3
Figure 4: Hazard survey map
a similar distribution; that is, the mean value of ω1 is lower than that of ω2. The divergence between ω1 and ω2 for JC1-2
is more evident than that for the ERS-1 coherence images, even though the distributions overlap considerably.
This tendency is also shown in figure 2. It is interesting to note for EC1-2 that the distributions of ω1 and ω2 largely
overlap and ω1 has a slightly higher average than ω2. This suggests that the damaged areas are mostly built-up areas,
which would have been temporally stable without the earthquake. From the coherence distributions shown in figure 6, the possibility of
detecting the damaged regions using multi-source coherence is suggested.
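The separability suggested by the two coherence distributions can be quantified with a simple histogram-overlap measure. The sketch below uses synthetic coherence samples; the means and spreads chosen for ω1 and ω2 are illustrative assumptions, not values taken from the survey data:

```python
import numpy as np

# Hypothetical coherence samples for the two categories; in practice
# these would be read from the coherence images inside surveyed regions.
rng = np.random.default_rng(0)
omega1 = np.clip(rng.normal(0.35, 0.12, 1000), 0.0, 1.0)  # damaged (assumed mean)
omega2 = np.clip(rng.normal(0.55, 0.12, 1000), 0.0, 1.0)  # undamaged (assumed mean)

# Histogram overlap on [0, 1]: the fraction of probability mass the two
# normalized histograms share (1.0 = identical, 0.0 = fully separated).
bins = np.linspace(0.0, 1.0, 21)
h1, _ = np.histogram(omega1, bins=bins)
h2, _ = np.histogram(omega2, bins=bins)
overlap = np.minimum(h1 / h1.sum(), h2 / h2.sum()).sum()

print(f"mean(omega1)={omega1.mean():.2f}, mean(omega2)={omega2.mean():.2f}, "
      f"overlap={overlap:.2f}")
```

A large overlap with distinct means, as observed for JC1-2, indicates that a single threshold on coherence would misclassify many pixels, which motivates the multi-source classifier of the next section.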
4 EXTRACTION METHOD
The neural classifier for the extraction method is outlined in figure 7. It generally has a high ability for classification
of remote sensing data, although its structure and learning algorithm are simpler than the back-propagation method for
well-known NNs. A coherence value in [0.0, 1.0] is applied to each neuron as an input vector x = [x1, ..., xN]^T, where N
denotes the number of neurons at the input layer. The number of output neurons M is a constant multiple of
the number of categories (L = 2). Input signals are presented to the input layer, and each signal is transmitted to all of
the neurons in the competitive layer through their connection weights. The neurons compete in terms of the Euclidean
distance between the input vector and their weight vectors; the winner's output is set to one and all other outputs
to zero. Each neuron in the competitive layer is assigned to one of the predetermined categories ωk, k = 1, ..., L. Each
category must be assigned to a sufficient number of neurons, since it includes various objects with individual decorrelation.
A category ωk is thus represented by a set of neuron weights. The competitive NN is trained by the LVQ method.
LVQ cyclically updates the weight vectors so as to reward correct classifications and punish incorrect ones.
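The winner-take-all step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the weight values and the neuron-to-category assignment are made up for the example:

```python
import numpy as np

def classify(x, weights, labels):
    """Winner-take-all classification in a competitive layer.

    x       : input coherence vector, shape (N,)
    weights : neuron weight vectors, shape (M, N)
    labels  : category index assigned to each neuron, shape (M,)
    Returns the one-hot output vector and the winner's category.
    """
    # Competition: the winner minimizes the Euclidean distance to x.
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))
    out = np.zeros(len(weights))
    out[winner] = 1.0  # the winner outputs one, all other outputs are zero
    return out, labels[winner]

# Toy example: M = 4 neurons, L = 2 categories, input dimension N = 2.
W = np.array([[0.2, 0.3], [0.3, 0.2], [0.7, 0.8], [0.8, 0.7]])
cats = np.array([0, 0, 1, 1])  # two neurons represent each category
out, cat = classify(np.array([0.72, 0.80]), W, cats)
```

Assigning several neurons per category, as in the toy example, lets one category cover multiple clusters of objects with different decorrelation behaviour.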
Kohonen (1997) proposed the LVQ1, LVQ2.1 and OLVQ1 algorithms for the LVQ method. After preliminary experi-
ments, we employ the following training approach:
(1) Move the weight vectors roughly by the LVQ1.
(2) Fine-tune the weight vectors near category boundaries by the LVQ2.1, with window parameter w = 0.3.
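The two training stages can be sketched as single update steps. This follows Kohonen's standard LVQ1 and LVQ2.1 update rules; the learning rate and the toy weights are illustrative assumptions:

```python
import numpy as np

def lvq1_step(x, label, weights, cats, alpha=0.05):
    """One LVQ1 update: move the winning weight toward x if its
    category matches the sample's label, otherwise away from x."""
    d = np.linalg.norm(weights - x, axis=1)
    c = int(np.argmin(d))
    sign = 1.0 if cats[c] == label else -1.0
    weights[c] += sign * alpha * (x - weights[c])

def lvq21_step(x, label, weights, cats, alpha=0.05, w=0.3):
    """One LVQ2.1 update: if the two nearest neurons belong to different
    categories and x lies inside a window of relative width w around the
    midplane between them, attract the correct weight and repel the other."""
    d = np.linalg.norm(weights - x, axis=1)
    i, j = np.argsort(d)[:2]
    in_window = min(d[i] / d[j], d[j] / d[i]) > (1 - w) / (1 + w)
    if cats[i] != cats[j] and in_window:
        if cats[i] != label:
            i, j = j, i  # make i the correctly labelled neuron
        weights[i] += alpha * (x - weights[i])
        weights[j] -= alpha * (x - weights[j])

# Toy demo: two neurons, one per category.
W = np.array([[0.2, 0.2], [0.8, 0.8]])
cats = np.array([0, 1])
lvq1_step(np.array([0.3, 0.3]), 0, W, cats)    # stage (1): coarse move
lvq21_step(np.array([0.45, 0.45]), 0, W, cats)  # stage (2): boundary tuning
```

In stage (2), only samples falling inside the window near the decision boundary trigger an update, which is why LVQ2.1 is suited to fine-tuning after the coarse LVQ1 pass.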
International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B1. Amsterdam 2000. 159