
  
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B7. Istanbul 2004 
called training the neural network. Once the network is well
trained, it can be used to perform the image classification.
  
Figure 2. A backpropagation feed-forward multilayer neural
network (input layer, hidden layer, output layer).
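As a minimal, hypothetical sketch (not the authors' implementation), a trained feed-forward network of this shape classifies a pixel with a single forward pass: the input band values are multiplied through the hidden layer, passed through an activation function, and the output unit with the largest response gives the class. The dimensions below (7 spectral bands, 5 hidden units, 4 land-cover classes) and the random weights are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 7 spectral bands in, 5 hidden units, 4 classes out.
# In practice these weights would come from backpropagation training.
W1 = rng.normal(size=(7, 5))   # input -> hidden weights
b1 = np.zeros(5)
W2 = rng.normal(size=(5, 4))   # hidden -> output weights
b2 = np.zeros(4)

def classify(pixel: np.ndarray) -> int:
    """One forward pass: sigmoid hidden layer, arg-max over output units."""
    hidden = 1.0 / (1.0 + np.exp(-(pixel @ W1 + b1)))
    output = hidden @ W2 + b2
    return int(np.argmax(output))

pixel = rng.random(7)          # one pixel's band values
print(classify(pixel))         # a class index in 0..3
```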
3. SIMULATED ANNEALING 
Simulated Annealing (SA) [13, 14] will be used to enhance the 
simple competitive learning neural networks. SA is an 
optimization algorithm that is based on the process of annealing 
metals. When a metal is heated up and slowly cooled down, it 
is hardened into an optimal state. The analogy behind this is 
that the algorithm begins the search for the optimal solution by 
exploring many possible solutions [13, 14]. Slowly, it restricts 
the search paths to only the most promising solutions as the 
algorithm is said to be cooling down. The laws of 
thermodynamics state that at temperature T, the probability of
an increase in energy of magnitude δE is given by
P(δE) = exp(−δE / kT)
where k is a constant known as Boltzmann's constant.
This equation is directly used in simulated annealing, although 
it is usual to drop Boltzmann's constant, as it was only
introduced into the equation to cope with different materials. 
Therefore, the probability of accepting a worse state is
P = exp(−c / t), and the worse state is accepted whenever
exp(−c / t) > r
where
c = the change in the cost function
t = the current temperature
r = a random number between 0 and 1
The probability of accepting a worse move is a function of both 
the temperature of the system and of the change in the cost 
function. This approach allows SA to explore solutions that the 
simple competitive learning network would reject on its own.
Simulated annealing allows for some randomness in its search 
for the optimal or near optimal solution. 
Simulated annealing introduces some randomness into the 
selection of clusters (categories). This releases the algorithm 
from being trapped in a local optimum and allows it to venture 
into a search for the global optimum. 
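The acceptance rule above can be sketched in a few lines. This is an illustrative implementation of the standard simulated-annealing test, not the authors' code; the function name `accept` and the demonstration temperatures are assumptions.

```python
import math
import random

def accept(cost_change: float, temperature: float) -> bool:
    """Simulated-annealing acceptance rule: an improving move
    (cost_change <= 0) is always taken; a worse move is taken with
    probability exp(-c / t), i.e. when exp(-c / t) > r for a uniform
    random r in [0, 1)."""
    if cost_change <= 0:
        return True
    return math.exp(-cost_change / temperature) > random.random()

# At a high temperature almost every worse move is accepted, so the
# search explores widely; as the system cools, the acceptance
# probability shrinks and the search settles into promising regions.
random.seed(0)
hot = sum(accept(1.0, 100.0) for _ in range(1000))   # exp(-0.01) ~ 0.99
cold = sum(accept(1.0, 0.01) for _ in range(1000))   # exp(-100)  ~ 0
print(hot, cold)
```

It is this occasional acceptance of worse cluster assignments that lets the annealed competitive learning escape local optima.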
4. EXPERIMENTS 
The image shown in Figure 3 is one of the images tested using
both competitive learning networks and backpropagation feed-
forward multilayer networks. The simple competitive learning
network, the modified competitive learning network, and the
backpropagation feed-forward multilayer network were applied
to the image; the results are shown in Figures 4-6, respectively.
  
Figure 3. An original TM satellite image. 
  
Figure 4. A classified image using the simple competitive 
learning neural networks. 
  
Figure 5. A classified image using the modified competitive 
learning neural networks. 
  
Figure 6. A classified image using feed-forward multilayer 
neural networks. 