[Figure 5. Example of Decision Tree Classifier [1]. The figure shows an inference flow over input pixel reflectance: tests such as "if pixel reflectance in near-IR band < 5%", "if reflectance in red band < 10%", and "if reflectance in red band < 12%" route each pixel down the tree to a class label such as Vegetation.]
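To make the rule structure of Figure 5 concrete, the following is a minimal sketch of such a threshold-based decision tree classifier. The two thresholds and the "Vegetation" label are taken from the figure; the other class labels are illustrative placeholders, not values from the source.

```python
def classify_pixel(nir, red):
    """Toy decision-tree classifier over pixel reflectances (fractions, 0-1).

    The thresholds mirror the tests shown in Figure 5; only 'Vegetation'
    is a label taken from the figure, the rest are placeholders.
    """
    if nir < 0.05:           # very low near-IR reflectance
        return "Water"       # placeholder label, not from the figure
    if red < 0.10:           # low red reflectance with higher near-IR
        return "Vegetation"  # label shown in the figure
    return "Other"           # placeholder for the remaining classes

print(classify_pixel(nir=0.45, red=0.06))  # -> Vegetation
```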
8. NEURAL NETWORK CLASSIFIERS 
Artificial Neural Network (ANN) based classifiers are used to circumvent the complex class boundary problem. The most widely used model is the Back Propagation model, which is a supervised approach. The issues involved in applying ANN models are (a minimal code sketch follows this list):
• the architecture of the ANN, such as the number of hidden layers and the number of neurons in each hidden layer;
• the degree of training: over-training may make the network mimic the patterns rather than generalize from them, while under-training will cause classification errors;
• the features to be used as inputs: gray levels, auxiliary information, derived contextual information such as texture, etc.
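As an illustration of these design choices (a sketch, not a production implementation), the following minimal back-propagation network uses one hidden layer of fixed size and a bounded number of training passes. The layer sizes, learning rate, epoch count, and toy data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden, n_out = 4, 6, 3           # features, hidden neurons, classes
W1 = rng.normal(0, 0.5, (n_in, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, n_out))

X = rng.random((30, n_in))                        # toy "pixel feature" samples
T = np.eye(n_out)[rng.integers(0, n_out, 30)]     # one-hot class targets

lr, epochs = 0.5, 200          # the degree of training is itself a tuning choice
for _ in range(epochs):
    H = sigmoid(X @ W1)        # forward pass through the hidden layer
    Y = sigmoid(H @ W2)        # network outputs
    dY = (Y - T) * Y * (1 - Y)       # output-layer error term (squared error)
    dH = (dY @ W2.T) * H * (1 - H)   # error back-propagated to the hidden layer
    W2 -= lr * H.T @ dY              # gradient-descent weight updates
    W1 -= lr * X.T @ dH

print("predicted classes:", Y.argmax(axis=1)[:5])
```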
With regard to the number of neurons in the hidden layer, Garson [1] has suggested that it should be

    N_h = N_s / (r (N_i + N_o))

where
N_s is the number of training samples,
N_i is the number of input features,
N_o is the number of output classes, and
r is related to the noisiness of the data.
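As a worked example of this heuristic, the hypothetical helper below computes the suggested hidden-layer size. The default noisiness factor r = 5 is an illustrative choice, not a value from the text.

```python
def garson_hidden_neurons(n_samples, n_features, n_classes, r=5):
    """Garson's rule of thumb for hidden-layer size:
    N_h = N_s / (r * (N_i + N_o)), with r reflecting data noisiness."""
    return max(1, round(n_samples / (r * (n_features + n_classes))))

# e.g. 3000 training pixels, 6 spectral bands, 8 land-cover classes
print(garson_hidden_neurons(3000, 6, 8))  # -> 43
```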
For unsupervised classification, Kohonen and ART models are employed. For shape classification, Hopfield-type models are better suited, and for classification of highly correlated patterns an iterative variant of the Hopfield network based on spin-glass theory is used. It is to be noted that ISODATA clustering is influenced by the variance of the initial sample distribution when determining class structures, whereas in the Kohonen SOM the cluster structures depend on the initial sample distribution itself.
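The following is a minimal sketch of the Kohonen SOM update rule applied to unlabelled pixel feature vectors. The map size, learning-rate schedule, and neighbourhood decay are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = rng.random((5, 5, 3))              # 5x5 map of 3-band weight vectors
X = rng.random((200, 3))                  # unlabelled pixel samples

for t, x in enumerate(X):
    lr = 0.5 * np.exp(-t / 100)           # decaying learning rate
    radius = 2.0 * np.exp(-t / 100)       # shrinking neighbourhood radius
    d = np.linalg.norm(grid - x, axis=2)  # distance of x to every map unit
    bi, bj = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
    for i in range(5):
        for j in range(5):
            g = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2 * radius ** 2))
            grid[i, j] += lr * g * (x - grid[i, j])  # pull units toward x

# each pixel is then assigned the cluster of its best-matching unit
```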
8.1 Probabilistic Neural Networks [11]: The probabilistic neural network (PNN) is based on statistical principles rather than a heuristic approach. The PNN uses Parzen or Parzen-like probability density function estimators, which asymptotically tend towards the parent density. The PNN does this by using sums of spherical Gaussian functions centered at each training sample to estimate the probability density function of that class. Hence the PNN is able to make a classification decision in accordance with the Bayes principle, providing a probability and reliability measure for each class.
The decision function commonly used in the PNN architecture is

    g_i(x) = \sum_{j=1}^{M_i} \exp\left( \frac{Z_{ij} - 1}{\sigma^2} \right)        (1)

where, for unit-normalized input and training vectors,

    \frac{Z_{ij} - 1}{\sigma^2} = -\frac{(x - x_{ij})^T (x - x_{ij})}{2\sigma^2}        (2)
which involves the dot product Z_{ij} = x \cdot x_{ij} and an exponential activation function. \sigma is the smoothing factor, which has the same value throughout the network; it is the only parameter modified to optimize the network as a classifier. Training involves finding the optimal \sigma for a set of training samples x_{ij}.
One of the properties of the PNN is associative memory: the training feature vectors themselves are stored, unlike in Back Propagation, where only the weights are stored.
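A minimal sketch of such a PNN classifier, implementing Eqs. (1) and (2) directly, is given below. Note how the training vectors themselves are stored (the associative-memory property) and that \sigma is the single tunable parameter; the toy data and the \sigma value are illustrative assumptions.

```python
import numpy as np

def normalise(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def pnn_classify(x, train_x, train_y, sigma=0.3):
    """Score each class as a sum of Gaussian kernels centred at its
    unit-normalised training vectors, per Eq. (1)."""
    x = normalise(np.asarray(x, float))
    scores = {}
    for c in np.unique(train_y):
        Xc = normalise(train_x[train_y == c])  # stored vectors of class c
        Z = Xc @ x                             # dot products Z_ij
        scores[c] = np.exp((Z - 1.0) / sigma**2).sum()  # Eq. (1)
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(2)
train_x = rng.random((60, 4))          # all training vectors must be kept
train_y = rng.integers(0, 3, 60)
label, scores = pnn_classify(rng.random(4), train_x, train_y)
print(label, scores)
```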
Advantages of the PNN are:
• Training is fast and easy; only a few passes are needed.
• As the number of training vectors increases, the decision surfaces tend towards the Bayes-optimal boundaries.
• A single parameter, the smoothing factor, is modified to make the decision boundaries more complex or simpler.
• For new training vectors the network need not be retrained; only the smoothing factor has to be adjusted.
• Erroneous samples are tolerated.
• Network performance will not deteriorate if only a few samples are available.
Disadvantages of the PNN are:
• New vectors are classified using all the stored training vectors; hence all training vectors must be retained, which requires a large memory.
9. CONTEXTUAL CLASSIFIERS 
It has been observed that pattern recognition tasks cannot be 
treated as complete with out satisfactory induction of contextual 
information either during classification process or as post 
classification operation. Context, is defined as the local domain 
from which observations were taken, and often includes 
spatially or temporally related measurements. It is often 
assumed that contextual relationships decay with distance as it 
happens in the natural world. 
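As a simple illustration of injecting spatial context as a post-classification operation, the sketch below replaces each pixel's label by the majority label in its 3x3 neighbourhood. The majority filter and the toy label map are illustrative choices, not techniques taken from this section.

```python
import numpy as np

def majority_filter(labels):
    """Replace each interior pixel's label with the most frequent label
    in its 3x3 neighbourhood; border pixels are left unchanged."""
    out = labels.copy()
    h, w = labels.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = labels[i - 1:i + 2, j - 1:j + 2].ravel()
            out[i, j] = np.bincount(window).argmax()  # most frequent label
    return out

lab = np.array([[1, 1, 1, 1],
                [1, 2, 1, 1],
                [1, 1, 1, 2],
                [2, 1, 1, 1]])
print(majority_filter(lab))  # isolated labels are smoothed away
```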
Some techniques for incorporating contextual knowledge in the classification process are:
    
9.1 Sta… [subsection truncated in the source]

9.2 M… [subsection truncated in the source; the surviving fragments indicate it concerned modelling context with Markov Random Fields, neighbourhoods, cliques, and energy minimization]