Proceedings, XXth Congress (Part 4)

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXV, Part B4. Istanbul 2004
  
If (n_i > n_j for all j != i, and n_i >= n_t) then x ∈ c_i (1)
where x = centre pixel,
n_i, n_j = the number of adjacent pixels belonging
to classes i and j,
n_t = threshold.
Usually a moving 3x3 window is used and a threshold of 5 applied
for this purpose. The effect of this algorithm is to smooth the
classified image by weeding out isolated pixels that were
initially given labels dissimilar to the labels assigned to the
surrounding pixels (Mather, 1999).
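The rule above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the border handling (borders left untouched) are assumptions.

```python
import numpy as np

def majority_filter(classified, threshold=5):
    """Relabel each interior pixel to the unique majority class of its
    eight 3x3 neighbours, if that class's count meets the threshold n_t."""
    h, w = classified.shape
    out = classified.copy()
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = classified[r - 1:r + 2, c - 1:c + 2].ravel()
            neighbours = np.delete(window, 4)  # exclude the centre pixel
            classes, counts = np.unique(neighbours, return_counts=True)
            best = counts.argmax()
            # n_i must exceed every other n_j (unique maximum) and reach n_t
            if counts[best] >= threshold and (counts == counts[best]).sum() == 1:
                out[r, c] = classes[best]
    return out
```

For example, an isolated pixel of class 2 surrounded by class 1 is relabelled to class 1, since n_1 = 8 exceeds both the other counts and the threshold.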
2.2 Thomas Filter 
Thomas (1980) introduced a method based on a proximity
function, which is described as follows:
f_j = Σ_i (q_i q_5) / d_i5², where q_i = 2 if x_i ∈ ω_j, else q_i = 0 (i = 2, 4, 6, 8) (2)
and q_5 = 2 if x_5 ∈ ω_j, else q_5 = 1 (j = 1, 2, 3, ..., k)
where q_i = weight of the ith pixel,
q_5 = weight of the centre pixel,
ω_j = the jth class,
d_i5² = squared distance between the ith and the centre pixel.
As shown in Figure 1, this algorithm uses the direct neighbours for its
calculation. Like the majority filter, the Thomas filter removes
isolated pixels and relabels them considering the direct neighbours. It
might also reallocate a previously unclassified pixel that had
been placed in the reject class by the classification algorithm
(Mather, 1999).
  
Figure 1: Direct neighbour pixels (neighbours 2, 4, 6 and 8 of the centre pixel 5)
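A sketch of the proximity function as reconstructed from equation (2) is given below. Since the four direct neighbours all have d_i5 = 1, the distance term drops out; the function name, class encoding (0 = reject class), and argmax tie-breaking are assumptions of this illustration.

```python
import numpy as np

def thomas_filter(classified, n_classes):
    """Relabel each interior pixel to the class maximising the proximity
    function f_j = sum_i q_i * q_5 / d_i5^2 over the 4 direct neighbours."""
    h, w = classified.shape
    out = classified.copy()
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            centre = classified[r, c]
            neighbours = [classified[r - 1, c], classified[r + 1, c],
                          classified[r, c - 1], classified[r, c + 1]]
            scores = np.zeros(n_classes + 1)  # index 0 = reject class, unused
            for j in range(1, n_classes + 1):
                q5 = 2 if centre == j else 1
                qs = [2 if n == j else 0 for n in neighbours]
                scores[j] = q5 * sum(qs)  # d_i5^2 = 1 for direct neighbours
            if scores.max() > 0:
                out[r, c] = scores.argmax()
    return out
```

Note that a pixel left in the reject class (here coded 0) still receives a label whenever any neighbouring class scores above zero, which matches the reallocation behaviour described above.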
2.3 Transition Matrices 
Transition probability matrices form an algorithm which uses
temporal information and expresses the expectation that cover
types will change during a particular period of time (Franciscus
Johannes, 2000). Knowledge about the dependency of crops on
seasons and their mutual sequences is valuable for defining the
conditional probability P(class c_i at time t_2 | class c_j at time
t_1). The statistical concept of Markov chains is closely related
to this subject, as it describes the dependencies between a state
at t_n and the previous states (t_(n-1), t_(n-2), ...). This algorithm
mainly concerns agricultural areas.
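The use of such a conditional probability can be sketched as follows; the cover types, transition values, and spectral probabilities below are purely hypothetical, chosen only to show the mechanics.

```python
import numpy as np

# Hypothetical transition matrix P(class at t2 | class at t1) for three
# cover types: 0 = wheat, 1 = fallow, 2 = water (illustrative values only).
transition = np.array([
    [0.20, 0.70, 0.10],   # wheat is usually followed by fallow
    [0.60, 0.30, 0.10],
    [0.05, 0.05, 0.90],   # water rarely changes
])

# Spectral class probabilities for one pixel at time t2.
p_t2 = np.array([0.40, 0.45, 0.15])
prev_class = 0  # the pixel was labelled wheat at time t1

# Weight the spectral probabilities by the temporal expectation, renormalise.
posterior = p_t2 * transition[prev_class]
posterior /= posterior.sum()
```

Here the temporal prior strengthens the fallow hypothesis, since wheat at t_1 makes fallow at t_2 the expected successor.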
2.4 Probability Label Relaxation 
Probabilistic label relaxation is a post-classification context
algorithm which begins by assuming that a classification based
on spectral data alone has been carried out. This algorithm was
introduced by Harris in 1985. The method is based on the key
concepts of probabilities, compatibility coefficients, a neighbourhood
function, and an updating rule (Richards, 1993).
2.4.1 Probabilities: Probabilities for each pixel describe the
chance that the pixel belongs to each of the possible classes. In
the initial stage, a set of probabilities can be computed from
pixel-based and sub-pixel classifiers. These algorithms operate
on the spectral data alone; maximum likelihood and
linear spectral unmixing (LSU) are among these algorithms. In this
research, for the LSU classification the fraction of each
endmember is considered as the initial stage.
Σ_(j=1..k) p_i(ω_j) = 1, 0 ≤ p_i(ω_j) ≤ 1 (3)
where p_i(ω_j) = probability that pixel i belongs to class ω_j.
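Raw LSU endmember fractions do not necessarily satisfy equation (3), so a normalisation step is typically needed before they can serve as initial probabilities. A minimal sketch, with hypothetical fraction values:

```python
import numpy as np

# Hypothetical endmember fractions from linear spectral unmixing for one
# pixel; small negative values and a sum != 1 are common unmixing artefacts.
fractions = np.array([0.55, 0.35, -0.02, 0.08])

# Clip to [0, 1] and renormalise so that equation (3) holds.
p = np.clip(fractions, 0.0, 1.0)
p /= p.sum()
```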
2.4.2 Compatibility Coefficient: A compatibility coefficient
describes the context of the neighbourhood and how compatible it is
to have pixel m classified as ω_k and neighbouring pixel n
classified as ω_l. It is defined as
r_ij(ω_k, ω_l) = log [ N · N_ij(ω_k, ω_l) / ( Σ_l N_ij(ω_k, ω_l) · Σ_k N_ij(ω_k, ω_l) ) ] (4)
where N_ij(ω_k, ω_l) = the frequency of occurrence of classes
ω_k and ω_l as neighbours at pixels i and j, and N = the total
number of neighbour pairs.
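Equation (4) amounts to a mutual-information comparison of observed against chance co-occurrence. A sketch under that reading (the count values and function name are hypothetical):

```python
import numpy as np

def compatibility(counts):
    """Compatibility coefficients from a matrix of neighbour co-occurrence
    counts N(k, l): log of joint frequency over product of marginals."""
    total = counts.sum()
    p_joint = counts / total                    # p(omega_k, omega_l)
    p_k = p_joint.sum(axis=1, keepdims=True)    # marginal over l
    p_l = p_joint.sum(axis=0, keepdims=True)    # marginal over k
    return np.log(p_joint / (p_k * p_l))

# Hypothetical counts: each class strongly co-occurs with itself.
counts = np.array([[90.0, 10.0],
                   [10.0, 90.0]])
r = compatibility(counts)
```

Positive coefficients mark pairings seen more often than chance (here, a class next to itself); negative coefficients mark incompatible pairings.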
2.4.3 Neighbourhood Function: A neighbourhood function
is a function of the label probabilities of the neighbouring pixels,
the compatibility coefficients, and the neighbourhood weights. It is
defined as:
Q_i^T(ω_k) = Σ_(j=1..N_p) d_ij Σ_(l=1..N_c) r_ij(ω_k, ω_l) p_j^T(ω_l) (5)
where N_p = the number of neighbours considered for pixel i,
d_ij = the weight factor of the neighbours,
N_c = number of classes,
T = iteration number.
2.4.4 Updating Rule: The neighbourhood function allows the
neighbourhoods to influence the possible classification of the
centre pixel and to update the probabilities, by multiplying the
label probabilities by the neighbourhood function. These new
values are divided by their sum so that the new set of label
probabilities sums to one.
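One relaxation step for a single pixel can be sketched as below. For simplicity the compatibility coefficients here are assumed nonnegative (e.g. conditional probabilities p(ω_k | ω_l) rather than the log form), so the renormalised product stays a valid probability vector; all input values are hypothetical.

```python
import numpy as np

def plr_iteration(p, neighbours, r, d):
    """One probabilistic-label-relaxation update for a single pixel.

    p          : (Nc,) current label probabilities of the centre pixel
    neighbours : (Np, Nc) label probabilities of its neighbours
    r          : (Nc, Nc) nonnegative compatibility coefficients
    d          : (Np,) neighbour weights, summing to 1
    """
    # Neighbourhood function Q(omega_k), equation (5)
    q = np.zeros_like(p)
    for j in range(len(neighbours)):
        q += d[j] * (r @ neighbours[j])
    # Updating rule: multiply by Q and renormalise to sum to one
    new_p = p * q
    return new_p / new_p.sum()

p = np.array([0.5, 0.5])                         # undecided centre pixel
neighbours = np.array([[0.9, 0.1], [0.8, 0.2]])  # neighbours favour class 0
r = np.array([[0.9, 0.1], [0.1, 0.9]])           # classes self-compatible
d = np.array([0.5, 0.5])
new_p = plr_iteration(p, neighbours, r, d)
```

Because the neighbours strongly favour class 0 and like classes are compatible, the centre pixel's probability for class 0 rises after the update; iterating the step propagates context until the labelling stabilises.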
p_i^(T+1)(ω_k) = p_i^T(ω_k) Q_i^T(ω_k) / Σ_l p_i^T(ω_l) Q_i^T(ω_l) (6)
	        