XVth ISPRS Congress (Part A3)

Summarizing, one may distinguish two types of filters for object location:

a. The filters m1, m2 and m3 are restoration filters for the unknown function δ(x − x̂). The phase correlation filter is the geometric mean of the inverse filter and the Wiener filter. It is invariant to arbitrary prefilters which degrade the object g(x), and robust with respect to band-limited or long-waved distortions, such as oscillations, clouds, réseau crosses, shadows, temporal changes, local geometric distortions etc. Assuming usual imagery, all three filters are high-pass filters. They minimize the probability of a false match, which, however, seems not to have been proved rigorously up to now. Though they do not yield optimum precision, i.e. the smallest possible standard deviation for the estimated shift, they are very well suited for determining good approximate values for the shift, in extreme cases (cf. Emmert and McGillem 1973, Pratt 1974) even down to signal-to-noise ratios below 1 (Ehlers 1983).
b. The matched filter, leading to optimal precision, has only local properties and is therefore suited for high-precision applications, provided the mathematical model (geometric, radiometric, stochastic) is adequate. As the model eq. (1) is oversimplified and only suitable for the location of one object in one image, we will next discuss other optimization functions and extensions of the matched filter.
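As one concrete illustration of the restoration-type filters of case a (not taken from the paper; a 1-D NumPy sketch with hypothetical names), a phase correlation shift estimate whitens the cross-power spectrum so that only phase information remains — which is precisely what makes it invariant to prefilters and high-pass in character:

```python
import numpy as np

def phase_correlation_shift(g1, g2):
    """Integer shift estimate between two 1-D signals via phase correlation.

    Normalizing the cross-power spectrum to unit magnitude keeps only the
    phase, making the estimate invariant to prefilters that degrade both
    signals; the resulting filter acts as a high-pass filter.
    """
    G1, G2 = np.fft.fft(g1), np.fft.fft(g2)
    cross = G1 * np.conj(G2)
    cross /= np.abs(cross) + 1e-12        # whiten: keep phase only
    corr = np.fft.ifft(cross).real        # sharp peak at the shift
    shift = int(np.argmax(corr))
    if shift > len(g1) // 2:              # map to signed (circular) shift
        shift -= len(g1)
    return shift

# usage: g2 is g1 circularly shifted by 5 samples
g1 = np.random.default_rng(0).standard_normal(256)
g2 = np.roll(g1, 5)
print(phase_correlation_shift(g2, g1))    # → 5
```

The whitened correlation surface is a near-delta function at the true shift, which is why the method is well suited for coarse location even at low signal-to-noise ratios.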
2.2 Robust filters 
1. The equivalence of the least squares and the maximum likelihood estimator for normally distributed noise suggests basing the estimation on other, especially long-tailed, distributions. This leads us to robust estimators. Here the maximum-likelihood-type estimators seem to be best suited, as they fit quite well into classical least squares algorithms. Such estimators are obtained by minimizing, instead of the sum of squares, a sum of less rapidly increasing functions ρ(vᵢ) of the residuals, Σᵢ ρ(vᵢ) → min., e.g.:
    ε(x̂) = Σᵢ ρ( g'(xᵢ) − g(xᵢ − x̂) ) → min.    (2)
Robust estimators can easily be realized using a least squares algorithm by modifying weights or residuals after each iteration step (cf. Huber (1981), p. 181 ff): one either minimizes Σᵢ pᵢ vᵢ² with modified weights pᵢ = ρ(vᵢ)/(vᵢ² + k²) (with k² ≪ σ², cf. Krarup et al. 1980), or minimizes Σᵢ v̄ᵢ² with modified residuals v̄ᵢ. The approach using modified residuals is most attractive, as the set-up of the normal equation matrix need not be changed.
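A minimal sketch of this reweighting loop (not from the paper; it uses Huber-type weights as one illustrative choice, and the function and variable names are hypothetical) estimates a simple location parameter, which is enough to show the per-iteration weight modification:

```python
import numpy as np

def irls_location(v, k=1.5, iterations=20):
    """Robustly estimate a location parameter by iteratively
    reweighted least squares, modifying weights after each step."""
    x = np.median(v)                       # robust starting value
    for _ in range(iterations):
        r = v - x                          # residuals of the current fit
        # Huber-type weights: full weight for small residuals,
        # down-weighting proportional to 1/|r| for large ones
        w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
        x = np.sum(w * v) / np.sum(w)      # weighted least squares step
    return x

data = np.array([1.0, 1.2, 0.9, 1.1, 50.0])   # one gross error
print(irls_location(data))
```

The gross error keeps only a small residual-dependent weight, so the estimate stays near the bulk of the data instead of being dragged toward 50 as the plain mean would be.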
2. Several choices of the function ρ(v) have been proposed (cf. Huber (1981), Götze (1983), Kubik (1984)):

a. The choice ρ(v) = v²/2 leads to the classical least squares solution, which is thus a special case of eq. (2).
b. The choice ρ(v) = |v| leads to the minimization of the L₁-norm. The well-known least sum technique going back to Laplace results from taking p = 1. Using this method, however, is not optimal, as large disturbances still have an influence on the estimator (cf. Förstner and Klein (1984) and Werner (1984)). Other functions, which eliminate this effect, have been proposed by Hampel, Andrews or Tukey.
c. Also the exponential function ρ(v) = v²/2 · exp(−v²/2), proposed by Krarup et al. (1980) and known as the Danish method, guarantees that large discrepancies do not influence the result.
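The three choices above can be written out directly (an illustrative sketch only; the function names are mine):

```python
import numpy as np

def rho_l2(v):      # a. classical least squares
    return v**2 / 2

def rho_l1(v):      # b. L1-norm / least sum of absolute residuals
    return np.abs(v)

def rho_danish(v):  # c. Danish method (Krarup et al. 1980)
    return v**2 / 2 * np.exp(-v**2 / 2)

# For a gross error (large |v|): L2 grows quadratically, L1 linearly,
# and the Danish function tends to zero, removing the blunder's influence.
for v in (0.5, 3.0, 10.0):
    print(f"v={v}: L2={rho_l2(v):.3f}  L1={rho_l1(v):.3f}  "
          f"Danish={rho_danish(v):.2e}")
```

The comparison makes the text's point concrete: under b a large disturbance still contributes |v| to the objective, while under c its contribution vanishes.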
3. The least sum method is the most commonly applied robust method, as it is the fastest one (cf. Gambino and Crombie 1974). It is used e.g. by Barnea and Silverman (1972), Limb and Murphy (1975) and Wong and Hall (1979) for template matching. Widrow (1973) has used the method for his
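Least-sum template matching of the kind cited above can be sketched in a few lines (an illustrative 1-D version, not any of the cited implementations; names are hypothetical): slide the template over the signal and pick the offset minimizing the sum of absolute differences.

```python
import numpy as np

def l1_match(signal, template):
    """Return the offset minimizing the sum of absolute differences
    (least sum / L1 criterion) between template and signal window."""
    n, m = len(signal), len(template)
    sads = [np.sum(np.abs(signal[i:i + m] - template))
            for i in range(n - m + 1)]
    return int(np.argmin(sads))

signal = np.array([0.0, 0.0, 1.0, 3.0, 2.0, 0.0, 0.0])
template = np.array([1.0, 3.0, 2.0])
print(l1_match(signal, template))   # → 2
```

Its speed advantage comes from needing only additions and absolute values per candidate offset, with no multiplications.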