The second robust estimator employed is a combination of an M-estimator and data snooping. An important consideration
in data snooping is that one should look not just for large residuals, but for residuals that are large in comparison to their
own standard deviations [Schwarz and Kok 1993], i.e. the redundancy numbers of the individual observations should be
taken into account. Noting this, we replace $u_i$ in equation (6) by $u_i/\sqrt{r_i}$, where $r_i$ is the redundancy number of observation $i$; this quantity is Baarda's statistic. The resultant
estimator belongs to the GM-estimators (generalized M-estimators) [Rousseeuw and Leroy 1987], which aim to bound
the influence of leverage points. This estimator is here called the MD estimator. Experiments on simple regression problems
[Rousseeuw and Leroy 1987] showed that GM-estimators tolerate more outliers than M-estimators do. From a different
point of view, the MD estimator is similar to iterative data snooping [Kok 1984], although the two differ in two aspects.
First, more than one observation can be identified as a potential outlier after each iteration. This is necessary for
detecting local deformation of DEMs, where there may be so many outliers that deleting only one of them per
iteration would make the procedure computationally infeasible. Second, potential outliers are not deleted permanently but are
temporarily down-weighted. Incorporating the MD estimator into the LZD algorithm, we call the resulting matching
algorithm LZD_md.
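As an illustration, a minimal Python/NumPy sketch of such an MD-type iteration for a linearized model A x = l follows. Since equation (6) is not reproduced in this excerpt, a generic Huber-type weight function with threshold k stands in for it; the function name md_estimate and its constants are illustrative, not the paper's.

```python
import numpy as np

def md_estimate(A, l, k=3.29, max_iter=20, tol=1e-8):
    """MD-style robust estimation for a linearized model A x = l.

    Each residual is standardized by its own precision (Baarda's
    statistic, via the redundancy numbers r_i), and observations whose
    statistic exceeds k are temporarily down-weighted, not deleted.
    """
    n, p = A.shape
    w = np.ones(n)                               # start with unit weights
    x = np.linalg.lstsq(A, l, rcond=None)[0]
    for _ in range(max_iter):
        W = np.diag(w)
        N = A.T @ W @ A
        x_new = np.linalg.solve(N, A.T @ W @ l)  # weighted LS step
        v = l - A @ x_new                        # residuals
        # redundancy numbers: r_i = 1 - diag(A N^{-1} A^T W)
        r = 1.0 - np.diag(A @ np.linalg.solve(N, A.T) @ W)
        r = np.clip(r, 1e-6, 1.0)
        sigma0 = np.sqrt((w * v**2).sum() / max(n - p, 1))
        wstat = np.abs(v) / (sigma0 * np.sqrt(r))   # Baarda's statistic
        # down-weight every flagged observation; weights are recomputed
        # each pass, so no observation is removed permanently
        w = np.where(wstat > k, k / wstat, 1.0)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x, w
```

Note that all observations flagged in a pass are down-weighted at once, which is what makes the scheme tractable when many outliers are present.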
The last robust estimator employed is the LMS (least median of squares) estimator developed by Rousseeuw
and Leroy (1987), which is given by
\[ \min_{\hat{\Theta}} \; \operatorname{med}_i \, r_i^2 \qquad (7) \]
That is, the estimates must yield the smallest value for the median of squared residuals computed for all observations. 
The LMS estimator has a breakdown point of 0.5, the highest possible value, which is desirable for our application. The 
high breakdown point of the LMS estimator results from the fact that the estimates need only fit half of the
observations well and can ignore the other half. However, its relative efficiency is abnormally low. This can be intuitively
understood by noting that half of the observations have no actual influence on the estimates. In fact, the LMS estimator
converges as $n^{-1/3}$ and is not asymptotically normal. To compensate for this deficiency, it is proposed to perform an M-
estimation or a one-step weighted least squares adjustment using the LMS estimates as starting values. In our study, we use the
same weight function as in equation (6) to perform a successive M-estimation. As we will see, this compensation works
only if a certain condition is fulfilled. Robustifying the LZD algorithm with the LMS estimator, we call the resultant
matching algorithm LZD_lms.
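A minimal sketch of this refinement step, assuming a linearized model A x = l: shown here is the book's variant, a one-step weighted LS using the preliminary scale estimate and hard 2.5-sigma cut of Rousseeuw and Leroy (1987), rather than the equation-(6) weight function of our study, which is not reproduced in this excerpt.

```python
import numpy as np

def lms_refine(A, l, x_lms):
    """One-step weighted LS after LMS (Rousseeuw and Leroy 1987)."""
    n, p = A.shape
    r = l - A @ x_lms
    # preliminary robust scale estimate from the LMS residuals
    s0 = 1.4826 * (1.0 + 5.0 / (n - p)) * np.sqrt(np.median(r**2))
    # hard rejection of observations beyond 2.5 * s0
    w = (np.abs(r / s0) <= 2.5).astype(float)
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ l)
```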
We use the random sampling algorithm to solve the LMS problem given by (7). The algorithm proceeds by repeatedly
drawing subsamples of p different observations. In most cases of our experiments, we let p = 7. For such a subsample,
we compute the 6 transformation parameters using LS estimation and denote the solution by $\Theta_j$ (trial estimate). For
each $\Theta_j$ we determine the objective function with respect to all available observations, i.e. the value $\operatorname{med}_i r_i^2$ is
calculated. Finally, the trial estimate for which this value is minimal is retained as the LMS estimate. In principle, we
should make trials with all possible subsamples, of which there are $C_n^p$ (n is the number of observations). However, this is
infeasible for our application. The random sampling technique is therefore introduced. The basic idea is that one
performs a certain number (m) of random selections such that the probability that at least one of the m subsamples
contains no outliers is almost 1. This probability is given by
\[ 1 - \bigl(1 - (1-\varepsilon)^p\bigr)^m \qquad (8) \]
where $\varepsilon$ is the contamination fraction, which is at most 0.5. In most of our experiments, with $\varepsilon = 0.5$ and p = 7, we let
m = 600, so that the above probability is approximately 0.991.
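A compact sketch of the random sampling scheme, again assuming a linearized model A x = l and using the settings p = 7 and m = 600 from our experiments; the names lms_random_sampling and seed are illustrative.

```python
import numpy as np

def lms_random_sampling(A, l, p=7, m=600, seed=None):
    """Random-sampling LMS: draw m subsamples of p observations, solve
    each by LS, and keep the trial estimate minimizing med_i r_i^2."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    best_x, best_med = None, np.inf
    for _ in range(m):
        idx = rng.choice(n, size=p, replace=False)   # one subsample
        x_trial = np.linalg.lstsq(A[idx], l[idx], rcond=None)[0]
        med = np.median((l - A @ x_trial)**2)        # objective (7)
        if med < best_med:
            best_x, best_med = x_trial, med
    return best_x, best_med

# Equation (8): probability of at least one outlier-free subsample
eps, p, m = 0.5, 7, 600
print(1 - (1 - (1 - eps)**p)**m)                     # about 0.991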
3.3 Experiments and Analysis 
We now test the capabilities of the robust matching algorithms developed above in detecting local deformation. The 
surface examined here is a 50×50 grid DEM, called Pulse, as shown in figure 1. Random errors and deformation
quantities are added to the Z coordinates of Pulse to generate a simulated DEM with local deformation relative to the 
original one. Then the simulated DEM is rotated and translated by a parameter vector ©. The robust matching 
algorithms are required to compute the parameter vector accurately. 
Several matters concerning our experiments are explained before we present the results. Since local deformation of
various sizes, shapes, deformation percentages and positions may occur in practice, we cannot simulate all possible
cases. The simulated deformation quantities in our experiments take the form of random errors distributed as $N(c, \sigma^2)$,
while the added random errors are distributed as $N(0, \sigma^2)$. By varying c, local deformation of different sizes can be
simulated. When c is as small as only a few times $\sigma$, the simulated local deformation is very flat and of
small size. The spatial distribution of the local deformation is always grouped together within a square area, which
can be located at any position within the planar range of Pulse.
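One such simulation could be sketched as follows; the patch size, its random placement, and the function name are assumptions of the sketch, and the subsequent rotation and translation by $\Theta$ are omitted.

```python
import numpy as np

def simulate_deformed_dem(Z, c, sigma, patch=10, seed=None):
    """Add N(0, sigma^2) noise to all heights of DEM Z and N(c, sigma^2)
    deformation inside one randomly placed square patch."""
    rng = np.random.default_rng(seed)
    Zs = Z + rng.normal(0.0, sigma, Z.shape)      # global random errors
    i0 = rng.integers(0, Z.shape[0] - patch)      # patch position
    j0 = rng.integers(0, Z.shape[1] - patch)
    Zs[i0:i0 + patch, j0:j0 + patch] += rng.normal(c, sigma, (patch, patch))
    return Zs
```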
  