XVIIth ISPRS Congress (Part B5)
2. ROBUST METHODS FOR OBJECT RECONSTRUCTION
A general problem when dealing with real measurements is the influence of data contamination such as noise and outliers on the result. The aim of the present research is a reliable 3D reconstruction of features from images. In a first step the feature pixels are extracted from the images with the help of edge detection routines, whose robustness is not discussed in this paper. The extracted pixels are the input to the presented algorithm. Pixels from other features and erroneous pixels are outliers with respect to our line. The term 'robust' here means that the result should take into account only those pixels which really belong to our line.
A least squares adjustment leads to wrong results when the data contains observations which do not belong to the model (Huber, 1981), (Rousseeuw/Leroy, 1987), (Förstner, 1989). Even a single outlying observation may affect the result severely.
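To illustrate this sensitivity, here is a minimal sketch (not from the paper; all numbers are invented): a straight line is fitted by least squares, once to clean data and once after a single observation has been replaced by a gross outlier.

```python
import numpy as np

# Synthetic points on the line y = 1.0 * x with small noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y = x + rng.normal(0.0, 0.05, x.size)

def fit_line(x, y):
    """Least squares fit of y = a*x + b; returns (a, b)."""
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

print("clean data:  a, b =", fit_line(x, y))

# Replace a single observation by a gross outlier.
y_bad = y.copy()
y_bad[-1] = 100.0
print("one outlier: a, b =", fit_line(x, y_bad))
# The single outlier pulls both slope and intercept far from the true values.
```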
Methods of hypothesis testing using test statistics are able to detect outliers, but they do not work very effectively in the presence of high contamination.
In image processing, however, we sometimes have to deal with highly contaminated data. For that reason different attempts have been made to develop robust methods. Among these methods are the M-estimation (Huber, 1981), the Random Sample Consensus procedure (Fischler/Bolles, 1981) and the Least Median of Squares estimation (Rousseeuw/Leroy, 1987). They all have in common that they try to find out how well an observation fits the model. This is expressed by an individual weight for each observation.
2.1 The random sample consensus procedure 
Random sample consensus (RANSAC) is a hypothesize-and-verify technique which can be used to determine model parameters even if the amount of contamination exceeds 50% (ε > 0.5). It requires no initial values. A closed form solution for the parameters, based on a minimum set of u observations, must be known. For some problems such solutions are complicated or impossible. Basically the technique works in the following heuristic way:
1) Determine parameters from a random subset of u observations.
2) Compute residuals for all observations. A tolerance threshold, based on the expected noise level, decides whether an observation is accepted.
3) Count the accepted observations.
4) Stop the procedure when enough observations are accepted; otherwise repeat it with a new random subset.
Remark: The residuals in this procedure should not be confused with the residuals v in chapter 1.3. The computation of the residuals meant here is explained in 3.2.1.
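As an illustration of steps 1) to 4), here is a minimal sketch, assuming the simplest case of a 2D line fitted to pixel coordinates (the paper's actual model, a space line, is treated in chapter 3); the function and parameter names are hypothetical.

```python
import numpy as np

def ransac_line(points, n_subsets, tol, min_accept, rng=None):
    """RANSAC for a 2D line: steps 1)-4) of the procedure above.

    points      (N, 2) pixel coordinates
    n_subsets   maximum number of random subsets to try
    tol         tolerance threshold for data acceptance (step 2)
    min_accept  threshold for model acceptance (step 4)
    """
    rng = rng or np.random.default_rng()
    best = None
    for _ in range(n_subsets):
        # 1) Closed form solution from a minimal subset (u = 2 points).
        p, q = points[rng.choice(len(points), size=2, replace=False)]
        d = q - p
        norm = np.hypot(*d)
        if norm == 0.0:
            continue  # degenerate subset, draw a new one
        n_vec = np.array([-d[1], d[0]]) / norm  # unit normal of the line
        # 2) Residuals: perpendicular distances of all points to the line.
        r = np.abs((points - p) @ n_vec)
        accepted = r < tol
        # 3) Count accepted observations.
        count = int(accepted.sum())
        if best is None or count > best[0]:
            best = (count, accepted)
        # 4) Stop when enough observations are accepted.
        if count >= min_accept:
            break
    return best  # (number of inliers, inlier mask)
```

The best subset found so far is kept, so a usable model is returned even when the acceptance threshold of step 4) is never reached.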
The algorithm parameters are the threshold for data acceptance in step 2), the threshold for model acceptance in step 4) and the number of subsets needed. The latter determines the computation time. More about the number of subsets follows in chapter 2.1.1. The thresholds are
discussed for the special case of a space line in chapter 
3.1.1. The determination of model parameters in step 1) by 
a closed form solution is treated in chapter 3.1. 
The described method is a powerful tool to detect outliers 
and to find approximate values for model parameters. 
The method is not able to find the best possible solution when not all possible subsets are used or when the non-outlying observations are noisy. How to proceed in the case of noisy data is treated for the space line example in chapter 3.3.
2.1.1 The number of subsets in RANSAC 
According to step 4) of the RANSAC procedure the computations are finished when enough observations are accepted. In general, however, the correct solution, the number and size of the outliers and the optimum threshold for data acceptance are unknown. Under these circumstances the best matching subset cannot be determined. For very small data sets all possible subsets can be taken, but this is not feasible for large data sets, because the number of subsets and with it the computation time increases rapidly.
Under the assumption that the data contains n−o 'good' and o 'bad' points, the expected number m of random subsets needed to find at least one subset containing u good observations can be calculated from probability laws. The probability of obtaining at least one such subset is usually set to 95% or 99%. The probability for one subset to contain u good points is

$$P_g = \frac{n-o}{n}\cdot\frac{n-o-1}{n-1}\cdots\frac{n-o-(u-1)}{n-(u-1)} \qquad (2\text{-}1)$$
The probability of at least one uncontaminated subset out of m can be calculated by the binomial distribution:

$$P(m,\geq 1) = \sum_{i=1}^{m} \binom{m}{i}\, P_g^{\,i}\,(1-P_g)^{m-i} = 1-(1-P_g)^{m} \qquad (2\text{-}2)$$
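For illustration, (2-1) and (2-2) can be evaluated directly; the values n = 100, o = 40, u = 2 below are invented:

```python
def p_good_subset(n, o, u):
    """(2-1): probability that one random subset of size u contains only good points."""
    p = 1.0
    for i in range(u):
        p *= (n - o - i) / (n - i)
    return p

def p_at_least_one(n, o, u, m):
    """(2-2): probability of at least one uncontaminated subset out of m."""
    return 1.0 - (1.0 - p_good_subset(n, o, u)) ** m

print(p_good_subset(100, 40, 2))       # ~0.358
print(p_at_least_one(100, 40, 2, 10))  # ~0.988
```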
Instead of this exact formula an approximation is given for large numbers n of observations and a contamination of ε (Rousseeuw/Leroy, 1987):

$$1-\bigl(1-(1-\varepsilon)^{u}\bigr)^{m} = 0.95 \quad\Longrightarrow\quad m = \frac{\log 0.05}{\log\bigl(1-(1-\varepsilon)^{u}\bigr)} \qquad (2\text{-}3)$$
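A minimal sketch of (2-3), with illustrative values for ε and u:

```python
import math

def subsets_needed(eps, u, confidence=0.95):
    """(2-3): number of random subsets m so that, with the given confidence,
    at least one subset is free of outliers."""
    return math.ceil(math.log(1.0 - confidence)
                     / math.log(1.0 - (1.0 - eps) ** u))

# Illustrative: minimal subset of u = 2, several contamination levels.
for eps in (0.1, 0.3, 0.5):
    print(eps, subsets_needed(eps, 2))  # 2, 5, 11 subsets
```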
Unfortunately the formula requires the input of ε, which is usually not known exactly. Some experimental results on a suitable number of subsets are presented in chapter 4.
2.2 Least median of squares method
Another heuristic, robust method is the least median of squares estimation (Rousseeuw/Leroy, 1987). Here the squared residuals of the observations with respect to different random models are computed. The model with the smallest median of squared residuals is taken as the solution.
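A minimal sketch of this estimator, again for the hypothetical 2D line case and with invented parameter names:

```python
import numpy as np

def lmeds_line(points, n_subsets, rng=None):
    """Least median of squares: keep the random model whose squared
    residuals have the smallest median."""
    rng = rng or np.random.default_rng()
    best_med, best_model = np.inf, None
    for _ in range(n_subsets):
        # Line from a random minimal subset of two points.
        p, q = points[rng.choice(len(points), size=2, replace=False)]
        d = q - p
        norm = np.hypot(*d)
        if norm == 0.0:
            continue  # degenerate subset
        n_vec = np.array([-d[1], d[0]]) / norm
        # Median of squared perpendicular distances to the line.
        med = np.median(((points - p) @ n_vec) ** 2)
        if med < best_med:
            best_med, best_model = med, (p, n_vec)
    return best_model, best_med
```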