Mapping surface structure and topography by airborne and spaceborne lasers
[Figure caption fragment: (a) photograph of the scene; gridding from laser points (square and circle mark laser points); interpolations of the laser data by two methods.]
[Figure 4: Block diagram of the complete robust surface perception scheme. Steps: input range image; detect range edges; compute initial threshold for smooth regions and initial count for seed points; extract smooth regions and seed points; for each seed not already covered, find the best approximating model; expand over the entire image and prune; update threshold and count and repeat until no seeds remain; finally resolve ambiguities, remove isolated points, and fill gaps.]
  
  
  
seed selection. For example, [Besl, 1988] extracts connected regions from the HK-sign map (mean and Gaussian curvatures) and erodes these regions until sufficiently small regions remain. By a seed point, we mean the center of the seed window. In the RSE approach, a simple 2D edge detector is first used to locate smooth regions. Points which are the centers of the largest contiguous regions are identified. Finally, redundant seeds are eliminated. The procedure is implemented in a loop, starting with stringent criteria for smoothness and minimum size of smooth regions. These thresholds are decreased in every iteration. Seed selection stops when all the data have been covered by at least one surface, except for isolated singletons and pairs, or when no smooth region is left.
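To make the loop concrete, here is a minimal sketch in Python/NumPy, assuming the range data have been resampled to a 2D array; the gradient-magnitude edge test, the threshold values, and the relaxation factor are illustrative assumptions, not the authors' exact choices:

```python
import numpy as np
from scipy import ndimage

def select_seeds(range_image, grad_thresh=0.5, min_region=100,
                 relax=0.8, size_floor=9):
    """Iterative seed selection: locate smooth regions with a simple
    2D edge detector, take the centers of the largest contiguous
    regions as seeds, and relax the smoothness and minimum-size
    criteria each iteration until the data are covered."""
    seeds = []
    covered = np.zeros(range_image.shape, dtype=bool)
    while min_region >= size_floor:
        # Simple 2D edge detector: gradient magnitude of the range image.
        gy, gx = np.gradient(range_image)
        smooth = np.hypot(gx, gy) < grad_thresh
        labels, n = ndimage.label(smooth & ~covered)
        for k in range(1, n + 1):
            mask = labels == k
            if mask.sum() < min_region:
                continue
            # Seed point = center of the seed window (region centroid).
            cy, cx = ndimage.center_of_mass(mask)
            seed = (int(round(cy)), int(round(cx)))
            if not covered[seed]:              # eliminate redundant seeds
                seeds.append(seed)
                covered |= mask
        if covered.all():                      # all data covered by a region
            break
        grad_thresh /= relax                   # loosen the smoothness test
        min_region = int(min_region * relax)   # accept smaller regions
    return seeds
```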
5.5 Choosing best approximating model 
The next step is to choose the best approximating model, from among planar and biquadratic models, for each seed point. In this subsection, the robust M-estimator, the Robust Sequential Estimator (RSE), and information-theoretic model selection are first introduced briefly, followed by a description of the procedure used for selecting the best approximating model.
Robust M-Estimator Robust estimators are more efficient (lower variance) than least squares (LS) if the errors are not normally distributed, as is the case in laser scanning. They are only slightly less efficient than LS if the error has a normal distribution. We use a maximum likelihood (M-)estimator as a robust estimator for the t distribution error model that we assume for the ALR data (Section 4). Unfortunately, the direct evaluation of maximum likelihood estimates from non-normal distributions becomes quite complicated. Therefore, estimates are obtained by using iteratively reweighted least squares (IRLS). Let $p(\varepsilon_i)$ be any differentiable error density function which can be written in the form
$p(\varepsilon_i) = \sigma^{-1}\, g\{(\varepsilon_i/\sigma)^2\}$  (4)

where $\sigma$ is the scale parameter, $g\{\cdot\}$ denotes a functional form, and $\varepsilon_i = z_i - \sum_j \theta_j x_{ij}$ is the $i$-th actual error from eq. (1). After computing the log-likelihood for $\theta$ and $\sigma^2$ we get
a set of nonlinear equations. These nonlinear equations can 
be solved using IRLS. Rewriting the equations in matrix form 
$X^T W (z - X\theta) = 0$  (5)
where W is a diagonal weight matrix whose elements depend 
on the residuals, since 
$w_i = w_i(\theta, \sigma^2) = -2\, g'\{(\varepsilon_i/\sigma)^2\} \,/\, g\{(\varepsilon_i/\sigma)^2\}$  (6)
The resulting iterative scheme, after simplification, is given 
by 
$\hat{\theta}_{t+1} = \hat{\theta}_t + (X^T W_t X)^{-1} X^T W_t\, (z - X \hat{\theta}_t)$  (7)
For a t distribution with f degrees of freedom and scale parameter $\sigma$,

$g\{(\varepsilon_i/\sigma)^2\} = \left[\, 1 + \varepsilon_i^2/(f\sigma^2) \,\right]^{-(f+1)/2}$  (8)
Substituting this expression for g in (6) yields the individual 
weights: 
$w_i = \dfrac{f+1}{f + (r_i/s)^2}$  (9)

where the residual $r_i = z_i - \sum_j \hat{\theta}_j x_{ij}$. We use $s$ in this expression to distinguish the (unknown) true value of the scale parameter, $\sigma$, from an estimated value, $s$, used in computing the weights. These weights are then assigned to the diagonal elements of the weighting matrix, $W$, and (7) is used to update the parameters.
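As an illustration of eqs. (7) and (9), here is a minimal IRLS sketch for the linear model of eq. (1); the MAD-based scale estimate s, the degrees of freedom f, and the iteration count are assumptions made for the example, not values prescribed by the paper:

```python
import numpy as np

def irls_t(X, z, f=3.0, iters=20):
    """IRLS for z = X @ theta + eps with t-distributed errors:
    weights w_i = (f + 1) / (f + (r_i / s)^2) as in eq. (9)."""
    theta = np.linalg.lstsq(X, z, rcond=None)[0]   # LS starting point
    for _ in range(iters):
        r = z - X @ theta                          # residuals
        # Robust scale estimate s (median absolute deviation).
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = (f + 1.0) / (f + (r / s) ** 2)         # eq. (9)
        W = np.diag(w)
        # Update step of eq. (7).
        theta = theta + np.linalg.solve(X.T @ W @ X, X.T @ W @ r)
    return theta
```

For the planar model a design-matrix row would be $(1, x_i, y_i)$; the biquadratic model would add $x_i y_i$, $x_i^2$, and $y_i^2$.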
Robust Sequential Estimator (RSE) The RSE is a robust 
extension of the sequential least squares (SLS) estimator. 
The RSE can be applied to any linear regression model of 
the type defined in (1). In the SLS algorithm, equal weights 
are assigned to every data point, while the RSE differentiates 
between valid data and outliers through its weighting mech- 
anism. That is, the RSE rejects any data point which fails to 
qualify as possibly valid for the current surface model. The 
RSE does not remove these data points from the data set entirely, just from the current surface. The estimation starts
from an initial robust estimate of the surface parameters in 
a small neighborhood centered around a seed point. The 
parameters of the fitted surface are computed by IRLS with 
errors assumed to be t distributed. The RSE then adds the 
remaining data sequentially, assigning weights to each new 
observation based on the previous surface estimate. 
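A rough sketch of this sequential growth, reusing irls_t from above; the rejection test and the weighted recursive least-squares update are our assumptions about one plausible realization, not the authors' exact rule:

```python
import numpy as np

def rse_grow(X_all, z_all, seed_idx, f=3.0, reject_k=2.5):
    """RSE sketch: robust initial fit on the seed neighborhood, then
    sequential updates in which each new observation is weighted
    against the current surface estimate.  Rejected points remain in
    the data set; they are only excluded from this surface."""
    seed_idx = list(seed_idx)
    theta = irls_t(X_all[seed_idx], z_all[seed_idx], f=f)
    P = np.linalg.inv(X_all[seed_idx].T @ X_all[seed_idx])
    res = z_all[seed_idx] - X_all[seed_idx] @ theta
    s = 1.4826 * np.median(np.abs(res)) + 1e-12    # robust scale
    for i in range(len(z_all)):
        if i in seed_idx:
            continue
        x = X_all[i]
        r = z_all[i] - x @ theta                   # fit to current surface
        if abs(r) > reject_k * s:
            continue                # not valid for this surface; keep in data
        w = (f + 1.0) / (f + (r / s) ** 2)         # robust weight, eq. (9)
        # Weighted recursive least-squares update of theta and P.
        Px = P @ x
        gain = w * Px / (1.0 + w * (x @ Px))
        theta = theta + gain * r
        P = P - np.outer(gain, Px)
    return theta
```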
Information theoretic model selection We have to choose 
the best approximating model from a given set of models be- 
fore we can apply the RSE. Since models with more parame- 
ters fit the data better, the selection criteria should balance 
complexity with goodness of fit. The Akaike Information Cri- 
terion (AIC) is a powerful tool for choosing among differ- 
ent regression models. The consistent AIC (CAIC) is a generalization of the AIC, making it asymptotically consistent and penalizing overparameterization more stringently to avoid modeling the noise.
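To illustrate the selection step, here is a sketch that scores the two candidate models with the standard CAIC, $\mathrm{CAIC} = -2\ln L + p(\ln n + 1)$, evaluated under a Gaussian residual approximation; the six-term biquadratic design matrix and the use of a plain LS fit (rather than the robust fit) are simplifications for the example:

```python
import numpy as np

def caic(z, z_hat, p):
    """CAIC = -2 ln L + p (ln n + 1), using the Gaussian approximation
    -2 ln L ~ n ln(RSS / n) up to an additive constant."""
    n = len(z)
    rss = np.sum((z - z_hat) ** 2)
    return n * np.log(rss / n) + p * (np.log(n) + 1.0)

def best_model(x, y, z):
    """Score planar vs. biquadratic fits around a seed; the model with
    the lower CAIC is chosen as the best approximating model."""
    models = {
        "planar": np.column_stack([np.ones_like(x), x, y]),
        "biquadratic": np.column_stack(
            [np.ones_like(x), x, y, x * y, x**2, y**2]),
    }
    scores = {}
    for name, X in models.items():
        theta = np.linalg.lstsq(X, z, rcond=None)[0]
        scores[name] = caic(z, X @ theta, X.shape[1])
    return min(scores, key=scores.get), scores
```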
   
	        