2. STUDY AREA AND DATA 
2.1 LIDAR data 
LIDAR data were acquired with a Saab TopEye system over a 5
km² area within Capitol State Forest, WA, in the spring of 1999.
The sensor settings and flight parameters are shown in Table 1. 
Data were provided in the form of an ASCII text file, with GPS 
time, aircraft position, and coordinate position for the first laser 
reflection included. 
  
  
  
  
Flying height          200 m
Flying speed           25 m/s
Swath width            70 m
Forward tilt           8 degrees
Laser pulse density    3.5 pulses/m²
Laser pulse rate       7000 pulses/sec

Table 1. Flight parameters and LIDAR system settings.
The LIDAR vendor also provided a LIDAR-derived digital
terrain model (DTM) for the study area with a 4.57-meter
(15-ft) resolution.
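As a rough sketch of how a delivery of this kind might be read, the following Python fragment parses such an ASCII file into per-pulse records. The whitespace-delimited column order assumed here (GPS time, aircraft x/y/z, first-return x/y/z) is illustrative only and would need to be matched to the vendor's actual format.

    # Minimal sketch for reading a vendor ASCII LIDAR file.
    # Assumed (hypothetical) column order: gps_time, aircraft x, y, z,
    # first-return x, y, z -- adjust to the actual delivery format.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class LidarReturn:
        gps_time: float
        aircraft_xyz: tuple      # sensor position at pulse emission
        first_return_xyz: tuple  # coordinate of the first laser reflection

    def read_lidar_ascii(path: str) -> List[LidarReturn]:
        records = []
        with open(path) as f:
            for line in f:
                fields = [float(v) for v in line.split()]
                if len(fields) < 7:
                    continue  # skip malformed lines
                records.append(LidarReturn(
                    gps_time=fields[0],
                    aircraft_xyz=tuple(fields[1:4]),
                    first_return_xyz=tuple(fields[4:7]),
                ))
        return records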
2.2 Aerial photography 
Large-scale (1:7000) normal-color aerial photography was 
acquired over the study area in 1999. This photography was 
oriented in an analytical stereoplotter. 
3. METHODS 
3.1 Bayesian image analysis 
In general, Bayesian image analysis provides a means to 
incorporate prior knowledge or beliefs into the analysis of 
remotely sensed data (Besag, 1993). These a priori beliefs are 
represented in the form of a prior distribution, or prior model, 
that is placed over the image and is updated upon observation of 
the data. Formally, if this prior description of the image is 
denoted as p(x), then the conditional spatial distribution of this 
description, given the observed image y, is given by: 
p(x | y) ∝ l(y | x) p(x)                                  (1)
In Bayesian parlance, this conditional distribution p(x|y) is 
referred to as the posterior distribution, on which all inferences 
are based. In Bayesian inference this posterior distribution is 
always represented as the product of the likelihood l(y|x) and the
prior p(x). Typically, the goal in Bayesian inference is to
calculate expectations or credible intervals (explicit probability 
statements made regarding the range of a parameter given the 
observed data). 
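As a concrete illustration of these computations (not part of the original analysis), the sketch below evaluates a posterior over a one-dimensional parameter on a grid and reports the posterior mean and a 95% credible interval. The Gaussian likelihood and prior, and all numerical values, are placeholder assumptions.

    import numpy as np

    # Hypothetical 1-D example: p(theta | y) proportional to l(y | theta) * p(theta).
    theta = np.linspace(0.0, 50.0, 2001)                       # parameter grid
    dtheta = theta[1] - theta[0]
    prior = np.exp(-0.5 * ((theta - 20.0) / 10.0) ** 2)        # assumed Gaussian prior
    y_obs = np.array([24.1, 26.3, 25.2])                       # toy observations
    # Gaussian likelihood with assumed known noise sd = 2
    lik = np.prod(np.exp(-0.5 * ((y_obs[:, None] - theta) / 2.0) ** 2), axis=0)

    post = prior * lik
    post /= post.sum() * dtheta                                # normalize to a density

    mean = np.sum(theta * post) * dtheta                       # posterior expectation
    cdf = np.cumsum(post) * dtheta
    lo, hi = np.interp([0.025, 0.975], cdf, theta)             # 95% credible interval
    print(f"posterior mean {mean:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")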
Bayesian image analysis has traditionally been carried out using 
digital images consisting of a discrete grid of picture elements 
(or pixels). Often the objective is to reconstruct an "underlying" 
image that has been distorted through a noise process. 
3.2 Bayesian object recognition 
More recently, the methods of Bayesian image analysis have 
been applied to the problem of object recognition (Baddeley 
and van Lieshout, 1993; van Lieshout, 1995; Rue and 
Syversveen, 1998; Rue and Hurn, 1999). The objective of this 
type of analysis is typically to locate and characterize various 
objects of interest in space, incorporating prior knowledge of 
the spatial distribution of these objects. Therefore, prior models 
based upon discrete grid-based neighborhood structures tend to 
be less appropriate. The description of Bayesian object 
recognition presented here generally follows van Lieshout 
(1995). 
In Bayesian object recognition, the observed data consist of an
image, y = (y_t ; t ∈ T), where T (the image space) is an arbitrary
finite set. The class of possible objects, U, is an arbitrary set,
termed the object space. Objects can be seen as points in U, and
each determines a subset R(u) ⊂ T of image space that is
occupied by the object. Any particular configuration is a finite
set of distinct objects, x = {x_1, ..., x_n}. The objective in
object recognition is to estimate the (unobserved) true
underlying pattern x given the observed image y.
This true configuration x is related to the observed image y
through the likelihood function l(y|x). As van Lieshout (1995)
describes, the likelihood l(y|x) represents both the deterministic
influence of the true configuration x and the stochastic effects
within the remote sensing process that produces the image, y.
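To make these definitions concrete, the following sketch (illustrative only, not part of the analysis described here) takes T to be a small pixel grid, represents each object u as a disc parameterized by centre and radius, computes R(u) as the set of pixels the disc covers, and evaluates a simple Gaussian likelihood l(y|x) in which pixels covered by the configuration have a higher expected signal. All parameter values are assumptions.

    import numpy as np

    # Image space T: a small grid of pixel centres (illustrative).
    H, W = 64, 64
    tt = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), axis=-1)

    def R(u):
        """Pixels of T occupied by object u = (row, col, radius): a disc mask."""
        cy, cx, r = u
        return np.sum((tt - np.array([cy, cx])) ** 2, axis=-1) <= r ** 2

    def log_likelihood(y, x, mu_bg=0.0, mu_obj=1.0, sigma=0.3):
        """Gaussian log l(y|x): covered pixels expect mu_obj, others mu_bg."""
        covered = np.zeros((H, W), dtype=bool)
        for u in x:                       # union of R(u) over the configuration
            covered |= R(u)
        mu = np.where(covered, mu_obj, mu_bg)
        return -0.5 * np.sum(((y - mu) / sigma) ** 2)

    # Toy configuration of two objects and a noisy synthetic image.
    x_true = [(20, 20, 6), (45, 40, 8)]
    rng = np.random.default_rng(0)
    y = np.where(R(x_true[0]) | R(x_true[1]), 1.0, 0.0) + rng.normal(0, 0.3, (H, W))
    print(log_likelihood(y, x_true))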
In a Bayesian analysis, the prior models will represent our prior 
beliefs regarding the spatial distribution of objects, and can be 
formulated to assign low probability to configurations that we 
do not expect to occur frequently, such as a large number of 
overlapping objects. The maximum a posteriori (MAP)
estimator of x is the configuration x̂ that maximizes the function
l(y|x)p(x), and the prior essentially acts as a penalty in this
maximization. Therefore, MAP estimation is also called
penalized maximum likelihood estimation.
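The equivalence can be seen in a minimal one-dimensional sketch (again illustrative, with assumed Gaussian forms): the MAP estimate maximizes log l(y|x) + log p(x) over a grid of candidate values, and the log-prior term acts as the penalty that pulls the estimate away from the plain maximum likelihood solution.

    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.normal(12.0, 3.0, size=5)                # toy observations
    x_grid = np.linspace(0.0, 30.0, 3001)            # candidate values of x

    # Gaussian log-likelihood with assumed known noise sd = 3.
    log_lik = np.array([-0.5 * np.sum(((y - x) / 3.0) ** 2) for x in x_grid])
    # Gaussian prior centred at 8 with sd = 2 (assumption) acts as the penalty.
    log_prior = -0.5 * ((x_grid - 8.0) / 2.0) ** 2

    x_ml = x_grid[np.argmax(log_lik)]                # unpenalized ML estimate
    x_map = x_grid[np.argmax(log_lik + log_prior)]   # penalized ML = MAP estimate
    print(f"ML: {x_ml:.2f}, MAP: {x_map:.2f} (pulled toward the prior mean)")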
3.3 Bayesian object recognition for the analysis of three- 
dimensional LIDAR data in forested areas 
While Bayesian object recognition has previously been applied 
to the analysis of two-dimensional images, this approach can 
also be applied to analyze structure within three-dimensional 
LIDAR data. In this case, the observed data, y_t, are not defined
in terms of a raster image space, T. Instead, the scan space
becomes a collection of vectors, T, determined by the LIDAR
scanning process. Therefore, an individual pulse vector, t,
represents the three-dimensional direction of each LIDAR
pulse, from the aircraft to the terrain surface. The observed data,
y_t, then represent the range measurements along these vectors
at which the returning signal intensity exceeded a
predetermined threshold (see Figure 1).
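A minimal data-structure sketch of this representation (with made-up numbers) is given below: each pulse stores its origin at the aircraft and a unit direction vector t, and an observed range y_t is converted to a three-dimensional return position by travelling that distance along the vector.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class LidarPulse:
        origin: np.ndarray      # aircraft position when the pulse was emitted (x, y, z)
        direction: np.ndarray   # unit vector t: 3-D direction of the pulse toward the terrain

        def return_position(self, range_m: float) -> np.ndarray:
            """3-D coordinate of the return recorded at range y_t along the pulse vector."""
            return self.origin + range_m * self.direction

    # Hypothetical example: a pulse emitted 200 m above ground, tilted slightly forward.
    d = np.array([0.05, 0.0, -1.0])
    pulse = LidarPulse(origin=np.array([1000.0, 2000.0, 200.0]),
                       direction=d / np.linalg.norm(d))
    print(pulse.return_position(195.0))   # where the first return crossed the threshold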
	        