A very important source often neglected in more theoretical work is GIS data. (Brenner, 2000) uses two-dimensional (2D) polygons, from which straight skeletons are generated, in conjunction with laser-scanner data to efficiently and reliably extract buildings. In (Gerke et al., 2004) a two-stage process is employed to verify given road data. After extracting reliable roads in the first stage using strict parameters, topologic information is used to restrict the further analysis so that relaxed parameters can be used, leading to a more complete verification and therefore to a higher efficiency. (Zhang, 2004) employs color and stereo data together with extensive modeling, comprising, e.g., context, occlusions, and shadows, making heavy yet intelligent use of the GIS data and leading to an impressive performance.
Even though the above papers show the potential of using GIS information, it is essential to keep in mind that one cannot rely absolutely on it, as it might be outdated and imprecise and therefore lead to wrong conclusions. There is always a trade-off to be made between accepting objects that are wrong because they have changed and rejecting many objects that are correct because they have not changed. Therefore, even when using additional information from GIS, in most cases approaches are needed which model reality so deeply that they can extract the objects at hand even under complex circumstances, as, e.g., (Zhang, 2004) demonstrates.
2.3 Statistical Modeling 
The deficits of a mainly deterministic modeling, for instance based on semantic networks, e.g., (Niemann et al., 1990), have been known for a long time. There have been heuristic attempts, for instance adding belief values, but sounder ways of including statistical modeling have been used only recently, for instance Bayesian networks in (Growe et al., 2000) or (Kim and Nevatia, 2003). The work on dynamic Bayesian networks (Kulschewski, 1999) has been interesting in terms of modeling objects and their relations. However, manually generated ideal data were used, and thus the ability of the approach to cope with real-world, noisy, and unreliable data is hard to judge.
Until recently, semantic modeling also lacked the capability to visualize the actual contents of the knowledge modeled. The quality of the modeling, e.g., by a semantic network, could only be judged by looking at interpretation results, and it was not clear how much a given component contributed to those results.
With the advent of reversible jump Markov chain Monte Carlo (RJMCMC) (Green, 1995) there is a means for statistical modeling which can also be used for simulation. The jumps in RJMCMC make it possible not only to use distributions for the parameters of objects and relations, but also to introduce new objects or relations and to delete them. The latter is the reason why the jumps are called reversible: for every jump generating a new object there needs to exist a backward jump allowing the object to be eliminated again. Because of this, RJMCMC has the following outstanding features:
• The modeling is extended in a sound way to deal with the uncertainty of objects as well as their relations, even when it is not known beforehand which and how many objects exist.

• It is possible to sample from the distribution, allowing one to simulate objects and their relations according to the model. Thus, one can check from the outcome whether the given model really describes what it is supposed to describe. I.e., in stark contrast to most modeling schemes, one can check the model without analyzing given data (a minimal sketch of such a sampler is given below).
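To make the birth/death mechanism concrete, the following is a minimal sketch of an RJMCMC sampler over a variable number of one-dimensional "objects". The target density, the move probabilities, and all numerical values are illustrative assumptions and do not correspond to the models of the papers cited in this section.

```python
import math
import random

# Minimal RJMCMC sketch for a variable number of "objects" on [0, 1].
# The target density below is a toy assumption, not a model from the literature.

def log_target(objects):
    """Unnormalized log-density: Poisson-like prior on the object count
    plus a term rewarding well-separated objects (toy example)."""
    n = len(objects)
    log_p = n * math.log(3.0) - math.lgamma(n + 1)          # prior on count
    for i, x in enumerate(objects):
        for y in objects[i + 1:]:
            log_p += math.log(min(abs(x - y) + 1e-3, 1.0))  # separation reward
    return log_p

def rjmcmc(steps=10000, seed=0):
    rng = random.Random(seed)
    objects = []
    for _ in range(steps):
        move = rng.random()
        if move < 1 / 3:                       # birth: propose a new object
            proposal = objects + [rng.random()]
            # acceptance includes the ratio of death/birth proposal probabilities
            log_alpha = (log_target(proposal) - log_target(objects)
                         + math.log(1.0 / (len(objects) + 1)))
        elif move < 2 / 3 and objects:         # death: the reverse of a birth
            idx = rng.randrange(len(objects))
            proposal = objects[:idx] + objects[idx + 1:]
            log_alpha = (log_target(proposal) - log_target(objects)
                         + math.log(float(len(objects))))
        elif objects:                          # perturb an existing object
            idx = rng.randrange(len(objects))
            proposal = list(objects)
            # note: clamping makes the walk only approximately symmetric at the borders
            proposal[idx] = min(1.0, max(0.0, proposal[idx] + rng.gauss(0, 0.05)))
            log_alpha = log_target(proposal) - log_target(objects)
        else:
            continue
        if math.log(rng.random() + 1e-300) < log_alpha:
            objects = proposal
    return objects

if __name__ == "__main__":
    print(rjmcmc())
```

The essential point is that the birth and death moves are mutual reverses, so that their proposal probabilities appear in the acceptance ratio and the chain can change the number of objects while still targeting the intended distribution.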
That the ideas of RJMCMC are practically feasible and meaningful was shown by work on facade interpretation (Dick et al., 2002), road extraction (Stoica et al., 2004), and vegetation extraction (Andersen et al., 2002). The former two demonstrate that one can produce realistic-looking facades or roads, respectively, by starting from a few basic primitives, such as a window and a door, or a road piece, and then sampling from the distribution. For the roads and the vegetation, sampling is done for the extraction in conjunction with simulated annealing, avoiding local minima, but also resulting in a very high complexity of the approach.
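To indicate how such a sampling-based extraction can be steered by simulated annealing, the following is a generic annealing loop; the energy function, proposal, and cooling schedule are placeholder assumptions and do not reproduce the road or vegetation models of the cited works.

```python
import math
import random

# Illustrative simulated-annealing acceptance loop; the "energy" of a
# configuration is a stand-in, not one of the models cited above.

def anneal(initial, propose, energy, steps=5000, t0=1.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    state, temp = initial, t0
    best, best_e = state, energy(state)
    for _ in range(steps):
        candidate = propose(state, rng)
        delta = energy(candidate) - energy(state)
        # always accept improvements; accept worse states with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            state = candidate
            if energy(state) < best_e:
                best, best_e = state, energy(state)
        temp *= cooling  # slow cooling reduces the chance of getting stuck in local minima
    return best

# Toy usage: minimize a multi-modal 1D function
if __name__ == "__main__":
    f = lambda x: math.sin(5 * x) + 0.1 * (x - 2) ** 2
    step = lambda x, rng: x + rng.gauss(0, 0.3)
    print(anneal(0.0, step, f))
```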
Another issue of statistical modeling is self-diagnosis. (Förstner, 1996) introduced the “traffic light” paradigm: results which are correct (green) are distinguished from certainly incorrect results (red) and from results which might be correct but should be checked (yellow). The idea is that a calling routine gets back information on whether it can rely on a result (green), whether the result might be correct (yellow), or whether there was no meaningful result (red). Self-diagnosis is based on statistical modeling: the more one knows about the deterministic and stochastic structure of the problem, the more reliable self-diagnosis will be. (Gerke et al., 2004) have built their approach for road verification on top of the traffic light paradigm.
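Viewed operationally, the paradigm maps an internal quality measure of a result to one of three outcomes. The following fragment is only an illustration; the score and the two thresholds are assumed values, not quantities defined in (Förstner, 1996).

```python
# Minimal sketch of the "traffic light" self-diagnosis idea: a result is
# labeled from an internal quality score in [0, 1]. Thresholds are assumptions.

def traffic_light(score, accept=0.9, reject=0.3):
    """Map a self-diagnosis score to green / yellow / red."""
    if score >= accept:
        return "green"    # reliable, the calling routine can use the result directly
    if score <= reject:
        return "red"      # no meaningful result
    return "yellow"       # possibly correct, but should be checked

if __name__ == "__main__":
    for s in (0.95, 0.6, 0.1):
        print(s, traffic_light(s))
```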
2.4 Geometry and Statistics 
An area of statistics linked to problems that are often geometrical in nature is concerned with the large number of blunders in the data the vision community always has to deal with, especially when using matching algorithms. This has sparked the development of techniques which approach the problem differently from how most photogrammetrists do. Especially popular is the random sample consensus (RANSAC) approach of (Fischler and Bolles, 1981) and its variants such as the geometric information criterion (GRIC) (Torr, 1997). The basic idea is to take a larger number of random samples, each with the minimum number of observations necessary to solve the problem. All these samples lead to solutions which are then checked against the rest of the observations. Finally, the solution which is in correspondence with the largest portion of the observations is taken.
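The following sketch illustrates this scheme for the simple case of fitting a 2D line; the sample size of two is the minimal set for a line, while the iteration count, inlier threshold, and toy data are assumed values for illustration only.

```python
import random

# Generic RANSAC sketch for fitting a 2D line y = a*x + b; parameters and
# data are illustrative, not taken from (Fischler and Bolles, 1981).

def fit_line(p, q):
    """Exact line through two points (the minimal sample for a line)."""
    (x1, y1), (x2, y2) = p, q
    if x1 == x2:
        return None
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def ransac_line(points, iterations=200, threshold=0.5, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        sample = rng.sample(points, 2)          # minimal random sample
        model = fit_line(*sample)
        if model is None:
            continue
        a, b = model
        # consensus set: observations consistent with this hypothesis
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

if __name__ == "__main__":
    pts = [(x, 2.0 * x + 1.0 + random.uniform(-0.2, 0.2)) for x in range(20)]
    pts += [(random.uniform(0, 20), random.uniform(-10, 50)) for _ in range(8)]  # blunders
    model, inliers = ransac_line(pts)
    print("model:", model, "inliers:", len(inliers))
```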
This technique is extremely useful for applications such as the estimation of the epipolar geometry (Hartley and Zisserman, 2000), for aero-triangulation (Schmidt