
from grayvalues to geometry and structure, the probability for a hypothesized interpretation can be stated and evaluated:
P = P(D, I, G, S)                                              (1)
  = P(D | I, G, S) · P(I | G, S) · P(G | S) · P(S)

or in terms of self-information or description length (L = -ln P):

L = L(D, I, G, S)                                              (2)
  = L(D | I, G, S) + L(I | G, S) + L(G | S) + L(S)
In the formulas,
• S denotes the structural model description, the ideal geometry,
• G describes the deviation of the real geometry from the ideal one,
• I denotes radiometry and texture of the segmented image,
• D corresponds to the original image data, describing signal, noise and outliers.
Interpretations yielding the highest probability, or equivalently the shortest description length, are considered the best ones. Evaluating an interpretation in terms of MDL presumes that the functional dependencies of all contributing knowledge sources are modelled.
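To make the evaluation via equation (2) concrete, the following minimal Python sketch assembles the total description length from the four component terms and compares two competing interpretations. The probability values and hypothesis names are purely illustrative placeholders, not values from the paper.

import math

def description_length(p_d_given_igs, p_i_given_gs, p_g_given_s, p_s):
    # Total description length of eq. (2):
    # L = L(D|I,G,S) + L(I|G,S) + L(G|S) + L(S), with L(x) = -ln P(x).
    # The four probabilities stand in for the modelled knowledge sources.
    return sum(-math.log(p) for p in (p_d_given_igs, p_i_given_gs, p_g_given_s, p_s))

# Two hypothetical interpretations of the same image patch (illustrative numbers only):
L_rectangle = description_length(0.60, 0.70, 0.80, 0.50)
L_free_form = description_length(0.65, 0.70, 0.60, 0.10)

best = min(("rectangle", L_rectangle), ("free-form", L_free_form), key=lambda t: t[1])
print(best)  # the interpretation with the shortest description is taken as the best one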
Pan and Förstner [1992] give a sketch of the use of the MDL principle for the interpretation of different landuse classes in airborne images: following an information preserving smoothing, in a second step segmentation techniques lead to edge and region information. A subsequent grouping process, which is governed by the hypothesized model, namely polygonal areas, leads to a segmentation containing only polygonal boundaries. Still this representation is not complete and will contain ambiguous or false information. Based on the model of the aggregation structure (S), hypotheses may be formulated. The best interpretation, the one yielding the shortest description, is found in a search process.
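The paper leaves the concrete search strategy open; the Python sketch below is one possible, purely illustrative realization: a greedy loop that accepts any refinement of the current polygonal interpretation as long as it shortens the total description length of equation (2). All function and parameter names are assumptions.

def greedy_search(initial_interpretation, propose_refinements, description_length):
    # Start from the raw polygonal segmentation and accept any refinement
    # (e.g. merging regions, dropping spurious edges) that shortens the
    # description length, until no candidate improves it further.
    current = initial_interpretation
    current_len = description_length(current)
    improved = True
    while improved:
        improved = False
        for candidate in propose_refinements(current):
            cand_len = description_length(candidate)
            if cand_len < current_len:
                current, current_len = candidate, cand_len
                improved = True
                break
    return current, current_len

A best-first or exhaustive search over the hypothesis space would fit the same scoring function; the greedy variant is shown only because it is the shortest to state.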
This paper is concerned with the last term of formulas (1) and (2), namely the structural aspect. A functional and probabilistic description of the model structure is derived.
3 MACHINE LEARNING TECHNIQUES
Machine Learning is a branch of Artificial Intelligence which is of increasing interest in the AI community. Especially in the domain of knowledge acquisition for expert or information systems there is a great demand for such methods. According to Simon [1984], learning denotes changes in a system that enable it to do the same task with the same input data more efficiently and effectively the next time. Michalski [1984] simply defines learning as a transformation of the representation. The new representation has to be „better“ in some sense. In order to perform such a learning task, the notion of „better“ has to be specified. The new representation ...
• ... mostly is not generated for its own sake, but is the basis for subsequent processes. In order to control and verify the new representation, it is given in a language that is understandable by humans.
• ... supports and eases the handling of subsequent processes, like object classification, recognition, or location.
• ... is more compact than the old one: the task of „learning from examples“ starts from a collection of examples and ends with a general description of these examples. The examples need more storage than the general description. Learning therefore supports data reduction.
• ... is explicit in contrast to the old one: knowledge acquisition often has to deal with 'diffuse' expert knowledge. Learning can structure this knowledge.
• ... is more general than the old one.
• ... can reveal new facts about the data.
Learning comprises three major considerations (illustrated by the sketch after this list):
• the representation of the given data (input) and the desired data (output),
• a strategy to accomplish the transformation of the data from the given extensional into the intensional representation,
• an evaluation measure to judge the quality of the new representation and to distinguish different possible hypotheses. This measure forms the basis for deciding when the given aim of the learning has been reached or when to generate new hypotheses.
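As a concrete illustration of these three considerations, the following Python sketch takes an extensional set of hypothetical attribute-value examples (the input representation), transforms it into an intensional description by a simple dropping-condition generalization (the strategy), and checks the result with a coverage test (the evaluation measure). The attributes and values are invented for illustration only.

def generalize(examples):
    # Intensional description: keep an attribute value only if all examples
    # agree on it, otherwise replace it by the wildcard '*'.
    keys = examples[0].keys()
    return {k: examples[0][k] if all(e[k] == examples[0][k] for e in examples) else "*"
            for k in keys}

def covers(description, instance):
    # Evaluation measure: an instance is covered if it matches every
    # non-wildcard attribute of the description.
    return all(v == "*" or instance[k] == v for k, v in description.items())

# Hypothetical training instances („learning from examples“):
positives = [{"shape": "polygon", "sides": 4, "roof": "flat"},
             {"shape": "polygon", "sides": 4, "roof": "gabled"}]
negatives = [{"shape": "blob", "sides": 0, "roof": "none"}]

description = generalize(positives)   # more compact than storing all examples
assert all(covers(description, p) for p in positives)
assert not any(covers(description, n) for n in negatives)
print(description)                     # {'shape': 'polygon', 'sides': 4, 'roof': '*'}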
Following a historical sketch of Machine Learning research, the main techniques are briefly presented, along with exemplary programs from the domain of structural learning for Computer Vision purposes.
In the beginning of Machine Learning research, there was the wish for a general purpose learning system which starts without any initial structure or task dependent knowledge. Many of the early Neural Network approaches date to that phase. The limitations of these approaches, however, and the idea of modelling human behaviour led to the development of programs which are based on symbolic descriptions of the data. Representation schemes like logic, graphs, and grammars were used to describe both features of the objects and relations. Since the mid-70s it has been agreed that learning does not start from scratch, but has to be incorporated in a broad knowledge framework. Thus task specific programs were developed, where the amount of background knowledge is manageable. This development reflects the change in data representation from numerical to structural.
Besides distinguishing Machine Learning techniques according to their knowledge representation into numerical and structural approaches, a second distinction can be made considering whether the learning process is supervised or unsupervised. In supervised approaches, the training instances are presented along with a classification („learning from examples“), whereas unsupervised techniques automatically find a classification based on clustering or grouping processes („clustering“). In principle, clustering methods can only classify patterns, but do not give an explicit description of them. The result of a classification is just an assignment of the examples to different object classes, but not a description of the features of the classes. A subsequent characterization step (e.g. with learning-from-examples techniques) has to give an explicit description of the class features.