to be out of the main field of image analysis in environmental applications during the 1980s and 1990s. Meanwhile, in industrial applications there was constant development on this issue, and the link to fuzzy set theory was brought to attention as well (Haberücker, 1995). Although appearing regularly in RS literature (Janssen, 1994; Cross, 1984; Gerbrands, 1990; Gorte, 1995), Gorte (1998A) points out the absence of this subject from standard educational literature in environmental remote sensing, such as Sabins (1978), Lillesand (1987) and Richards (1992).
In the early studies of Gorte (1995), image segmentation based on quad-trees was used to improve classification results. With an additional table output (Gorte, 1998A), a basis for intensive GIS-RS synergy became available. Object-oriented classification of agricultural parcels was studied in the work of Janssen (1994). The integration of GIS and remote sensing databases was a hampering factor for his research; Janssen points out the low level of database integration between the standard software packages Arc/Info (ESRI) and ERDAS in 1994. The segmentation goals in the study of Janssen (1994) also focus upon the creation of new vector boundaries. Gorte points out that by registering the raster object in the database analysis, the need for the vector outer boundary disappears (oral remarks).
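As a minimal sketch of this idea, the following example stores image objects as a labelled raster plus an attribute table keyed by object identifier, so that database queries need no explicit vector boundary. The label values, the single band and the attribute names are purely illustrative and are not taken from any of the cited software packages.

    import numpy as np

    # Hypothetical 5 x 5 label image: each value identifies one raster object (segment).
    labels = np.array([[1, 1, 2, 2, 2],
                       [1, 1, 2, 2, 2],
                       [3, 3, 3, 2, 2],
                       [3, 3, 3, 4, 4],
                       [3, 3, 3, 4, 4]])

    # Hypothetical single-band image registered to the same grid.
    band = np.random.default_rng(0).normal(100.0, 10.0, labels.shape)

    # Attribute table keyed by the raster object identifier: per-object statistics
    # take the place of an explicit vector outer boundary in the database analysis.
    table = {}
    for obj_id in np.unique(labels):
        mask = labels == obj_id
        table[obj_id] = {
            "pixel_count": int(mask.sum()),          # object size in pixels
            "mean_band": float(band[mask].mean()),   # mean spectral value of the object
        }

    # A query on the table requires no polygon geometry at all.
    large_objects = [i for i, row in table.items() if row["pixel_count"] > 5]
    print(table)
    print("objects larger than 5 pixels:", large_objects)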
With the development of the eCognition software, two basic deficits identified since the Janssen (1994) study have been resolved: a theory for formalizing knowledge in object-based image interpretation is provided by fuzzy logic rules in combination with a semantic network, and the highest level of integration, a single database for raster and vector data, has been achieved.
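To illustrate how such knowledge might be formalized, the following is a minimal sketch of fuzzy membership rules attached to classes in a toy semantic hierarchy. The class names, the attributes mean_nir and texture, and the membership breakpoints are hypothetical and do not reproduce the actual eCognition rule base.

    # Minimal sketch: fuzzy rules in a toy semantic network (hypothetical values).

    def fuzzy_greater(value, low, high):
        """Linear membership ramp: 0 below `low`, 1 above `high`."""
        if value <= low:
            return 0.0
        if value >= high:
            return 1.0
        return (value - low) / (high - low)

    # Each class carries a fuzzy rule evaluated on object attributes; the child class
    # 'forest' combines (fuzzy AND via min) the parent 'vegetation' condition with an
    # additional texture condition.
    classes = {
        "vegetation": lambda obj: fuzzy_greater(obj["mean_nir"], 80, 120),
        "forest":     lambda obj: min(fuzzy_greater(obj["mean_nir"], 80, 120),
                                      fuzzy_greater(obj["texture"], 0.3, 0.6)),
    }

    image_object = {"mean_nir": 110, "texture": 0.5}   # attributes of one image object
    memberships = {name: rule(image_object) for name, rule in classes.items()}
    best_class = max(memberships, key=memberships.get)
    print(memberships, "->", best_class)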
2.1 Segmentation equals classification!?
The study of Flack (1996) gives insight into the need for contextual information and into the increasing joint use of GIS and RS data, especially where hyper-spectral as well as VHR data are concerned. Flack's view of segmentation versus classification is made clear in the statement: 'The classification of an entity relies upon the context within which it is embedded. Establishing the context of an entity, however, depends on the ability to group like entities, and therefore requires some form of classification. The latter is the segmentation problem' (Flack, 1996). This similarity between segmentation and classification becomes very clear in the work of Schneider (1999): classifying scene objects per-pixel is a special case of object classification, where the single pixels are the objects (after Schneider, 1999). Flack (1996) notices the general misconception of segmentation as solely a pre-processing step for classification, and Gorte points out the need for iterative classification-segmentation sessions (Gorte, 1998B). Although there is no conceptual difference, it should be noted that such a per-pixel view of classifying scene objects still holds its value for particular digital data, where, from the user's point of view, the sensor characteristics stand in a proper scale relationship with the objects of interest. Under these considerations, a multi-layer approach combining different sensor data in both segmented and classified layers is a proper way to handle different sensor data as well as GIS layers.
Furthermore, Flack's remarks on the object-based segmentation approach signal a lack of incorporation of proven statistical techniques as well as a less theoretically sound basis. However, the need for hybrid approaches to contextual classification with respect to spatial objects is obvious (Flack, 1996). In object-based classification, as used by the eCognition software, the segmentation part is directly linked to the construction of an image-object database. The resulting map is simply a graphical display of that database, similar to any other GIS application. Classification is the assignment of a label to the set of objects that respond positively to the condition of a query function. Statistical decisions in feature space remain necessary where object attributes are highly similar. Database queries focus on a fingerprint-like combination of attributes of the object set, and including attributes such as spatial context and textural behavior makes this approach reliable. The focus on unique features that are allowed to be correlated makes query-based decisions acceptable as an extension to statistical decisions among independent features. There is thus a practical difference between segmentation and classification in the eCognition software: segmentation is linked to the construction of the database, while classification is a query result from that database.
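To make the query view of classification concrete, the following is a minimal sketch of labelling image objects through a database-style query over their attributes. The object records, attribute names and thresholds are hypothetical and are not taken from the eCognition rule base.

    # Hypothetical image-object database produced by a segmentation step:
    # each record holds spectral, textural and contextual attributes of one object.
    objects = [
        {"id": 1, "mean_nir": 115, "texture": 0.55, "borders_water": False},
        {"id": 2, "mean_nir": 40,  "texture": 0.10, "borders_water": True},
        {"id": 3, "mean_nir": 95,  "texture": 0.48, "borders_water": False},
    ]

    def query(records, condition):
        """Classification as a query: label every object that satisfies the condition."""
        return [rec["id"] for rec in records if condition(rec)]

    # A fingerprint-like combination of correlated attributes defines the class.
    forest_ids = query(objects, lambda o: o["mean_nir"] > 80
                                           and o["texture"] > 0.4
                                           and not o["borders_water"])

    labels = {obj_id: "forest" for obj_id in forest_ids}
    print(labels)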
3 A DESCRIPTION OF DELPHI 2 eCOGNITION, ITS PHILOSOPHY AND POSSIBILITIES IN IMAGE ANALYSIS.
3.1 The role of the scale factor
Although its origin in image analysis is still quite dominant in the eCognition software, it is increasingly transforming into a spatial analyzer. The analysis tools allow the user to rely on standard thematic map output from GIS and remote sensing, while also offering a set of tools for data that are preferably not processed using traditional methods. The intention is not to make traditional analysis superfluous, but to force these traditional practices to define the limitations of their scale and object domain. From a remote sensing point of view, traditional multi-spectral methods are bound to a certain sensor resolution at a certain scale level. Landsat-type data belong to a 1:50.000 mapping scale, at which the maximum likelihood spectral classifier is still powerful. For VHR data, the object-based analysis tools using fuzzy logic decision rules are more successful. Thematic maps such as the 50 m DTM grid can be analyzed according to traditional Boolean logic as well as with the fuzzy logic set available in eCognition. The main focus is