XIXth Congress (Part B3,1)

Roeland de Kok 
  
1 CLASSIFICATION
Classification decisions group sets of unique ‘objects’ into classes whose members share a common feature. In standard classification the ‘objects’ are single pixels, and each pixel has three attributes: value, position and size. The pixels line up in arrays, making up an ‘image’. A digital image contains only implicit information about the objects in the scene. Based upon object models, it is possible to discern individual entities in a seemingly unstructured collection of pixels. In a per-field or ‘pixel-in-polygon’ analysis, pixel information is already linked to a spatial database built up in a digitizing session. In the spatial database, besides the explicit information, there is still a huge amount of implicit information available (Sester, 2000).
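The three pixel attributes named above can be sketched as a minimal data structure (a hypothetical illustration; the class and field names are assumptions, not part of any software discussed here):

```python
from dataclasses import dataclass

@dataclass
class Pixel:
    value: int          # spectral value, e.g. an 8-bit grey level
    row: int            # position: line index in the array
    col: int            # position: column index in the array
    size: float = 1.0   # size: ground area of one cell, e.g. in m^2

# An 'image' is then just pixels lined up in arrays; any object
# information in it remains implicit until an analysis extracts it.
image = [[Pixel(v, r, c) for c, v in enumerate(line)]
         for r, line in enumerate([[10, 12], [11, 90]])]
```

In a per-field analysis, each such pixel would additionally carry an explicit link, such as a polygon identifier, into the spatial database.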
1.1 Traditional 
Normally, image analysis takes place in three basic domains of image data: image space, spectral space and feature space (Landgrebe, 1999). There is a commonly held conception that the main processing tasks in remote sensing are concerned with the labeling of each pixel, but this is not necessarily so (Hinton, 1999). Non-pixel-based classifications are well known in radar analysis. Analyzing such data therefore means sacrificing the geometric resolution of the image to achieve a signature characteristic of the surface. This is not a real problem if the objects of interest are formed by a group of pixels (>30). Standard radar analysis focuses on the use of GIS-derived polygon data to calculate statistics inside a surface. Most of the time these polygons are made by an operator and are therefore time-consuming to produce.
Classical image analysis tools for per-pixel analysis focus on decisions in feature space (Richards, 1992), a statistical domain where the advantages of computer calculation abilities are used. Traditionally, two fundamental decision steps for pixels have to be taken:
1. Labeling a pixel to define its object class, using its unique spectral values in feature space and/or the values of its predefined neighborhood (using filter operators).
2. Grouping the labeled pixels into an image object, using the topological structure of the labeled neighbors, a GIS operation (after Molenaar, 1990).
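The two decision steps can be sketched in a few lines of Python (a hypothetical toy example: a simple threshold stands in for a real feature-space classifier, and a 4-neighbour flood fill stands in for the topological GIS operation):

```python
from collections import deque

def label_pixels(image, threshold):
    # Step 1: a trivial spectral decision rule in feature space.
    return [[1 if v > threshold else 0 for v in row] for row in image]

def group_objects(labels):
    # Step 2: group identically labeled 4-neighbours into image objects
    # via flood fill, i.e. connected-component labeling.
    rows, cols = len(labels), len(labels[0])
    objects = [[None] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if objects[r][c] is not None:
                continue
            next_id += 1
            objects[r][c] = next_id
            queue = deque([(r, c)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and objects[ny][nx] is None
                            and labels[ny][nx] == labels[y][x]):
                        objects[ny][nx] = next_id
                        queue.append((ny, nx))
    return objects

image = [[10, 12, 90],
         [11, 95, 92],
         [13, 14, 91]]
labels = label_pixels(image, threshold=50)
objects = group_objects(labels)
```

On this toy image the bright pixels form one connected image object and the dark pixels another, even though step 1 looked at each pixel in isolation.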
1.2 Object oriented 
Object-based analysis uses the ‘image object’ or ‘local pixel group’ as a basis. Thus, the image object can take the spatial context of the pixel population into account. The image object can be considered the 4th attribute of a pixel, answering the question: ‘to which (spatial) pixel population does this pixel belong?’. Consequently, the registration of the neighborhood results in the construction of a database. In the software eCognition, this database registration is advanced and user-friendly and therefore fit for use in this study. The database in eCognition describes the image object in the context of a semantic network. The network is based upon sub-objects and their connections to neighboring objects, which form a super-object on a higher (in this case) hierarchical level. The following shows a way of dealing with these possibilities:
1. An advanced segmentation algorithm is used to select pixels from different raster layers. These pixels are assigned to a local spatial pixel population. This population is called an image object; its object topology is constructed and registered in a relational database.
2. The different image and GIS layers are connected through their image objects (multi-level segmentation) and their object relationships, thus creating a semantic network in both their horizontal and their vertical neighborhoods.
3. Objects which are similar with respect to an operator-selected feature group are assigned a label, using query functions formalized with fuzzy logic decision rules. A class is a group of objects sharing the same selected features (attributes).
4. Classified neighboring objects are merged to create a knowledge-based polygon layer with its additional database.
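Step 3 of the workflow above, the fuzzy-logic query on the object database, can be sketched as follows (attribute names, membership ramps and thresholds are illustrative assumptions, not eCognition's actual interface):

```python
def fuzzy_ramp(x, low, high):
    # Linear membership function: 0 below 'low', 1 above 'high'.
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

# Object database: object id -> feature attributes registered during
# segmentation (hypothetical features: mean spectral value, area).
objects = {
    1: {"mean_value": 92.0, "area": 40},
    2: {"mean_value": 15.0, "area": 55},
    3: {"mean_value": 88.0, "area": 4},
}

def classify(obj):
    # Fuzzy decision rule: 'bright AND large enough' -> one class.
    bright = fuzzy_ramp(obj["mean_value"], 50, 80)
    large = fuzzy_ramp(obj["area"], 10, 30)
    membership = min(bright, large)  # fuzzy AND as the minimum
    return "built-up" if membership >= 0.5 else "other"

labels = {oid: classify(attrs) for oid, attrs in objects.items()}
```

A class is then simply the group of object ids that received the same label, i.e. the objects sharing the operator-selected features.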
2 SEGMENTATION AND DATABASE OUTPUT 
Image segmentation as a ‘basis’ for classification has been around in the remote sensing community for quite some time now. Experiments by Kettig and Landgrebe (1976) already showed the weak spots of the conventional ‘per point’ (per-pixel) approach, which lacks the possibility to describe dependencies between adjacent states of natural objects. In ‘The Extraction and Classification of Homogeneous Objects’ (ECHO, Landgrebe, 1976), the ‘objects’ resulting from the segmentation were mentioned, and the important role of tabulated results or a type map as an output product of a segmentation session was pointed out. The switch from pixel-oriented to table-oriented analysis is a main focus in data reduction (Haberäcker, 1995). Run-length encoding and quad-tree structures are widely used in data compression techniques. An extensive use of the tabulated result, or more precisely a database linked to image objects beyond the registration of pixel arrays in a recoverable format, is a step which seems to be overlooked or at least not used to its full extent in many a segmentation algorithm. The application of segmentation algorithms in remote sensing analysis seems
  
International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B3. Amsterdam 2000. 223 
 
	        