2. FEATURE EXTRACTION 
Feature extraction and image interpretation are the most time-consuming tasks in photogrammetric mapping and are regarded as the typical job of a human operator in a traditional photogrammetric environment. A digital approach offers the prospect of automating these tasks. There have been research activities in the photogrammetry, remote sensing and computer vision areas in automated feature extraction (Albertz and Konig, 1991). The knowledge required for this includes the physics of the imaging process, the geometry and photometry of specific objects, and the spatial relationships and constraints between objects (Quam and Strat, 1991). To date, fully automatic feature extraction is not operational, but semi-automatic procedures and techniques have been developed over the years. Forstner (1993) classified features into three categories: low-level features, which are attributes of the pixel arrays of the images, such as the spectral features used in multispectral classification; mid-level features, which are either geometric primitives such as points, edges or regions, or aggregates of these primitives including relations; and high-level features, which are already interpreted, with meanings or labels attached. In other words, feature extraction can be characterized as thematic information extraction (spectral classification), geometric feature extraction, or a combination of both. In the field of remote sensing, feature extraction algorithms were developed based on the spectral properties of the objects and their relationships. In the photogrammetry and computer vision fields, most of the activities focused on extracting features by exploiting geometric knowledge of the objects under investigation. In the following subsections, we review the feature extraction methods used in these fields respectively.
2.1 Feature Extraction in the Photogrammetry and Computer Vision Context
Feature extraction from digital images involves two steps: first, identifying the objects by interpreting, understanding and classifying the image; and second, tracking the objects by measuring the coordinates of the object outlines. Usually, a geometric primitive (point, line or region) is defined by an algebraic equation with a number of parameters. The extraction algorithm looks for subsets of points in the data set that lie on a geometric primitive or close to it; in other words, the algorithm is essentially looking for subsets with low fitting cost. Depending on the context, additional constraints may be imposed on the subset (Veelaert, 1997).
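A minimal sketch of this subset search, assuming 2D points stored in a NumPy array and using an illustrative distance threshold and trial count (both arbitrary choices, not values from the literature), could take the following form: candidate lines are hypothesised from random point pairs and the subset with the lowest fitting cost (most points within the threshold) is retained.

```python
import numpy as np

def find_line_subset(points, dist_thresh=1.5, n_trials=200, seed=None):
    """Search for the subset of 2D points that lies close to a straight line,
    i.e. the subset with low fitting cost. points: (N, 2) array.
    dist_thresh and n_trials are illustrative parameters."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_trials):
        # Hypothesise a line a*x + b*y + c = 0 through two random points.
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        a, b = p2[1] - p1[1], p1[0] - p2[0]
        norm = np.hypot(a, b)
        if norm == 0:
            continue
        c = -(a * p1[0] + b * p1[1])
        # Fitting cost of each point = perpendicular distance to the line.
        dist = np.abs(points @ np.array([a, b]) + c) / norm
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Additional constraints, as noted above, could be imposed by rejecting candidate subsets that violate them before they are compared.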
Edges carry most of the information in an image and are relatively robust to changes in image contrast and radiometry (McKeown, 1990). Edge detection has therefore been an important process in image processing, pattern recognition and computer vision, and it can be achieved by detecting the maxima of the gradient or the zero-crossings of the second derivatives, including the Laplacian (Bennamoun et al., 1997; Shen, 1996). Many detectors, such as the Canny detector (Canny, 1986), have been developed over the years by researchers in the fields of photogrammetry and computer vision. Research in both fields has also shown that there exists no universal edge detector which can be applied to a digital image function to both identify and track edges with sufficient success (Agouris and Stefanidis, 1996).
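A minimal sketch of these two principles, assuming a single-band image held in a NumPy array, is given below. The gradient variant simply thresholds a Sobel gradient magnitude and the second-derivative variant marks zero-crossings of a Laplacian of Gaussian; both are simplified stand-ins with illustrative parameters, not the Canny detector or any specific published operator.

```python
import numpy as np
from scipy import ndimage

def gradient_edges(image, thresh=30.0):
    """Edges from the gradient: threshold the Sobel gradient magnitude
    (thresh is an illustrative value)."""
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy) > thresh

def log_zero_crossings(image, sigma=2.0):
    """Edges as zero-crossings of the Laplacian of a Gaussian-smoothed image."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    signs = np.sign(log)
    # A pixel is a zero-crossing if its sign differs from a neighbour's.
    zc = np.zeros_like(signs, dtype=bool)
    zc[:-1, :] |= signs[:-1, :] != signs[1:, :]
    zc[:, :-1] |= signs[:, :-1] != signs[:, 1:]
    return zc
```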
Researchers in the photogrammetry and computer vision fields have given considerable attention to semi-automatic and automatic linear feature extraction, especially the detection and delineation of roads. In the most recent developments in semi-automatic extraction of roads from satellite and aerial images, a generic road model was represented using photometric and geometric properties together with several constraints and merit functions (Gruen and Li, 1996; Li, 1997; Trinder and Li, 1995). Li (1997) has given an overview of some existing feature extraction techniques and shows that most applications presented in the literature used black-and-white small-scale aerial images or single-band satellite images, with no spectral properties taken into consideration. Trinder and Wang (1997) have proposed a knowledge-based system for automatic road extraction, in which knowledge about the objects was limited mainly to geometric properties; the only radiometric property applied was the average intensity or gray value of the roads.
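A minimal sketch of such a merit function, assuming a single-band image and a growing polyline of road points, is shown below. It combines a photometric term (closeness of the candidate pixel intensity to an assumed road gray value) with a geometric term (smoothness of direction). The weights and the road gray value are hypothetical parameters, and the sketch is a simplified illustration, not the published algorithms cited above.

```python
import numpy as np

def road_merit(image, prev_pt, cur_pt, cand_pt,
               road_gray=200.0, w_photo=1.0, w_geom=5.0):
    """Merit of extending a road polyline from cur_pt to cand_pt.
    Points are (row, col); road_gray and the weights are illustrative."""
    r, c = int(round(cand_pt[0])), int(round(cand_pt[1]))
    # Photometric term: higher when the candidate intensity matches the road.
    photo = -abs(float(image[r, c]) - road_gray)
    # Geometric term: higher when the direction changes little (smooth road).
    d_prev = np.asarray(cur_pt, float) - np.asarray(prev_pt, float)
    d_next = np.asarray(cand_pt, float) - np.asarray(cur_pt, float)
    geom = d_prev @ d_next / (np.linalg.norm(d_prev) * np.linalg.norm(d_next) + 1e-9)
    return w_photo * photo + w_geom * geom

def best_next_point(image, prev_pt, cur_pt, candidates, **kw):
    """Pick the candidate point that maximises the merit function."""
    return max(candidates, key=lambda p: road_merit(image, prev_pt, cur_pt, p, **kw))
```

In practice such a merit function would be evaluated along candidate road segments rather than single pixels, and additional constraints (e.g. on curvature or width) would be imposed as in the road models referenced above.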
Extracting buildings has been another focus area for photogrammetrists and computer scientists in the past. It requires knowledge about the structure of built objects and draws on existing techniques of edge-line analysis, shadow analysis and stereo imagery analysis to produce building hypotheses, but no single technique can perfectly delineate the structures in every scene. McKeown (1991) identified the problems associated with building extraction and pointed out the need for information fusion techniques and for incorporating information on spectral properties into the extraction process. However, no literature to date has presented such applications. Multispectral images offer a richer data set for image processing and interpretation, but they place greater demands on technology and algorithm development. Currently, the application of multispectral images has been limited to classification techniques in remote sensing. Hence, most image understanding procedures have not incorporated multispectral data (Trinder and Sawmya, 1997).
2.2 Levels of Information Extraction in Remote Sensing
Remote sensing technology has provided the capability of extracting information about objects or surfaces on the Earth's surface and in the atmosphere. The level of information extraction ranges from manual image interpretation using raw images, through image interpretation using ortho-rectified images together with existing base maps and the use of enhancement, segmentation and transformation techniques to improve interpretability, to digital classification using supervised or unsupervised methods. More detailed information on image enhancement, transformation and classification can be found in Richards (1993) and McCloy (1995). Reed and Du Buf (1993) gave a good review of segmentation and feature extraction techniques. Segmentation applications can be found in Ryherd and Woodcock (1996), Dong et al. (1997) and Dong and Forster (1998).
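A minimal sketch of the last of these levels, unsupervised digital classification, is given below. It assumes a multispectral image stored as an (H, W, bands) NumPy array and applies a basic k-means clustering of the pixel spectra; the class count and iteration count are illustrative choices, and operational classifiers in remote sensing packages are considerably more elaborate.

```python
import numpy as np

def kmeans_classify(image, n_classes=5, n_iter=20, seed=0):
    """Unsupervised classification of a multispectral image (H, W, bands)
    by naive k-means clustering of the pixel spectra."""
    h, w, b = image.shape
    pixels = image.reshape(-1, b).astype(float)
    rng = np.random.default_rng(seed)
    # Initialise class centres from randomly chosen pixel spectra.
    centres = pixels[rng.choice(len(pixels), size=n_classes, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to the spectrally nearest class centre.
        dist = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Recompute each centre as the mean spectrum of its class.
        for k in range(n_classes):
            if np.any(labels == k):
                centres[k] = pixels[labels == k].mean(axis=0)
    return labels.reshape(h, w)
```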