
Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects

Access restriction

There is no access restriction for this record.

Copyright

CC BY: Attribution 4.0 International.

Bibliographic data

Monograph

Persistent identifier:
856473650
Author:
Baltsavias, Emmanuel P.
Title:
Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects
Sub title:
Joint ISPRS/EARSeL Workshop ; 3 - 4 June 1999, Valladolid, Spain
Scope:
III, 209 pages
Year of publication:
1999
Place of publication:
Coventry
Publisher of the original:
RICS Books
Identifier (digital):
856473650
Illustration:
Illustrations, diagrams, maps
Language:
English
Usage licence:
Attribution 4.0 International (CC BY 4.0)
Publisher of the digital copy:
Technische Informationsbibliothek Hannover
Place of publication of the digital copy:
Hannover
Year of publication of the digital copy:
2016
Document type:
Monograph
Collection:
Earth sciences

Chapter

Title:
TECHNICAL SESSION 2 PREREQUISITES FOR FUSION / INTEGRATION: IMAGE TO IMAGE / MAP REGISTRATION
Document type:
Monograph
Structure type:
Chapter

Chapter

Title:
AUTOMATED PROCEDURES FOR MULTISENSOR REGISTRATION AND ORTHORECTIFICATION OF SATELLITE IMAGES. Ian Dowman and Paul Dare
Document type:
Monograph
Structure type:
Chapter

Contents

Table of contents

  • Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects
  • Cover
  • ColorChart
  • Title page
  • CONTENTS
  • PREFACE
  • TECHNICAL SESSION 1 OVERVIEW OF IMAGE / DATA / INFORMATION FUSION AND INTEGRATION
  • DEFINITIONS AND TERMS OF REFERENCE IN DATA FUSION. L. Wald
  • TOOLS AND METHODS FOR FUSION OF IMAGES OF DIFFERENT SPATIAL RESOLUTION. C. Pohl
  • INTEGRATION OF IMAGE ANALYSIS AND GIS. Emmanuel Baltsavias, Michael Hahn,
  • TECHNICAL SESSION 2 PREREQUISITES FOR FUSION / INTEGRATION: IMAGE TO IMAGE / MAP REGISTRATION
  • GEOCODING AND COREGISTRATION OF MULTISENSOR AND MULTITEMPORAL REMOTE SENSING IMAGES. Hannes Raggam, Mathias Schardt and Heinz Gallaun
  • GEORIS: A TOOL TO OVERLAY PRECISELY DIGITAL IMAGERY. Ph. Garnesson, D. Bruckert
  • AUTOMATED PROCEDURES FOR MULTISENSOR REGISTRATION AND ORTHORECTIFICATION OF SATELLITE IMAGES. Ian Dowman and Paul Dare
  • TECHNICAL SESSION 3 OBJECT AND IMAGE CLASSIFICATION
  • LANDCOVER MAPPING BY INTERRELATED SEGMENTATION AND CLASSIFICATION OF SATELLITE IMAGES. W. Schneider, J. Steinwendner
  • INCLUSION OF MULTISPECTRAL DATA INTO OBJECT RECOGNITION. Bea Csathó, Toni Schenk, Dong-Cheon Lee and Sagi Filin
  • SCALE CHARACTERISTICS OF LOCAL AUTOCOVARIANCES FOR TEXTURE SEGMENTATION. Annett Faber, Wolfgang Förstner
  • BAYESIAN METHODS: APPLICATIONS IN INFORMATION AGGREGATION AND IMAGE DATA MINING. Mihai Datcu and Klaus Seidel
  • TECHNICAL SESSION 4 FUSION OF SENSOR-DERIVED PRODUCTS
  • AUTOMATIC CLASSIFICATION OF URBAN ENVIRONMENTS FOR DATABASE REVISION USING LIDAR AND COLOR AERIAL IMAGERY. N. Haala, V. Walter
  • STRATEGIES AND METHODS FOR THE FUSION OF DIGITAL ELEVATION MODELS FROM OPTICAL AND SAR DATA. M. Honikel
  • INTEGRATION OF DTMS USING WAVELETS. M. Hahn, F. Samadzadegan
  • ANISOTROPY INFORMATION FROM MOMS-02/PRIRODA STEREO DATASETS - AN ADDITIONAL PHYSICAL PARAMETER FOR LAND SURFACE CHARACTERISATION. Th. Schneider, I. Manakos, Peter Reinartz, R. Müller
  • TECHNICAL SESSION 5 FUSION OF VARIABLE SPATIAL / SPECTRAL RESOLUTION IMAGES
  • ADAPTIVE FUSION OF MULTISOURCE RASTER DATA APPLYING FILTER TECHNIQUES. K. Steinnocher
  • FUSION OF 18 m MOMS-2P AND 30 m LANDSAT TM MULTISPECTRAL DATA BY THE GENERALIZED LAPLACIAN PYRAMID. Bruno Aiazzi, Luciano Alparone, Stefano Baronti, Ivan Pippi
  • OPERATIONAL APPLICATIONS OF MULTI-SENSOR IMAGE FUSION. C. Pohl, H. Touron
  • TECHNICAL SESSION 6 INTEGRATION OF IMAGE ANALYSIS AND GIS
  • KNOWLEDGE BASED INTERPRETATION OF MULTISENSOR AND MULTITEMPORAL REMOTE SENSING IMAGES. Stefan Growe
  • AUTOMATIC RECONSTRUCTION OF ROOFS FROM MAPS AND ELEVATION DATA. U. Stilla, K. Jurkiewicz
  • INVESTIGATION OF SYNERGY EFFECTS BETWEEN SATELLITE IMAGERY AND DIGITAL TOPOGRAPHIC DATABASES BY USING INTEGRATED KNOWLEDGE PROCESSING. Dietmar Kunz
  • INTERACTIVE SESSION 1 IMAGE CLASSIFICATION
  • AN AUTOMATED APPROACH FOR TRAINING DATA SELECTION WITHIN AN INTEGRATED GIS AND REMOTE SENSING ENVIRONMENT FOR MONITORING TEMPORAL CHANGES. Ulrich Rhein
  • CLASSIFICATION OF SETTLEMENT STRUCTURES USING MORPHOLOGICAL AND SPECTRAL FEATURES IN FUSED HIGH RESOLUTION SATELLITE IMAGES (IRS-1C). Maik Netzband, Gotthard Meinel, Regin Lippold
  • ASSESSMENT OF NOISE VARIANCE AND INFORMATION CONTENT OF MULTI-/HYPER-SPECTRAL IMAGERY. Bruno Aiazzi, Luciano Alparone, Alessandro Barducci, Stefano Baronti, Ivan Pippi
  • COMBINING SPECTRAL AND TEXTURAL FEATURES FOR MULTISPECTRAL IMAGE CLASSIFICATION WITH ARTIFICIAL NEURAL NETWORKS. H. He , C. Collet
  • TECHNICAL SESSION 7 APPLICATIONS IN FORESTRY
  • SENSOR FUSED IMAGES FOR VISUAL INTERPRETATION OF FOREST STAND BORDERS. R. Fritz, I. Freeh, B. Koch, Chr. Ueffing
  • A LOCAL CORRELATION APPROACH FOR THE FUSION OF REMOTE SENSING DATA WITH DIFFERENT SPATIAL RESOLUTIONS IN FORESTRY APPLICATIONS. J. Hill, C. Diemer, O. Stöver, Th. Udelhoven
  • OBJECT-BASED CLASSIFICATION AND APPLICATIONS IN THE ALPINE FOREST ENVIRONMENT. R. de Kok, T. Schneider, U. Ammer
  • Author Index
  • Keyword Index
  • Cover

Full text

International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999 
Figure 7. Flow-chart summarising patch matching procedure. 
The cost function used by Morgado and Dowman (1997) is: 
T = |a_1 - a_2| + |p_1 - p_2| + |r_1 - r_2| + |c_1 - c_2| + ...
where T is the value of the cost function, a_i is the area of patch i, p_i is its perimeter length, and r_i and c_i are the length and width of its bounding rectangle.
A specific problem with this function is that the area component was found to influence the results more than the other attributes. To get around this, the value of the area component was halved to reduce its influence. However,
since areas and lengths are being compared, it seems more 
reasonable to take the square root of the area component, so that 
all the components being compared have the same 
dimensionality. Furthermore, with the above function, larger 
patches will always produce a larger value than smaller patches. 
Therefore, to ensure this is not the case, the differences in 
components have been normalised with respect to patch size. 
Thus, the cost function used in this study is expressed by the 
equation: 
T = (|a_1 - a_2| / (a_1 + a_2))^{1/2} + |p_1 - p_2| / (p_1 + p_2) + |r_1 - r_2| / (r_1 + r_2) + |c_1 - c_2| / (c_1 + c_2)
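As a concrete illustration, the normalised cost might be computed as below; the Patch record and its field names are assumptions for this sketch, not names from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Patch:
    area: float       # patch area in pixels
    perimeter: float  # perimeter length in pixels
    rect_len: float   # bounding-rectangle length
    rect_wid: float   # bounding-rectangle width

def cost(p1: Patch, p2: Patch) -> float:
    """Dimensionless cost T: each attribute difference is normalised by
    the attribute sum, and the square root of the area term keeps it on
    the same footing as the length-based terms."""
    def term(x, y):
        return abs(x - y) / (x + y)
    return (math.sqrt(term(p1.area, p2.area))
            + term(p1.perimeter, p2.perimeter)
            + term(p1.rect_len, p2.rect_len)
            + term(p1.rect_wid, p2.rect_wid))
```

Lower values indicate more similar patches; identical patches score exactly zero.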
It is apparent from Table 2 that correct matches can be identified by using a combination of the cost function, the patch separation and the overlap. Although the cost function is not by itself reliable, there is a clear bimodal distribution of the patch separations and the overlap. The correct matches identified by this method are shown in Figure 8.
In order to determine the best method of segmentation, the tests 
were repeated for all methods of patch extraction using the 
Istres images and another pair. The results do not allow any 
conclusions about the best method, only that there are large 
variations with image and method. For both these images it is 
interesting that the number of correct matches from all 
combinations varied from 0 to 5 out of total matches varying 
from 39 to 366. 
The correct matches were successfully used to register the 
SPOT and SAR images together. 
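The text does not say which transformation was used for registration; a common choice, given matched centroid coordinates, is a least-squares affine transform. The following sketch (function names and NumPy usage are my own, not the paper's) estimates and applies one:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.
    src, dst: (N, 2) arrays of matched centroid coordinates, N >= 3.
    Returns a 3x2 matrix A such that [x, y, 1] @ A approximates (x', y')."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    design = np.hstack([src, np.ones((len(src), 1))])  # N x 3 design matrix
    A, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return A

def apply_affine(A, pts):
    """Apply the estimated transform to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A
```

With more than three matches the system is overdetermined, so outliers among the matched centroids are averaged out rather than followed exactly.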
SAR patch index | SPOT patch index | Cost function | Patch separation (pixels) | Percentage overlap (× 100)
2 | 5204 | 0.463 | 112.429 | 0.194
3 |  | 0.484 | 0.353 | 0.885
7 | 13897 | 0.270 | 225.892 | 0.331
9 | 3417 | 0.550 | 253.794 | 0.312
10 | 2901 | 0.277 | 4.651 | 0.899
12 | 3754 | 0.229 | 6.835 | 0.944
13 | 18389 | 0.265 | 267.599 | 0.199
18 | 26749 | 0.240 | 514.928 | 0.372
21 | 13746 | 0.662 | 209.264 | 0.668
22 | 21272 | 0.394 | 275.525 | 0.366
24 | 8585 | 0.259 | 378.102 | 0.401
28 | 18558 | 0.306 | 166.298 | 0.493
30 | 21476 | 0.327 | 198.795 | 0.526
32 | 11307 | 0.313 | 59.894 | 0.561
33 | 25114 | 0.129 | 242.566 | 0.561
37 | 21579 | 0.137 | 303.653 | 0.652
39 | 20029 | 0.286 | 395.225 | 0.573
43 | 16449 | 0.549 | 43.834 | 0.307
45 | 15793 | 0.287 | 3.995 | 0.887
56 | 1339 | 0.414 | 330.546 | 0.230
60 | 11922 | 0.416 | 382.007 | 0.629
61 | 16119 | 0.474 | 123.264 | 0.596
71 | 17966 | 0.215 | 347.164 | 0.514
73 | 13154 | 0.714 | 222.182 | 0.290
75 | 6637 | 0.276 | 384.673 | 0.482
76 | 16520 | 0.186 | 386.544 | 0.351
77 | 4572 | 0.382 | 418.060 | 0.471
Table 2. Matching results with correct matches highlighted.
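The match-selection procedure described in the text — keep mutual lowest-cost pairs, then reject matches with large centroid separation or small overlap — might be sketched as follows. The threshold values are illustrative assumptions; the paper relies on the bimodal distribution of separation and overlap rather than fixed thresholds.

```python
def best_matches(cost_matrix, sep, overlap, max_sep=50.0, min_overlap=0.8):
    """Identify reliable patch matches from precomputed tables.

    cost_matrix[i][j]: cost between patch i of image 1 and patch j of image 2
    sep[i][j]:         centroid separation in pixels
    overlap[i][j]:     fractional pixel overlap with centroids coincident
    """
    # forward pass: lowest-cost partner for every patch of image 1
    fwd = {i: min(range(len(row)), key=row.__getitem__)
           for i, row in enumerate(cost_matrix)}
    # reverse pass: repeat with image 2 as the master
    n2 = len(cost_matrix[0])
    rev = {j: min(range(len(cost_matrix)), key=lambda i: cost_matrix[i][j])
           for j in range(n2)}
    matches = []
    for i, j in fwd.items():
        if rev.get(j) != i:              # keep only mutual best matches
            continue
        if sep[i][j] > max_sep:          # reject large centroid separations
            continue
        if overlap[i][j] < min_overlap:  # require large shape overlap
            continue
        matches.append((i, j))
    return matches
```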
Every patch in image 2 is matched with each patch in image 1 and the matches with the lowest cost function are retained. There will be false and multiple matches, and these are reduced by eliminating matches with large differences in centroid values. A further technique used to improve matching is to repeat the operation with image 2 as the master and image 1 as the slave. Comparing shape provides a further check: patches are compared by counting the number of pixels in common when their centroids are made coincident. Those with large overlap are considered to be the best matches. The results of these processes are shown in Table 2.

5. EDGE MATCHING
The polygon matching directly provides the centroid coordinates of conjugate polygons, and these can be used to carry out a transformation between the two images. This transformation can be improved by matching the detail of the edges around the polygons. The basic method of edge matching using dynamic programming is described by Newton et al. (1994). This previous work used the edges of the polygons as extracted by the segmentation. In the new method, edges are extracted from the raw data in the region of the polygon boundary and then matched using dynamic programming as before. In this way, it is ensured that reliable edges are extracted.
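The paper cites Newton et al. (1994) for the dynamic-programming edge matching without giving details here. As a generic illustration of the idea — aligning two chains of edge points by dynamic programming — one might write the following; the Euclidean point-distance score and the gap penalty are assumptions, not values from the paper.

```python
import math

def align_edges(edge1, edge2, gap=5.0):
    """Align two sequences of edge points (x, y) by dynamic programming,
    minimising total matched point distance plus gap penalties.
    Returns the list of matched index pairs."""
    n, m = len(edge1), len(edge2)
    INF = float("inf")
    # dp[i][j]: best cost of aligning edge1[:i] with edge2[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n and j < m:   # match point i with point j
                d = math.dist(edge1[i], edge2[j])
                dp[i + 1][j + 1] = min(dp[i + 1][j + 1], dp[i][j] + d)
            if i < n:             # skip a point in edge1
                dp[i + 1][j] = min(dp[i + 1][j], dp[i][j] + gap)
            if j < m:             # skip a point in edge2
                dp[i][j + 1] = min(dp[i][j + 1], dp[i][j] + gap)
    # backtrack to recover the matched pairs
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                abs(dp[i][j] - (dp[i-1][j-1] + math.dist(edge1[i-1], edge2[j-1]))) < 1e-9):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif i > 0 and abs(dp[i][j] - (dp[i-1][j] + gap)) < 1e-9:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

The matched pairs can then feed the edge-level refinement of the polygon-based transformation described above.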
	        
