DATA FUSION USING IHS TRANSFORMATIONS FOR EXPLORING ORE DEPOSITS
IN NORTHEASTERN PART OF THE SAHARAN METACRATON
A. H. Nasr, T. M. Ramadan
National Authority for Remote Sensing and Space Sciences, 23 Joseph Broz Tito st., El-Nozha El-Gedida,
P.O. Box: 1564 Alf-Mascan, Cairo, Egypt - aymanasr@hotmail.com, ramadan_narss2002@yahoo.com
KEY WORDS: Data Fusion, IHS Transformations, Landsat TM, RADARSAT-1, Saharan Metacraton
ABSTRACT:
The main objective of remotely sensed data fusion is to create an integrated composite image with improved information content and enhanced interpretability. These data provide geospatial details about the earth's surface for substantial assessment of land resources and mineral exploration. Fusion of Visible-Infrared (VIR) and Synthetic Aperture Radar (SAR) images provides complementary data that increase the amount of information that can be extracted from the individual input images. The fused image contains the details beneath the surface cover revealed by the SAR data while maintaining the basic color content of the original VIR data. Image fusion can be performed at three different processing levels, namely pixel level, feature level and decision level, according to the stage at which the fusion takes place. In this work, a pixel-based fusion of images from different sensors, namely Landsat TM and RADARSAT-1, was performed using the Intensity, Hue, and Saturation (IHS) transformation procedures. The northeastern part of the Saharan Metacraton is dominated by medium- to high-grade gneisses and migmatites, disrupted by belts of low-grade volcano-sedimentary sequences representing arc assemblages and highly dismembered ophiolites, and intruded by A-type granitoids. Banded Iron Formation (BIF) and gold mineralization are associated with the high-grade gneisses and migmatites. According to the fusion results, the fused image enhances subsurface structures such as foliation, faults and folding that control the mineralization of several deposits, and reveals fluvial features that are not observable in the Landsat TM images.
1. INTRODUCTION
Earth observation satellites provide multisource imagery with a broad range of characteristics, including spectral, spatial, and temporal resolutions. Combining data that rely on different physical principles and record different properties of the objects may generate datasets that contain more information than any of the input data alone. This process of combining several kinds of imagery is known as data fusion (Park and Kang, 2004). Several definitions can be found: "Data fusion is capable of integrating different imagery data to produce more information than can be derived from a single sensor" (Pohl and Van Genderen, 1998). Another comprehensive definition: "Data fusion deals with the synergistic combination of information made available by various knowledge sources such as sensors, in order to provide a better understanding of a given scene" (Dasarathy, 1994). The benefits of fused images vary: they may detect changes that occurred over a period of time, enhance the spatial resolution of multispectral images, generate an interpretation of the scene not obtainable with data from a single sensor, and reduce the uncertainty associated with data from an individual sensor (Kim et al., 2005). They generally offer increased interpretation capabilities, achieve more specific inferences and produce more reliable results.
Data fusion can be performed at three different processing levels, namely pixel level, feature level and decision level. In pixel-level fusion, the combination mechanism works directly on the data obtained from the outputs of the sensors. Feature-level fusion, on the other hand, works on features extracted from the source data or on features available from other sources of information. Decision-level fusion works at an even higher level, and merges the interpretations of different objects obtained from different sources of information (Samadzadegan et al., 2006). In this work we used pixel-level image fusion to obtain a new image with properties superior to those of the individual input images. A general survey of pixel-level image fusion techniques can be found in (Pohl and van Genderen, 1998).
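As an illustration of the pixel-level case, the sketch below combines two co-registered single-band images directly at the pixel level by simple weighted averaging; the weights, value ranges, and rescaling are assumptions made for the example only, not values used in this study.

```python
# Minimal sketch of pixel-level fusion: the combination acts directly on the
# pixel values of two co-registered single-band images. The 0.5/0.5 weights
# and the 8-bit rescaling are illustrative assumptions, not the paper's method.
import numpy as np

def pixel_level_fusion(optical_band: np.ndarray, sar_band: np.ndarray,
                       w_optical: float = 0.5, w_sar: float = 0.5) -> np.ndarray:
    """Fuse two co-registered single-band images by weighted averaging."""
    optical = optical_band.astype(np.float64)
    sar = sar_band.astype(np.float64)
    fused = w_optical * optical + w_sar * sar
    # Rescale the result to the 8-bit display range.
    fused = 255.0 * (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
    return fused.astype(np.uint8)
```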
Data fusion of optical VIR and SAR imagery is used to enhance spatial and textural features as well as features that are not visible in optical images. VIR sensors offer spectral information about terrain cover types, while SAR sensors are active sensors that can penetrate materials which are optically opaque and thus not visible to optical or infrared techniques. Therefore, SAR images complement photographic and other optical imaging capabilities and increase the amount of information that can be extracted from the individual input images (Gungor and Shan, 2006). In this paper, we integrated RADARSAT-1 features into a co-registered TM image using IHS transformations for geological and mineral exploration in the study area. This enhanced subsurface structures such as foliation, faults and folding. The remainder of the paper is arranged as follows: Section two explains the IHS transformations with their equations. Section three presents the data acquisition and methodology, where the data, the software used, and the processing steps are introduced. Section four focuses on the results and discussion. Finally, our conclusions are given in section five.
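The sketch below outlines the IHS substitution workflow behind this kind of fusion: the TM color composite is transformed to IHS, the intensity component is replaced with the co-registered RADARSAT-1 band, and the result is transformed back to RGB. It uses one common linear IHS formulation; the transform matrices, the histogram-matching step, and the array handling are illustrative assumptions, while the equations actually employed in this study are those given in Section two.

```python
# Sketch of IHS substitution fusion under the assumptions stated above.
import numpy as np

# Forward transform: [R, G, B] -> [I, v1, v2]; hue and saturation are carried
# by v1 and v2 (one common linear IHS formulation, assumed for this example).
RGB_TO_IV = np.array([[1/3,            1/3,            1/3],
                      [-np.sqrt(2)/6,  -np.sqrt(2)/6,  np.sqrt(2)/3],
                      [1/np.sqrt(2),   -1/np.sqrt(2),  0.0]])
# Exact inverse of the matrix above: [I, v1, v2] -> [R, G, B].
IV_TO_RGB = np.array([[1.0, -1/np.sqrt(2),  1/np.sqrt(2)],
                      [1.0, -1/np.sqrt(2), -1/np.sqrt(2)],
                      [1.0,  np.sqrt(2),    0.0]])

def ihs_fuse(tm_rgb: np.ndarray, sar: np.ndarray) -> np.ndarray:
    """Replace the intensity of a TM color composite with a co-registered SAR band.

    tm_rgb : (rows, cols, 3) array scaled to [0, 1]
    sar    : (rows, cols) array, co-registered and scaled to [0, 1]
    """
    iv = tm_rgb @ RGB_TO_IV.T            # per-pixel forward IHS transform
    i_old = iv[..., 0]
    # Match the SAR band to the mean and spread of the original intensity so the
    # substitution does not shift the overall brightness of the composite.
    sar_matched = (sar - sar.mean()) / (sar.std() + 1e-12) * i_old.std() + i_old.mean()
    iv[..., 0] = sar_matched             # intensity substitution; hue and saturation kept
    fused = iv @ IV_TO_RGB.T             # inverse transform back to RGB
    return np.clip(fused, 0.0, 1.0)
```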
2. INTENSITY-HUE-SATURATION TRANSFORMATIONS
The IHS color space is very useful for image processing
because it separates the color information in ways that
correspond to the human visual system’s response. It is an