A TARGET DETECTION METHOD
BASED ON SAR AND OPTICAL IMAGE DATA FUSION
Sun Mu-han, Zhou Yin-qing, Xu Hua-ping
Dept. of Electronic Engineering, Beijing University of Aeronautics and Astronautics, Beijing, China
muhansun@yahoo.com.cn, huaping_xu@sohu.com
KEY WORDS: data fusion, digital image processing, target detection, image segmentation, edge detection.
ABSTRACT:
Detection is the first step in most ATR (Automatic Target Recognition) systems; it finds ROIs (Regions of Interest) that may contain potential targets. The concepts and techniques of MSDF (Multi-Sensor Data Fusion) greatly extend the potential of ATR systems, most of which are based on digital images. SAR (Synthetic Aperture Radar) works in all weather, day and night, and is capable of penetration because it relies on microwave or millimeter-wave scattering. These imaging advantages facilitate reconnaissance, detection and recognition, and they help to guarantee image segmentation results with good region integrity, especially for ground targets with weak backscattering. However, the speckle noise caused by the SAR imaging mechanism degrades the accuracy of target edge location in SAR images, whereas remote sensing images obtained by optical sensors can compensate for this limitation. This paper presents a method for the automatic detection of line-type targets based on SAR and optical images. To improve the generality of the proposed method, the remote sensing images are processed only with traditional image processing techniques. The method exploits the complementary characteristics of line-type targets in SAR and optical images, i.e. the region segmentation result of the SAR image and the accurate edge information of the optical image, to realize target detection and location. Experiments are carried out on a registered SAR image and optical image of the same region, and the detection results demonstrate the validity of the proposed method.
1. INTRODUCTION
The processing flow of a typical ATR (Automatic Target
Recognition) system consists of target detection, target
discrimination, and target classification and recognition. Target
detection is applied to the whole image pixel by pixel to find
ROIs (Regions of Interest) that may contain potential targets.
Because the computational burden is very large, the detection
algorithm should not be too complicated. Man-made targets in
remote sensing images can be sorted into three types: point
targets, line-type targets and extended targets. To realize
automatic target recognition, different characteristics (spatial
and frequency-domain features, edges, texture, etc.) should be
exploited according to the target type. In this paper, line-type
targets such as highways, bridges and airport runways are to be
detected. For this type of target, geometric features are often
used for detection, and edge information is the best expression
of these features in the image.
For line-type targets, the result of target detection is the set of
edge pixels belonging to the targets. Following detection, target
discrimination extracts the meaningful pixels and eliminates the
false pixels that have nothing to do with the target of interest.
Target discrimination involves representation and description of
the detected objects, and its schemes are sophisticated and
complex. The fewer false alarms there are in the detection result,
the lower the computational burden of discrimination and the
higher the efficiency of the whole ATR system.
In the field of remote sensing image processing and applications,
target detection with a single sensor has been explored
extensively, and a large number of mature techniques have been
developed. For example, line-type target detection in optical
images mainly adopts edge detection. SAR images, on the other
hand, contain a great deal of multiplicative speckle noise owing
to the SAR imaging mechanism, and this noise makes SAR image
processing difficult. SAR image processing usually follows one
of two schemes: the first denoises the SAR image and then
applies the traditional processing techniques developed for
optical images; the second exploits the gray-level information
together with statistical models describing SAR images. At
present, methods that integrate multiple remote sensing images
for target detection can be classified into three categories
according to the fusion level, i.e. pixel level, feature level and
decision level (E. Lallier, 2000; Min-Sil Yang, 2003; Li Ming,
2004). Because radar images differ from optical images in many
aspects, such as image features and target characteristics, most
image fusion schemes are based either on optical images or on
radar images; even when both kinds of images are used, the
fusion is usually realized at the decision level.
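As a rough illustration of the two conventional single-sensor routes mentioned above, the Python sketch below (not part of the original paper) applies a basic Lee filter to a SAR image before any further optical-style processing, and Canny edge detection to an optical image; the file names, window size and thresholds are assumptions.

import numpy as np
import cv2
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7):
    # Basic Lee filter for multiplicative speckle noise; the window size
    # and the global noise-variance estimate are simplifying assumptions.
    img = img.astype(np.float64)
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img ** 2, win)
    var = sq_mean - mean ** 2
    noise_var = var.mean()
    weight = var / (var + noise_var)
    return mean + weight * (img - mean)

# Scheme 1: denoise the SAR image first, then treat it like an optical image.
sar = cv2.imread("sar.png", cv2.IMREAD_GRAYSCALE)          # hypothetical file name
sar_filtered = np.clip(lee_filter(sar), 0, 255).astype(np.uint8)

# Conventional line-type target detection in an optical image: edge detection.
optical = cv2.imread("optical.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
optical_edges = cv2.Canny(optical, 50, 150)                 # thresholds are assumptions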
This paper proposes a method for line-type target detection that
combines the edge information in the optical image and the
region boundaries in the segmented SAR image, based on an
analysis of the characteristics of both images and of how the
target expresses itself differently in them. In nature, the method
is image fusion at the feature level, and the feature used for
detection is the set of target edge pixels. First, mature edge
detection techniques are used to extract as much edge
information as possible from the registered optical image; then
the region boundaries of the ROIs in the segmented SAR image
serve as a reference to eliminate the large number of edge pixels
in the optical image that are irrelevant to the target being
detected. This substantial reduction of false edge pixels
dramatically eases the burden of the discrimination process,
because fewer objects have to be represented and described, and
the efficiency of the whole ATR system is therefore improved.
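The following sketch (Python/OpenCV, added for illustration only) shows one plausible realization of this fusion step: the boundary of a segmented SAR ROI is dilated into a tolerance band, and only the optical edge pixels falling inside that band are kept. The Otsu segmentation, file names and tolerance value are assumptions standing in for the paper's actual processing chain.

import numpy as np
import cv2

def fuse_edges(optical_edges, sar_roi_mask, tolerance=5):
    # Keep only optical edge pixels that lie near the boundary of the SAR ROI.
    # `tolerance` (in pixels) widens the SAR boundary into a band to absorb the
    # edge-location inaccuracy of the SAR segmentation; its value is an assumption.
    contours, _ = cv2.findContours(sar_roi_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = np.zeros_like(sar_roi_mask)
    cv2.drawContours(boundary, contours, -1, 255, 1)

    # Dilate the boundary into a band and mask the optical edge map with it.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (2 * tolerance + 1, 2 * tolerance + 1))
    band = cv2.dilate(boundary, kernel)
    return cv2.bitwise_and(optical_edges, band)

# Stand-in segmentation of the registered SAR image (Otsu threshold here;
# the paper's own segmentation method would be used instead).
sar = cv2.imread("sar.png", cv2.IMREAD_GRAYSCALE)        # hypothetical file name
_, roi_mask = cv2.threshold(sar, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

optical = cv2.imread("optical.png", cv2.IMREAD_GRAYSCALE)
optical_edges = cv2.Canny(optical, 50, 150)
target_edges = fuse_edges(optical_edges, roi_mask)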
The paper is organized as follows. A registered optical image
and SAR image of the same region are presented in Section 2,
where the image characteristics and line-type target features are
analyzed on the basis of the given images. The general
processing flow is presented in