ON-BOARD SATELLITE IMAGE COMPRESSION 
BY OBJECT-FEATURE EXTRACTION 
Hassan Ghassemian 
Iranian Remote Sensing Center, No. 22, 14th St., Saadatabad, 19979-94313, Tehran, Iran & 
Department of Electrical Engineering, Tarbiat Modares University, P.O. Box 14115-111, Tehran, Iran 
ghassemi@modares.ac.ir 
Commission III, WG III/4 
KEY WORDS: Satellite, Hyper Spectral, On-line, Object, Feature Extraction, Segmentation. 
ABSTRACT: 
Recent developments in sensor technology make possible Earth observational remote sensing systems with high spectral resolution and data dimensionality. As a result, the flow of data from satellite-borne sensors to earth stations is likely to increase to an enormous rate. This paper investigates a new on-board unsupervised feature extraction method that reduces the complexity and cost associated with the analysis of multispectral images, as well as with data transmission, storage, archiving, and distribution. Typically, in remote sensing, a scene is represented by pixel-oriented features. It is possible to reduce data redundancy by an unsupervised object-feature extraction process, where object-features, rather than pixel-features, are used for multispectral scene representation. The proposed algorithm partitions the observation space into an exhaustive set of disjoint objects; the pixels belonging to each object are then characterized by object-features. Illustrative examples are presented, and the performance of the features is investigated. Results show an average compression ratio of more than 25, improved classification performance for all classes, a reduction of the CPU time required for classification by a factor of more than 25, and the extraction of some new features of the scene. 
1. INTRODUCTION 
On-line data redundancy reduction is especially important in 
data systems involving high-resolution remotely sensed image 
data, which require powerful communication, archiving, 
distribution, and scene-analysis capabilities. A complex scene is composed 
of relatively simple objects of different sizes and shapes, each 
of which contains only one class of surface cover type. 
The scene can be described by classifying the objects and 
recording their relative positions and orientation. Object-based 
scene representation can be thought of as a combined object 
detection and feature extraction process. The object extraction 
is a process of scene segmentation that extracts similar groups 
of contiguous pixels in a scene as objects according to some 
numerical measure of similarity. Intuitively, objects have two 
basic characteristics: they exhibit an internal regularity, and 
they contrast with their surroundings. 
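
As a schematic illustration of this segmentation idea, the following sketch groups contiguous, spectrally similar pixels into objects. It is not the algorithm proposed in this paper: the function name segment_by_similarity, the Euclidean distance, the fixed threshold, and the 4-neighbour connectivity are all illustrative assumptions.

    import numpy as np
    from collections import deque

    def segment_by_similarity(image, threshold=0.1):
        # image: (rows, cols, bands) multispectral array; returns an object-label map.
        rows, cols, _ = image.shape
        labels = -np.ones((rows, cols), dtype=int)
        next_label = 0
        for r in range(rows):
            for c in range(cols):
                if labels[r, c] != -1:
                    continue
                seed = image[r, c].astype(float)
                labels[r, c] = next_label
                queue = deque([(r, c)])
                while queue:
                    i, j = queue.popleft()
                    # grow the object through 4-connected neighbours that are
                    # spectrally close to the seed pixel (assumed similarity test)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and labels[ni, nj] == -1
                                and np.linalg.norm(image[ni, nj].astype(float) - seed) < threshold):
                            labels[ni, nj] = next_label
                            queue.append((ni, nj))
                next_label += 1
        return labels

An off-line region-growing pass of this kind needs the whole image in memory, whereas the on-line approach discussed in this paper processes pixels as they arrive.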
Because of irregularities due to noise, however, the objects do not 
exhibit these characteristics in an obvious sense. The ambiguity 
in the object detection process can be reduced if the spatial 
dependencies, which exist among the adjacent pixels, are 
intelligently incorporated into the decision making process. The 
proposed multispectral image compression algorithm is an on-line 
pre-processing algorithm that uses unsupervised object-feature 
extraction to represent the information in 
multispectral image data more efficiently. This algorithm 
incorporates spectral and contextual information into the object- 
feature extraction scheme. The algorithm uses local spectral- 
spatial features to describe the characteristics of objects in the 
scene. Examples of such features are size, shape, location, and 
spectral features of the objects. The local spatial features (e.g., 
size, shape, location, and orientation of the object in the scene) of 
the objects are represented by a so-called spatial-feature-map; 
the spectral features of an object are represented by a d- 
dimensional vector. The technique is based on the fundamental 
assumption that the scene is segmented into objects such that all 
samples (pixels) from an object are members of the same class; 
hence, the scene's objects can each be represented by a single 
suitably chosen feature set. Typically, the size and shape of 
objects in the scene vary randomly while the sampling rate, and 
therefore the pixel size, is fixed; it is therefore reasonable to assume 
that the sample data (pixels) from a simple object share a common 
characteristic. A complex scene consists of simple objects; any 
scene can thus be described by classifying the objects in terms 
of their features and by recording the relative position and 
orientation of the objects in the scene. 
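
To make this representation concrete, the following sketch characterises each object by the mean spectrum of its pixels. This is an illustrative assumption, not the feature set defined by the proposed method; together with the spatial-feature-map (one object label per pixel) it replaces the pixel-oriented representation, and a rough compression ratio can be read off from the two storage costs.

    import numpy as np

    def object_spectral_features(image, labels):
        # Mean spectrum of each object; together with `labels` (the spatial-feature-map)
        # this replaces the original pixel-oriented representation.
        d = image.shape[-1]
        n_objects = int(labels.max()) + 1
        features = np.zeros((n_objects, d))
        for k in range(n_objects):
            features[k] = image[labels == k].mean(axis=0)
        return features

    def compression_ratio(image, labels):
        # Pixel representation: rows*cols*d values; object representation:
        # one label per pixel plus d values per object.
        rows, cols, d = image.shape
        n_objects = int(labels.max()) + 1
        return (rows * cols * d) / (rows * cols + n_objects * d)

The fewer and larger the objects relative to the number of spectral bands, the closer the ratio approaches the band count d.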
We introduce the basic components that make up the structures 
of an analytical model for scene representation in an efficient 
measure space. This process is carried out through a specific 
feature extraction method which maps the original data (pixel 
observation) into an efficient feature space, called the object- 
feature-space. This method utilizes a new technique based on a 
so-called unity relation which must exist among the pixels 
within an object. The unity relation among the pixels of an 
object is defined with regard to an adjacency relation, spectral 
features, and spatial features in an object. The technique must 
detect objects in real-time and represent them by means of an 
object-feature. The unity relation, for on-line object-feature 
extraction, can be realized by the path-hypothesis. The path- 
hypothesis is based on the fundamental assumption that pixels 
from an object are sequentially connected to each other by a 
well-defined relationship in the observation space, where the 
spatial variation between two consecutive points in the path 
follows a special rule. By employing the path-hypothesis and 
using an appropriate metric for similarity measure, the scene 
can be segmented into objects. 
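
A minimal single-pass sketch of this idea, restricted to one scan line, is given below. The Euclidean test between consecutive pixels and the fixed threshold stand in for the unity relation and the similarity metric of the proposed method, which are not reproduced here.

    import numpy as np

    def online_line_segmentation(line, threshold=0.1):
        # line: (n_pixels, d) spectra of one scan line, processed on the fly.
        line = np.asarray(line, dtype=float)
        objects = []                      # (start_index, running_mean_spectrum, pixel_count)
        start, mean, count = 0, line[0].copy(), 1
        for i in range(1, len(line)):
            if np.linalg.norm(line[i] - line[i - 1]) < threshold:
                # consecutive pixels satisfy the (assumed) path condition:
                # extend the current object and update its running mean
                count += 1
                mean += (line[i] - mean) / count
            else:
                # path broken: close the current object and start a new one
                objects.append((start, mean, count))
                start, mean, count = i, line[i].copy(), 1
        objects.append((start, mean, count))
        return objects

Because each pixel is examined once and only the running statistics of the current object are kept, a scheme of this form can operate on-board as the data stream arrives.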