Any decision that can, in theory, be achieved with d features can also be achieved with d+1 features, but the converse is not true. Thus one might expect that by increasing the number of features the object recognition error rate should decrease, or at least stay the same; in practice, however, the performance of the features quite often improves up to a point and then begins to deteriorate as further attributes are added. This is referred to as the Hughes phenomenon (Hughes, 1968).
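This behaviour is easy to reproduce on synthetic data. The short Python sketch below is illustrative only and is not part of the experiments reported here; the data generator, the Gaussian naive Bayes classifier and the training-set size of 40 samples are arbitrary choices made for the demonstration.

from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Synthetic scene: 8 informative spectral/contextual attributes, the rest noise.
X, y = make_classification(n_samples=2000, n_features=60, n_informative=8,
                           n_redundant=0, n_clusters_per_class=1,
                           class_sep=0.8, shuffle=False, random_state=0)
# Keep the training set deliberately small so that added features eventually hurt.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=40, random_state=0)

for d in (2, 4, 8, 16, 32, 60):          # growing number of features
    clf = GaussianNB().fit(X_tr[:, :d], y_tr)
    print(f"{d:2d} features -> test accuracy {clf.score(X_te[:, :d], y_te):.3f}")

With such a small training set the test accuracy typically peaks near the number of truly informative features and then degrades as the remaining, noisy attributes are added; the exact numbers depend on the data and the classifier.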
The existence of an optimal set of features for representing the objects is thus indicated, relative to the feature selection and feature reliability problem. An
object can be described by a set of parametric primitives. Such 
primitives may be based on observation as well as knowledge 
about the object. Typically in remote sensing the important 
primitives, for recognition of an object, are spectral and/or contextual features. But since it is usually presumed that the
shape and the size of natural objects in a scene (ground cover 
types) are random and unrelated to the ground cover classes, 
these features are often ignored in feature extraction and pattern 
recognition of the ground cover types. However, in this work, the objects' geographical features are preserved in the spatial-feature-map L and can be used by an appropriate pattern recognition system if necessary. It is assumed that two
adjacent objects differ in a measurable way relative to the 
spectral or contextual features. In this system, a set of points 
representing similar patterns are represented with the same 
features. Thus the attributes of P can be refined by observation and are given by a set of three parametric primitives:
Y = (S, V, L)        (1)
where S is the estimated within-object spectral feature representation, V is the estimated contextual feature, and L is the spatial-feature-map, i.e. the object's geographical shape and location in the scene. Let n be the number of pixels in the object P and L be the corresponding spatial-feature-map; then the
object spectral feature S is estimated by averaging the spectral 
response of pixels within the object P. Then the contextual 
feature, V, is estimated by averaging the spectral variation of 
pixels within the object P. 
S_i = (1/n) Σ_{k ∈ L_i} X_k        (2)
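A minimal implementation sketch of these two estimates is given below. It assumes the image is stored as a (rows, cols, bands) NumPy array and that the spatial-feature-map of the object is available as a boolean mask; since the exact form of the within-object spectral variation is not written out above, absolute differences between horizontally and vertically adjacent object pixels are used here as one plausible choice.

import numpy as np

def object_features(img, mask):
    """Estimate the spectral (S) and contextual (V) primitives of one object.

    img  : (rows, cols, bands) multispectral image
    mask : (rows, cols) boolean spatial-feature-map of the object
    """
    pixels = img[mask]                    # (n, bands) spectral responses of object pixels
    S = pixels.mean(axis=0)               # eq. (2): within-object spectral mean

    # Contextual feature: average spectral variation between adjacent pixels
    # that both lie inside the object (horizontal and vertical neighbours;
    # diagonal and other directions could be added in the same way).
    diffs = []
    for axis in (0, 1):
        d = np.abs(np.diff(img, axis=axis))
        both_in = (mask[:-1, :] & mask[1:, :]) if axis == 0 else (mask[:, :-1] & mask[:, 1:])
        if both_in.any():
            diffs.append(d[both_in])
    V = np.concatenate(diffs).mean(axis=0) if diffs else np.zeros_like(S)
    return S, V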
Notice that the spatial variation can be horizontal, vertical, 
diagonal, and any other possible spatial direction. The objects 
with small area, whose number of pixels within the object is not 
sufficient for contextual feature estimation, will be represented 
only by the spectral feature. This is done by adjusting the 
degree of uncertainty in the feature extraction process: the 
uncertainty about the feature is inversely dependent on the 
number of pixels that are contained within the object P. 
Although the contextual feature is dependent on the sensor 
resolution as well as the sensor altitude from the scene, the 
intra-object spatial variation between adjacent pixels can be a
significant factor for on-line object extraction. A metric for 
testing the unity relationship between the pixel-feature X_k and the object-feature Y_i is introduced. This metric normalizes the
spectral distance by their spectral gradient vector: 
d(X_k, Y_i) = (S_i − X_k) / (αV_i + βV_x)        (3)

where α = w·n_i / (n_i + w) and β = w² / (n_i + w); n_i is the number of pixels in the current object, w is the size of the observation window, V_x is the local spectral variation around the pixel X_k estimated within the observation window, and S_i, V_i, X_k and Y_i are the same vectors as defined before.
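A minimal sketch of this metric, under the reconstruction of equation (3) given above, is shown below. The local variation V_x is passed in as a precomputed value (for instance, the mean absolute spectral difference between the pixel and its neighbours inside the observation window), and the acceptance threshold of the unity test is a free parameter; both are illustrative choices rather than prescriptions of the original method. Note that the weights α and β also realize the uncertainty adjustment described earlier: for small objects the window-based term dominates, while for large objects the within-object variation dominates.

import numpy as np

def unity_distance(X_k, S_i, V_i, V_x, n_i, w):
    """Normalized spectral distance of eq. (3) between a pixel and an object.

    X_k : spectral vector of the candidate pixel
    S_i : estimated object spectral feature (mean of object pixels)
    V_i : estimated object contextual feature (within-object variation)
    V_x : local spectral variation around the pixel (window-based estimate)
    n_i : number of pixels currently assigned to the object
    w   : size of the observation window
    """
    alpha = (w * n_i) / (n_i + w)          # object term grows with object size
    beta = (w * w) / (n_i + w)             # window term dominates for small objects
    eps = 1e-12                            # guard against division by zero
    return np.abs(S_i - X_k) / (alpha * V_i + beta * V_x + eps)

# Illustrative unity test: accept the pixel into the object if the normalized
# distance is small in every band (the threshold is a free parameter).
def belongs_to_object(X_k, S_i, V_i, V_x, n_i, w, threshold=1.0):
    return bool(np.all(unity_distance(X_k, S_i, V_i, V_x, n_i, w) < threshold))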
4. FEATURE EVALUATION 
The performance of a feature extraction process is measured in 
terms of the information-bearing quality of the features versus 
the size of the data set. Classification accuracy is an important 
quantitative measure of feature quality in applications where the 
data is automatically interpreted. Comparative performance results are reported for the various feature configurations, contrasting the original pixel-features X with the compacted object-features Y. The
features’ reliability and quality are measured in terms of overall 
misplacement error in the scene (OME), feature classification 
performance (FCP), and subjective object appearance (SOA).
The first evaluation is a simple quantitative criterion which has 
a conventional mathematical form to measure the number of 
pixels assigned to an incorrect neighbouring object based on the 
object classification, relative to the total number of pixels in the 
scene (overall misplacement error). Let GTM represent the ground-truth-map of the original data, and let CPM represent the classification-pixel-map resulting from the feature classification. Then the overall misplacement error can be computed by comparing the CPM with the GTM. The feature classification
performance (FCP) measures the number of pixels classified 
into the correct class relative to the total number of pixels in 
that particular class. This criterion is used to evaluate the 
object-feature performance when the effects of classifier 
decision rule and training samples on the class feature 
performance should be considered. Good ground-truth information is a very important element in feature evaluation, since it minimizes errors unrelated to the feature extraction itself.
However, obtaining a valid ground-truth-map (GTM) and 
registering the multispectral image data with this map is often 
costly and very time consuming. Thus among the available real 
data those subsets which have a relatively reliable ground-truth- 
map should be selected and used for the OME and FCP feature 
evaluations. 
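For concreteness, the two quantitative criteria can be computed directly from the two label maps. The sketch below assumes that GTM and CPM are stored as integer-labelled NumPy arrays of the same shape and that unlabelled pixels, if any, are marked with negative values; these storage conventions are assumptions made here, not part of the original evaluation.

import numpy as np

def overall_misplacement_error(gtm, cpm):
    """OME: fraction of pixels assigned to an incorrect class, over the whole scene."""
    valid = gtm >= 0                       # negative labels are treated as unlabelled
    return np.count_nonzero(cpm[valid] != gtm[valid]) / np.count_nonzero(valid)

def feature_classification_performance(gtm, cpm):
    """FCP: per-class fraction of pixels classified into the correct class."""
    fcp = {}
    for cls in np.unique(gtm[gtm >= 0]):
        in_class = gtm == cls
        fcp[int(cls)] = np.count_nonzero(cpm[in_class] == cls) / np.count_nonzero(in_class)
    return fcp

# Toy 2 x 3 scene with three classes:
gtm = np.array([[0, 0, 1], [1, 2, 2]])
cpm = np.array([[0, 1, 1], [1, 2, 0]])
print(overall_misplacement_error(gtm, cpm))           # 2 of 6 pixels misplaced -> 0.333...
print(feature_classification_performance(gtm, cpm))   # {0: 0.5, 1: 1.0, 2: 0.5}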
The subjective appearance is an appropriate criterion when the 
ground-truth-map is not accurate enough to be used by other 
feature evaluators, or when some objects in the scene are more 
important than the others regardless of the size of the objects. In 
such cases it is often too difficult to define a mathematical 
expression for a feature quality adequate for quantitative 
evaluation. In this case visual assessment will be used for this 
kind of qualification. This criterion is used to evaluate the 
spatial quality of the spatial-feature-map and to predict whether more information about the scene could be obtained by using more complex features extracted from the training samples. In other
words, by incorporating the object appearance in the spatial- 
feature-map into the feature selection strategy, more complex 
objects in the scene can be detected. For example, significant within-class variation indicates that more information
about the complex objects (perhaps soil type covered by 
vegetation) in the scene might be extracted by using even more 
complex features. 
The proposed feature extraction technique is applied to several sets of image data. As previously stated, the objective of this
experiment is to demonstrate the validity of the unity 
relationship and the path-hypothesis, and to show that the 
performance of the object-features is better than that of the pixel-features, regardless of the choice of classification decision rule and training set. To establish the unity relation, the
system learns about the functional coefficients simultaneously 
with the data acquisition process by measuring the object 
spectral gradient, which is then normalized within a window.