ROBUST MIXED PIXEL CLASSIFICATION IN REMOTE SENSING 
P Bosdogianni, M Petrou and J Kittler 
University of Surrey, Dept. of Electronic and Electrical Engineering, 
Guildford GU2 5XH, United Kingdom 
E-mails: {p.bosdogianni, m.petrou, j.kittler}@ee.surrey.ac.uk 
Commission VII, Working Group 6 
KEY WORDS: Hough, unmixing, classification, multispectral, satellite, Landsat, simulation 
ABSTRACT 
In this paper we present a novel method for mixed pixel classification where the classification of groups of mixed pixels is 
achieved by using robust statistics. The method is demonstrated using simulated data and is also applied to real Landsat TM 
data for which ground data are available. 
1 INTRODUCTION 
The problem of mixed pixel classification is a major issue in Remote Sensing and Geography and many approaches have been developed to deal with it [Adams et al., 1986, Foody et al., 1993, Lennington et al., 1984, Li et al., 1985, Marsh et al., 1980, Settle et al., 1993, Smith et al., 1990]. In the past we addressed the problem of mixed pixel classification when whole regions of mixed pixels have to be classified, by treating the distribution of pixels in each region as a random distribution [Bosdogianni et al., 1994]. In this work we address the same problem, but in a way that remains applicable in cases where our previous approach is unreliable, namely when outliers are present.
The motivation for our work is to monitor burned forests for a few years after the fire so that the regeneration processes can be evaluated. In particular, we are interested in assessing the danger of desertification ensuing at the site of a burned forest in the Mediterranean region. If the forest does not show signs of recovery a couple of years after the fire, it probably has to be artificially re-forested to prevent further erosion. Quite often, different types of vegetation grow in a burned region, and this new vegetation usually represents a deterioration in the quality of the flora of the region. The main type of forest common in the Mediterranean region consists of Aleppo pine (Pinus halepensis). Thus, for the purpose of our work, we are interested in assessing the degree of presence of three classes in a region: Aleppo pine, bare soil and other vegetation, using Landsat TM images.
There is a major problem, however, when one deals with real 
data: The data tend to be very noisy and inaccurate. The 
statistics computed from them tend to be distorted and it 
is difficult to obtain consistent results. Thus, a more robust 
way of solving the problem is needed. 
2 THE PROPOSED METHOD 
In the linear mixing model adopted here, it is assumed that the pixel value in any spectral band is given by the linear combination of the spectral responses of each component within the pixel, so the model can be expressed as:

w = ax + by + cz     (1)

where w is the known spectral reflectance of a mixed pixel, x, y and z are the known spectral reflectances of the three
possible cover components within the mixed pixel and a, b 
and c are the proportions for each component contained in 
the mixed pixel that have to be estimated. 
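As a purely illustrative instance of equation (1), using hypothetical numbers rather than values taken from our data: if in one band the three components had reflectances x = 30, y = 10 and z = 5, and the true proportions were a = 0.5, b = 0.3 and c = 0.2, the model would predict a mixed pixel value of w = 0.5 x 30 + 0.3 x 10 + 0.2 x 5 = 19.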
If we consider again the linear mixing equation mentioned 
above, we see that it actually is the equation of a hyper-plane 
in luminance space where we measure one type of luminance 
along each axis. What we are interested in identifying are 
the parameters a, b and c for this plane. The method usually 
used for this purpose is that of least squares fitting. It is 
well known, however, that the method of least squares is 
particularly sensitive to outliers. What we propose in this paper is the use of the Hough transform to identify the best values of a, b and c. The Hough transform is known to be a robust technique which can tolerate a large proportion of outliers and still produce good results. In its most common form it is used to identify straight lines in images, but more generally the Hough transform can be thought of as a transformation into
the parametric domain where we seek to identify sets of real 
data that indicate the same values of the parameters for the 
parametric hyper-surface they define. 
In our case this hyper-surface is a plane defined in the 3D domain (x, y, z), which is parameterised by different values of w. Thus, our method consists of the definition of an accumulator 3D array defined in the parametric (a, b, c) domain. For each quadruple (x, y, z, w) we have a different plane defined in the (a, b, c) domain. This plane intersects various cells of the accumulator array, the occupancy number of which is incremented by 1. When all possible quadruples of the data have been considered, the highest peak in the accumulator array defines the best values of the mixing parameters a, b and c. In reality, of course, the problem is even simpler than that, because we know that the values of these parameters have to sum up to 1, so we can substitute c = 1 - a - b and express the model in terms of the other two parameters only; the linear model of equation (1) then becomes:
w = z + (x - z)a + (y - z)b     (2)
Then our accumulator array is only 2D and it can be sampled with sufficiently high accuracy. The next step in our optimisation procedure is to estimate the bin size in the accumulator space for the two parameters a and b. In our applications we do not really need to know the values of a and b to better than two significant figures of accuracy, so the size of our accumulator array need not be larger than 100 x 100, and generally it will be a lot smaller.
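As an illustration of this voting scheme, a minimal Python sketch is given below. It assumes a list of data quadruples (x, y, z, w) and votes directly in a 2D (a, b) accumulator using equation (2); the function name, bin discretisation and the skipping of degenerate quadruples are illustrative choices rather than details of the actual implementation.

import numpy as np

def hough_unmix(quadruples, n_bins=100):
    """Estimate the mixing proportions (a, b, c) by Hough-style voting.

    quadruples : iterable of (x, y, z, w) tuples, where x, y and z are the
                 component reflectances and w is the observed mixed-pixel
                 reflectance in the same band.
    Each quadruple defines a line in the (a, b) plane through
    w = z + (x - z)a + (y - z)b, and casts one vote in every accumulator
    cell that this line crosses; c then follows from c = 1 - a - b.
    """
    acc = np.zeros((n_bins, n_bins), dtype=int)
    a_vals = np.linspace(0.0, 1.0, n_bins)       # two significant figures suffice

    for x, y, z, w in quadruples:
        if abs(y - z) < 1e-9:                    # degenerate quadruple, skipped for brevity
            continue
        for i, a in enumerate(a_vals):
            b = (w - z - (x - z) * a) / (y - z)  # solve the line for b at this value of a
            if 0.0 <= b <= 1.0 - a:              # proportions must stay within [0, 1]
                j = int(round(b * (n_bins - 1)))
                acc[i, j] += 1                   # one vote for this (a, b) cell

    # the highest peak gives the most supported proportions
    i_best, j_best = np.unravel_index(np.argmax(acc), acc.shape)
    a_best = a_vals[i_best]
    b_best = j_best / (n_bins - 1)
    return a_best, b_best, 1.0 - a_best - b_best

Each available quadruple simply casts one set of votes, so outlying pixels contribute only scattered votes away from the peak, which is what gives the method its robustness.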