information about the ground cover, and the classification technique/algorithm attempts to provide an appropriate label. When a pixel is assigned a single label, it is a "one-to-one" mapping; when it is assigned more than one label, with the degree of association between the pixel and each class expressed as probabilities, it is a "one-to-many" mapping. Before discussing classification methods, it is pertinent to review the evolution of sensor systems over the years.
3. EVOLUTION OF SENSOR SYSTEMS
Sensor systems have shown steady improvement in terms of spatial and spectral resolution and the associated data volumes (Fig 2, Fig 3 and Fig 4).
Figure 2. Spatial resolution evolution [10]
[Figure legend:
B. Multispectral: wide bandwidth
C. Hyperspectral: narrow bandwidth, high spectral resolution
D. Ultraspectral: very narrow bandwidth, very high spectral resolution]
Figure 3. Spectral resolution evolution [10]
[Graph: Data Volume (MB) vs Spatial Resolution (m), for an area covering 10x10 km]
Figure 4. Data Volume evolution
4. CLASSIFICATION METHODS
The process of labeling can be supervised, unsupervised, or a combination of both. A supervised labeling method requires the analyst to collect samples to "train" the classifier to determine the decision boundaries in feature space; these boundaries are significantly affected by the properties and the size of the training samples. Unsupervised classifiers, on the other hand, 'learn' the characteristics of each class directly from the input data.
The classification approaches can be characterized by the following dichotomies:

Supervised vs Unsupervised
Parametric vs Non-parametric
Fuzzy vs Crisp
Assumed probability distribution methods vs Neural network methods
Knowledge based vs Purely data oriented

Table 1. Dichotomies of classification
IAPRS & SIS, Vol.34, Part 7, “Resource and Environmental Monitoring”, Hyderabad, India,2002
5. MAXIMUM LIKELIHOOD CLASSIFIER
Theoretically the classification problem is that of estimating the a posteriori probability p(ωi | x), where x is the unknown pixel value and ωi represents class i. However, in the absence of knowledge of the a priori probabilities, the likelihood function p(x | ωi) itself is used. Hence a major problem with this classifier is the estimation of the a priori probabilities, which can be derived either from contextual information or from multi-temporal data. Other problems associated with this classifier are the lack of adequate training samples when a large number of classes and bands are present, the overlap between classes, the presence of mixed pixels, and the fact that in real life class boundaries in feature space may be highly complex and cannot be described by Gaussian probability distributions.
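Under the usual Gaussian assumption, the maximum likelihood rule assigns a pixel x to the class ωi that maximizes p(x | ωi), weighted by the prior when one is available. A minimal sketch, with synthetic two-band training samples and hypothetical class names standing in for analyst-collected data:

```python
import numpy as np

def train_gaussian(samples_per_class):
    """Estimate a mean vector and covariance matrix per class
    from its training samples (rows = pixels, cols = bands)."""
    return [(X.mean(axis=0), np.cov(X, rowvar=False))
            for X in samples_per_class]

def classify(x, params, priors=None):
    """Assign pixel x to the class with the largest Gaussian
    log-likelihood log p(x | wi) (+ log prior when known)."""
    n = len(params)
    priors = priors if priors is not None else [1.0 / n] * n
    scores = []
    for (mu, cov), p in zip(params, priors):
        d = x - mu
        _, logdet = np.linalg.slogdet(cov)
        # log-likelihood up to a constant: -0.5 * (d' C^-1 d + log|C|)
        scores.append(-0.5 * (d @ np.linalg.inv(cov) @ d + logdet) + np.log(p))
    return int(np.argmax(scores))

# hypothetical band means for two ground-cover classes
rng = np.random.default_rng(0)
water  = rng.normal([20, 10], 2.0, size=(100, 2))
forest = rng.normal([60, 80], 2.0, size=(100, 2))
params = train_gaussian([water, forest])
print(classify(np.array([21.0, 11.0]), params))  # -> 0 (water)
```

With uniform priors this reduces to the pure likelihood rule described above; supplying contextual or multi-temporal priors simply adds the log-prior term.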
6. ISODATA
ISODATA algorithm is a migrating means cluster algorithm,
widely used for automatic image segmentation. This is an
unsupervised statistical approach. The analyst has to label the
clusters identified by the algorithm. Although widely used, the
difficulty is, analyst has to estimate, the initial number of
clusters present in the data. If the initial number of clusters is
too small, some significant clusters may go unidentified; if the
number is too large clusters have to be merged. Generally later
is preferred by analysts.
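The migrating-means idea, together with the merge step used when the initial cluster count is too large, can be sketched as follows. This is a simplified illustration, not the full ISODATA algorithm (which also splits elongated clusters and deletes very small ones); all parameters are hypothetical:

```python
import numpy as np

def isodata_sketch(X, k, merge_dist=5.0, iters=20):
    """Simplified migrating-means clustering in the spirit of ISODATA:
    iterative reassignment to the nearest cluster mean, plus a merge
    step for clusters whose means lie closer than merge_dist."""
    rng = np.random.default_rng(1)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest cluster mean
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # migrate the means: recompute each non-empty cluster's centroid
        centers = np.array([X[labels == c].mean(axis=0)
                            for c in np.unique(labels)])
        # merge clusters whose means are closer than merge_dist
        kept = []
        for c in centers:
            if all(np.linalg.norm(c - m) >= merge_dist for m in kept):
                kept.append(c)
        centers = np.array(kept)
    # final labeling against the surviving cluster means
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return centers, dists.argmin(axis=1)

# two well-separated synthetic clusters of 2-band pixels
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 1.0, (50, 2)),
               rng.normal([20, 20], 1.0, (50, 2))])
centers, labels = isodata_sketch(X, k=4)  # deliberately too many clusters
print(len(centers))
```

Starting with k too large and letting the merge step collapse redundant clusters mirrors the practice, noted above, that analysts generally prefer over-estimating the initial cluster count.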
7. KNOWLEDGE BASED METHODS
The methods mentioned above are statistical in nature and depend on the user's inputs in the form of training sets or labeling of clusters. The knowledge of the user is embedded either in the training sets or in the labeling of the clusters, and is used in conjunction with statistical measures to perform the classification.
The knowledge based method attempts to incorporate the knowledge of the user in the form of heuristic rules. The hierarchical decision tree method is the most general type of knowledge-based classifier. A hierarchical decision tree classifier is based on the premise that an unknown pattern can be labeled using a sequence of decisions. A decision tree is composed of three basic elements: a terminal node, or hypothesis, representing the final classification; an interior node, or rule, representing a set of conditions to satisfy the hypothesis; and a root node, or initial conditions. The advantage of the tree classifier lies in the flexibility of defining conditions. The classification methods mentioned above rely solely on spectral characteristics, whereas "conditions" in a tree classifier can include ancillary data like DEM, soil map, etc. along with multispectral data. Figure 5 illustrates the decision tree classifier:
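Such a hierarchy of rules can be sketched directly as nested conditions. The band names, thresholds, and DEM test below are hypothetical illustrations, not values from the paper; the point is that interior nodes (rules) may mix spectral and ancillary conditions, while terminal nodes return class hypotheses:

```python
def label_pixel(ndvi, nir, elevation):
    """Toy hierarchical decision tree: interior nodes are rules
    (conditions on spectral and ancillary data), terminal nodes
    are class hypotheses. All thresholds are hypothetical."""
    if ndvi < 0.1:            # rule: low vegetation index
        if nir < 0.05:        # rule: low near-infrared reflectance
            return "water"
        return "bare soil"
    # vegetated branch: a condition on ancillary DEM data
    if elevation > 2000:      # metres, taken from a DEM layer
        return "alpine vegetation"
    return "forest"

print(label_pixel(ndvi=0.05, nir=0.02, elevation=300))  # -> water
```

Adding a new ancillary source (e.g. a soil map) only requires inserting another condition, which is the flexibility noted above.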