[Passage garbled in extraction: description of the AVIRIS Indian Pines experiment (16 labels, 220 spectral bands, K = 5, 10, 15, 20 training samples per class, with the reduced dimension varied for PCA, NPE, and SRDA) and part of Fig.2, whose legend lists NPE, PCA, and SRDA.]
[Figure panels: (c) 15 Training Samples, (d) 20 Training Samples; y-axis: Recognition rate, x-axis: Dimension]
Fig.2 Curves of recognition rate vs. dimension for different numbers of training samples, for each approach
Tab.2 The maximum recognition rates (%) on AVIRIS Indian Pines

Training Size    NPE      PCA      SRDA
5x220            33.55    51.16    50.79
10x220           41.79    54.47    53.14
15x220           36.96    56.02    55.27
20x220           31.54    56.22    56.31
From Fig.2 and Tab.2 we can again observe that SRDA achieves a higher recognition rate than PCA and NPE; in particular, when the number of training samples increases to 20, the recognition rate of SRDA is the highest. Moreover, as the number of training samples grows, the recognition rate increases for PCA and SRDA, but not for NPE; this behavior is related to the nearest-neighbor classifier used for evaluation.
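The recognition rates in Fig.2 and Tab.2 are obtained by classifying the reduced features with a 5-nearest-neighbor classifier. As a minimal sketch of this protocol (assuming scikit-learn and NumPy; PCA stands in for the reduction step, and the data below are synthetic placeholders rather than the actual AVIRIS pixels):

# Reduce dimension, then classify with 5-NN and report the recognition rate.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_classes, n_bands = 16, 220                 # Indian Pines: 16 labels, 220 bands
X = rng.normal(size=(n_classes * 40, n_bands))
y = np.repeat(np.arange(n_classes), 40)

# Per class, the first 20 samples train the model; the rest are for testing.
train = np.concatenate([np.flatnonzero(y == c)[:20] for c in range(n_classes)])
test = np.setdiff1d(np.arange(len(y)), train)

for dim in (10, 20, 30, 40):                 # sweep the reduced dimension, as in Fig.2
    pca = PCA(n_components=dim).fit(X[train])
    knn = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(X[train]), y[train])
    print(f"dim={dim}: recognition rate = {knn.score(pca.transform(X[test]), y[test]):.4f}")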
3.3 Discussion
The experiments on Washington DC Mall and AVIRIS Indian Pines highlight several significant points.
1) All of the methods discussed in this paper achieve higher classification accuracy as the number of training samples increases, except NPE when it is used with the 5-nearest-neighbor classifier.
2) NPE, KPCA, and PCA all involve the eigen-decomposition of dense matrices, which is computationally expensive, whereas SRDA only needs to solve c-1 regularized least-squares problems, which is much more efficient; here c denotes the number of classes (see the sketch after this list).
3) When the number of training samples is small, not all of the methods can reach the same reduced dimension, so we have to adjust the dimensions to meet the demands of the experiments.
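To make point 2 concrete, the following is a minimal NumPy sketch of the regularized least-squares step in the usual spectral-regression recipe (our own illustration, not the authors' exact implementation): the c class-indicator vectors are orthogonalized against the constant vector, leaving c-1 responses, and one ridge problem is solved per response.

import numpy as np

def srda_directions(X, y, alpha=0.01):
    """X: (n_samples, n_features) training data; y: integer class labels."""
    n, d = X.shape
    classes = np.unique(y)
    c = len(classes)
    # QR of [ones, class indicators]: columns 1..c-1 of Q are c-1 orthonormal
    # responses lying in the indicator span and orthogonal to the ones vector.
    T = np.column_stack([np.ones(n)] + [(y == k).astype(float) for k in classes])
    Q, _ = np.linalg.qr(T)
    Y = Q[:, 1:c]
    # One ridge (regularized least-squares) solve per response; centering the
    # data absorbs the bias term. For large sparse data, iterative solvers
    # such as LSQR can replace this direct solve.
    Xc = X - X.mean(axis=0)
    A = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(d), Xc.T @ Y)
    return A                                  # (n_features, c-1) projection

# Usage: Z = (X_new - X_train.mean(0)) @ srda_directions(X_train, y_train)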
4. CONCLUSIONS
In this paper, we developed an efficient and useful approach to dimension reduction called Spectral Regression Discriminant Analysis (SRDA). This method avoids the difficulty of eigen-decomposition and casts the problem of learning an embedding function into a regression framework, which saves a great deal of time and memory and allows SRDA to conduct discriminant analysis on large-scale, high-dimensional data. The experiments show that SRDA achieves a higher recognition rate than other methods such as NPE, LDA, PCA and KPCA.
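For reference, the regression problem behind this framework can be written, in the standard spectral-regression formulation (our notation, which may differ in detail from the implementation used here), as

a_k = \arg\min_{a} \sum_{i=1}^{m} \left( a^{\top} x_i - y_i^{(k)} \right)^2 + \alpha \lVert a \rVert^{2}, \qquad k = 1, \ldots, c-1,

where the x_i are the m training samples, y^{(k)} is the k-th response vector produced by the spectral-regression step, and \alpha > 0 is the regularization parameter.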
REFERENCES
[1] R. Dianat, S. Kasaei, 2010. Dimension reduction of remote sensing images by incorporating spatial and spectral properties. Int. J. Electron. Commun. (AEU), 64, pp. 729-732.
[2] M. Belkin, P. Niyogi, 2001. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems 14, pp. 585-591.
[3] I. T. Jolliffe, 2002. Principal Component Analysis, Springer, Berlin, pp. 150-166.
[4] R. O. Duda, P. E. Hart, D. G. Stork, 2000. Pattern
Classification, 2nd edition, Wiley-Interscience, NY, pp.
214-281.
[5] M. Brand, 2003. Continuous nonlinear
dimensionality reduction by kernel eigenmaps, In
International Joint Conference on Artificial Intelligence.
[6] X. F. He, D. Cai, S. C. Yan and H. J. Zhang, 2005.
Neighborhood preserving embedding, In Proceedings of
IEEE 10th International Conference on Computer Vision,
Beijing, China, pp. 1208-1213.
[7] Y. X. Li, X. G. Ruan, 2005. Cancer subtype recognition and feature selection with gene expression profiles. Acta Electronica Sinica, 33(4), pp. 651-655.
[8] S. Roweis, L. Saul, 2000. Nonlinear dimensionality
reduction by locally linear embedding. Science,
290(5500), pp. 2323-2326.
[9] J. Tenenbaum, V. de Silva, J. Langford, 2000. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500), pp. 2319-2323.