100000. When the size of the map was 11x11 processing elements, the number of input vectors was 70000 or 140000. When the size of the map was 19x19 processing elements, the number of input vectors was 400000.
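The text gives only the map sizes and the number of input vectors, not the training procedure itself. As a rough illustration of the kind of update involved, the following Python sketch trains a small self-organizing map on a grid of processing elements; the function name, the decay schedules and the random initialization are assumptions for illustration, not the authors' implementation (the default grid size and iteration count merely mirror the 11x11 / 70000 case quoted above).

```python
import numpy as np

def train_som(data, rows=11, cols=11, n_iter=70000, lr0=0.5, sigma0=None, seed=0):
    """Minimal SOM: a rows x cols grid of processing elements, each holding a
    weight vector of the same dimension as the input vectors."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    if sigma0 is None:
        sigma0 = max(rows, cols) / 2.0            # initial neighbourhood radius
    weights = rng.uniform(data.min(0), data.max(0), size=(rows, cols, d))
    # grid coordinates of every processing element, used for the neighbourhood
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]         # one input vector per step
        dist = np.linalg.norm(weights - x, axis=-1)
        winner = np.unravel_index(dist.argmin(), dist.shape)   # best-matching unit
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)                   # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3      # shrinking neighbourhood radius
        g = np.exp(-np.sum((grid - np.array(winner)) ** 2, axis=-1)
                   / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights
```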
The parameter of the classifier was the number of neighbouring samples, k, used to calculate the value of the conditional density function of the class. The bias of the density estimate depends on the value of k, and the value must be determined experimentally. In these experiments the value of k varied between 1 and 10.
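The two error estimates quoted in the results below, RES (resubstitution, classifying the training samples with the full training set) and LOO (leave-one-out, classifying each sample with itself removed), can be sketched as follows. This is a plain k-nearest-neighbour majority rule standing in for the conditional density estimate described above, and all function names are illustrative only.

```python
import numpy as np

def knn_classify(train_x, train_y, x, k, exclude=None):
    """Assign x to the majority class among its k nearest training samples.
    `exclude` optionally drops one training index, which gives leave-one-out."""
    d = np.linalg.norm(train_x - x, axis=1)
    if exclude is not None:
        d[exclude] = np.inf
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[counts.argmax()]

def res_and_loo_error(train_x, train_y, k):
    """RES: classify every training sample against the full training set.
       LOO: classify every training sample with itself left out."""
    n = len(train_x)
    res = np.mean([knn_classify(train_x, train_y, train_x[i], k) != train_y[i]
                   for i in range(n)])
    loo = np.mean([knn_classify(train_x, train_y, train_x[i], k, exclude=i)
                   != train_y[i] for i in range(n)])
    return res, loo
```

Because each training sample is its own nearest neighbour, RES is optimistically biased, while LOO tends to be pessimistic; the spread between the two brackets the true error, which is the pattern visible in the figures below.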
6. RESULTS 
The experiments consisted of about 6000 individual test runs. This is a short overview of the results.
6.1 Dataset II 
The first feature extraction method used was the Karhunen-Loève transformation. The classification error was about 33% when dataset II with 40 samples per class (N=5, d=8) was classified. In this case the RES error varied between 20% - 30% (deviation 4.5% - 5.5%) with k varying from 2 to 10, and the LOO error varied between 39% - 36% (deviation 7% - 8%) with k varying from 2 to 10. When the number of samples per class was increased (N=10), the classification error was about 34%, the RES error varied between 21% - 31% (deviation 3.5% - 5%) and the LOO error varied between 40% - 36% (deviation 6% - 7.5%). When the number of samples per class was increased again (N=100), the classification error decreased to about 28%, the RES error varied between 19% - 28% (deviation 1% - 1.3%) and the LOO error varied between 35% - 30% (deviation 1.6% - 1.9%). The two transformed features contained on average 36.5% of the original information (the percentage of the two largest eigenvalues of all eigenvalues) when N=5, 33.7% when N=10 and 29.1% when N=100.
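The "percentage of the two largest eigenvalues of all eigenvalues" quoted above is the share of total variance retained by the two leading components of the Karhunen-Loève (principal component) transform. A minimal sketch, assuming the covariance matrix is estimated from the pooled training samples (the paper does not state whether class means are treated separately):

```python
import numpy as np

def kl_transform(samples, n_features=2):
    """Project the data onto the eigenvectors of the covariance matrix with the
    largest eigenvalues and report the retained share of the eigenvalue sum."""
    centered = samples - samples.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending order
    order = np.argsort(eigvals)[::-1]                # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    retained = eigvals[:n_features].sum() / eigvals.sum()   # e.g. ~0.365 above
    transformed = centered @ eigvecs[:, :n_features]
    return transformed, retained
```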
The results of SOM case A were independent of the number of samples presented to the algorithm and of the size of the map; the variation between the different combinations was small. When N=5, the average classification error varied between 37% - 39%, the RES error varied between 20% - 33% (deviation 4.5% - 6.5%) and the LOO error varied between 45% - 40% (deviation 7% - 10%). When the number of samples per class was increased (N=10), the average classification error varied between 34.5% - 36%, the RES error varied between 20% - 33% (deviation 2.5% - 4.8%) and the LOO error varied between 42% - 37% (deviation 4.5% - 6.5%). When the number of samples was increased again (N=100), the average classification error varied between 34% - 35%, the RES error varied between 21% - 32% (deviation 0.9% - 1.8%) and the LOO error varied between 41% - 36% (deviation 1.8% - 2.8%).
Also, the results of SOM case B were independent of the number of samples presented to the algorithm and of the size of the map; the variation between the different combinations was small. When N=5, the average classification error varied between 37% - 38.5%, the RES error varied between 20% - 35% (deviation 4% - 6.5%) and the LOO error varied between 45% - 41% (deviation 6.5% - 10%). When the number of samples per class was increased (N=10), the average classification error varied between 33.5% - 36%, the RES error varied between 21% - 33% (deviation 3% - 4.3%) and the LOO error varied between 43% - 38% (deviation 4.8% - 6.2%). When the number of samples was increased again (N=100), the average classification error varied between 32% - 33%, the RES error varied between 20% - 31% (deviation 0.5% - 1.2%) and the LOO error varied between 38% - 35% (deviation 1.5% - 2.0%).
6.2 Dataset III
Again, the first feature extraction method used was the Karhunen-Loève transformation. The classification error was about 45% when dataset III with 40 samples per class (N=5, d=8) was classified. In this case the RES error varied between 24% - 40% (deviation 4.1% - 5.1%) with k varying from 2 to 10, and the LOO error varied between 51% - 48% (deviation 6.6% - 8%) with k varying from 2 to 10. When the number of samples per class was increased (N=10), the classification error was about 44%, the RES error varied between 25% - 38% (deviation 3.2% - 4.4%) and the LOO error varied between 49% - 47% (deviation 5.2% - 6.3%). When the number of samples per class was increased again (N=100), the classification error decreased to about 43%, the RES error varied between 24% - 38% (deviation 1% - 1.3%) and the LOO error varied between 48% - 46% (deviation 1.3% - 1.8%). The two transformed features contained on average 36.4% of the original information when N=5, 33.0% when N=10 and 27.4% when N=100.
For SOM case A with N=5, the average classification error varied between 45% - 47.5%, the RES error varied between 24% - 42% (deviation 4.5% - 6.5%) and the LOO error varied between 55% - 50% (deviation 6.5% - 10%). When the number of samples per class was increased (N=10), the average classification error varied between 45% - 46%, the RES error varied between 24% - 41% (deviation 3% - 4.5%) and the LOO error varied between 52% - 49% (deviation 4.5% - 6.8%). When the number of samples was increased again (N=100), the average classification error varied between 44.5% - 45.5%, the RES error varied between 24% - 41% (deviation 1.0% - 1.5%) and the LOO error varied between 50% - 49% (deviation 1.5% - 2.2%).
For SOM case B with N=5, the mean classification error varied between 45% - 47.5%, the RES error varied between 24% - 41% (deviation 4% - 6%) and the LOO error varied between 55% - 50% (deviation 6.5% - 9%). When the number of samples per class was increased (N=10), the mean classification error varied between 45% - 46%, the RES error varied between 24% - 40% (deviation 2.5% - 4.5%) and the LOO error varied between 51% - 49% (deviation 4% - 6.9%). When the number of samples was increased again (N=100), the mean classification error varied between 43% - 44%, the RES error varied between 24% - 40% (deviation 0.8% - 1.3%) and the LOO error varied between 49% - 47% (deviation 1.5% - 2.6%).
6.3 Dataset IA 
Classification error with features extracted using 