paper named "A Mathematical Theory of Communication", in
which he presented a method to measure information based on
probability. In this paper Shannon defined entropy, a concept
that originated in classical statistical physics, where it represents the
disorder of a physical system, as the measure of uncertainty,
and it is calculated as follows:
Let X be a random variable with a set of possible
choices $\{A_1, A_2, \dots, A_n\}$ with probabilities $\{P_1, P_2, \dots, P_n\}$;
then the entropy of X is

$H(X) = H(P_1, P_2, \dots, P_n) = -\sum_{i=1}^{n} P_i \ln P_i$    (1)
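To make the formula concrete, here is a minimal Python sketch of this entropy calculation (the function name and the example distribution are illustrative, not from the paper):

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy H(X) = -sum(P_i * ln(P_i)) in nats, per formula (1)."""
    return -sum(p * math.log(p) for p in probabilities if p > 0)

# A uniform distribution over four choices gives ln(4), about 1.386 nats.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))
```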
When this approach to measuring information is introduced into
quantitative map information measurement, a natural idea is
to assign every kind of symbol on the map a probability, and then
to calculate the information content of the map from those
probabilities. Sukhov (1967, 1970) did the initial work. He used
a statistical model in which each symbol type's probability is
calculated from its frequency of appearance on the map. His
method is described as follows.
Let N be the total number of symbols on the map, M the
number of symbol types and $F_i$ the number of symbols of the i-th type,
so that $N = F_1 + F_2 + \dots + F_M$. The probability of each type of
symbol can then be determined by

$P_i = F_i / N$    (2)

where $P_i$ is the probability of the i-th type.
Then the entropy can be calculated from the probabilities
defined above.
$H(X) = H(P_1, P_2, \dots, P_M) = -\sum_{i=1}^{M} P_i \ln P_i$    (3)
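A small Python sketch of this statistical measure follows, assuming the map is represented simply as a list of symbol-type labels (the representation and the function name are illustrative, not Sukhov's):

```python
import math
from collections import Counter

def sukhov_entropy(symbol_types):
    """Entropy from symbol-type frequencies, per formulas (2) and (3).

    `symbol_types` is a flat list with one type label per map symbol.
    """
    n = len(symbol_types)                      # N, total number of symbols
    counts = Counter(symbol_types)             # F_i for each of the M types
    return -sum((f / n) * math.log(f / n) for f in counts.values())
```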
This method first introduced information theory into map
information measurement, yet its shortcoming is obvious: it ignores
all positional and topological information in the
map, which are very important components of the spatial
information we can get from a map. If the symbols on two maps
are scattered in different ways (e.g. Fig. 1), we undoubtedly obtain
different amounts of information from the two maps, yet the
statistical method described above fails in this situation, for it
yields the same amount of information for both.
Figure 1. Two maps with the same number of symbols but different
distributions
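As a quick illustration of this limitation, the following usage of the sukhov_entropy sketch above (the maps are hypothetical) yields identical entropy for two layouts that contain the same symbols but clearly differ spatially:

```python
# Two hypothetical maps: identical symbol counts, different positions.
# Each map is a list of (symbol_type, x, y) tuples; the statistical model
# looks only at the types, so both maps receive the same entropy value.
map_a = [("house", 1, 1), ("house", 1, 2), ("tree", 2, 1), ("tree", 2, 2)]
map_b = [("house", 0, 0), ("house", 9, 9), ("tree", 0, 9), ("tree", 9, 0)]

entropy_a = sukhov_entropy([sym for sym, _, _ in map_a])
entropy_b = sukhov_entropy([sym for sym, _, _ in map_b])
assert entropy_a == entropy_b   # the positions have no effect on the measure
```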
To take topological information into account, more methods have been
developed by researchers, and some of them deserve mention
here. Neumann (1994) proposed a method to
estimate the topological information of a map. In this method
vertices are classified according to their topological properties,
such as how many neighbors they have. After the classification,
entropy can be computed in the same way by formulas (2) and (3).
Yet in this method the classification of vertices is difficult when a
map is relatively complex, and the significance of the
classification does not correspond well to the real map.
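A minimal Python sketch of this kind of vertex classification, assuming the map topology is given as an adjacency list and that vertices are classed simply by their number of neighbors (the graph representation is an assumption for illustration, not Neumann's own data structure):

```python
import math
from collections import Counter

def degree_class_entropy(adjacency):
    """Entropy of a vertex classification by neighbor count (degree)."""
    degrees = [len(neighbors) for neighbors in adjacency.values()]
    counts = Counter(degrees)                   # vertices per degree class
    n = len(degrees)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Example: a small network with one three-way junction and three dead ends,
# giving two vertex classes (degree 1 and degree 3).
network = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b"]}
print(degree_class_entropy(network))
```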
Bjørke (1996) was not satisfied with such ways of defining
topological information, so he provided another definition of
topological information by considering the topological
arrangement of map symbols. He introduced some other
concepts, including positional entropy and metrical entropy.
'The metrical entropy of a map considers the variation of the
distance between map entities. The distance is measured
according to some metric' (Bjørke 1996). He also suggests to
'simply calculate the Euclidean distance between neighboring
map symbols and apply the distance differences rather than the
distance values themselves'. The positional entropy of a map
considers all the occurrences of the map entities as unique
events. In the special case that all the map events are equally
probable, the entropy is defined as $H(X) = \ln(N)$, where N is the
number of entities.
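A rough Python sketch of these two quantities follows, under two simplifying assumptions of mine (not Bjørke's): 'neighboring' symbols are approximated as consecutive points after sorting by coordinate, and distance differences are rounded to form discrete classes:

```python
import math
from collections import Counter

def positional_entropy(num_entities):
    """All entities are unique, equally probable events: H(X) = ln(N)."""
    return math.log(num_entities)

def metrical_entropy(positions, precision=1):
    """Entropy of the differences between consecutive neighbor distances."""
    pts = sorted(positions)                            # crude neighbor ordering
    dists = [math.dist(a, b) for a, b in zip(pts, pts[1:])]
    diffs = [round(d2 - d1, precision) for d1, d2 in zip(dists, dists[1:])]
    counts = Counter(diffs)
    total = len(diffs)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Evenly accelerating spacing: all distance differences fall into one class,
# so the metrical entropy is zero, while the positional entropy is ln(5).
points = [(0, 0), (1, 0), (3, 0), (6, 0), (10, 0)]
print(positional_entropy(len(points)), metrical_entropy(points))
```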
Li and Huang (2002) proposed their own treatment of
topological information on the map. In their paper they took
into consideration both the space occupied by map symbols
and the spatial distribution of these symbols. They divide the
information about the features in the map into three types:
• (Geo)metric information related to position, size and
shape.
• Thematic information related to the types and importance
of features.
• Spatial relations between neighboring features implied by
distribution.
Based on this division they introduced the Voronoi diagram to deal
with (geo)metric information and spatial relations. A Voronoi
diagram is essentially a partition of the 2-D plane into N
polygonal regions, each of which is associated with a given
feature. The region associated with a feature is the locus of
points closer to that feature than to any other given feature, as
shown in Fig. 3. As this region is determined by both the
feature's size and the space available to it relative to neighboring
features, in some sense it can represent the (geo)metric
information and the spatial relations.
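The page is cut off before Li and Huang's formula itself appears, so the sketch below only illustrates the general idea under an assumption of mine: that the relative area of each feature's Voronoi region serves as its probability weight. The guard-point trick used to keep the regions bounded is likewise an implementation convenience, not part of their method:

```python
import math
import numpy as np
from scipy.spatial import Voronoi

def polygon_area(vertices):
    """Shoelace formula for the area of an ordered 2-D polygon."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def voronoi_area_entropy(points, pad=10.0):
    """Entropy over the relative Voronoi-region areas of point features."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0) - pad, pts.max(axis=0) + pad
    guards = [(lo[0], lo[1]), (lo[0], hi[1]), (hi[0], lo[1]), (hi[0], hi[1])]
    vor = Voronoi(np.vstack([pts, guards]))    # guards keep real cells bounded

    areas = []
    for i in range(len(pts)):                  # skip the four guard points
        region = vor.regions[vor.point_region[i]]
        if region and -1 not in region:        # bounded regions only
            areas.append(polygon_area(vor.vertices[region]))
    total = sum(areas)
    return -sum((a / total) * math.log(a / total) for a in areas)

# Evenly spread features share the space equally (entropy ln(4), about 1.386),
# while clustered features produce very unequal regions and a lower value.
print(voronoi_area_entropy([(0, 0), (0, 5), (5, 0), (5, 5)]))
print(voronoi_area_entropy([(0, 0), (0.5, 0), (0, 0.5), (5, 5)]))
```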