
International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 6. Bandung, Indonesia, 1999
DISCRETE MATHEMATICS FOR SPATIAL DATA CLASSIFICATION AND UNDERSTANDING
Luigi Mussio*, Rossella Nocera**, Daniela Poli*
*DIIAR - Politecnico di Milano 
Piazza Leonardo da Vinci, 32 - 20133 Milano, Italy
** DIEMA - Università degli Studi di Reggio Calabria
Via Emilio Cuzzocrea, 48 - 89100 Reggio Calabria, Italy
luigi@ipmtf2.topo.polimi.it 
ISPRS Commission VI - Working Group 1 
Key words: Education, Data processing, Segmentation techniques and relational strategies
ABSTRACT 
Data processing, in the field of information technology, requires new tools involving discrete mathematics, such as data compression,
signal enhancement, data classification and understanding, and hypertexts and multimedia (considering educational aspects too), because
the mass of data demands automatic data management and does not permit any a priori knowledge. The methodologies and procedures
used in this class of problems concern different kinds of segmentation techniques and relational strategies, like clustering, parsing,
vectorization, formalization, fitting and matching. On the other hand, the complexity of this approach requires performing optimal
sampling and outlier detection at the very beginning, in order to define the set of data to be processed: raw data supply very poor
information. For these reasons, no hypotheses about the distributional behavior of the data can generally be made, and judgement should
be acquired by distribution-free inference only.
1. THE CHANCE AND THE CHALLENGE (BELLONE ET AL., 1997)
As a matter of fact, history tells us that nothing is definitely settled in
technique and nothing is acquired forever. The critical attitude
drives us not only to be severe in the judgement of the past, but
also to be rightly and objectively fair in the judgement of
present tendencies. These tendencies can be affected by trends
and pressures that do not come from the intimate structure of the
technique and are not its logical and unavoidable development.
When the technique loses this objectivity of judgement, it
also loses the capability to stay close to the reality of today's
world. It loses sight of the fact that technique should mainly take
into account the social utility of its products; i.e., they should help
humanity to live better in this world. Human life on the
earth is undermined by a technique that does not put human
life itself at the centre of its attention.
Data processing could become one of the totems of the present
day. People give data processing, with its enormous scope, the
task of solving many operational problems, of making really
objective choices, of obliging the local authorities to perform
rational and optimal interventions. Data processing, with its
powerful equipment that would seem able to find the solution
for every problem, is to be feared mainly for its exasperating
omnipresence, its ambitious independence and its underhand
quantitative axiomatization.
Data processing should be careful not to suffocate the reality of
the product with its formal strictness, which is often the result of
many theoretical and practical compromises. Hoping not to be
misunderstood: data processing is needed, and it must be enlarged
and deepened as much as possible, but the final judgement of its
practical results must be critically referred to reality. The
evaluation of what data processing can give must be critical and
based on experimental data realistically obtained by
experimentation.
Data processing is necessary, but it has to be critically evaluated.
The simulation of many experimental data is one of the best
possibilities offered to the technical operator by the power of
data processing. However, simulating something means already
knowing what is committed to the simulation; to this knowledge
already acquired, the simulation does not add much: it just
allows a slightly less subjective choice.
Today data processing, in the field of information technology,
requires new tools involving discrete mathematics, such as data
compression, signal enhancement, data classification and
understanding, and hypertexts and multimedia (considering
educational aspects too), because the mass of data demands
automatic data management and does not permit any a priori
knowledge. The methodologies and procedures used in this
class of problems concern different kinds of segmentation
techniques and relational strategies, like clustering, parsing,
vectorization, formalization, fitting and matching.
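To make the first of these strategies concrete, the sketch below groups scattered 2-D points by a distance threshold (single-linkage grouping via connected components). It is a minimal illustration only: the function names, the Euclidean metric and the threshold value are assumptions of this example, not the procedures of the paper.

# A minimal sketch of distance-threshold clustering (single linkage),
# one of the relational strategies named above. Illustrative only.
from math import hypot

def cluster(points, threshold):
    """Group 2-D points whose mutual distance is below `threshold`,
    found as connected components by breadth-first expansion."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        component, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            xi, yi = points[i]
            near = [j for j in unvisited
                    if hypot(points[j][0] - xi, points[j][1] - yi) < threshold]
            for j in near:
                unvisited.remove(j)
            component.extend(near)
            frontier.extend(near)
        clusters.append([points[i] for i in component])
    return clusters

if __name__ == "__main__":
    pts = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.2, 4.9)]
    print(cluster(pts, threshold=1.0))   # two clusters of two points each

In practice the threshold is the critical design choice: too small a value fragments the data set, too large a value merges distinct objects.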
On the other hand, the complexity of this approach requires
performing optimal sampling and outlier detection at the very
beginning, in order to define the set of data to be processed:
raw data supply very poor information. For these reasons, no
hypotheses about the distributional behavior of the data can
generally be made, and a judgement should be acquired by
distribution-free inference only.
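As an illustration of such distribution-free screening, the sketch below flags outliers with rank statistics (median and median absolute deviation) instead of mean and standard deviation, so no distributional model is assumed; the cut-off factor k is a conventional choice of this example, not taken from the paper.

# A sketch of distribution-free outlier screening using the robust
# rule |x - median| > k * MAD. Names and the cut-off are assumptions.
from statistics import median

def mad_outliers(values, k=3.0):
    """Return (inliers, outliers) split by the median/MAD rule,
    which requires no hypothesis on the data distribution."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0.0:                      # degenerate sample: flag nothing
        return list(values), []
    inliers = [v for v in values if abs(v - med) <= k * mad]
    outliers = [v for v in values if abs(v - med) > k * mad]
    return inliers, outliers

if __name__ == "__main__":
    data = [9.8, 10.1, 10.0, 9.9, 10.2, 42.0]
    print(mad_outliers(data))           # 42.0 is flagged, the rest kept

Because the median and the MAD are rank statistics, a single gross error (42.0 above) cannot mask itself by inflating the scale estimate, as it would with the mean and standard deviation.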
The figure, enclosed at the end of the paper, illustrates, step by
step, the nearest neighbor procedure, which is central to many
of the methodologies and technicalities presented in the following.
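A minimal sketch of the nearest neighbor step that such a figure walks through is given below: a brute-force search that links every point to its closest companion. All names are illustrative assumptions; real implementations would use a spatial index for large data sets.

# A sketch of the basic nearest-neighbor step: for every point,
# find the closest other point by exhaustive search. Illustrative only.
from math import dist  # Python 3.8+

def nearest_neighbors(points):
    """For each point return (index_of_nearest_other_point, distance)."""
    result = []
    for i, p in enumerate(points):
        best_j, best_d = None, float("inf")
        for j, q in enumerate(points):
            if i == j:
                continue
            d = dist(p, q)
            if d < best_d:
                best_j, best_d = j, d
        result.append((best_j, best_d))
    return result

if __name__ == "__main__":
    pts = [(0.0, 0.0), (1.0, 0.0), (4.0, 3.0)]
    for i, (j, d) in enumerate(nearest_neighbors(pts)):
        print(f"point {i} -> nearest {j} at distance {d:.2f}")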
Notice that the actions of looking, seeing and recognizing,
together with the aggregate of elements, like observer, point of
view, scene, objects, figures, etc., belong to the concept of
vision, both concerning the problems of Psychology and
involving the field of Machine Vision. In any case, the authors
must acknowledge that very general methodologies and
procedures for the field of information technology remain an
open problem.