...POLYGON APPROACH

[The remainder of the title, the abstract and the opening column of the paper survive only as truncated line fragments in the source scan. The recoverable fragments concern: source errors at each stage of the spatial data processing flow; the investigation of multiple interpretations of the same scene; the representation of contiguous classes by boundaries and how these relate to what exists in reality; the subjective, scale-dependent delineation of classes; the relative magnitudes of classification and positional error; and the 'blurring' of a boundary as an indication of the extent of geographic detail present, as this zone is less distorted than the original boundary.]
Gong and Chen (1992) deal with methods that may be 
used to determine, represent and display boundary 
uncertainties in categorical (area-class) maps. They state 
that it is impossible to identify the most accurate realisation
and they suggest ways in which the most probable 
boundary could be determined using curve-fitting 
techniques and blending functions. They generated a 
number of realisations of land use categories from 
classification and subsequently manually digitised the 
map. Other authors (Maffini et al 1989; Dutton 1992)
have investigated the positional uncertainty of boundaries 
resulting from the manual digitising of land cover maps. 
The problem that arises in these cases is the introduction of an additional interpretational process within the data processing, i.e. classification and digitisation. This paper suggests a framework in which both local and boundary errors resulting from multiple realisations of the same phenomenon are resolved prior to raster-to-vector conversion within the spatial database. In this case, the operators are responsible for determining the classes without the need to digitise each determination. Using the GIS overlay function, the level of agreement can be assessed, the most probable class boundary positions derived, and a once-only vectorisation of the polygons then carried out for entry into the GIS database.
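The paper gives no implementation of this overlay step; as a rough sketch of how the level of agreement and the most probable class could be computed from several interpreters' raster realisations (all function names, array shapes and class values below are illustrative assumptions, not part of the original paper):

```python
import numpy as np

def agreement_analysis(realisations):
    """Compare several interpreters' classified rasters of the same scene.

    realisations: list of equally sized 2-D integer arrays, one per
    interpreter, each cell holding the allocated class label.
    Returns the per-cell modal (most frequently assigned) class and the
    fraction of interpreters agreeing with that class.
    """
    stack = np.stack(realisations)          # shape: (n_interpreters, rows, cols)
    n = stack.shape[0]
    classes = np.unique(stack)

    # Count the votes for each class at every cell.
    votes = np.stack([(stack == c).sum(axis=0) for c in classes])
    best = votes.argmax(axis=0)             # index of the winning class per cell
    modal_class = classes[best]             # most probable class per cell
    agreement = votes.max(axis=0) / n       # proportion of interpreters agreeing

    return modal_class, agreement

# Hypothetical example: three interpreters, a 2 x 3 scene, classes 1-3.
r1 = np.array([[1, 1, 2], [3, 3, 2]])
r2 = np.array([[1, 2, 2], [3, 3, 2]])
r3 = np.array([[1, 1, 2], [3, 2, 2]])
modal, agree = agreement_analysis([r1, r2, r3])
print(modal)   # majority class per cell
print(agree)   # 1.0 where all three agree, 0.67 where only two of three agree
```

Cells whose agreement falls below a chosen threshold would mark the 'blurred' boundary zones discussed above; only the resulting consensus raster would then be vectorised once for entry into the GIS database.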
1.3 Accuracy Assessment Used in Remote Sensing 
A present problem with the accuracy assessment of thematic maps is that there is no indication of the variation of land use / land cover within each class from the sampled data. Each location on the ground has been allocated to
a particular class and the assignment of the appropriate 
map label for some locations is ambiguous (Gopal and 
Woodcock 1993). 
The importance of accuracy assessment for remotely 
sensed data is well recognised, particularly when the data
may be used in a GIS (Congalton & Green 1993). Allan 
et al (1996) detail previous literature concerning the 
methods employed in assessing the accuracy of remotely 
sensed data. In summary, the error (confusion) matrix 
(Aronoff 1982), the determination of producer's and consumer's risk (errors of commission and omission) using the row and column marginals of the matrix (Story & Congalton 1986), and compensation for chance agreement between the classes via the Kappa coefficient of agreement (Rosenfield & Fitzpatrick-Lins 1986) have
been used. Sampling designs have been investigated by 
a number of authors (Congalton 1988; Stehman 1992). 
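These conventional measures can all be computed directly from the error matrix. The sketch below is a minimal illustration, assuming a hypothetical three-class matrix with rows holding the classified (map) labels and columns the reference labels (orientation conventions vary between the cited authors):

```python
import numpy as np

def accuracy_measures(error_matrix):
    """Standard accuracy measures from an error (confusion) matrix.

    error_matrix: square array, rows = classified (map) classes,
    columns = reference classes. Returns overall accuracy, per-class
    producer's and user's (consumer's) accuracy, and the Kappa
    coefficient of agreement.
    """
    m = np.asarray(error_matrix, dtype=float)
    total = m.sum()
    diag = np.diag(m)
    row_totals = m.sum(axis=1)           # classified (map) totals
    col_totals = m.sum(axis=0)           # reference totals

    overall = diag.sum() / total
    producers = diag / col_totals        # producer's accuracy: 1 - omission error
    users = diag / row_totals            # user's (consumer's) accuracy: 1 - commission error

    # Kappa: agreement corrected for chance agreement.
    expected = (row_totals * col_totals).sum() / total ** 2
    kappa = (overall - expected) / (1.0 - expected)
    return overall, producers, users, kappa

# Hypothetical three-class example (counts are illustrative only).
matrix = [[50,  5,  2],
          [ 4, 40,  6],
          [ 1,  3, 39]]
overall, producers, users, kappa = accuracy_measures(matrix)
print(round(overall, 3), np.round(producers, 3), np.round(users, 3), round(kappa, 3))
```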
2. ERROR SOURCES IN REMOTELY SENSED DATA 
In the production of a thematic map, the user needs to have knowledge of the error in the position and
labelling of the derived classes. As this product (map) 
may be only one layer used in the GIS, quantitative 
measures of error are necessary at its source before 
progressing to error propagation in the GIS processing 
flow. Until these source errors are thoroughly examined 
and measured, the utility of remotely sensed data as an
appropriate and valuable information source within GIS is 
restricted. Goodchild et al (1994) suggest that 
consistency, or replicability, of processes and realisations has rarely been executed in practice.
An accuracy assessment of errors in a thematic map 
should provide details concerning their nature, frequency, 
magnitude and source (Gopal & Woodcock 1993). A 
conceptual framework has been described by Veregin 
(1989), in which he sets out a 'hierarchy of needs' for modelling error in GIS operations. In this five-level hierarchy, level 1 is concerned with the identification of
error sources, level 2: error detection and measurement, 
level 3: error propagation modelling, level 4: strategies 
for error management, and level 5: strategies for error 
reduction. A number of authors have set out the various 
stages in the spatial data ‘life-cycle’ in which error may 
be introduced (Aronoff 1989; Lunetta et al 1991; Collins 
& Smith 1994). They are summarised as follows: 
• Data acquisition: geometric aspects, sensor systems, platforms, ground control.
• Data input (processing): geometric registration and resampling.
• Data analysis: classification systems, data generalisation.
• Data conversion: raster to vector.
• Data output: positional and attribute errors.
• Data usage and interpretation: insufficient understanding and incorrect use of data.
Lunetta et al (1991) identify source errors (Veregin's Level 1) for each stage. They point out that error not only accumulates across successive stages but may also be introduced within any stage.
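As a purely illustrative example of this accumulation (the figures and the simple multiplicative model are assumptions, not drawn from the cited papers): if the geometric registration and resampling stage retained 95% accuracy and the subsequent classification stage were 85% accurate, the combined accuracy of the output layer would be of the order of 0.95 x 0.85 ≈ 0.81, before any further error introduced at later stages such as raster-to-vector conversion is considered.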
The proposed framework, shown in Figure 1, describes 
an approach to determine the degree to which stages in 
the GIS information processing flow contribute to the 
overall error in the data layer. It integrates the 
hierarchical approach proposed by Veregin (1989) within 
the framework of GIS processing with the potential error 
sources suggested by Lunetta et al (1991). 
Specifically, once the source errors have been identified, the approach is to detect and measure the uncertainties in two stages of the processing flow: data processing and data analysis. At the source level in the framework, it
acknowledges that error may accumulate from one stage 
to the next but may also contribute separately at each 
stage. Two stages in the data processing flow are 
selected to examine their respective contributions to the 
determination of class accuracy assessment. 
At the next level in the framework, the error detection and measurement phase, operational constraints are
imposed on the interpreters to elicit quantitative 
estimates of error and its spatial variability. These 
constraints may be the adoption of the same image 
classification technique and the division of the same 
image into a predetermined number of classes. Polygons 
in disagreement are formed based on the realisations 
from each image interpreter. The polygon characteristics 
can be measured and aggregated based on a threshold 
established for areas, perimeters, shapes or a 