• Object Type: This attribute describes the object by its type; objects of the same object type can be extracted by the same group of feature extraction operators. For the description of streets we define four object types: continuous stripe, periodic stripe, continuous line and periodic line. The object type also determines the group of feature extraction operators used for its extraction. The difference between stripe and line is given by the width of the object: we define that lines are no wider than 2 pixels. For the extraction of lines in raster images, other feature extraction operators are used than for the extraction of stripes; stripes, for example, would be extracted by finding their edges. The specification periodic or continuous indicates dashed or continuous lines and stripes, respectively.
• Grey Value: This attribute describes the radiometric characteristic of the object.
• Extent: This value describes the width of the object and is independent of resolution, i.e. it is stated in meters, not in pixels.
• Periodicity: For periodic object types this attribute expresses the ratio of the length of the lines/stripes to the extent. For continuous object types this attribute has no meaning.
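As an illustration only, these attributes can be captured in a small data structure. The following sketch is not part of the described system; the class and field names (RoadPart, ObjectType, extent_m, periodicity) and the example values are chosen here purely for illustration.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ObjectType(Enum):
    CONTINUOUS_STRIPE = "CS"   # wider than 2 pixels, unbroken
    PERIODIC_STRIPE = "PS"     # wider than 2 pixels, dashed
    CONTINUOUS_LINE = "CL"     # at most 2 pixels wide, unbroken
    PERIODIC_LINE = "PL"       # at most 2 pixels wide, dashed

@dataclass
class RoadPart:
    # Illustrative structure only; names are not taken from the described system.
    name: str
    object_type: ObjectType
    grey_value: float               # radiometric characteristic of the object
    extent_m: float                 # width in meters, independent of resolution
    periodicity: Optional[float]    # length/extent ratio; None for continuous types

# Hypothetical example: a dashed lane marking, 0.15 m wide
lane_marking = RoadPart("Lane Marking", ObjectType.PERIODIC_LINE,
                        grey_value=220.0, extent_m=0.15, periodicity=20.0)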
The relations used in the net presented in Fig. 6 are part-of and spatial relations. Some of the part-of relations have additional attributes, such as "central" or "left/right boundary". The attribute "central" can be used to guide the object extraction. The attribute "left/right boundary" has an important function regarding the scale adaptation: part-of relations with these attributes describe the boundary parts of an object. These parts are important if neighbouring objects exist, because in that case the border parts of both neighbouring objects have to be considered together; they can affect each other as scale varies.
The spatial relations play a key role in the object description, because they directly affect the necessary modifications for scale adaptation of the nets. It is essential that the positions of all object parts are clearly specified by spatial relations. For the examined objects, the position perpendicular to the street axis has to be described; we use the relations left-of and right-of for this description. Furthermore, attributes are important for the description of distances. Since it is usually not possible to state the exact distance to another object, we use ranges for distances here. The magnitude of the distances is also expressed in meters, independently of scale.
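A minimal sketch of how the two relation types and their attributes could be represented is given below; the names and the example values are assumptions made for illustration, not values taken from the presented net.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PartOf:
    parent: str
    child: str
    attribute: Optional[str] = None      # e.g. "central", "left boundary", "right boundary"

@dataclass
class SpatialRelation:
    subject: str
    relation: str                        # "left-of" or "right-of", perpendicular to the street axis
    reference: str
    distance_range_m: Tuple[float, float]  # distances are stated as ranges, in meters

# Hypothetical example: a marking 1.5 to 2.0 m left of the central stripe
rel = SpatialRelation("Lane Marking", "left-of", "Central Stripe", (1.5, 2.0))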
All nodes of the semantic net are connected to feature extraction operators that are able to extract the represented objects. The strategy of object extraction, however, is to call the feature extraction operators only for the nodes without parts, i.e. the nodes at the bottom of the semantic net. In Fig. 6 the node "Roadway" is connected to a feature extraction operator that is able to extract stripes with the given constraints, but as long as the markings on the stripe are extractable at the target scale, the extraction of "Roadway" is realized by the extraction of the markings.
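This extraction strategy can be summarized by the following sketch; the node interface (parts, is_extractable, operator) is assumed here for illustration and does not reproduce the actual implementation.

def extract(node, target_scale):
    # Prefer extraction via parts that are still extractable at the target scale
    # (e.g. "Roadway" is realized via its markings as long as they are visible).
    extractable_parts = [p for p in node.parts if p.is_extractable(target_scale)]
    if extractable_parts:
        return [extract(p, target_scale) for p in extractable_parts]
    # Otherwise this node acts as a leaf: call the feature extraction operator
    # that is bound to it.
    return node.operator(target_scale)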
Based on these semantic nets the scale behaviour of the defined 
object types can be investigated. Taking into account single 
object parts, the following behaviour is possible: 
Nr | Before Scale Change    | After Scale Change
1  | Continuous Stripe (CS) | CS or CL or Invisible
2  | Continuous Line (CL)   | CL or Invisible
3  | Periodic Stripe (PS)   | PS or CS or PL or CL or Invisible
4  | Periodic Line (PL)     | PL or CL or Invisible

Table 1. Object Type Scale Behaviour of Single Objects

Unfortunately, these four possibilities are not sufficient for an automatic scale adaptation of the semantic nets. It is also possible that different parts affect each other during a scale change. For example, two stripes separated by a small distance will merge at a certain scale. Hence, neighbouring parts have to be analysed simultaneously in this case.
Taking into account object pairs which might affect each other, the following possibilities can be found:

Nr | Before Scale Change                                | After Scale Change
5  | Cont. Stripe - any                                 | CS or CL or Invisible
6  | Per. Stripe - Per. Stripe / Cont. Line / Per. Line | PS or CS or PL or CL or Invisible
7  | Cont. Line - Cont. Line / Per. Line                | CL or Invisible
8  | Per. Line - Per. Line                              | PL or CL or Invisible

Table 2. Object Type Scale Behaviour of Object Pairs
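Read as transition relations, Tables 1 and 2 can be encoded directly, for example as the following lookup tables (using the abbreviations CS, PS, CL, PL from above; None stands for "Invisible"). The encoding itself is only an illustration.

SINGLE = {                                   # Table 1
    "CS": {"CS", "CL", None},
    "CL": {"CL", None},
    "PS": {"PS", "CS", "PL", "CL", None},
    "PL": {"PL", "CL", None},
}

PAIR = {                                     # Table 2
    ("CS", "any"): {"CS", "CL", None},
    ("PS", "PS"):  {"PS", "CS", "PL", "CL", None},
    ("PS", "CL"):  {"PS", "CS", "PL", "CL", None},
    ("PS", "PL"):  {"PS", "CS", "PL", "CL", None},
    ("CL", "CL"):  {"CL", None},
    ("CL", "PL"):  {"CL", None},
    ("PL", "PL"):  {"PL", "CL", None},
}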
This scale behaviour of the object types has to be used for the adaptation of the semantic concept nets. It is possible that, for a given target scale, the distance range specified in a concept net between two objects leads after scale adaptation to several possible new object types. In that case, the different possibilities have to be included in the new concept net as alternative representations of one object.
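The following sketch illustrates how a distance range can lead to several admissible object types after scale adaptation. The merge criterion used here (gap smaller than about two pixels at the target scale) is an assumption for illustration only and is not the criterion derived in this work.

def possible_types(pair, distance_range_m, pixel_size_m, single, pair_table):
    d_min, d_max = distance_range_m
    outcomes = set()
    if d_min / pixel_size_m < 2.0:
        # The parts may merge: take the pair behaviour (Table 2).
        outcomes |= pair_table.get(pair, set())
    if d_max / pixel_size_m >= 2.0:
        # The parts may stay separate: each behaves as a single object (Table 1).
        outcomes |= single[pair[0]]
    return outcomes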
The question at which target scale the object type changes is directly connected to the scale behaviour of the feature extraction operators. This problem is addressed in the next section.
4. SCALE BEHAVIOR OF FEATURE EXTRACTION 
OPERATORS 
As described in section 3, the object type of a node in a semantic net is also determined by the feature extraction operator that is bound to the nodes of a certain object type and is used for its extraction. Objects of different types use different feature extraction operators. As scale varies, however, the object type may change, because the same operator is not able to extract the same object type successfully at all resolutions. In order to predict from which scale on the object type has to change, the scale range in which the feature extraction operators are usable has to be analysed. The performance of three commonly used, more sophisticated operators for edge and line detection is examined as an example: Canny, Deriche and Steger. The goal of this investigation is to analyse the behaviour of the operators in sensor data of different resolutions. To this end, in a first step, the different sensors are simulated by creating synthetic images of different resolutions and applying the operators to them.
The Canny edge detector was developed as an "optimal" edge detector (Canny, 1986); its impulse response closely resembles the first derivative of the Gaussian. The Deriche edge detector is based on the Canny operator, but uses recursive filtering and thereby reduces the computational effort (Deriche, 1987). The Steger operator extracts lines with sub-pixel accuracy by using the first and second order derivatives of the Gaussian (Steger, 1998).
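As a rough sketch of such an experiment, a Canny-type detector can be applied to an image resampled to coarser resolutions. OpenCV's Canny implementation is used here only as a stand-in, the thresholds are arbitrary, and the Deriche and Steger operators are not part of core OpenCV and would need dedicated implementations.

import cv2

def edges_at_levels(image, factors, low=50, high=150):
    # image: 8-bit grey-value image; factors: list of resampling factors >= 1
    results = {}
    for f in factors:
        # Smooth before subsampling to approximate a Gaussian pyramid level.
        blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=f)
        small = cv2.resize(blurred, None, fx=1.0 / f, fy=1.0 / f,
                           interpolation=cv2.INTER_AREA)
        results[f] = cv2.Canny(small, low, high)
    return results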
The generated synthetic grey-value image has a size of 1000x1000 pixels and is composed of a bright line of two pixels width on a dark background, stretching over the entire image. An image pyramid was created from this synthetic image by Gaussian interpolation. The pyramid comprises 100 levels, from the original image (labelled as pixel size 1.0) to the smallest image with the largest pixel size of 1000-fold,
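A sketch of how such a synthetic image and pyramid could be generated is given below. Only the image size, the two-pixel line width and the range of pixel sizes follow the text; the line placement, the grey values, the spacing of the 100 levels and the smoothing used before resampling are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

size = 1000
image = np.full((size, size), 50.0)          # dark background (assumed grey values)
image[size // 2:size // 2 + 2, :] = 200.0    # bright horizontal line, 2 pixels wide

pyramid = {}
for pixel_size in np.geomspace(1.0, 1000.0, 100):   # 100 levels, pixel size 1 to 1000 (spacing assumed)
    sigma = 0.5 * (pixel_size - 1.0)                 # smoothing proportional to pixel size (assumption)
    smoothed = gaussian_filter(image, sigma=sigma)
    pyramid[pixel_size] = zoom(smoothed, 1.0 / pixel_size, order=1)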