XVIIIth Congress (Part B3)

An algorithm is used which scans the map from top left to bottom right, assigning DN 1 to the first object it encounters, DN 2 to the second object, and so on for all objects in the map. Figure 1(c) shows each solid building object, each separate road and each letter of the street or place names as a separate object in the map, and all are assigned an individual unique DN value between 0 and 255.
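The scan-and-label step described above is a connected-component labelling. A minimal sketch (not the authors' HIPS implementation; the function name and the use of 8-connectivity are assumptions) could look like:

```python
from collections import deque
import numpy as np

def label_objects(binary_map):
    """Scan a binary map from top left to bottom right and assign
    DN 1, 2, ... to each connected object (8-connectivity assumed)."""
    labels = np.zeros(binary_map.shape, dtype=np.uint8)
    next_dn = 1
    rows, cols = binary_map.shape
    for r in range(rows):
        for c in range(cols):
            if binary_map[r, c] and labels[r, c] == 0:
                # flood-fill the whole object with the next DN value
                q = deque([(r, c)])
                labels[r, c] = next_dn
                while q:
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and binary_map[ny, nx]
                                    and labels[ny, nx] == 0):
                                labels[ny, nx] = next_dn
                                q.append((ny, nx))
                next_dn += 1  # at most 255 objects fit in an 8-bit map
    return labels
```

The scan order guarantees that the object whose topmost-leftmost pixel is met first receives DN 1, matching the description above.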
2.1.3 Elimination of Clutter Objects 
At this stage each object in the map has a unique DN value. An algorithm is used with a threshold size (number of pixels), and all objects below this value are removed from the map. The algorithm first looks for the object having value DN 1 and counts the number of pixels in that object; if it is greater in size than the chosen threshold size, the object with DN 1 remains in the map, otherwise it is removed, i.e. it is given a zero DN value. In the same manner the algorithm looks for the other DN value objects in the map, removing clutter and small objects. Figure 1(d) clearly shows a map of the solid buildings after removal of the objects not required.
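On a labelled map, the clutter-removal step can be sketched as follows (a minimal illustration, assuming the labelled-map representation from the previous step; the function name is hypothetical):

```python
import numpy as np

def remove_small_objects(labels, threshold):
    """Zero out every labelled object whose pixel count does not
    exceed the chosen threshold, keeping only large objects."""
    out = labels.copy()
    for dn in range(1, int(labels.max()) + 1):
        mask = labels == dn
        if mask.sum() <= threshold:  # not greater than threshold: clutter
            out[mask] = 0            # DN 0 = background
    return out
```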
2.1.4 Map of Building Region Boundaries 
To find the boundaries of the solid building regions, an 
algorithm is used which works in a very simple way. It scans 
the map of solid building regions from top left to bottom 
right. The first pixel of a solid object it encounters is 
considered as the first boundary pixel of that object. From 
that boundary pixel, the algorithm looks for neighbouring pixels which have the same DN value as that object and also lie adjacent to the background DN value, i.e. DN 0, and it then traces the boundary of that object.
After this solid building region boundary is traced, the 
algorithm looks for the next solid object, and in the same 
manner traces the boundaries of the object and the 
subsequent solid objects in the map [see Figure 1(e)]. 
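A simple way to realise the boundary test described above (an object pixel is a boundary pixel if it touches the background) is sketched below; this marks all boundary pixels rather than tracing them in order, which is an assumption on my part, and 4-neighbour adjacency is assumed:

```python
import numpy as np

def boundary_map(labels):
    """Mark each object pixel that lies adjacent to the background
    (DN 0) or the image border as a boundary pixel, keeping its DN."""
    rows, cols = labels.shape
    boundary = np.zeros_like(labels)
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] == 0:
                continue
            # a 4-neighbour of DN 0 (or falling off the map) means
            # this pixel lies on its object's boundary
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = r + dy, c + dx
                if (not (0 <= ny < rows and 0 <= nx < cols)
                        or labels[ny, nx] == 0):
                    boundary[r, c] = labels[r, c]
                    break
    return boundary
```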
2.1.5 Gradient Direction of the Map Boundaries 
For determining the best match between the map and the 
image, edge pixel direction in the image and boundary pixel 
direction in the map are used. An algorithm is used to 
determine the directional component of each map boundary 
pixel. A two frame sequence input is used to apply this 
algorithm. The first frame contains the map boundaries and 
the second frame contains the solid regions from which the 
boundaries were defined. The output also consists of two frames, the first containing the map boundaries and the second the map boundary directions, as shown in Figure 1(f). With these two frames, boundaries and their directions, the map is at this stage ready to be used as input for matching.
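One way to realise the two-frame scheme above is to estimate, at each boundary pixel, the gradient of the solid-region frame with 3×3 Sobel differences. This is a sketch under assumptions (the paper does not give the operator used; the arctan2 convention and degree output are mine):

```python
import numpy as np

def boundary_directions(solid, boundary):
    """For each boundary pixel, estimate the gradient direction of the
    underlying solid region; the boundary frame and this direction
    frame together form the two-frame map output."""
    rows, cols = solid.shape
    direction = np.zeros((rows, cols))
    s = (solid > 0).astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if boundary[r, c] == 0:
                continue
            win = s[r - 1:r + 2, c - 1:c + 2]
            gx = np.sum(win * kx)
            gy = np.sum(win * ky)
            direction[r, c] = np.degrees(np.arctan2(gy, gx))
    return direction
```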
2.2 Preparing the Image for Matching 
The Farnborough subscene image is shown in Figure 2(a) 
which consists of 230 × 180 pixels. The aim here is to extract the edges that define the building regions and at the same time to suppress edges that do not represent building regions. The pre-matching steps for preparing the image for matching are described below:
2.2.1 Edge Preserve Smoothing 
An edge preserving filter is applied prior to edge detection 
to strengthen the grey level discontinuities between different
land cover types, and to reduce the detection of edges in 
International Archives of Photogrammetry and Remote Sensing. Vol. XXXI, Part B3. Vienna 1996
areas of texture that are internal to regions. The algorithm used is an adaptation of that outlined by Matsuyama et al. (1980) and Tomita et al. (1977). A window with nine masks (filters) is passed over the image and the variance is measured in nine orientations (masks) around the central pixel. The orientation of minimum pixel variance is determined, and its mean is assigned to the central pixel; the selected orientation therefore never lies across an edge. This is performed for each pixel in the image.
iteratively applied to achieve maximum homogeneity of 
each region in the image as shown in Figure 2(b). 
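A single pass of this idea can be sketched as below. Note this is a simplification of the Matsuyama/Tomita nine-mask scheme: instead of the original mask shapes, it uses the nine 3×3 windows centred on the pixel and its eight neighbours, which preserves the key property that averaging never straddles an edge:

```python
import numpy as np

def edge_preserving_smooth(img):
    """One pass of a simplified nine-mask edge-preserving filter:
    each pixel takes the mean of whichever of nine shifted 3x3
    windows has the minimum variance."""
    rows, cols = img.shape
    img = img.astype(float)
    out = img.copy()
    for r in range(2, rows - 2):
        for c in range(2, cols - 2):
            best_var, best_mean = None, None
            # nine 3x3 windows centred on the pixel and its 8 neighbours
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    win = img[r + dy - 1:r + dy + 2, c + dx - 1:c + dx + 2]
                    v = win.var()
                    if best_var is None or v < best_var:
                        best_var, best_mean = v, win.mean()
            out[r, c] = best_mean  # mean of the most homogeneous window
    return out
```

Iterating this pass, as the text describes, drives each region towards maximum homogeneity.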
2.2.2 Region Segmentation 
Segmentation is the splitting up of an image into regions 
which hold properties distinct from their neighbours, and it 
is generally approached from two points of view: by 
detection of edges that separate regions, or by the extraction 
of regions directly. Using a histogram derived from the edge-preserving smoothed image and thresholding it at value DN 38 resulted in the direct extraction of building regions, including some clutter, as shown in Figure 2(c).
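The thresholding step itself is a one-liner; the sketch below assumes that building regions lie above DN 38 (the sense of the threshold is not stated explicitly in the text):

```python
import numpy as np

def segment_by_threshold(img, threshold=38):
    """Direct region extraction: pixels above the histogram-derived
    threshold become candidate building regions (value 1),
    everything else background (value 0)."""
    return (img > threshold).astype(np.uint8)
```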
2.2.3 Edge Enhancement 
Edge enhancement determines, for each pixel in the image, 
its edge strength and the direction of the gradient of the edge 
at that point. This is obtained by image differentiation, which is itself achieved by the convolution of various kernels with the image. The Sobel operator is used, which consists of two kernels (X and Y) that are passed across the region-segmented Farnborough image. The strength of the edge at the central pixel of the kernel and its gradient direction are determined for each pixel in the image as:

Strength = √[(Result of convolving kernel X)² + (Result of convolving kernel Y)²]   (1)

Direction = tan⁻¹[(Result of convolving kernel X) / (Result of convolving kernel Y)]   (2)
The result of edge enhancement is shown in Figure 2(d). 
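Equations (1) and (2) can be sketched directly (a minimal illustration; the particular Sobel kernel signs and the use of atan2 to keep the quadrant are assumptions):

```python
import math
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # X kernel
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])  # Y kernel

def sobel(img):
    """Convolve the two Sobel kernels and return per-pixel edge
    strength and gradient direction per equations (1) and (2)."""
    rows, cols = img.shape
    strength = np.zeros((rows, cols))
    direction = np.zeros((rows, cols))
    img = img.astype(float)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            gx = np.sum(win * KX)
            gy = np.sum(win * KY)
            strength[r, c] = math.sqrt(gx * gx + gy * gy)  # eq. (1)
            direction[r, c] = math.atan2(gx, gy)           # eq. (2): tan^-1(X/Y)
    return strength, direction
```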
2.2.4 Non Maximal Suppression 
The result of edge enhancement shows edges two or more pixels thick. Non maximal suppression seeks to remove those edgels (edge pixels) that are not local maxima, thus sharpening the representation of edges and effectively thinning each edge to a single pixel width. An algorithm is used which considers the edge strength and gradient direction information. An edge pixel is passed if the two neighbouring pixels along the gradient direction are less than or equal to it in strength; if not, the pixel is set to zero. The result of the Non Maximal Suppression algorithm is shown in Figure 2(e).
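The keep-or-zero rule above can be sketched as follows (a simplified illustration: the gradient direction is assumed to be measured in radians from the x-axis and is rounded to the nearest of the eight neighbour directions, which is a common but here assumed choice):

```python
import math
import numpy as np

def non_maximal_suppression(strength, direction):
    """Keep an edge pixel only if the two neighbours along its
    gradient direction are not stronger; otherwise set it to zero,
    thinning edges towards single-pixel width."""
    rows, cols = strength.shape
    out = np.zeros_like(strength)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if strength[r, c] == 0:
                continue
            # step one pixel each way along the gradient direction
            dx = int(round(math.cos(direction[r, c])))
            dy = int(round(math.sin(direction[r, c])))
            if (strength[r, c] >= strength[r + dy, c + dx]
                    and strength[r, c] >= strength[r - dy, c - dx]):
                out[r, c] = strength[r, c]
    return out
```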
2.2.5 Alter Directions of Edge Pixel Gradient 
The edge gradient direction is a useful element in the 
matching procedure, where edge pixel gradient directions are 
compared to map boundary pixel gradient directions to 
obtain good matches. However, there is always some rotational difference between the image and the map spaces. To compensate for this, an adjustment is required to the gradient directions calculated for the edge pixels. Four control point pairs are selected manually, in a well-distributed manner, from the map and the image, and are used to define a similarity
	        