Technical Commission III (B3)

3. Join the disconnected edges guided by the digitized edges.
A simple case of digitizing a portion of the image (the inner and outer edges of a circular building) is shown in Figure 3(a). The orange boundaries show the manually digitized inner and outer edges of the building. The upper and lower magenta circles in Figure 3(b) delimit the annular region defined with a suitable threshold; the edges shown in green are obtained with the Canny operator [Canny, 1986]. The boundaries obtained through the Canny operator are more accurate than the manual digitization.
   
Fig. 3: (a) Digitized boundaries, (b) Refined boundary 
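The refinement step above can be sketched as follows. To keep the sketch self-contained, a simple gradient-magnitude detector stands in for the Canny operator; the function names, buffer width, and threshold are illustrative, not taken from the paper.

```python
import numpy as np

def gradient_edges(img, thresh):
    """Gradient-magnitude edges: a simplified stand-in for the
    Canny operator used in the paper."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy) > thresh

def dilate(mask, r):
    """Binary dilation with a (2r+1) x (2r+1) square element."""
    out = np.zeros_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def refine_edge(img, digitized, buffer_px=3, thresh=60.0):
    """Keep only edge pixels that fall inside an annular band
    around the manually digitized boundary."""
    annulus = dilate(digitized, buffer_px)
    return gradient_edges(img, thresh) & annulus
```

Restricting the detector's output to the band around the digitized curve is what lets a rough manual trace snap to the true radiometric edge.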
2.9 Matching of Edges and DSM Generation 
The DSM is obtained by matching the epipolar images at an interval of four pixels. Figure 4 shows the resulting DSM. Figure 5 shows the normalized DSM, which is obtained by subtracting the derived DTM from the DSM. Most of the buildings are clearly detectable in the normalized DSM. The DSM is intended to serve as a cue for building boundary extraction at a later stage.
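The nDSM computation reduces to a per-pixel subtraction followed by a height threshold. A minimal sketch, where the 2.5 m building threshold and the arrays are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def normalized_dsm(dsm, dtm, min_height=2.5):
    """nDSM = DSM - DTM, clamped at zero; pixels at least
    `min_height` above the terrain are flagged as likely buildings."""
    ndsm = dsm - dtm
    ndsm[ndsm < 0] = 0.0          # suppress matching noise below terrain
    return ndsm, ndsm >= min_height
```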
The positions of the refined edges are known in the near-nadir image, as shown in Figure 6(a), and the corresponding points are estimated in the other image using the image-to-ground and ground-to-image transformations, as shown in Figure 6(b). Figure 6(c) displays the matched points.
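The point-transfer step can be sketched as below. `img_to_ground` and `ground_to_img` stand in for the sensor-model (RPC) transformations derived from the ancillary data; the toy linear models used here are purely illustrative.

```python
def predict_conjugate(pt, height, img_to_ground, ground_to_img):
    """Transfer an image point to the other view via the ground:
    project to (lat, lon) at a trial height, then reproject."""
    lat, lon = img_to_ground(pt, height)
    return ground_to_img(lat, lon, height)

# Toy stand-ins for the sensor-model transforms (illustrative only).
def img_to_ground(pt, h):
    r, c = pt
    return 100.0 - 0.001 * r, 30.0 + 0.001 * c

def ground_to_img(lat, lon, h):
    # Second view: same geometry plus a height-dependent parallax shift
    r = (100.0 - lat) / 0.001
    c = (lon - 30.0) / 0.001 + 0.05 * h
    return r, c
```

Under this toy geometry, a 20 m trial height shifts the predicted column by one pixel of parallax; in the actual pipeline the shift comes from the rational polynomial model of each view.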
Each point in one image is matched in a window around its estimated position in the other image. The correlation threshold is chosen as 0.9, and about 30% of the points are matched. The height is computed for all these points, which eventually represent the height of the building edge. The variation of the computed heights across points selected on the same rooftop is of the order of 1 m. Assuming the roof to be a planar surface, the average height of these points is assigned as the height of the object. The building height with respect to the ground is obtained by subtracting the ground height derived from the DTM.
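The correlation test and roof-height averaging can be sketched as follows. The 0.9 threshold mirrors the text; the function names and patch handling are an assumed sketch, not the authors' implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def roof_height(patch_pairs, heights, thresh=0.9):
    """Average the heights of points whose patches correlate above
    the threshold (the roof is assumed planar, as in the text)."""
    kept = [h for (pa, pb), h in zip(patch_pairs, heights)
            if ncc(pa, pb) >= thresh]
    return sum(kept) / len(kept) if kept else None
```

Averaging only the well-correlated points is what keeps the roughly 1 m scatter of individual edge heights from biasing the assigned roof height.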
     
Fig. 8: (a) Points on refined edges in the nadir image, (b) estimated points on the image acquired with a 26 deg view angle, (c) matched points on the edges of the image acquired with a 26 deg view angle.
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B3, 2012 
XXII ISPRS Congress, 25 August — 01 September 2012, Melbourne, Australia 
2.10 Site Model Generation Process 
The flow chart of the site model generation process is depicted in Figure 7. The basic inputs are at least two Cartosat-2 multiview images of the area of interest. The attitude and position information is available in the ancillary data files. Using the physical sensor model, the relative orientation parameters are estimated, and the rational polynomial coefficients are computed in terrain-independent mode. Epipolar images are generated for these views. A dense DSM is used to generate the normalized DSM and the Digital Terrain Model. The edges of buildings are delineated by the 2-D digitization and refinement procedures, and the points on the edges are matched in the other image. The remaining unmatched edges are manually digitized and refined in 3-D viewing mode. The height is computed for the matched edge pairs, and the ground-level height is subtracted to obtain the building height. The height, the delineated buildings, and the Digital Terrain Model are the inputs to the object modelling and visualization software.
[Flow chart: Cartosat-2 multiview images and ancillary information -> sensor modelling and generation of rational polynomial coefficients -> generation of epipolar images -> generation of dense DSM -> generation of nDSM and DTM; 2-D digitization of building outlines -> refining digitized edges using the Canny operator -> geometrically constrained matching of edge points -> 3-D digitization of unmatched edges -> refining digitized edges using the Canny operator -> computing the height of buildings -> object modelling and visualization]
Fig 7: Block diagram of the site model generation system.
3. RESULTS AND DISCUSSION 
3.1 Results of Relative Orientation 
Table 1 shows the results of the relative orientation of the multiview images. Fourteen conjugate points were identified on the overlapping images. Five points were used to compute the residual orientation parameters, and the results are reported for the remaining conjugate points. Starting from the image position in the near-nadir image, the image position of each conjugate point is estimated in the second image. The estimated positions are compared against the actual positions; the difference between

Thank you.