3. RESULTS AND DISCUSSION

3.1 Relative orientation of multiview images

Conjugate points identified on the images are used for computation of the relative orientation parameters, and the results are shown in Table 1. For each conjugate point the image position in the other view is estimated, and the difference between
the actual and estimated positions is reported in the line and pixel directions. The average values achieved are 0.022 and -0.001 pixels in the line and pixel directions respectively, and the standard deviations are 1.392 and 0.99 pixels respectively.
Table 1. Results of relative orientation of multi-view Cartosat-2 images (Washington)

Actual     Actual     Estimated   Estimated   Residual error (pixel units)
line no    pixel no   line no     pixel no    Along track   Across track
18000.6    154.037    18000.6     153.967      0.0           0.07
18164.3    509.96     18166.1     507.942     -1.8           2.018
18234.6    507.42     18234.2     508.151      0.4          -0.731
18216.2    875.47     18213.5     874.369      2.7           1.101
18235.6    1043.02    18234.8     1043.31      0.8          -0.29
18247.2    1134.07    18248.7     1134.07     -1.5           0.0
18285.1    1429.53    18285.0     1429.88      0.1          -0.35
18528.4    1966.17    18529.6     1967.36     -1.2          -1.19
18518.4    2135.51    18517.7     2136.15      0.7          -0.64
Std dev                                        1.392         0.99
Average                                        0.022        -0.001
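The summary statistics in Table 1 follow directly from the actual and estimated image coordinates. The short script below reproduces them as an illustrative check only; NumPy is assumed and the table values are hard-coded.

```python
import numpy as np

# Actual and estimated (line, pixel) positions of the conjugate points from Table 1
actual = np.array([
    [18000.6, 154.037], [18164.3, 509.96], [18234.6, 507.42],
    [18216.2, 875.47], [18235.6, 1043.02], [18247.2, 1134.07],
    [18285.1, 1429.53], [18528.4, 1966.17], [18518.4, 2135.51]])
estimated = np.array([
    [18000.6, 153.967], [18166.1, 507.942], [18234.2, 508.151],
    [18213.5, 874.369], [18234.8, 1043.31], [18248.7, 1134.07],
    [18285.0, 1429.88], [18529.6, 1967.36], [18517.7, 2136.15]])

residuals = actual - estimated                        # along-track (line) and across-track (pixel)
print("average:", residuals.mean(axis=0))             # approx [ 0.022, -0.001]
print("std dev:", residuals.std(axis=0, ddof=1))      # approx [ 1.392,  0.99 ] (sample std dev)
```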
3.2 Comparison between rational polynomial coefficient
and physical sensor model
The 'step and stare' mode of image acquisition is an asynchronous mode of imaging. It is observed that in this mode of acquisition the difference between the sensor model derived image position and the rational polynomial coefficient derived image position for the same object point is of the order of 0.5 pixels.
Third order rational polynomial coefficients are fitted in terrain independent mode (Tao, 2002). In this mode the image positions of a grid of equally spaced ground co-ordinates are computed using the physical sensor model. At least 200 points, with height varying from the minimum to the maximum value in a suitable step size, are computed, and these points are used to fit the rational polynomial coefficients. The image positions of another set of ground points, having no point in common with the set used for fitting the coefficients, are then computed using both the physical sensor model and the rational function model. The plot of the difference for the near nadir image is shown in Fig. 8(a). The RMS error is 0.1244 and 0.090 pixel units in the line and pixel directions respectively for the near nadir image.
The plot for the image with a view angle of 26 degrees is shown in Fig. 8(b). The blue line represents the residuals in the line direction and the red line represents the residuals in the pixel direction. The RMS error is 0.6910 and 0.5043 pixel units in the line and pixel directions respectively. These values are high compared to the residual errors between physical sensor model derived and RPC derived positions for satellites like IRS-P6, where the mode of imaging is synchronous (Nagasubramaniam, 2007; Liang, 2006).
Fig. 8(a) Plot for near nadir image

Fig. 8(b) Plot for image with 26 deg in-track view angle

Fig. 8 Plot of difference between rational polynomial coefficient derived and physical sensor model derived image positions
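The terrain independent fitting described above can be sketched as a linear least squares problem. The snippet below is only an outline under stated assumptions: `physical_sensor_model` is a placeholder for the rigorous model that maps ground coordinates to image line/pixel, coordinates are already normalized to [-1, 1], and NumPy is assumed.

```python
import numpy as np

def cubic_terms(X, Y, Z):
    """The 20 monomials up to third order used in a standard RPC polynomial."""
    return np.stack([
        np.ones_like(X), X, Y, Z, X*Y, X*Z, Y*Z, X*X, Y*Y, Z*Z,
        X*Y*Z, X**3, X*Y*Y, X*Z*Z, X*X*Y, Y**3, Y*Z*Z, X*X*Z, Y*Y*Z, Z**3],
        axis=1)

def fit_rfm_1d(img, X, Y, Z):
    """Fit img = N(X,Y,Z) / D(X,Y,Z) with the first denominator coefficient fixed to 1."""
    T = cubic_terms(X, Y, Z)
    # img * (1 + d . T[:, 1:]) = n . T  ->  linear system in the 39 unknown coefficients
    A = np.hstack([T, -img[:, None] * T[:, 1:]])
    coeffs, *_ = np.linalg.lstsq(A, img, rcond=None)
    return coeffs[:20], np.concatenate(([1.0], coeffs[20:]))

def eval_rfm(num, den, X, Y, Z):
    T = cubic_terms(X, Y, Z)
    return (T @ num) / (T @ den)

# Grid of normalized ground points spanning the terrain height range (terrain independent mode)
X, Y, Z = np.meshgrid(np.linspace(-1, 1, 7), np.linspace(-1, 1, 7), np.linspace(-1, 1, 5))
X, Y, Z = X.ravel(), Y.ravel(), Z.ravel()                 # 245 points, more than the 200 minimum
line, pixel = physical_sensor_model(X, Y, Z)              # placeholder for the rigorous model

rpc_line = fit_rfm_1d(line, X, Y, Z)
rpc_pixel = fit_rfm_1d(pixel, X, Y, Z)

# Independent check points: compare RPC-derived and sensor-model-derived positions
Xc, Yc, Zc = np.random.uniform(-1, 1, (3, 200))
lc, pc = physical_sensor_model(Xc, Yc, Zc)
rms_line = np.sqrt(np.mean((eval_rfm(*rpc_line, Xc, Yc, Zc) - lc) ** 2))
rms_pixel = np.sqrt(np.mean((eval_rfm(*rpc_pixel, Xc, Yc, Zc) - pc) ** 2))
```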
3.3 Height Accuracies, Object Modelling and Visualization
The derived heights, along with the building outlines and the orthoimage, are used as input for object modelling and visualization. The open source software Blender is used for creating the triangular mesh and adding texture to the buildings. The texture is synthetic and symbolic; it may or may not represent the actual structure of the building. Fig. 9 shows a view of the generated site model. The height accuracies are evaluated with reference to LIDAR data available from the Google Earth website. We observed that the average error is 0.8 m and the standard deviation is 1.8 m.
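As an illustration of the object modelling step, the minimal sketch below creates one extruded building block with Blender's Python API (bpy). It assumes the footprint coordinates and derived height are already in a local metric frame and omits texturing; it is only an indicative example, not the actual modelling pipeline used here.

```python
import bpy

def add_building(name, footprint, height):
    """Create a simple prism: a digitized footprint extruded to the derived height."""
    n = len(footprint)
    verts = [(x, y, 0.0) for x, y in footprint] + [(x, y, height) for x, y in footprint]
    faces = [list(range(n)),                      # floor
             list(range(n, 2 * n))]               # roof
    faces += [[i, (i + 1) % n, n + (i + 1) % n, n + i] for i in range(n)]  # walls
    mesh = bpy.data.meshes.new(name)
    mesh.from_pydata(verts, [], faces)
    mesh.update()
    obj = bpy.data.objects.new(name, mesh)
    bpy.context.collection.objects.link(obj)
    return obj

# Example: a rectangular footprint (local metres) extruded to a derived height of 12.5 m
add_building("building_001", [(0, 0), (20, 0), (20, 12), (0, 12)], 12.5)
```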
Fig. 9 A view of the generated site model of a portion of Washington city
4. CONCLUSION
The paper presents an overall schema and results for the generation of a site model from spaceborne multiview/stereo images. The results are shown for Cartosat-2 images. The relative orientation procedure obviates the need for precise ground control if relative measurements are suitable for the desired application. The system is designed in a generic way to accommodate images from other similar satellites if rational polynomial coefficients are available. At present, manual digitization is the starting point for capturing the outlines of the buildings; this process too can be automated, as the buildings are detected fairly well in the derived normalized DSM. Refinement of edges with the Canny operator improves the edge localization.
Geometrically constrained image matching procedure is