The spacecraft has three modes of image acquisition, termed the
spot, paintbrush, and multiview modes. The
spot mode covers a swath of 9.6 km with strip length varying
from 6 to 290 km. The paintbrush mode provides a wider
combined swath by imaging adjacent strips from the same orbit. In
multiview mode, the same area is imaged from two or three
different view angles from the same orbit as shown in Fig. 1.
This mode is useful for computation of height of the objects.
However, due to the continuous variation of the pitch rate, the along-
track resolution and the base-to-height ratio of multiview image
acquisition vary for each imaged line. These factors make
modelling the imaging geometry of the spacecraft complex. A
physical sensor model is developed that takes into account the
dynamic nature of the Cartosat-2 imaging process.
Fig. 1: Cartosat-2 Multiview Imaging Mode
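For illustration, the height computation enabled by the multiview mode follows the standard base-to-height relation, Δh ≈ Δp / (B/H), where Δp is the parallax difference and B/H is the base-to-height ratio (which, as noted above, varies per imaged line). A minimal sketch with illustrative numbers, not Cartosat-2 calibration values:

```python
# Height from stereo parallax via the base-to-height (B/H) ratio.
# Illustrative values only; not actual Cartosat-2 parameters.

def height_from_parallax(dp_m, base_to_height):
    """Relative height ~ parallax difference divided by B/H."""
    return dp_m / base_to_height

# Example: 0.8 m parallax difference with B/H = 0.6
dh = height_from_parallax(0.8, 0.6)
print(round(dh, 2))  # 1.33 m relative height
```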
2.2 Physical Sensor Model for Cartosat-2
The Cartosat-2 spacecraft is equipped with a satellite positioning
system, star sensors, and gyros that provide position and
orientation information at regular time intervals. The physical
sensor model utilizes this information in a systematic and
coherent manner. The model does not approximate the shape of
the orbit. The osculating nature of the orbit is accounted for by
converting the position and velocity parameters to slowly
varying Keplerian elements, which are interpolated to obtain the
position at the time of imaging. The orientation information is
available as a set of quaternions, which are converted to Euler
angles. The residual orientation error is modelled as a bias in roll,
pitch, and yaw over a short segment of imaging. When
precise control points are not available, the positional accuracy
is improved using control points identified from Enhanced
Thematic Mapper (ETM) orthoimages and the SRTM DEM.
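The element conversion and interpolation step can be sketched as follows. The actual model converts the full state vector to all six osculating Keplerian elements; this minimal sketch shows only the semi-major axis via the vis-viva relation and the interpolation of one sampled element between ephemeris epochs (function names are illustrative):

```python
import math

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def semi_major_axis(r_vec, v_vec):
    """Vis-viva relation: a = 1 / (2/|r| - |v|^2 / mu)."""
    r = math.sqrt(sum(c * c for c in r_vec))
    v2 = sum(c * c for c in v_vec)
    return 1.0 / (2.0 / r - v2 / MU)

def interp_element(t, samples):
    """Linearly interpolate a slowly varying element between
    bracketing ephemeris epochs given as (time, value) pairs."""
    for (t0, e0), (t1, e1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return (1 - w) * e0 + w * e1
    raise ValueError("imaging time outside ephemeris span")

# Near-circular LEO check: r = 7000 km at circular speed sqrt(MU/r)
r = (7000.0, 0.0, 0.0)
v = (0.0, math.sqrt(MU / 7000.0), 0.0)
print(round(semi_major_axis(r, v), 1))  # 7000.0 for a circular orbit
```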
The model is based on the well-known collinearity condition,
which states that the object position, the image position, and the
perspective centre lie on a straight line at the time of imaging.
Equation (1) represents the collinearity condition in
mathematical form.
\[
\begin{pmatrix} x \\ y \\ -f \end{pmatrix}
= s\,M
\begin{pmatrix} X - X_p \\ Y - Y_p \\ Z - Z_p \end{pmatrix}
\qquad (1)
\]
In equation (1), x and y represent the image plane coordinates, f
is the effective focal length of the imaging system, X, Y, Z are the
co-ordinates of the object point, and X_p, Y_p, Z_p are the co-ordinates
of the perspective centre at the time of imaging. The scale factor is
denoted by s, and M is the transformation matrix connecting the
image and the object space. The matrix M is formed by
multiplying a series of rotations connecting intermediate co-
ordinate systems.
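Equation (1) can be applied directly to project a ground point into the image: solving the third component for the scale s and substituting back yields x and y. A minimal sketch, using an identity attitude and illustrative numbers (in the real model M is the product of the rotations described above):

```python
def project(obj, pc, M, f):
    """Collinearity projection of equation (1):
    (x, y, -f)^T = s * M * (X - Xp, Y - Yp, Z - Zp)^T.
    obj, pc: object point and perspective centre co-ordinates;
    M: 3x3 rotation matrix; f: effective focal length."""
    d = [obj[i] - pc[i] for i in range(3)]
    u = [sum(M[r][c] * d[c] for c in range(3)) for r in range(3)]
    s = -f / u[2]                  # solve the third row for the scale s
    return s * u[0], s * u[1]      # image plane coordinates x, y

# Identity attitude, camera 500 km above the ground looking at nadir
M = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x, y = project((100.0, 200.0, 0.0), (0.0, 0.0, 500000.0), M, 0.7)
```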
2.3 Relative Orientation of Multiview Images
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B3, 2012
XXII ISPRS Congress, 25 August – 01 September 2012, Melbourne, Australia

The residual orientation error in the multiview mode of image
acquisition is highly correlated for the overlapping images, as
these images are acquired within a short interval of time from
the same orbit. Thus, it is possible to perform analytic relative
orientation of these images by considering the differential
orientation error as unknown. This approach has two significant
advantages: first, it is easy to identify the conjugate points in the
overlapping images; second, if precise ground control
points are not available, relatively oriented images can be used
for computation of the relative heights of the objects.
The developed approach to relatively orient the multiview
images is based on the coplanarity condition, which states that the
two perspective centres and the object point lie in the epipolar
plane. Mathematically, the coplanarity condition is expressed as
\[
[\, \mathbf{d}_1,\ \mathbf{d}_2,\ \mathbf{b} \,] = 0
\qquad (3)
\]
where [·, ·, ·] represents the scalar triple product, d_1 and d_2 are the
vectors joining the object point to the perspective centres of the
first and second images respectively, and b represents the vector
connecting two perspective centres. Equation (3) is linearized
with respect to differential roll, pitch and yaw values. Three
pairs of conjugate points are sufficient to compute differential
correction. The position of the perspective centres and the
orientation information is primarily obtained from
onboard/ground processed measurements supplied with the
images.
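The coplanarity residual of equation (3) is straightforward to evaluate; the paper's linearization with respect to differential roll, pitch, and yaw is not reproduced here. A minimal sketch of the residual itself (illustrative vectors):

```python
def triple(a, b, c):
    """Scalar triple product [a, b, c] = a . (b x c)."""
    bxc = (b[1] * c[2] - b[2] * c[1],
           b[2] * c[0] - b[0] * c[2],
           b[0] * c[1] - b[1] * c[0])
    return sum(a[i] * bxc[i] for i in range(3))

def coplanarity_residual(d1, d2, b):
    """Equation (3): the residual vanishes for a conjugate pair
    whose rays and base lie in one epipolar plane."""
    return triple(d1, d2, b)

# Coplanar configuration: both rays and the base lie in the z = 0 plane
b  = (100.0, 0.0, 0.0)       # base vector between perspective centres
d1 = (50.0, -500.0, 0.0)     # ray from the first perspective centre
d2 = (-50.0, -500.0, 0.0)    # ray from the second perspective centre
print(coplanarity_residual(d1, d2, b))  # 0.0 for a coplanar pair
```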
2.4 Computation of Rational Polynomial Coefficients
Over the past decade, rational function models have been used as
an alternative to physical sensor models. This is primarily due to the
fact that physical sensor models are complex: they need
information about the camera geometry and good understanding
of image acquisition process. On the other hand, rational
function models are easy to implement and supported by major
commercial satellite imagery providers. Moreover, using a
rational function model in place of the physical sensor model makes
the system truly sensor independent. However, it is important to
compare the results obtained with the physical sensor model and the
rational function model.
Rational polynomial coefficients are computed using the terrain-
independent approach (Tao, 2002). The physical sensor model
computes the orientation parameters as per the method
explained in previous section. The image space co-ordinate is
obtained for the given object space co-ordinate using the
linearized form of equation (1) (Mahapatra et al., 2004). The
image positions for a uniformly spaced grid of
object space co-ordinates are estimated using the physical
sensor model. The set of object points and corresponding image
positions are used to compute the rational polynomial
coefficients. The derived set of rational polynomial coefficients
is used to relate the image and object space. Since the rational
polynomial coefficients are used for further processing, it is
possible to use commercially available stereo images obtained
from satellites such as GeoEye-1, WorldView-1/2, and IKONOS
for site model generation.
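Once fitted, a derived RPC set maps normalized object coordinates to normalized image coordinates as the ratio of two third-order polynomials. The sketch below evaluates one such ratio; the 20-term monomial ordering follows the common RPC00B convention (an assumption here — vendors ship coefficients in their own metadata files), and the least-squares fit over the object-space grid is omitted:

```python
def rpc_terms(P, L, H):
    """The 20 monomials of the standard third-order RPC basis
    (ordering assumed to follow the common RPC00B convention)."""
    return [1.0, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H,
            P * L * H, L**3, L * P * P, L * H * H, L * L * P,
            P**3, P * H * H, L * L * H, P * P * H, H**3]

def rpc_eval(num, den, P, L, H):
    """Evaluate the ratio of two cubic polynomials in the
    normalized latitude (P), longitude (L), and height (H)."""
    t = rpc_terms(P, L, H)
    return (sum(n * ti for n, ti in zip(num, t)) /
            sum(d * ti for d, ti in zip(den, t)))

# Toy coefficients: numerator = P (identity mapping), denominator = 1
num = [0.0] * 20; num[2] = 1.0
den = [0.0] * 20; den[0] = 1.0
print(rpc_eval(num, den, 0.25, 0.5, 0.1))  # 0.25
```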
2.5 Image Matching Techniques
Digital image matching techniques are used for extraction of the
digital surface model and for automatic identification of conjugate
points. Digital image matching is considered a mathematically
ill-posed problem. It can be transformed into a well-posed
one by imposing a regularizing constraint. One possible
technique is to reduce the domain of probable matches by
introducing geometric constraints. In the building reconstruction
problem, line segments are automatically extracted and matched
using geometric and photometric constraints (Baillard C, Park,
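The idea of constraining the match domain can be sketched with normalized cross-correlation restricted to a segment of the conjugate row (a stand-in for an epipolar constraint; the patch size, score function, and search bounds here are illustrative, not the paper's actual matcher):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length 1-D patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def match_along_row(template, row, lo, hi):
    """Search only columns [lo, hi) of the conjugate row:
    the geometric constraint shrinks the match domain."""
    w = len(template)
    return max(range(lo, hi - w + 1),
               key=lambda c: ncc(template, row[c:c + w]))

row = [10, 10, 40, 90, 40, 10, 10, 10]   # one image row (toy values)
tpl = [40, 90, 40]                       # template patch to locate
print(match_along_row(tpl, row, 0, len(row)))  # 2
```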