International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B5. Istanbul 2004
Figure 3 Visualization of initial sensor position and orientation
uncertainty
In order to estimate this angle we use a query scheme instead
of classical photogrammetric techniques at the pixel level. The
query scheme is a two-part process: the first part uses a
single-object query scheme, while the second part processes a
multi-object configuration. For further information on our
single-object query approach the reader can refer to
[Stefanidis et al., 2003], and for the multi-object queries to
[Stefanidis et al., 2002]. After the estimation of the parameters
we run a least squares adjustment and produce accurate
coordinates for the camera position and rotation.
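The least squares step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the observation equations have already been linearized into a design matrix A and residual vector b (both hypothetical here), and performs one Gauss-Newton update of the six pose parameters.

```python
import numpy as np

def adjust_pose(A, b, x0):
    """One Gauss-Newton least-squares update of the six pose
    parameters (X0, Y0, Z0, omega, phi, kappa).

    A  : (n, 6) design matrix of linearized observation equations
    b  : (n,)   observation residual vector
    x0 : (6,)   approximate pose delivered by the query scheme
    """
    dx, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0 + dx

# Hypothetical numbers: 4 image observations -> 8 equations.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
b = rng.standard_normal(8)
x0 = np.zeros(6)
x = adjust_pose(A, b, x0)
```

In practice this update would be iterated, relinearizing A and b around the new estimate until convergence.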
Figure 4 Representation of the anchor frame procedure
In figure 4 we can see a representation of how the anchor frame
orientation scheme works. The top image is the one captured by
our sensor; in the middle image we can see the panorama
created with the help of the virtual model. The highlighted
portion of the middle image depicts the position of the captured
image as computed using the single- and multi-object queries.
Finally, the bottom image shows the sensor's location and
orientation after precise matching is performed on the query
results.
In intermediate frame orientation, which is the focus of this
paper, we aim to recover the orientation of intermediate frames
by orienting them relative to the nearest anchor frames. In order
to accomplish this goal we developed a framework to translate
object representation variations (i.e. changes in an object's size,
location, and orientation within an image frame relative to the
same object's image in an anchor frame) into orientation
variations (i.e. changes in the orientation parameters of the
corresponding frame relative to the anchor frame). Thus we
develop a dynamic image orientation scheme that allows us to
recover the image orientation of every frame in our feed using
a few select oriented anchor frames. The nature of our data
collection modus operandi (sensors roaming urban scenes)
implies that only small differences will occur in sensor location
and rotation between consecutive frames.
This process is visualized in figure 5, where we see a portion of
a three-dimensional virtual model of an urban scene. Using
anchor frame orientation in an orientation-through-queries
process we have already determined the orientation of the sensor
in position A. Using the second step we will determine the
orientation in position B. In figure 6 we can see the two
captured images, the left image captured in position A and the
right image captured in position B. Our objective in this case is
to compute the relative orientation between the two captured
images and, using the orientation information about position A,
to compute the new position B.
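The chaining of a known absolute pose at A with a relative orientation A to B can be sketched as follows. The convention used here is an assumption, not taken from the paper: camera coordinates are x_cam = R @ (X_world - C), and (R_rel, t_rel) maps camera-A coordinates to camera-B coordinates.

```python
import numpy as np

def compose_pose(R_A, C_A, R_rel, t_rel):
    """Absolute pose of position B from the absolute pose of A
    and the relative orientation A -> B.

    Assumed convention (not from the paper):
    x_cam = R @ (X_world - C); (R_rel, t_rel) maps camera-A
    coordinates to camera-B coordinates.
    """
    R_B = R_rel @ R_A          # chain the two rotations
    C_B = C_A - R_B.T @ t_rel  # shift the projection centre
    return R_B, C_B

# Hypothetical check with identity rotations: B is offset from A
# by the (rotated) relative translation.
R_B, C_B = compose_pose(np.eye(3), np.array([1., 2., 3.]),
                        np.eye(3), np.array([0., 0., -5.]))
print(C_B)  # -> [1. 2. 8.]
```

Any world point then projects consistently through either the chained pose (R_B, C_B) or the two-step path via camera A.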
Figure 6 Consecutive frames captured from sensor, with the
facade of a building delineated in them.
3. PROPOSED APPROACH
In this section we analyze the procedure that allows us to
compute the relative orientation between two consecutive
frames. For that procedure we assume that we have absolute
orientation information for the first image, and also that we
know the real world coordinates of the objects that appear in it.
We also assume that for each building facade we know its
corner points in both images. Our observations are object
facades, which we consider to be planar elements. We follow a
two-step procedure: the first step computes the rotation angles
between the two sensor positions, while the second computes
the translation between the two sensors.
For the computation of the rotation angles we use vanishing
points, a technique that can work well in our best-case scenario.
We can assume that the dominant object in the image is the
planar facade of a building, with its vertical edges parallel to
the Y axis. The dimensions of the facade (height and width)
and the position of the sensor are depicted in figure 7.
As shown in figure 7, the three rotation angles can be
recovered from the vanishing point P of the object's edges. In
figure 8 we can see the corresponding image configuration.
The rotation angles are computed from the image coordinates
(x_P, y_P) of the vanishing point P and the principal distance c
through the following equations:

tan κ = -x_P / y_P
tan ω = y_P / c
tan φ = -x_P / c
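The vanishing point itself can be computed from the facade's delineated edges: two image lines through parallel facade edges intersect at the vanishing point. A minimal sketch, with hypothetical corner coordinates, using cross products in homogeneous coordinates:

```python
import numpy as np

def vanishing_point(p1, p2, p3, p4):
    """Vanishing point of the two image lines p1-p2 and p3-p4
    (e.g. the top and bottom edges of a building facade)."""
    to_h = lambda p: np.array([p[0], p[1], 1.0])  # homogeneous coords
    l1 = np.cross(to_h(p1), to_h(p2))  # line through p1 and p2
    l2 = np.cross(to_h(p3), to_h(p4))  # line through p3 and p4
    v = np.cross(l1, l2)               # intersection of the lines
    return v[:2] / v[2]                # back to image coordinates

# Hypothetical facade: top edge rising, bottom edge falling,
# so the two edges converge to the right of the image.
v = vanishing_point((0, 0), (4, 1), (0, 3), (4, 2))
print(v)  # -> [6.  1.5]
```

If the two edges are exactly parallel in the image, v[2] is zero (vanishing point at infinity), which in practice signals a degenerate, fronto-parallel view.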
After the determination of the rotation angles follows the
computation of the translation. We use the previously computed
rotation angles to form the rotation matrix R_3x3 and compute
the translation between the two sensor positions through the
collinearity relation

X_i = X_0 + λ_i · R_3x3 · x_i

where X_i = (X_i, Y_i, Z_i)ᵀ are the known world coordinates of
a facade corner point, x_i = (x_i, y_i, -c)ᵀ its measured image
coordinates in the second frame, λ_i a scale factor, and
X_0 = (X_0, Y_0, Z_0)ᵀ the unknown position of the sensor.
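With the rotation recovered, the translation can be set up as a linear least squares problem: each facade corner with known world coordinates and measured image coordinates constrains the sensor position. A minimal sketch, not the authors' implementation, assuming the convention x_cam = R @ (X_world - C):

```python
import numpy as np

def sensor_position(R, c, img_pts, world_pts):
    """Recover the sensor position C from a known rotation R,
    principal distance c, image points (x_i, y_i) and their
    world coordinates (X_i, Y_i, Z_i) on the facade.

    Each point gives C + lambda_i * d_i = X_i with
    d_i = R.T @ (x_i, y_i, -c); C and the scales lambda_i are
    solved in one linear least squares system.
    """
    n = len(img_pts)
    A = np.zeros((3 * n, 3 + n))
    b = np.zeros(3 * n)
    for i, ((x, y), X) in enumerate(zip(img_pts, world_pts)):
        d = R.T @ np.array([x, y, -c])
        A[3*i:3*i+3, 0:3] = np.eye(3)  # coefficients of C
        A[3*i:3*i+3, 3 + i] = d        # coefficient of lambda_i
        b[3*i:3*i+3] = X
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]  # the sensor position C

# Hypothetical check: identity rotation, c = 1, true position
# (1, 2, 10); the image points were projected from that pose.
R = np.eye(3)
world = [np.array([2., 3., 0.]),
         np.array([5., 0., 0.]),
         np.array([0., 6., 0.])]
img = [(0.1, 0.1), (0.4, -0.2), (-0.1, 0.4)]
C = sensor_position(R, 1.0, img, world)
print(C)  # -> approximately [1. 2. 10.]
```

With n corner points the system has 3n equations in 3 + n unknowns, so two non-degenerate points already suffice and additional corners add redundancy for the adjustment.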