IMAGE ACQUISITION CONSTRAINTS FOR
PANORAMIC FRAME CAMERA IMAGING
H. Kauhanen *, P. Rönnholm
* Aalto University School of Engineering, Department of Surveying and Planning, Finland —
(heikki.kauhanen, petri.ronnholm)@aalto.fi
* Corresponding author.
Commission III, WG III/5
KEY WORDS: Close range photogrammetry, Perspective error, Camera geometry, Simulation, Concentric, Eccentric
ABSTRACT:
The paper describes an approach to quantify the amount of perspective error produced by an offset of projection centres in a panoramic
imaging workflow. We have limited this research to panoramic workflows in which several sub-images are taken with a planar image
sensor and then stitched together into a large panoramic image mosaic. The aim is to simulate how large the offset can be before it
introduces significant error into the dataset. The method uses geometrical analysis to calculate the error in various cases. Constraints on
shooting distance, focal length and the depth of the area of interest are taken into account. Considering these constraints, it is possible
to safely use even a poorly calibrated panoramic camera rig with a noticeable offset in the projection centre locations. The aim is to create
datasets suited for photogrammetric reconstruction. Similar constraints can also be used for finding recommended areas on the image
planes for automatic feature matching and thus improve the stitching of sub-images into full panoramic mosaics.
The results are mainly intended for long focal length cameras, where the offset of the projection centres of the sub-images can appear
significant but the shooting distance is also long. We show that in such situations the offset of the projection centres introduces only
negligible error when stitching a metric panorama. Even though the main use of the results is with cameras of long focal length, they
are applicable to all focal lengths.
1. INTRODUCTION
Panoramic images are considered to be images or image sequences
with a large field of view (FoV). Typically, the FoV of panoramic
images ranges from 100 degrees to a complete 360 degrees.
Applications of panoramic images are various, such as virtual
museums (Zara, 2004), virtual travel (Yan et al., 2009),
architecture visualizations (Hotten and Diprose, 2000), 3D
object reconstruction (Luhmann and Tecklenburg, 2004), and
robot navigation (e.g., Yen and MacDonald, 2002; Briggs et al.,
2006), just to name a few.
Several methods for creating panoramic images exist, such as
using fish-eye or other large-FoV lenses, stitching several sub-
images (e.g., Deng and Zhang, 2003), collecting data with a
rotating line camera (Huang et al., 2003) or reflecting the captured
image through a rotating, spherical, conical, hyperbolic or
parabolic mirror (e.g., Svoboda et al., 1998; Gaspar and Victor,
1999; Nakao and Kashitani, 2001; Fernandes et al., 2006; Fan
and Qi-dan, 2009). In addition, similar methods can be used for
the creation of extremely large images using low resolution
cameras, even if the FoV does not exceed 100 degrees (Kopf et
al., 2007). Such an approach typically requires a long focal length
(Kauhanen et al., 2009). In this article, however, we discuss
only stitched panoramic images.
Usually, panoramic imaging in photogrammetric applications
calls for a tedious calibration setup to eliminate any geometric
errors. A metric panorama is considered to require a stable
panoramic rig with precise adjustments in order to place the
projection centre accurately at the correct location. Such a concentric
imaging setup ensures that the perspectives of all sub-images are
identical (Pöntinen, 2004). Only then is it possible to construct a
panoramic image that meets the criteria of an ideal geometry.
However, concentric imaging is not always possible or even
desired. For example, if camera clusters are preferred for
simultaneous sub-image capture, it is physically difficult to build
a system that fulfils the requirements of concentric imaging.
Examples of such camera clusters are the Dodeca 2360 camera
system, OPTAG and the DVS Panoramic Viewing System.
Camera-based geometric distortions of sub-images can be
calibrated and corrected (Brown, 1971). In contrast, if sub-
images are not acquired concentrically, perspective differences
remain. Such perspective errors cannot be corrected without a
complete 3D model of the scene. The amount of perspective
difference defines how well sub-images can be stitched together
into a seamless panoramic image. In some cases, however, even
if the perspective differences of the sub-images are large, stitching
can be done with acceptable accuracy.
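As a rough guide (our own illustrative sketch, not a formula from this paper), the perspective difference can be estimated from the imaging geometry: a projection-centre offset e shifts the image of a point at distance D by approximately f·e/D, so the stitching error over a depth of interest d is roughly f·e·d/(D·(D+d)). The following Python snippet, with hypothetical variable names and a small-angle approximation, illustrates this estimate.

```python
def parallax_from_offset(focal_length_mm, offset_mm, distance_mm, depth_mm):
    """Approximate image-plane parallax (mm) caused by a projection-centre
    offset, measured between points at the near and far edges of the depth
    of interest. Small-angle approximation; for illustration only."""
    near = distance_mm
    far = distance_mm + depth_mm
    # An offset e shifts the image of a point at distance D by about f * e / D.
    shift_near = focal_length_mm * offset_mm / near
    shift_far = focal_length_mm * offset_mm / far
    # The stitching error is the difference between the two shifts.
    return abs(shift_near - shift_far)

# Example: 300 mm lens, 20 mm offset, 50 m shooting distance, 5 m depth.
error_mm = parallax_from_offset(300.0, 20.0, 50000.0, 5000.0)
print(f"parallax = {error_mm:.4f} mm")   # about 0.011 mm on the sensor
```

With an assumed pixel pitch of about 6 µm, this example corresponds to roughly two pixels, which indicates why a long shooting distance can make even a visually large offset tolerable.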
In this paper, we describe a simulation method to accurately
quantify the error introduced by offsets of the projection centres in a
panoramic imaging process. The paper is motivated by our
previous work with long focal length panoramic images, where
the shooting distance was longer than in typical close range
photogrammetric applications. That work yielded good results,
and we came to the conclusion that we need to be able to calculate
beforehand the perspective errors caused by an eccentric rotation of the
projection centres.
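For instance, the same approximation can be inverted to estimate, before image acquisition, how large an offset could be tolerated for a given error budget. This is a minimal sketch under the assumptions stated above, not the simulation method developed in this paper.

```python
def max_offset_mm(focal_length_mm, distance_mm, depth_mm, max_error_mm):
    """Largest projection-centre offset (mm) whose approximate parallax
    stays below max_error_mm, i.e. the inverse of the estimate above."""
    near = distance_mm
    far = distance_mm + depth_mm
    return max_error_mm * near * far / (focal_length_mm * depth_mm)

# Example: keep the error below one 6 micrometre pixel with a 300 mm lens,
# a 50 m shooting distance and a 5 m deep area of interest.
print(f"max offset = {max_offset_mm(300.0, 50000.0, 5000.0, 0.006):.1f} mm")  # ~11 mm
```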
[The remainder of this column is truncated in the source text. The surviving fragments discuss the perspective error caused by non-concentric projection centres, a simulation in which the projection centre rotates along a path at a given offset, and Figure 1, which illustrates this geometry.]