The camera system EYESCAN used within our investigations was developed by KST GmbH in cooperation with the German Aerospace Centre. The system, which is depicted in Figure 3, is based on a CCD line mounted parallel to the rotation axis of a turntable. Thus, the height of the panoramic image is determined by the number of detector elements of the CCD line. In contrast, the width of the image is related to the number of single image lines, which are captured during the rotation of the turntable while collecting the panorama. In our experiments, this resulted in an image height of 10,200 pixels, while during a 360° turn of the camera more than 40,000 columns were captured. Since the CCD is an RGB triplet, true color images are available after data collection. The radiometric resolution of each channel is 14 bit, the focal length of the camera is 60 mm, and the pixel size is 7 µm.
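As a quick plausibility check of these figures, the short sketch below derives the vertical field of view and the angular sampling per column from the stated sensor parameters. The variable names and the assumption that the image cylinder radius equals the 60 mm principal distance are ours, not part of the original system description.

```python
import math

# Camera parameters as stated in the text; the derived quantities below
# are our own back-of-the-envelope checks.
n_pixels = 10_200        # detector elements of the CCD line
pixel_size = 7e-6        # [m] pixel pitch
focal_length = 60e-3     # [m] principal distance c
n_columns = 40_000       # columns per 360° turn (approximate lower bound)

# Length of the CCD line and resulting vertical field of view
line_length = n_pixels * pixel_size                       # ~0.0714 m
vert_fov = 2 * math.degrees(math.atan(line_length / 2 / focal_length))

# Horizontal angular step per column and its metric equivalent on the
# image cylinder of radius r = c (this is the resolution d_m used later)
d_m_rad = 2 * math.pi / n_columns                          # ~0.157 mrad
d_m_metric = focal_length * d_m_rad                        # ~9.4 µm

print(f"vertical FOV     : {vert_fov:.1f} deg")            # ~61.5 deg
print(f"angular step d_m : {math.degrees(d_m_rad):.4f} deg/column")
print(f"on cylinder      : {d_m_metric * 1e6:.1f} µm/column")
```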
2.2 Geometric Processing 
In order to map the visible faces of the buildings to the respec- 
tive image patches of the panoramic scene, corresponding image 
coordinates have to be provided for the 3D object points of the 
building models. In accordance with the processing of standard perspective images, the object coordinates X_k are linked to the corresponding camera coordinates x_k based on the well known collinearity equation
$x_k = \lambda \cdot R^T \cdot (X_k - X_0)$
which, with X_0 denoting the position of the projection centre, R the rotation matrix of the camera and λ a scale factor, defines a transformation between the two Cartesian coordinate systems. In accordance with the approach described by
(Schneider & Maas 2003), a cylindrical coordinate system is 
additionally introduced to simplify the transformation of pano- 
ramic imagery. In this system, which is overlaid on the picture of the EYESCAN camera in Figure 3, the parameter ξ represents the scan angle of the camera with respect to the first column of the scene. The radius of the image cylinder is given by the parameter r. In the ideal case, this parameter is equal to the principal distance of the camera. The parameter η represents the height of an image point above the xy-plane. Thus, this parameter is related to the vertical distance of the object point from the camera station. The transformation between the cylindrical camera coordinates r, ξ, η and the Cartesian camera coordinates is then given by
$x_k = [x \;\; y \;\; z]^T = [r \cdot \cos\xi \;\; -r \cdot \sin\xi \;\; \eta]^T$
In the final step, the transformation between the cylindrical 
camera coordinates and the pixel coordinates m,n is defined by 
$m = \frac{\xi}{d_m}, \qquad n = \frac{\eta}{d_n} + n_0$
Similar to the processing of frame imagery, the pixel coordinate n in the vertical direction is determined by the corresponding component n_0 of the principal point and the vertical resolution d_n, which is defined by the pixel size. In contrast, the horizontal resolution d_m required to compute the pixel coordinate m in the horizontal direction is defined by the rotation angle of the CCD line per column during collection of the panoramic image.
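To make the mapping chain explicit, the sketch below strings the three steps together: collinearity transformation into the camera frame, projection onto the image cylinder, and conversion to pixel coordinates. It is a minimal illustration under our own assumptions; the function name, the use of atan2 to recover the scan angle and the chosen sign convention are ours and not taken from the paper.

```python
import numpy as np

def object_to_pixel(X, X0, R, c, d_n, d_m, n0):
    """Map a 3D object point X to panoramic pixel coordinates (m, n).

    X, X0 : object point and projection centre in world coordinates
    R     : 3x3 rotation matrix of the exterior orientation
    c     : principal distance (= cylinder radius r in the ideal case)
    d_n   : vertical resolution (pixel size of the CCD line)
    d_m   : horizontal resolution (rotation angle per column, radians)
    n0    : vertical component of the principal point (pixels)
    """
    # Collinearity: world -> Cartesian camera coordinates
    x, y, z = R.T @ (np.asarray(X) - np.asarray(X0))

    # Projection onto the cylinder of radius r = c: scan angle xi and
    # height eta of the intersection of the image ray with the cylinder
    xi = np.arctan2(-y, x) % (2 * np.pi)      # angle from the first column
    eta = c * z / np.hypot(x, y)              # ray scaled to the cylinder

    # Cylindrical -> pixel coordinates
    m = xi / d_m
    n = eta / d_n + n0
    return m, n

# Hypothetical usage with the sensor values quoted above
m, n = object_to_pixel(X=[25.0, -4.0, 3.0], X0=[0.0, 0.0, 1.6], R=np.eye(3),
                       c=0.060, d_n=7e-6, d_m=2 * np.pi / 40_000, n0=5_100)
```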
The required exterior orientation parameters of the scene were 
computed by spatial resection. In order to allow for an interactive measurement of the required control points, the available 3D building models were approximately mapped to the panoramic image. Based on this approximate mapping, a sufficient number of visible faces, which were distributed over the complete scene, were selected and manually measured in the image. In principle, the geometric quality of the EYESCAN camera allows for point measurement accuracies at the sub-pixel level (Schneider & Maas 2003). Still, in our experiments only accuracies of several pixels could be achieved. In our opinion, this results from the fact that the fit between model and image after spatial resection is not only influenced by the geometric quality of the image, but also by the accuracy, level of detail and visual quality of the available 3D building models used for the provision of the control points. While our urban model provides a reliable representation of the overall shape of the visible buildings, the amount of detail is limited, especially for the facades. As discussed earlier, this situation is typical for 3D city model data sets, which are collected from airborne data.
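The spatial resection itself can be set up as a small nonlinear adjustment over the panoramic projection sketched above. The following example, which reuses the hypothetical object_to_pixel function from the previous listing and relies on SciPy's generic least-squares solver rather than on the adjustment actually used for the paper, recovers the six exterior orientation parameters from a handful of synthetic control points.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Reuses object_to_pixel() from the previous listing.

def resection_residuals(params, points_3d, pixels, c, d_n, d_m, n0):
    """Pixel residuals of the panoramic projection for one parameter set.

    params : [X0, Y0, Z0, omega, phi, kappa] of the exterior orientation
    """
    X0, angles = params[:3], params[3:]
    R = Rotation.from_euler("xyz", angles).as_matrix()
    res = []
    for X, (m_obs, n_obs) in zip(points_3d, pixels):
        m, n = object_to_pixel(X, X0, R, c, d_n, d_m, n0)
        res.extend([m - m_obs, n - n_obs])
    return np.asarray(res)

# Interior orientation as quoted above; synthetic control points stand in
# for the facade points taken from the 3D building models.
cam = dict(c=0.060, d_n=7e-6, d_m=2 * np.pi / 40_000, n0=5_100)
true = np.array([2.0, -1.0, 1.6, 0.01, -0.02, 0.05])
points_3d = np.array([[20.0, -15.0, 4.0], [-18.0, -9.0, 7.0],
                      [-22.0, 14.0, 3.0], [5.0, -25.0, 9.0]])
R_true = Rotation.from_euler("xyz", true[3:]).as_matrix()
pixels = [object_to_pixel(X, true[:3], R_true, **cam) for X in points_3d]

sol = least_squares(resection_residuals, x0=np.zeros(6),
                    args=(points_3d, pixels, *cam.values()))
print(np.round(sol.x, 3))   # should reproduce the six parameters in `true`
```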
  
Figure 4: Building models mapped to panoramic image. 
As an example, Figure 4 demonstrates the geometric quality of the mapping process based on the result of the spatial resection for a part of the panoramic image.
  
Figure 5: Scenes generated using buildings with texture from 
panoramic image. 