[Left-column fragments lost in extraction: the abstract, a figure contrasting distortion modeling based on position in image space with modeling based on position in object space, and the opening of the taxonomy of imaging systems (single-viewpoint pinhole camera and projective model); the classification continues with class 2 below.]
2. … or error can be calculated as a function of their image coordinates (image space based). No information about the scene structure is needed. An example of such a projection is the commonly used model for optical distortion (a minimal sketch of such an image-space-based correction is given after this list). Strictly speaking, it does not have a single viewpoint, but the accuracy of this approximation is sufficient.
3. Class 3 is formed by projections with non-single viewpoints which do not preserve straight lines. The projection rays are not straight lines and they do not intersect in one point, but their envelope forms a locus of viewpoints in three dimensions, which is called the caustic surface or just caustic (Swaminathan et al., 2001). The resulting image distortions are called caustic distortions. Their exact determination is based on the position of the observed feature in object space (object space based). Therefore, information about the scene structure is necessary to determine the influence of the distortions. Imaging systems like wide-angle, fish-eye and catadioptric cameras with a design based on a spherical or conical reflector (Nayar et al., 2000), camera clusters, the strict model of lens distortion and multi-media geometry (e.g. air and water) belong to this class. In section 2 we will present the caustic of a multi-media system.
4. The combination of a non-single viewpoint with the preservation of straight lines is not possible under the valid physical laws.
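To make the image-space-based correction of class 2 concrete, the following minimal Python sketch applies the radial part of the commonly used (Brown) distortion model; the principal point and the coefficients k1, k2 are hypothetical illustration values, not parameters from this paper.

```python
import numpy as np

def undistort_radial(x_img, c=np.array([512.0, 384.0]), k1=-2.5e-7, k2=0.0):
    """Image-space-based correction (class 2): the shift applied to an
    image point depends only on its image coordinates, not on the scene.
    Principal point c and coefficients k1, k2 are hypothetical values."""
    d = x_img - c                      # offset from the principal point
    r2 = np.dot(d, d)                  # squared radial distance
    # radial polynomial of the commonly used (Brown) distortion model
    return c + d * (1.0 + k1 * r2 + k2 * r2**2)

# usage: correct a single measured image point
print(undistort_radial(np.array([900.0, 700.0])))
```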
The influence of image distortion in imaging systems with non-single viewpoints is object space based, that is, it cannot be determined or corrected without information about the scene structure. If no information about the scene structure is given, it is necessary to make some assumptions about it (e.g. (Swaminathan et al., 2003)). For the mapping process between object and image space, special algorithms are needed, for example the iterative algorithms for the multi-media geometry in (Maas, 1995), which can be very complex.
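As an illustration of why such multi-media algorithms are iterative (this is a simplified two-media sketch, not the algorithm of (Maas, 1995)), the crossing point of the broken ray on a planar air/water interface can be found numerically from Snell's law; the camera height, object depth and refraction indices below are assumed values.

```python
import math

def interface_crossing(dx, h_cam, h_obj, n_air=1.0, n_water=1.33, iters=60):
    """Find the horizontal offset s (0 <= s <= dx) where the broken ray
    crosses a planar air/water interface, such that Snell's law
    n_air*sin(t1) = n_water*sin(t2) holds. Solved by bisection; the
    geometry (2D, flat interface) is a simplifying assumption."""
    def snell_residual(s):
        sin1 = s / math.hypot(s, h_cam)              # incidence angle in air
        sin2 = (dx - s) / math.hypot(dx - s, h_obj)  # refraction angle in water
        return n_air * sin1 - n_water * sin2
    lo, hi = 0.0, dx
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if snell_residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# camera 2 m above the water, object point 1 m below, 3 m horizontal distance
print(interface_crossing(dx=3.0, h_cam=2.0, h_obj=1.0))
```

Because the residual depends on the unknown crossing point on each interface, the forward projection of every object point requires such an iteration, which is what makes the strict multi-media model comparatively expensive.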
Another method is to replace the non-single viewpoint by a single viewpoint, so that the mapping process can be modeled without any information about the object space. Swaminathan et al. presented in (Swaminathan et al., 2001) a method to determine a single viewpoint for catadioptric cameras by estimating the best location at which to approximate the caustic by a point. This method is based on the determination of singularities of the caustic.
A method which is used here to define a single viewpoint was first mentioned in (Wolff and Förstner, 2000) and was published in more detail in (Wolff and Förstner, 2001): the explicit strict physical model with non-single viewpoints is replaced and approximated by a less complex projective mapping with a single viewpoint. Therefore no prior information about the scene structure is needed. The estimation of the approximation is posed as the minimization of the back-projection error in image space. The introduced approximation is applicable to all kinds of optical, non-projective mappings. The degree of approximation can be increased by partitioning the object space into small segments and calculating a local approximation for every part of the object space separately. For this partitioning we only need the approximate extent of the observed area. The method presented in (Wolff and Förstner, 2001) was used for a matching process based on the trifocal tensor.
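A minimal sketch of how such a virtual single-viewpoint camera could be fitted: sample object points in the region of interest, project them with the strict non-single-viewpoint model (represented below by a hypothetical function strict_project), and estimate a projection matrix from the resulting 3D-2D correspondences. The DLT shown here only minimizes an algebraic error; the back-projection error in image space, as used in the paper, would be minimized in a subsequent nonlinear refinement.

```python
import numpy as np

def fit_virtual_camera(X_obj, x_img):
    """Fit a 3x4 projection matrix P to 3D-2D correspondences with the
    homogeneous DLT. P acts as the virtual single-viewpoint camera that
    approximates the strict, non-single-viewpoint mapping.
    X_obj: (N, 3) object points, x_img: (N, 2) image points, N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(X_obj, x_img):
        Xh = [X, Y, Z, 1.0]
        rows.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        rows.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)          # solution: right singular vector of the
    return Vt[-1].reshape(3, 4)          # smallest singular value (algebraic minimum)

# hypothetical usage, with strict_project standing in for the strict model:
#   x_img = np.array([strict_project(X) for X in X_obj])
#   P_virtual = fit_virtual_camera(X_obj, x_img)
```

Fitting one such matrix per object-space segment corresponds to the local, partitioned approximation described above.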
1.2 Goal of this paper 
In the context of non-projective projections, the paper makes the following key contributions:
• Under the background of the taxonomy of imaging systems we survey the non-projective multi-media geometry (the projecting rays pass through different media, e.g. air, perspex and water). It belongs to class 3, with a caustic as a non-single viewpoint.
• We present a new image point matching algorithm for 3D reconstruction using multiple views, based on geometric constraints alone. The method uses all images simultaneously. A virtual, projective camera is used for the image point matching process for multiple views with multi-media geometry. As we will see, this is implemented without significantly losing the quality of the strict model.
• Different quality tests for the approximation and the point matching algorithm are carried out.
1.3 Projective Geometry 
We use multiple-view geometry as it has been developed in recent years and is documented in (Hartley and Zisserman, 2003). Assuming mappings which preserve straight lines, the projection of object points X to image points x' can be modeled with the direct linear transformation (DLT):
x' ≅ P X = K R (I | -Z) X
for object points X represented in homogeneous coordinates. P is the projection matrix, K the calibration matrix, R the rotation matrix and Z the projection center of the camera.
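A small numerical illustration of this projection equation, with made-up calibration values:

```python
import numpy as np

# hypothetical calibration: focal length 1000 px, principal point (512, 384)
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 384.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # camera aligned with the world axes
Z = np.array([0.0, 0.0, -5.0])     # projection center

P = K @ R @ np.hstack([np.eye(3), -Z.reshape(3, 1)])   # P = K R (I | -Z)

X = np.array([0.2, 0.1, 1.0, 1.0])                     # homogeneous object point
x_h = P @ X
print(x_h[:2] / x_h[2])                                 # Euclidean image point
```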
2 GEOMETRY OF IMAGING SYSTEMS WITH NON 
SINGLE VIEWPOINTS 
2.1 Caustics as Loci of Viewpoints 
For the modeling of point projection we need two relations:
1. A projection relation predicting the image point x' of a given object point X.
2. An inverse projection relation, giving the mapping ray L in object space. In the case of a projective mapping, the light ray is built by the projection center and the image point. In the case of non-projective mappings, only that part of the broken ray which intersects the object point is important.
For classes 1 and 2 of our classification, the realization of these two relations is geometrically trivial. The mapping ray is built by the object point, or rather the image point, and the projection center. In the case of image distortion, a correction of the image points can be calculated image space based.
For class 3, relation 2 is also trivial. The projecting light rays change their direction because of refraction and reflection (see Fig. 1). These changes can be determined directly using Snell's refraction law and the reflection law. Relation 1 is not as trivial as the other cases, because the direction of the ray coming from the object point cannot be determined directly if only the object point and the physical pupil of the lens are given. But, as seen in Fig. 1, the envelope of the rays, which do not intersect in one point, forms a locus of viewpoints in three dimensions, the so-called caustic. The light rays in object space are tangent to this surface. Each point on the caustic surface represents the three-dimensional position of a viewpoint and its viewing direction. Thus, the caustic completely describes the geometry of the catadioptric camera (Swaminathan et al., 2001).
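For a refractive multi-media system, relation 2 can be sketched directly: each image ray is intersected with the interface and bent with the vector form of Snell's law, and the refracted rays in object space no longer pass through a common point. The planar interface at z = 0, the projection center and the refraction indices below are assumptions for illustration only.

```python
import numpy as np

def refract(d, n, n1=1.0, n2=1.33):
    """Vector form of Snell's law: refract unit direction d at a surface
    with unit normal n (pointing towards the incoming ray), going from a
    medium with index n1 into a medium with index n2."""
    eta = n1 / n2
    cos_i = -float(np.dot(n, d))
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# two image rays from the same projection center, refracted at the plane z = 0
center = np.array([0.0, 0.0, 2.0])
for dx in (0.2, 0.6):
    d = np.array([dx, 0.0, -1.0]); d /= np.linalg.norm(d)
    t = -center[2] / d[2]                # intersection with the interface z = 0
    hit = center + t * d
    d_w = refract(d, np.array([0.0, 0.0, 1.0]))
    print(hit, d_w)                      # the refracted rays do not meet in one point
```

Tracing a dense bundle of such refracted rays and taking their envelope yields the caustic surface discussed above.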