Full text: XVIIth ISPRS Congress (Part B5)

An estimate of the parameters R and T is calculated and the observation point is moved according to those parameters. The estimation is repeated with the new starting position of P until the parameter changes of T and R converge to zero.
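The iterative scheme above can be sketched as follows. This is only an illustrative sketch: the closed-form SVD (Kabsch) step stands in for the paper's linearized estimator, and all function names are assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_motion(points_start, points_target, max_iter=50, eps=1e-8):
    """Iteratively estimate the rotation R and translation T mapping the
    observation points onto their target positions.

    Each pass computes one least-squares motion update (here a closed-form
    SVD/Kabsch step, standing in for the paper's linearized estimator),
    moves the observation points accordingly, and stops once the parameter
    changes of R and T converge to zero."""
    R_total, T_total = np.eye(3), np.zeros(3)
    P = np.asarray(points_start, dtype=float).copy()
    Q = np.asarray(points_target, dtype=float)
    for _ in range(max_iter):
        # One least-squares update: best rigid motion P -> Q.
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        T = cq - R @ cp
        # Move the observation points and accumulate the parameters.
        P = P @ R.T + T
        R_total, T_total = R @ R_total, R @ T_total + T
        # Stop when the parameter changes vanish.
        if np.linalg.norm(R - np.eye(3)) < eps and np.linalg.norm(T) < eps:
            break
    return R_total, T_total
```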
To improve estimation stability, dependencies be- 
tween rotation and translation parameters were can- 
celled out through the introduction of a center of rota- 
tion G. The rotation of an object around an arbitrary 
rotation center can be separated into a rotation of the 
object around the object's center of gravity and an 
additional translation of the object. Such a decompo- 
sition leads to an independent estimationof Rand T 
and improves convergence of the solution. 
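A minimal numerical check of this decomposition, with hypothetical helper names: a rotation R about an arbitrary center A followed by a translation T equals a rotation about the object's center of gravity G plus the compensating translation T' = (R - I)(G - A) + T:

```python
import numpy as np

def rotate_about_center(P, R, center, T):
    """Rotation R about an arbitrary center, followed by translation T."""
    return (P - center) @ R.T + center + T

def decomposed_motion(P, R, T, center):
    """The same motion expressed as a rotation about the object's center
    of gravity G plus an additional translation
    T' = (R - I)(G - center) + T, which decouples R from T."""
    G = P.mean(axis=0)
    T_prime = (R - np.eye(3)) @ (G - center) + T
    return (P - G) @ R.T + G + T_prime
```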
The system should be robust against noisy measure- 
ments or measurements which are erroneous due to 
invalid model assumptions. Therefore the mean tem- 
poral intensity is computed and observation points 
with high intensity errors are excluded from the re- 
gression in a modified least squares fit. The measure- 
ment certainty of each parameter can be estimated 
through evaluation of the error covariance matrix of 
the regression [Hötter, 1988]. When a parameter has
an uncertain estimate, it can be excluded from the
regression to ensure a stable estimate for the remaining
parameters. So far, the analysis has been performed on a
monocular image sequence only. It has been tested
successfully on a variety of tasks for object and cam- 
era motion tracking [Kappei, 1988],[Liedtke, 1990], 
[Welz, 1990]. When including the stereoscopic se- 
quence information the quality of the analysis is ex- 
pected to improve further. 
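The outlier-exclusion step of the modified least-squares fit might look like the following sketch. The factor k and the simple mean-based threshold are assumptions for illustration, not the paper's exact criterion:

```python
import numpy as np

def robust_weights(intensity_errors, k=2.0):
    """Binary weights for a modified least-squares fit: observation
    points whose temporal intensity error exceeds k times the mean
    error are excluded (weight 0); all others are kept (weight 1)."""
    errors = np.asarray(intensity_errors, dtype=float)
    return (errors <= k * errors.mean()).astype(float)
```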
Fusion of depth measurements into the 3D scene model
For each image pair of the sequence a depth map D_k
can be calculated by stereoscopic analysis together
with its associated confidence map C_k. The 3D scene
model contains the approximated scene geometry
that can be moved according to the camera and scene
motion. It is now possible to fuse the depth measurements
from multiple view points into the 3D scene
model to improve estimation quality. The confidence
value C is converted into the weight S, which can easily
be accumulated throughout the sequence. Each control
point of the scene objects holds not only its position
P_old in space but also its corresponding confidence
weight S_old. When a new measurement
becomes available, the scene motion is compensated
and the new depth estimate P_new with corresponding
confidence weight S_new is integrated by weighted
accumulation. S_fuse represents the accumulated quality
measure and P_fuse the new control point position.
S = C / (1 - C)                                        (10)

S_fuse = S_old + S_new

P_fuse = (P_old · S_old + P_new · S_new) / (S_old + S_new)
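The weighted accumulation of Eq. (10) can be expressed directly; function and variable names are illustrative, not from the paper:

```python
def fuse_control_point(P_old, S_old, P_new, C_new):
    """Weighted accumulation of a new depth measurement (Eq. 10).

    The confidence C in [0, 1) is converted to a weight S = C / (1 - C);
    the weights are summed, and the fused control point position is the
    weight-averaged position."""
    S_new = C_new / (1.0 - C_new)
    S_fuse = S_old + S_new
    P_fuse = (P_old * S_old + P_new * S_new) / S_fuse
    return P_fuse, S_fuse
```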
The information fusing process described above can 
only be applied to an existing surface. When new
objects and previously unseen object surfaces appear, the
surface mesh must be extended from the new depth
map. Once the surface is built, the fusing process can 
continue. 
First results of the sequence analysis are shown in 
Fig. 3d with the sequence "house". The house was 
rotated on a turn table and 90 stereoscopic views of 
the house from all directions, each view displaced by
a 4-degree rotation, were taken. Starting with the 3D
object shown in Fig. 3c, the 3D motion and rotation of 
the house was estimated successfully. At present, the
sequence analysis has been tested only with objects
generated from a single depth map. The object part
visible from one camera position was generated, and
this object part was tracked throughout the sequence,
integrating the depth measurements from the different
view points. The resulting object surface after
integration from six different view points (0, 4, 8, 12, 16,
and 20 degrees of rotation) is shown in Fig. 3d. The object
is rotated to a side view to reveal the remaining shape
deviations.
We are currently working to improve the motion esti- 
mation by fully exploiting the stereoscopic sequence 
information and to enhance the integration process. 
It is necessary that the 3D object surfaces are generated
not only from a single depth map but incrementally
as new surfaces appear. Additional quality
measures can be conceived that govern the global
surface shape and allow scene-specific knowledge
to be introduced.
ACKNOWLEDGEMENT 
This work has been supported by a grant of the Ger- 
man postal service TELEKOM. 
REFERENCES 
Aloimonos, J., Shulman, D.,1989. Integration of Visu- 
al Modules, Academic Press, San Diego, USA. 
Hötter, M., Thoma, R., 1988. Image segmentation
based on object oriented mapping parameter estimation,
Signal Processing, Vol. 15(3), pp. 315-334.
Kappei, F., 1988. Modellierung und Rekonstruktion
bewegter dreidimensionaler Objekte aus einer Fernsehbildfolge
[Modelling and reconstruction of moving three-dimensional
objects from a television image sequence], Ph.D. Thesis,
University of Hannover.
Koch, R., 1990. Automatic Modelling of Natural
Scenes for Generating Synthetic Movies, Eurographics
'90, Montreux, Switzerland.
Liedtke, C. E., Busch, H., Koch, R., 1990. Automatic 
Modelling of 3D Moving Objects from a TV Image Se- 
quence, SPIE Conf. Sensing and Reconstruction of 
3D-Objects and Scenes, Vol. 1260, pp. 230-239, 
Santa Clara, USA. 
Terzopoulos, D., 1988. The computation of visible-surface
representations, IEEE Trans. Patt. Anal.
Mach. Intell., Vol. 10, pp. 417-438.
Welz, K., 1990. Beobachtung von Verkehrszeichen
aus einem bewegten Fahrzeug [Observation of traffic
signs from a moving vehicle], Proceedings of the 7.
Aachener Symposium für Signaltheorie ASST '90,
Aachen, F.R.G.
 
	        