SHAPE RECOVERY FROM HYBRID FEATURE POINTS WITH FACTORIZATION METHOD 
Isao MIYAGAWA, Shigeru NAGAI, Kazuhiro SUGIYAMA 
NTT Cyber Space Laboratories, Japan 
miyagawa@marsh.hil.ntt.co.jp
KEY WORDS: Factorization Method, Hybrid Feature Points, Shape from Motion, 3D Digital City 
ABSTRACT 
We are developing shape recovery technology that can semi-automatically process image sequences using the idea of
“Shape from Motion”. In this paper, we investigate an acquisition and recovery method that processes hybrid feature
points extracted from both high-resolution images and video images using the factorization method. The 3D object
models are assumed to have an N-point column shape, and are recovered on a plane formed from points on the ground
(delta points). The delta points correspond to orthometric height data captured by a camera parameter measurement sensor
system. The acquired and recovered 3D models are mapped onto the Tokyo Datum. The 2D feature points measured using both
hybrid feature tracking and normal feature tracking are decomposed into 3D object points using the factorization method.
Comparison experiments show that the hybrid method is more effective and accurate in acquiring and recovering 3D
object shape than an earlier method. Moreover, we confirm that the hybrid method yields accurately shaped top surfaces of
buildings.
1 INTRODUCTION 
3D cartography and geographical data are often used in applications such as ray-tracing simulations, wind-flow
mapping, sunshine occlusion analysis, finding evacuation routes after natural disasters, and so on. 3D stereoscopic tools, e.g. (Martin
Reddy, 1999), have been developed for 3D digital city projects. Aerial photogrammetry is necessary to acquire and
reconstruct the many object models of buildings. Human operators who use photogrammetric equipment to acquire
spatial data for a Geographical Information System (GIS) must perform the key tasks manually. It is very difficult to track
feature points in aerial photogrammetry images, and automation of these tasks is hampered by object complexity and the
need for expert knowledge.
High-resolution cameras have recently become available and are used in surveillance systems, robot vision, and so on. While their frame
rates are lower than 30 [fps] and the captured images are 8- or 10-bit monochrome, image quality is very high; for example,
the ground resolution can reach 0.1 [m] per pixel in aerial images captured at an altitude of 300 [m]. The idea of "Shape from
Motion" is important in realizing an automatic or semi-automatic tool for 3D cartography. Since the feature points
in high-resolution images have well-defined 2D coordinate values, 3D reconstruction from these images is expected to
be accurate. We have investigated the effects of the time-redundancy of high-resolution images on the 3D reconstruction of
buildings, and have shown that adequate time-redundancy is generated by video images with transformation coefficients.
In previous work, we showed that 3D feature points could be acquired from 2D feature points extracted from images, using
an improved factorization method in combination with sensor information. To realize accurate shape recovery, we propose
a hybrid feature tracking scheme in the next section. In this scheme, the 2D feature data are decomposed into a shape matrix using a camera
matrix composed from sensor information. The acquired 3D shape data are mapped onto the Tokyo Datum using a coordinate
transformation. Moreover, each 3D object shape is reconstructed on the plane formed from the delta points. Experiments
compare the earlier method with the proposed method.
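As a minimal sketch of the decomposition step mentioned above, the code below assumes a simple affine camera model in which the per-frame translation is removed by centring the measurements and the known camera matrix is applied in a linear least-squares solve; the function and variable names are ours, not the paper's.

```python
import numpy as np

def shape_from_known_motion(W, M):
    """Recover object shape when the camera matrix is known from sensors.

    W : (2F, P) measurement matrix of P tracked 2D feature points over F frames.
    M : (2F, 3) camera (motion) matrix composed from sensor information.

    Under an affine camera model, W ~ M @ S + t @ 1^T, so after removing the
    per-frame translation t (the row-wise centroid) the shape matrix S follows
    from a linear least-squares solve.
    """
    t = W.mean(axis=1, keepdims=True)            # per-frame translation estimate
    S, *_ = np.linalg.lstsq(M, W - t, rcond=None)
    return S                                     # (3, P) recovered 3D object points
```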
2 PREVIOUS WORK 
The factorization method (Tomasi & Kanade, 1992; Poelman & Kanade, 1997) was proposed as an effective and robust
way of utilizing the time-redundancy of image sequences. This method recovers both camera motion and object shape.
Even though random noise is present in the images, the measurement matrix is decomposed into a camera motion matrix and an
object shape matrix using the rank-3 property of the singular value matrix.
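The core of this decomposition can be summarized in a few lines. The sketch below performs the standard rank-3 SVD step on a centred measurement matrix and omits the metric-upgrade stage that enforces orthonormal camera axes; the variable names are ours.

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a measurement matrix W (2F x P).

    Returns an affine motion matrix M (2F x 3) and shape matrix S (3 x P)
    such that the centred measurements satisfy W ~ M @ S.  The metric-upgrade
    step of the full method is omitted from this sketch.
    """
    W = W - W.mean(axis=1, keepdims=True)        # remove per-frame translation
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3, :]     # rank-3 property: keep 3 singular values
    M = U3 * np.sqrt(s3)                         # camera motion matrix
    S = np.sqrt(s3)[:, None] * Vt3               # object shape matrix
    return M, S
```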
The camera mounted in the helicopter commonly experiences significant disturbance from wind, which yields a form of random
noise. Therefore, the original method cannot acquire and recover 3D shape accurately from aerial video images. We
developed an improved factorization method (Miyagawa, 1999; Miyagawa, 2000) that utilizes sensor information, i.e.,
camera orientation information: the yaw, pitch, and roll angles measured by the sensor system.
The camera motion matrix is calculated from these rotation parameters. We have shown that this method is effective in
recovering the shape of objects from aerial images. The height of each object is acquired relative to the delta plane, which
is formed from the delta points on the ground.
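A minimal sketch of how the sensor's yaw, pitch, and roll angles could be turned into the rows of a camera motion matrix under an orthographic model is given below; the Z-Y-X rotation order is an assumption of ours, since the sensor convention is not stated here.

```python
import numpy as np

def motion_rows_from_angles(yaw, pitch, roll):
    """Build the two motion-matrix rows for one frame from sensor angles.

    Angles are in radians.  A Z (yaw) - Y (pitch) - X (roll) rotation order is
    assumed here; the actual sensor convention may differ.  Under an
    orthographic model, the first two rows of the world-to-camera rotation act
    as that frame's rows of the camera motion matrix.
    """
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    R = Rz @ Ry @ Rx                             # assumed rotation order
    return R[0], R[1]                            # rows for the image x and y axes
```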
  