Full text: XIXth congress (Part B3,2)

Isao Miyagawa 
A delta plane is formed by three delta points.
As most video image sequences have poor image quality, the recovered shapes can be distorted. Obviously, it
is not enough to acquire and recover 3D models from time-redundant images alone. In other words, there is no hardware
system that can capture time-redundant, high-resolution images and store them on a computer disk. To recover an accurate 3D object shape,
we apply the factorization method to high-resolution, time-redundant images.
3 HYBRID FEATURE TRACKING 
Our idea is that time-redundant high-resolution image sequences are created by combining high-resolution images and 
video images. Aerial image sequences are acquired by a high-resolution camera and a video camera, both carried in the 
same helicopter. Time coding links the high-resolution images to the video images. Most feature points in high-resolution
images are greatly displaced from one frame to the next due to helicopter (camera) movement; the displacement can range
from 50 to 100 pixels, for example. Trajectories of feature points are created by combining the quality of the high-resolution
images with the time redundancy of the video images. 2D feature points extracted from the high-resolution images are
tracked in the video images. Different tracking algorithms are used to extract feature points from the two types of images.
To handle occlusion in aerial images, we choose object points (e.g., edges and corners) and earth points lying on the
ground as 2D feature points. As the high-resolution and video images have different coordinate values, the 2D feature
points from the high-resolution images are projected onto the video image plane using a transformation model. As each
camera has different internal parameters (focal length, image center, and so on), the captured images must be corrected
individually by a camera calibration method. As both cameras are mounted on the same helicopter, we can assume that
the calibration model does not change. 2D points are first extracted from a high-resolution image and these points are
then tracked in the corresponding video images. This allows us to track feature points across the two image sequences.
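The projection step above can be sketched in code. This is a minimal illustration only: the paper does not specify the form of the transformation model, so here it is assumed to be a 3x3 planar homography, and the matrix values, image sizes, and function names below are hypothetical.

```python
import numpy as np

def project_points(H, pts):
    """Map 2D feature points from the high-resolution image into the
    video image plane with a 3x3 planar transformation H (assumed form)."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]            # back to pixel coords

# Hypothetical transformation: scale a 2048-px-wide high-resolution frame
# down to a 640-px-wide video frame, with a small pixel offset.
H = np.array([[640 / 2048, 0.0, 3.0],
              [0.0, 640 / 2048, 5.0],
              [0.0, 0.0, 1.0]])

hires_pts = [(1024.0, 512.0), (200.0, 800.0)]
video_pts = project_points(H, hires_pts)  # coordinates in the video image
```

In practice H would be estimated from the calibration of the two cameras; since both are fixed to the same helicopter, it can be assumed constant across the sequence, as noted above.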
Figure 1: Hybrid Feature Tracking (feature points tracked on high-resolution images, frames f and f+1, are linked via the transformation model to tracking on low-resolution images, frames m, m+1, m+2, ..., m+n)
3.1 DELTA POINTS IN ORTHOMETRIC HEIGHT DATA 
The operator must set feature points on the initial high-resolution image. Feature points are composed of some points
on the top surfaces of buildings and basic points on the ground. Each object is assumed to have an N-point column shape (for
example, a circular column is formed by many points). The number of delta points is at least three, and they lie on the
ground plane. Here, the delta points are made to correspond to orthometric height data using the camera's internal
and external parameters ((Xp, Yp, Zp): camera position given by the sensor system). When the operator sets delta points on
the ground, the 3D coordinate values of these points are assessed using the corresponding orthometric height data.
The delta points are not tracked on the images, because their 3D coordinate values can be projected onto the 2D image planes.
The orthometric height data for the ground plane is measured on a roughly 1 [m] mesh. The 3D points (Xi, Yi, Zi) on the
ground are made to correspond to 2D points (xi, yi) or (xj, yj) on the high-resolution images using the camera's internal
and external parameters. The internal parameters (lens center (Cx, Cy), focal length f, and so on) are calculated using
Tsai's method (Tsai, 1986). The external parameters (Xp, Yp, Zp) are measured using a sensor system
that is based on Differential-GPS. (Xp, Yp, Zp) is linked to each time-coded captured image in a one-to-one correspondence.
The ground point (Xgc, Ygc, Zgc) can be projected onto the image center (Cx, Cy). All orthometric height data (Xi, Yi, Zi)
can be projected into 2D coordinate values (xi, yi) on the high-resolution image, assuming the pinhole camera model.
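The pinhole projection of a 3D ground point onto the image plane can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the rotation R, translation t, and the numeric parameter values are hypothetical, and in the paper the internals come from Tsai's calibration and the camera position from the DGPS-based sensor system.

```python
import numpy as np

def pinhole_project(X_world, R, t, f, cx, cy):
    """Project a 3D world point (X, Y, Z) onto the image plane.
    R, t: external parameters (rotation, translation into camera frame);
    f, cx, cy: internal parameters (focal length in pixels, image center)."""
    Xc = R @ np.asarray(X_world, dtype=float) + t  # world -> camera frame
    x = f * Xc[0] / Xc[2] + cx                     # perspective division
    y = f * Xc[1] / Xc[2] + cy
    return x, y

# Hypothetical setup: camera looking straight down from 100 m (identity
# rotation), focal length 1500 px, image center (1024, 768).
R = np.eye(3)
t = np.array([0.0, 0.0, 100.0])
x_img, y_img = pinhole_project((10.0, -5.0, 0.0), R, t, 1500.0, 1024.0, 768.0)
```

Applying this to every orthometric height point (Xi, Yi, Zi) yields the 2D coordinates (xi, yi) used to anchor the delta points without tracking them in the images.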
  
International Archives of Photogrammetry and Remote Sensing, Vol. XXXIII, Part B3. Amsterdam 2000.
 
	        