If a 3D point (X, Y, Z) is known, the corresponding 2D point (x, y) in another camera view can be established. In other words, this projection allows delta points to be matched to 2D points in each image.
Figure 2: Delta Points on Orthometric Height

Figure 3: Search Domain
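As a minimal illustration of this projection step (not the paper's exact camera model, which is fixed by its calibration), the sketch below assumes a generic 3x4 pinhole projection matrix P for each camera view and projects a known 3D delta point into that view's 2D image coordinates.

import numpy as np

def project_point(P, X, Y, Z):
    """Project a known 3D point (X, Y, Z) into 2D image coordinates (x, y)
    using a 3x4 projection matrix P (assumed pinhole camera model)."""
    xh = P @ np.array([X, Y, Z, 1.0])   # homogeneous image coordinates
    return xh[0] / xh[2], xh[1] / xh[2]

# With one projection matrix per camera view, the same delta point can be
# projected into each image and matched against the tracked 2D feature points:
#   (x1, y1) = project_point(P_view1, Xd, Yd, Zd)
#   (x2, y2) = project_point(P_view2, Xd, Yd, Zd)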
3.2 TRACKING ON HIGH-RESOLUTION IMAGES
High-resolution images have sufficient resolution to achieve accurate tracking with a simple template-matching method. Complex operations are therefore not needed; we set windows in which to search for the feature points of the top surfaces of buildings. We assume that the surface size is nearly the same in each frame because the helicopter maintains its altitude along the optical axis. In the search process, the center of gravity (x, y), calculated from the feature points composing the top surface of a building, is taken as the center of the search window. Template matching is performed between I(x, y) in this window and J(x + l, y + m) in the next window. The texture data in a template window is extracted from the shape formed by several points on the top surface of a building, and feature tracking is realized using this texture data. Concretely, the template pattern I(x, y) in one image is matched to the texture pattern J(x + l, y + m) in the next image. When the matching value D(l, m) reaches its minimum, (l, m) is the displacement between the two images. Because the high-resolution image has adequate resolution, this simple operation is sufficient for pattern matching.
D(l, m) = \frac{1}{(2d_1 + 1)^2} \sum_{dx = -d_1}^{d_1} \sum_{dy = -d_1}^{d_1} \left| I(x + dx,\, y + dy) - J(x + l + dx,\, y + m + dy) \right| \qquad (1)
The size of the search window, d_1, is determined by the distance from the center of gravity to the farthest of all the feature points. (l, m) is the parameter pair over the search domain. If the delta points can be located on the image, the search domain for feature points other than the delta points shrinks from (-W ≤ l ≤ W), (-W ≤ m ≤ W) to (x_delta - dx_1 ≤ l ≤ x_delta + dx_2), (y_delta - dy_1 ≤ m ≤ y_delta + dy_2).
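As a concrete sketch of this matching step, the code below evaluates D(l, m) of equation (1) as a mean absolute difference over the template window and returns the minimizing displacement (l, m); the optional restriction of the search domain around a delta point follows the bounds given above. The function and its NumPy implementation are illustrative assumptions, not the authors' code.

import numpy as np

def match_displacement(I, J, x, y, d1, l_range, m_range):
    """Return the displacement (l, m) minimizing D(l, m) of equation (1)
    between the template centred at (x, y) in image I and image J."""
    template = I[y - d1:y + d1 + 1, x - d1:x + d1 + 1].astype(np.float64)
    best_lm, best_D = None, np.inf
    for l in range(l_range[0], l_range[1] + 1):
        for m in range(m_range[0], m_range[1] + 1):
            window = J[y + m - d1:y + m + d1 + 1,
                       x + l - d1:x + l + d1 + 1].astype(np.float64)
            if window.shape != template.shape:      # shifted window leaves image J
                continue
            D = np.mean(np.abs(template - window))  # D(l, m), equation (1)
            if D < best_D:
                best_lm, best_D = (l, m), D
    return best_lm

# Full search domain:  match_displacement(I, J, x, y, d1, (-W, W), (-W, W))
# Restricted around a delta point (hypothetical bounds dx1, dx2, dy1, dy2):
#   match_displacement(I, J, x, y, d1,
#                      (x_delta - dx1, x_delta + dx2),
#                      (y_delta - dy1, y_delta + dy2))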
3.3 TRACKING ON VIDEO IMAGES AND TRANSFORMATION COEFFICIENTS
The displacements in the video images are identified using the Lucas-Tomasi-Kanade method (Tomasi & Kanade, 1991)(Shi & Tomasi, 1994). This algorithm is a robust and simple method. Feature tracking is carried out over the video image sequence frames No. m, m+1, ..., m+n in Figure 2.
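A minimal way to reproduce this tracking step with off-the-shelf tools is OpenCV's pyramidal Lucas-Kanade tracker, sketched below; the window size, pyramid depth, and the use of OpenCV itself are our assumptions, not details from the paper.

import cv2
import numpy as np

def track_features(frames, initial_points):
    """Track feature points through grayscale frames No. m, m+1, ..., m+n
    with pyramidal Lucas-Kanade optical flow."""
    pts = np.float32(initial_points).reshape(-1, 1, 2)
    tracks = [pts.reshape(-1, 2).copy()]
    for prev, curr in zip(frames[:-1], frames[1:]):
        pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev, curr, pts, None,
            winSize=(21, 21), maxLevel=3)                 # assumed parameters
        pts = pts[status.ravel() == 1].reshape(-1, 1, 2)  # drop lost points
        tracks.append(pts.reshape(-1, 2).copy())
    return tracks

# Initial points could come from the Shi-Tomasi corner detector, e.g.
#   pts0 = cv2.goodFeaturesToTrack(frames[0], maxCorners=200,
#                                  qualityLevel=0.01, minDistance=10)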
The helicopter carries two cameras mounted in parallel. All 2D feature points (x_H, y_H) on each calibrated high-resolution image are transformed to 2D feature points (x_V, y_V) on the calibrated video image as follows. The transformation model is simple; we consider translation, scaling, and rotation. The transformation coefficients are calculated as follows: (x_V(i), y_V(i)), i = 1, ..., N, are measured in the video image, and (x_H(i), y_H(i)), i = 1, ..., N, are measured in the high-resolution image linked to that video image. Here p = x, y, and the superscript † denotes the Moore-Penrose inverse matrix.
\begin{pmatrix} x_H \\ y_H \end{pmatrix}
=
\begin{pmatrix} a_x & b_x & t_x \\ a_y & b_y & t_y \end{pmatrix}
\begin{pmatrix} x_V \\ y_V \\ 1 \end{pmatrix},
\qquad
\begin{pmatrix} a_p \\ b_p \\ t_p \end{pmatrix}
=
\begin{pmatrix}
x_V(1) & y_V(1) & 1 \\
x_V(2) & y_V(2) & 1 \\
\vdots & \vdots & \vdots \\
x_V(N) & y_V(N) & 1
\end{pmatrix}^{\dagger}
\begin{pmatrix}
p_H(1) \\ p_H(2) \\ \vdots \\ p_H(N)
\end{pmatrix}
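As reconstructed here, this least-squares step can be carried out directly with NumPy's Moore-Penrose pseudo-inverse; the sketch below recovers the coefficients (a_p, b_p, t_p) for p = x, y from the matched point lists and, under this reading, maps video-image coordinates to high-resolution-image coordinates (swapping the two point lists gives the opposite direction). The function names are our own.

import numpy as np

def transformation_coefficients(pts_video, pts_high):
    """Solve [a_p, b_p, t_p]^T = A^+ [p_H(1), ..., p_H(N)]^T for p = x, y,
    where A has rows [x_V(i), y_V(i), 1] and A^+ is the Moore-Penrose inverse."""
    pts_video = np.asarray(pts_video, dtype=np.float64)        # N x 2: (x_V, y_V)
    pts_high = np.asarray(pts_high, dtype=np.float64)          # N x 2: (x_H, y_H)
    A = np.column_stack([pts_video, np.ones(len(pts_video))])  # N x 3 design matrix
    A_pinv = np.linalg.pinv(A)                                 # Moore-Penrose inverse
    a_x, b_x, t_x = A_pinv @ pts_high[:, 0]                    # coefficients, p = x
    a_y, b_y, t_y = A_pinv @ pts_high[:, 1]                    # coefficients, p = y
    return (a_x, b_x, t_x), (a_y, b_y, t_y)

def apply_coefficients(coeffs_x, coeffs_y, x_v, y_v):
    """Transform a video-image point with the recovered translation,
    scaling, and rotation coefficients."""
    a_x, b_x, t_x = coeffs_x
    a_y, b_y, t_y = coeffs_y
    return a_x * x_v + b_x * y_v + t_x, a_y * x_v + b_y * y_v + t_y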