4.2 MAPPING ON THE COORDINATE SYSTEM IN JAPAN
We introduce the Tokyo Datum to map 3D object models. The Tokyo Datum is the 3D-GIS coordinate system of Japan and has
been applied in many 3D-GIS applications (e.g. car navigation systems, orthometric height data, GPS). The
X-axis of the Tokyo Datum points north, the Y-axis points east, and the Z value is the geoid height relative to sea level in Japan.
Acquired 3D values $(X_{r(i)}, Y_{r(i)}, Z_{r(i)})$ are points in a relative coordinate system. These 3D values must be projected onto the
absolute coordinate system (Tokyo Datum). Here, the 3D delta points acquired using the factorization method are
$(X_{r(i)}, Y_{r(i)}, Z_{r(i)})$, $i = 1, 2, \ldots, N$, and the 3D delta points measured on the orthometric height data are $(X_{jes(i)}, Y_{jes(i)}, Z_{jes(i)})$,
$i = 1, 2, \ldots, N$. $(X_{r(i)}, Y_{r(i)}, Z_{r(i)})$ can be transformed into $(X_{jes(i)}, Y_{jes(i)}, Z_{jes(i)})$ as follows:
$$
\begin{pmatrix} X_{jes(i)} \\ Y_{jes(i)} \\ Z_{jes(i)} \end{pmatrix}
=
\begin{pmatrix} a_{x1} & a_{x2} & a_{x3} \\ a_{y1} & a_{y2} & a_{y3} \\ a_{z1} & a_{z2} & a_{z3} \end{pmatrix}
\begin{pmatrix} X_{r(i)} \\ Y_{r(i)} \\ Z_{r(i)} \end{pmatrix}
+
\begin{pmatrix} X_{0} \\ Y_{0} \\ Z_{0} \end{pmatrix},
\qquad
\begin{pmatrix}
X_{r(1)} & Y_{r(1)} & Z_{r(1)} & 1 \\
X_{r(2)} & Y_{r(2)} & Z_{r(2)} & 1 \\
\vdots & \vdots & \vdots & \vdots \\
X_{r(N)} & Y_{r(N)} & Z_{r(N)} & 1
\end{pmatrix}
\begin{pmatrix} a_{p1} \\ a_{p2} \\ a_{p3} \\ P_{0} \end{pmatrix}
=
\begin{pmatrix} P_{jes(1)} \\ P_{jes(2)} \\ \vdots \\ P_{jes(N)} \end{pmatrix},
\quad p \in \{x, y, z\}, \; P \in \{X, Y, Z\}
\qquad (4)
$$
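Equation (4) amounts to three independent linear least-squares problems, one per axis ($p = x, y, z$), all sharing the same $N \times 4$ design matrix built from the relative coordinates of the delta points. As a minimal sketch of this estimation step (assuming NumPy; the function names are illustrative and not from the paper), the transform could be recovered and applied as follows:

```python
import numpy as np

def estimate_affine_transform(rel_pts, abs_pts):
    """Estimate the 3x3 matrix A and offset t that map relative
    factorization coordinates onto the absolute (Tokyo Datum) frame
    by solving the linear system of Eq. (4) in the least-squares sense.

    rel_pts : (N, 3) array of (X_r, Y_r, Z_r)
    abs_pts : (N, 3) array of (X_jes, Y_jes, Z_jes)
    """
    n = rel_pts.shape[0]
    # Design matrix [X_r  Y_r  Z_r  1], shared by the x-, y-, and z-rows.
    design = np.hstack([rel_pts, np.ones((n, 1))])
    # Solve the three independent least-squares problems (p = x, y, z) at once.
    params, *_ = np.linalg.lstsq(design, abs_pts, rcond=None)
    A = params[:3, :].T   # 3x3 transformation matrix
    t = params[3, :]      # offset (X_0, Y_0, Z_0)
    return A, t

def to_tokyo_datum(rel_pts, A, t):
    """Apply the estimated transform to relative 3D points."""
    return rel_pts @ A.T + t
```

In this sketch, `estimate_affine_transform` would be fed the N delta points expressed in both coordinate systems, and `to_tokyo_datum` would then map every reconstructed 3D point into the absolute frame.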
5 EXPERIMENTS
In the field experiments, we captured image sequences and sensor information as follows. The width of the earth plane captured
by the camera ranged from 200 [m] to 250 [m]. The helicopter moved along one direction, for example, from south
to north. The Differential GPS (DGPS) mounted on the helicopter captured rotation parameters as sensor information.
The two kinds of images, high-resolution and video, were calibrated using Tsai's method (Tsai, 1986).
Figure 6: Aerial Images (Frame No.083 and subsequent frames)
Table 1: Field Conditions

|            | Specification                                | Remarks                        |
| Camera     | video: F = 6.7 [mm], full color              | 640-by-480 pixels, 30 [fps]    |
|            | high-resolution: F = 20.0 [mm], gray-scale   | 2000-by-2000 pixels, 2 [fps]   |
| DGPS       | orientation: yaw, pitch, roll [deg]          | measurement error: ±0.2 [deg]  |
| Helicopter | average velocity from 40 to 50 [km/h]        | altitude of about 300 [m]      |
The high-resolution images are shown in Figure 6. The image labeled Frame No.083 is the initial image, on which the
operator set some feature points. We used 10 images without frame-out or occlusion to acquire the 3D object shape. We
estimated the accuracy of 3D acquisition and recovery from both feature points and hybrid feature points. The 3D re-
construction models are shown in Figure 7. The accuracy estimation is given in Table 2. Method A means the paraperspective
factorization method (Poelman & Kanade, 1997), Method B means the paraperspective factorization method with hybrid fea-
ture points, and Method C means the paraperspective factorization method that utilizes sensor information (Miyagawa, 2000).
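Methods A-C are all variants of the factorization approach. As a rough illustration of the shared core only, namely the rank-3 factorization of the registered measurement matrix, and not the authors' exact paraperspective formulation (the metric constraints and paraperspective corrections of Poelman & Kanade are omitted), a sketch in NumPy might look like this:

```python
import numpy as np

def factorize_measurement_matrix(W):
    """Rough sketch of the factorization core. W is the 2F x N matrix of
    tracked feature coordinates (F frames, N points). After registering
    each row to its mean, the rank-3 SVD yields motion and shape up to an
    affine ambiguity."""
    # Register: subtract the per-frame centroid of the image points.
    W_reg = W - W.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
    # Keep the three dominant components.
    motion = U[:, :3] * np.sqrt(s[:3])            # 2F x 3 camera (motion) matrix
    shape = np.sqrt(s[:3])[:, None] * Vt[:3, :]   # 3 x N object shape
    return motion, shape
```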