Figure 3. Displayed Image of VGS: (a) Photographed Image, (b) Finder Image, (c) Superimposed Control Points, (d) 3D Model, (e) Camera Information, (g) Measurement Accuracy
The camera's position (X, Y, Z) is measured by GPS, and the camera's posture (yaw, pitch, roll) is measured by a three-axis angle sensor (Figure 3(e)). The finder image is sent from the digital camera to the host PC through an interface and displayed on the PC (Figure 3(b)). Processing the imported image data, the host PC calculates where the control points (whose coordinates are input in advance) ought to appear on the display and superimposes them onto the finder image, thus creating a real-time moving image on the display (Figure 3(c)). In this way we can see how many of the control points will appear at the moment of shooting. We can also display the model made in advance, rendered according to the camera's position and posture (Figure 3(d)); in this way we can determine the shooting position, knowing exactly which parts of the model are missing or defective. Furthermore, since the host PC keeps the previously photographed image and its shooting position, it can calculate from these data the overlap rate as well as the accuracy of conformity between the finder image and the previously obtained image (Figure 3(g)).
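As a rough illustration of this superimposition step, the sketch below projects one ground control point into finder-image pixel coordinates from the GPS position and the yaw/pitch/roll angles, using a standard collinearity (pinhole) projection. The rotation order, axis conventions, function names and parameters are assumptions made for this sketch; the paper does not describe the VGS implementation at this level of detail.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Camera rotation from yaw/pitch/roll in radians.
    The Z-Y-X rotation order and axis conventions are assumptions
    of this sketch, not taken from the paper."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def project_control_point(point, camera_pos, yaw, pitch, roll,
                          focal_mm, pixel_size_mm, image_size_px):
    """Project a ground control point (X, Y, Z) into finder-image pixel
    coordinates with a collinearity (pinhole) model."""
    R = rotation_matrix(yaw, pitch, roll)
    # Express the point in the camera coordinate frame
    d = R.T @ (np.asarray(point, float) - np.asarray(camera_pos, float))
    if d[2] <= 0:
        return None  # point lies behind the camera, nothing to draw
    # Image coordinates in millimetres; principal point assumed at the centre
    x_mm = focal_mm * d[0] / d[2]
    y_mm = focal_mm * d[1] / d[2]
    # Convert to pixel coordinates for drawing on the finder image
    u = image_size_px[0] / 2.0 + x_mm / pixel_size_mm
    v = image_size_px[1] / 2.0 - y_mm / pixel_size_mm
    return u, v
```

A projected point that falls outside the image bounds would simply not be drawn, which is how the number of visible control points for a candidate shooting position could be counted.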
The accuracy can be obtained from the following equations:

ΔXY = H · δp / f
ΔZ = H² · δp / (f · B)
Here δp is the resolution of the digital camera or the reading resolution of the scanner, and f is the focal length, which is determined by the type of camera.
The base length B (the distance between the camera stations) is calculated from the GPS-measured position of the previously photographed image (Figure 3(a)) and the GPS position at which the next image is to be photographed (Figure 3(b)). The shooting distance H can be calculated from the GPS position if control points or a 3D model are available; if not, we input an approximate value. From these parameters we can calculate, even before taking the picture, the accuracy that the stereo model will have after measurement.
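As a small worked example, the sketch below evaluates these two accuracy formulas for assumed values of H, B, f and δp; the function name and the example numbers are ours, not taken from the paper.

```python
def expected_accuracy(H, B, focal_mm, delta_p_mm):
    """Planimetric (dXY) and depth (dZ) accuracy of the stereo model,
    following the two formulas above. H and B are in metres; the focal
    length f and resolution delta_p are both in millimetres, so their
    ratio is dimensionless and the results come out in metres."""
    d_xy = H * delta_p_mm / focal_mm
    d_z = (H ** 2) * delta_p_mm / (focal_mm * B)
    return d_xy, d_z

# Example with assumed values: 30 m shooting distance, 10 m base length,
# 28 mm focal length, 0.008 mm pixel pitch.
d_xy, d_z = expected_accuracy(H=30.0, B=10.0, focal_mm=28.0, delta_p_mm=0.008)
print(f"dXY = {d_xy * 1000:.1f} mm, dZ = {d_z * 1000:.1f} mm")
```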
2.3 The Flow of the Measuring System
The following explains the flow of producing a 3D model from photographs taken from low altitude and on the ground, as shown in Figure 4.
(1) The aerial photographs can be taken by airplane, helicopter or kite balloon. In our experiment we also used a parachute glider with an engine (powered paraglider). In all cases, however, the interior orientation of the camera must be known in advance.
(2) On the ground we obtain the necessary control points by TS or GPS. The values measured by TS must be brought into the same coordinate system as the GPS measurements.
(3) We measure a 3D model with PI-3000 from the images obtained in the air, using the control points obtained in (2) as needed.
(4) We take digital pictures on the ground, guided by the guidance system (VGS). For this process we input into VGS in advance the control point data obtained in (2) as well as the 3D model data obtained in (3). In this way, when we determine the shooting position, the image as seen by the camera appears in real time on the display of VGS (Figure 3(b)). On this image the control points measured in (2) are superimposed (Figure 3(c)), and the 3D model created in (3) is displayed as seen from the camera (Figure 3(d)). Simultaneously we can also check the measurement accuracy and the overlap rate (Figure 3(g)); a simplified sketch of such an overlap estimate is given after this list. With all these operations the camera is guided to the position most appropriate for the subsequent processing.
(5) We create a 3D model as seen from the ground by stereo-measuring the images taken with VGS in PI-3000.
(6) We create a 3D model of the whole object. For this purpose we make a simultaneous bundle adjustment of all the pass points and tie points of each stereo model photographed in the air and on the ground, so as to unify the coordinate systems and produce a 3D model of the whole object. In practice this process is performed automatically by the bundle adjustment in (5).
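As referenced in step (4), the following sketch estimates the forward overlap between the previously photographed image and the image about to be taken, from the base length B, the shooting distance H, the focal length and the sensor width, assuming parallel viewing directions. The paper does not specify how VGS computes its overlap rate, so this is only an assumed, simplified model with illustrative values.

```python
def forward_overlap(B, H, focal_mm, sensor_width_mm):
    """Estimate the overlap rate between two photographs taken B metres
    apart at shooting distance H, assuming parallel viewing directions
    and movement along the sensor's width direction."""
    ground_width = H * sensor_width_mm / focal_mm  # footprint width in metres
    return max(0.0, 1.0 - B / ground_width)

# Example with assumed values: 10 m base, 30 m distance, 28 mm lens,
# 23.7 mm sensor width.
print(f"overlap = {forward_overlap(B=10.0, H=30.0, focal_mm=28.0, sensor_width_mm=23.7):.0%}")
```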
  
Figure 4. Flow of Measuring System: (1) Taking Aerial Photo; (2) Measurement of Control Points; (3) 3D Measurement by PI-3000; (4) Taking Ground Photo by VGS; (5) Creating a Terrestrial 3D Model; (6) Creating 3D Model of Total Object