In equation (1), f is the focal length and D is the physical size of one pixel of the image sensor at the focal surface.
Equations (2) to (4) are used to calculate the 3D position of a target point when performing stereo measurement from two omnidirectional images. In Fig. 6, B is the length of the stereo baseline, while R is the distance from the line connecting the shooting positions of the two images to the target point. Depth R can be calculated as follows from baseline B and the phase angles p1 and p2 measured in each image.
\[ R = \frac{B}{\cot p_1 - \cot p_2} \qquad (2) \]
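As a minimal numerical sketch of equation (2) (the function and variable names below are ours, not part of the original system), assuming the phase angles are measured from the baseline direction:

```python
import math

def stereo_depth(B, p1, p2):
    """Depth R from baseline length B and phase angles p1, p2 (radians),
    following equation (2). Assumes p1 and p2 are measured from the
    baseline direction at Positions 1 and 2 respectively."""
    return B / (1.0 / math.tan(p1) - 1.0 / math.tan(p2))

# Example: a 2 m baseline with phase angles of 60 and 120 degrees
# places the target point about 1.73 m from the baseline.
R = stereo_depth(2.0, math.radians(60), math.radians(120))
print(round(R, 2))  # -> 1.73
```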
In addition, the relative height h of the target point can be calculated as follows from the computed R and either phase angle p1 and elevation angle θ1 at Position 1, or phase angle p2 and elevation angle θ2 at Position 2.
\[ h = \frac{R \cdot \tan\theta_1}{\sin p_1} \qquad (3) \]

\[ h = \frac{R \cdot \tan\theta_2}{\sin p_2} \qquad (4) \]
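A matching sketch of equations (3) and (4); the two calls should return the same height, since either camera position determines it (again, the names are illustrative only):

```python
import math

def relative_height(R, p, theta):
    """Relative height h from depth R, phase angle p and elevation angle
    theta at one camera position, following equations (3)/(4): the
    horizontal distance to the point is R / sin(p), and its height above
    the camera is that distance times tan(theta)."""
    return R * math.tan(theta) / math.sin(p)

# Continuing the example above (R = 1.73 m, symmetric geometry):
h1 = relative_height(1.73, math.radians(60), math.radians(30))   # Position 1
h2 = relative_height(1.73, math.radians(120), math.radians(30))  # Position 2
print(round(h1, 2), round(h2, 2))  # both -> 1.15
```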
The above equations can be used to calculate the height and depth of all feature points of a target building. The shape of the building can then be reconstructed by configuring the surfaces of the building from the feature points whose height and depth have been determined.
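To illustrate how the per-point results might be assembled into one surface, the following sketch applies equations (2) and (3) to hypothetical observations of the four corners of a facade; the observation values and data layout are invented for illustration, not taken from the paper:

```python
import math

def corner_position(B, p1, p2, theta1):
    """Depth R and relative height h of one feature point observed in
    both images, combining equations (2) and (3)."""
    R = B / (1.0 / math.tan(p1) - 1.0 / math.tan(p2))
    return R, R * math.tan(theta1) / math.sin(p1)

# Hypothetical corner observations of one facade:
# (phase angle at Pos. 1, phase angle at Pos. 2, elevation angle at Pos. 1).
corners = [
    (math.radians(55), math.radians(110), math.radians(-5)),   # bottom left
    (math.radians(55), math.radians(110), math.radians(35)),   # top left
    (math.radians(70), math.radians(130), math.radians(-4)),   # bottom right
    (math.radians(70), math.radians(130), math.radians(40)),   # top right
]
facade = [corner_position(2.0, p1, p2, t1) for p1, p2, t1 in corners]
# 'facade' now holds (depth, height) pairs from which one planar building
# surface can be configured and later textured.
```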
Figure 5(c) shows the part of the process flow that re-projects the shapes of the reconstructed building onto an omnidirectional image like the one shown in Fig. 3. The system then extracts the re-projected area and transforms it into an orthogonal projection according to the projection formula of the omnidirectional camera. In this way, a texture image of each building can be acquired and mapped onto the building model.
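The re-projection depends on the camera's projection formula (equation (1) above, which involves the focal length f and the pixel size D). As one hedged illustration only, assuming an equidistant projection r = f·θ, which may differ from the formula of the actual camera, a reconstructed 3D point could be mapped back to pixel coordinates roughly as follows:

```python
import math

def project_equidistant(X, Y, Z, f, D, cx, cy):
    """Map a 3D point in the camera frame (Z along the optical axis) to
    pixel coordinates, assuming an equidistant projection r = f * theta.
    f: focal length, D: physical pixel size, (cx, cy): image centre."""
    theta = math.atan2(math.hypot(X, Y), Z)   # angle from the optical axis
    r = f * theta / D                          # radial distance in pixels
    phi = math.atan2(Y, X)                     # azimuth around the axis
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# Sampling the omnidirectional image at the re-projected positions of a
# building surface and resampling onto a regular grid yields the
# orthogonal texture image that is pasted onto the building model.
```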
4. RECONSTRUCTED BUILDING MODEL 
Before constructing our mobile mapping system, we created a simple fixed-point camera system to evaluate the shooting time of omnidirectional images and the resulting reconstructed building models. This system also uses an omnidirectional camera and GPS equipment, mounting the camera and the GPS antenna on tripods to capture omnidirectional images and perform positioning.
Figure 8 shows an example of multiple building models reconstructed by this fixed-point camera system. Each building model in this reconstruction consists of a single surface per building side, with no surface unevenness and with texture simply pasted on. Nevertheless, this example demonstrates that an urban space containing buildings of various heights can be reconstructed using an omnidirectional camera.
Our mobile mapping system captures images from a moving vehicle and can image a wide area in a short time compared with a fixed-point camera system. In an actual case of image capturing along about 500 meters of road, capturing took about 20 minutes with the fixed-point camera system but only about 2 minutes with our mobile mapping system, a roughly 10-fold improvement in speed.
  
Figure 8. Example of 3D city space 
Figure 9 shows the kind of building-model reconstruction that we are targeting with our mobile mapping system. This ideal model reconstructs not only the building shape but also uneven building features such as balconies. Texture mapping will also be performed for each building. Reconstructing urban space in this way, using building models with detailed shapes and high-quality texture, will enable this mobile mapping system to support background-model applications such as walkthroughs and various kinds of urban simulations.