Close-range imaging, long-range vision

  
University of Athens, has been used to survey segments of the 
national road network amounting to 1,300 km, 500 km of which
have already been assessed (Psarianos et al., 2001). Installed in 
a mini-van, this system consists of a GPS system, inclinometer 
and three synchronised video cameras on the car roof, two look- 
ing sideward and one forward, from which composite images of 
three views of the road and surrounding area are produced. All 
information collected by the system is managed in a specially 
designed Transportation GIS. Here, the main photogrammetric 
task was to formulate and evaluate a simple approach for esti- 
mating lane width from geo-referenced frontal images. 
2.1 Mathematical Model 
Mounted stably on the roof of the car, the camera records with a 
horizontal axis, initially assumed parallel to the road axis. With 
the assumptions given above, the equation for estimating lane 
width from measurements on the image is very simple. The only 
a priori knowledge required is the height of the projective centre 
above ground. The basic geometry is seen in Fig. 1, in which O 
denotes the perspective center and M is the image center. 
  
  
  
  
  
Figure 1. Recording geometry with horizontal camera axis. 
  
  
  
The X axis in space and the x image coordinate axis are normal 
to the plane of the Figure. The camera constant is denoted by c, 
while y_B is the y image coordinate of the points B1, B2 on the road
surface defining lane width. If Yo is the camera height above
ground level, the image scale at the distance Z_B is expressed as

Δx_B / ΔX_B = c / Z_B = y_B / Yo    (1)

with ΔX_B denoting the lane width B1B2 and Δx_B the corresponding
length measured on the image (the principal point is ignored).
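As a quick sketch (not part of the paper), Eq. (1) can be coded directly; all function names and numeric values below are illustrative assumptions:

```python
def lane_width_horizontal(dx_img, y_img, cam_height):
    """Eq. (1): lane width for a strictly horizontal camera axis.

    dx_img     -- lane width Δx_B measured on the image (pixels)
    y_img      -- y image coordinate y_B of the road points B1, B2
    cam_height -- camera height Yo above the road surface (metres)

    The image scale at the road points is y_B / Yo, so the actual
    lane width follows by dividing the image measurement by it.
    """
    scale = y_img / cam_height      # Δx_B / ΔX_B = y_B / Yo
    return dx_img / scale           # ΔX_B = Δx_B · Yo / y_B

# Assumed numbers: 120 px lane width at y = 80 px, camera 2.0 m
# above the road → ΔX = 120 · 2.0 / 80 = 3.0 m
print(lane_width_horizontal(120, 80, 2.0))  # → 3.0
```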
Unfortunately, this simple geometry is not retained as a moving 
vehicle oscillates on its suspensions. Hence, small instantaneous 
camera rotations are expected. Effects of small rotations κ about
the camera axis and φ about the vertical image y-axis may be
practically neglected here. On the other hand, even small tilts ω
about the horizontal camera x-axis are of importance, since the
projective rays form small angles with the road, and large errors
may occur. Introduction of a small ω-angle into the collinearity
equations modifies Eq. (1) as follows:

Δx / ΔX = (y·cosω + c·sinω) / Yo ≈ (y + cω) / Yo    (2)
  
It is noted that Δx is not affected considerably by small rotations
φ about the vertical axis, especially if x1 + x2 is a small quantity.
The obvious way to estimate an ω-tilt is by using the vanishing
point F in the Z-direction of depth, determined graphically on 
the frame by exploiting road delineation. In Fig. 2 it is seen that 
small ω-tilts can be adequately approximated as follows (note
that y = −c·tanω is the equation of the horizon line):

tanω = −y_F / c  ⇒  cω ≈ −y_F    (3)

Figure 2. Image geometry with tilted camera axis. 
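Under the small-angle approximations above, Eqs. (2) and (3) can be checked against each other in a short sketch; the camera constant, tilt and coordinate values are assumed for illustration only:

```python
import math

def tilted_scale(y_img, cam_height, c, omega):
    """Eq. (2): image scale for a small tilt ω about the x-axis."""
    return (y_img * math.cos(omega) + c * math.sin(omega)) / cam_height

def tilt_from_vanishing_point(y_F, c):
    """Eq. (3): tanω = −y_F / c, hence cω ≈ −y_F for small tilts."""
    return math.atan(-y_F / c)

# Assumed setup: c = 1000 px, ω = 0.5°, camera height 2.0 m.
omega = math.radians(0.5)
c = 1000.0
y_F = -c * math.tan(omega)          # horizon line: y = −c·tanω
approx = (80 - y_F) / 2.0           # small-angle form (y + cω)/Yo
exact = tilted_scale(80, 2.0, c, omega)
print(round(approx, 3), round(exact, 3))  # nearly identical values
```

For a half-degree tilt the small-angle form and the exact Eq. (2) agree to within a few thousandths, which is why the vanishing-point substitution of Eq. (3) is adequate in practice.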
If the x, y image measurements are performed directly in pixel 
dimensions according to Fig. 3, the introduction of Eq. (3) into 
Eq. (2) finally yields: 
  
ΔX = Yo · Δx / (y − y_F)    (4)
  
  
  
  
  
  
  
  
Figure 3. Measurements on the video frame. 
Eq. (4) thus connects a lane width Δx measured on the image (at
a certain y image coordinate) through the vanishing point F of
the road direction with the corresponding actual lane width ΔX.
As opposed to the similar approach of Tao (2001), this equation
is independent of the camera constant c, which may remain un-
known. Hence, the tilt ω itself cannot be recovered, yet its effect
is taken into account in every frame through the instantaneous
vanishing point F. Besides, this equation uses only the image
coordinate differences Δx and Δy, i.e. the principal point is also
not needed.
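The complete estimator of Eq. (4) is equally short in code; the following sketch assumes pixel measurements and an illustrative, pre-calibrated camera height:

```python
def lane_width(dx_img, y_img, y_F, cam_height):
    """Eq. (4): ΔX = Yo · Δx / (y − y_F).

    dx_img     -- lane width Δx measured on the frame (pixels)
    y_img      -- y image coordinate of the measurement (pixels)
    y_F        -- y coordinate of the vanishing point F (pixels)
    cam_height -- pre-calibrated camera height Yo (metres)

    Neither the camera constant c nor the principal point enters:
    only coordinate differences are used.
    """
    return cam_height * dx_img / (y_img - y_F)

# Assumed values: Δx = 130 px at y = 300 px, y_F = 240 px, Yo = 2.1 m
print(lane_width(130, 300, 240, 2.1))  # → 4.55
```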
Furthermore, any affine image deformation may also be by-
passed. Rather than measuring the camera height Yo directly,
it is precalibrated by applying Eq. (4) in reverse with a known
lane width ΔX (cf. Southall & Taylor, 2001). Of course, this value of
Yo does not represent the actual camera height but is affected by 
image affinity. Yet, if this value of Yo is used afterwards in Eq. 
(4) for the same camera setup, the fact that measurements in the 
two image directions differ in scale has no effect upon the 
accuracy of subsequent lane width estimation. 
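The reverse use of Eq. (4) for pre-calibrating Yo can be sketched as follows; the measurement tuples and the 3.75 m reference lane width are assumed values for illustration, not data from the paper:

```python
def calibrate_height(known_dX, dx_img, y_img, y_F):
    """Invert Eq. (4): Yo = ΔX · (y − y_F) / Δx.

    The resulting value absorbs any affine (x-vs-y) scale
    difference of the frames, so it need not equal the physical
    camera height; used consistently in Eq. (4) afterwards, that
    difference cancels out.
    """
    return known_dX * (y_img - y_F) / dx_img

# Assumed calibration frames with a known 3.75 m lane:
# (known ΔX [m], Δx [px], y [px], y_F [px]) per frame
frames = [(3.75, 125.0, 305.0, 241.0), (3.75, 118.0, 298.0, 238.0)]
heights = [calibrate_height(*f) for f in frames]
Yo = sum(heights) / len(heights)    # average over frames
print(round(Yo, 3))
```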
2.2 Calibration and evaluation 
Before employing the described approach on a routine basis, the 
calibration process and the accuracy had to be evaluated with an 
ordinary video camera (frame size: 640 x 512). For this purpose, 
10 frames from different sites were first used to estimate camera 
