Proceedings, XXth Congress (Part 3)

A New Method for Depth Detection Using Interpolation Functions 
Mahdi Mirzabaki 
Azad University of Tabriz, Computer Engineering Department, Faculty of Engineering, Tabriz, Iran
E-mail: Mirzabaki@Yahoo.com
KEY WORDS: Depth detection, Digital Camera, Analysis, Measurement, Accuracy, Performance, Interpolation 
ABSTRACT: 
Several different methods are used for depth perception. In this paper, a new method for depth perception using a
single camera, based on interpolation, is introduced. In order to find the parameters of the interpolation function, a set of
lines at predefined distances from the camera is used, and the distance of each line from the bottom edge of the picture
(the origin line) is calculated. The results of implementing this method show higher accuracy and lower computational
complexity compared with other methods. Moreover, two well-known interpolation functions, namely Lagrange and Divided
Difference, are compared in terms of their computational complexity and accuracy in depth detection using a single
camera.
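The two interpolation schemes compared in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the calibration pairs (pixel offset of a reference line from the bottom image edge versus its known distance from the camera) are hypothetical values chosen for the example.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def divided_difference_coeffs(xs, ys):
    """Newton divided-difference coefficients, computed in place."""
    coeffs = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - 1, level - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - level])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form of the polynomial via Horner's scheme."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

# Hypothetical calibration data: pixel offset of each reference line from
# the bottom edge of the image -> known distance from the camera (metres).
pixel_rows = [40.0, 90.0, 150.0, 230.0]
distances  = [1.0,  2.0,  3.0,   4.0]

coeffs = divided_difference_coeffs(pixel_rows, distances)
query = 120.0  # pixel offset of an unknown object's base line
print(lagrange_eval(pixel_rows, distances, query))
print(newton_eval(pixel_rows, coeffs, query))
```

Both forms define the same unique interpolating polynomial; the Newton form is cheaper to evaluate repeatedly because the coefficients are computed once, which is relevant to the computational-complexity comparison the paper makes.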
1. INTRODUCTION 
Depth finding using a camera and image processing has
various applications, including industry, robotics, and
vehicle navigation and control. This issue has been
examined from different viewpoints, and a number of
researchers have conducted valuable studies in this
field. All of the introduced methods can be categorized into
six main classes.
The first class includes all methods that are based on using
two cameras. These methods originate from the earliest
research in this field, which employed the characteristics of
human binocular vision. In these methods, two separate
cameras are placed on a horizontal line at a specified
distance from each other and are focused on a particular
object. The angles between the cameras and the
horizontal line are then measured, and by triangulation
the perpendicular distance of the object from the line
connecting the two cameras is calculated. The main difficulty
of these methods is the need for mechanical movement
and adjustment of the cameras in order to focus
properly on the object. Another drawback is the need
for two cameras, which increases cost; moreover,
the system fails if either camera fails.
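The triangulation step described above reduces to a one-line formula: if each camera's line of sight makes a known angle with the baseline, the perpendicular distance follows from the two cotangents. A minimal sketch, with example numbers chosen for illustration:

```python
import math

def depth_from_two_cameras(baseline, alpha, beta):
    """Perpendicular distance of the object from the line joining the two
    cameras, given the baseline length and the angles (in radians) that
    each camera's line of sight makes with that baseline:
        d = b / (cot(alpha) + cot(beta))
    """
    return baseline / (1.0 / math.tan(alpha) + 1.0 / math.tan(beta))

# Example: cameras 1 m apart, object directly above the midpoint at 0.5 m,
# so both viewing angles are 45 degrees.
d = depth_from_two_cameras(1.0, math.pi / 4, math.pi / 4)
print(d)  # 0.5
```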
The second class emphasizes using a single camera [6].
In these methods, the basis of the measurement is the
amount by which the image is resized in proportion to the
camera movement. These methods require knowing the true size
of the object whose distance is being measured, as well as
camera parameters such as the focal length of the lens.
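The dependence on the object's true size and the focal length comes from the pinhole-camera similar-triangles relation Z = f·W/w. A minimal sketch with hypothetical numbers:

```python
def depth_from_known_size(focal_length_mm, real_width_mm, image_width_mm):
    """Pinhole-camera similar triangles: depth Z = f * W / w, where W is
    the object's true width and w is its width on the image plane."""
    return focal_length_mm * real_width_mm / image_width_mm

# Hypothetical values: 50 mm lens, 2 m wide object imaged 10 mm wide
# on the sensor.
z = depth_from_known_size(50.0, 2000.0, 10.0)
print(z)  # 10000.0 mm, i.e. 10 m
```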
The methods in the third class are used for measuring the
distance of moving targets [1]. In these methods, a
camera is mounted on a fixed station. The moving
object(s) are then identified based on four scenarios:
maximum velocity, small velocity changes, coherent
motion, and continuous motion. Finally, the distance of the
specified target is calculated. The major problem with these
methods is the large amount of computation required.
The fourth class includes methods that use a
sequence of images captured with a single camera for
depth perception, based on geometrical models of the
object and the camera [7]. In these methods, the results
are approximate. In addition, these methods cannot be used in
the near field (for objects close to the camera).
The fifth class of algorithms performs depth finding using
blurred edges in the image [4]. In these cases, the basic
framework is as follows: The observed image of an object 
is modeled as a result of convolving the focused image of 
the object with a point spread function. This point spread 
function depends both on the camera parameters and the 
distance of the object from the camera. The point spread 
function is considered to be rotationally symmetric 
(isotropic). The line spread function corresponding to this 
point spread function is computed from a blurred
step-edge. The measure of the spread of the line spread function
is estimated from its second central moment. This spread is 
shown to be related linearly to the inverse of the distance. 
The constants of this linear relation are determined through 
a single camera calibration procedure. Having computed 
the spread, the distance of the object is determined from 
the linear relation. 
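The calibration-then-inversion procedure described above can be sketched in a few lines. The spread values and distances below are hypothetical, and the linear model sigma = m·(1/Z) + c is the one stated in the text:

```python
def calibrate_defocus(sigma1, z1, sigma2, z2):
    """Fit the linear relation sigma = m * (1/Z) + c between edge spread
    and inverse distance from two calibration measurements."""
    m = (sigma1 - sigma2) / (1.0 / z1 - 1.0 / z2)
    c = sigma1 - m / z1
    return m, c

def depth_from_spread(sigma, m, c):
    """Invert the linear relation: Z = m / (sigma - c)."""
    return m / (sigma - c)

# Hypothetical calibration: spreads of 4.0 and 2.5 pixels were measured
# at known distances of 1000 mm and 2000 mm.
m, c = calibrate_defocus(4.0, 1000.0, 2.5, 2000.0)
print(depth_from_spread(3.0, m, c))  # 1500.0 mm
```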
In the last class, auxiliary devices are used for depth
perception. One such method uses a laser pointer with
three LEDs placed on its optical axis [5], built into a pen-
like device. When a user scans the laser beam over the
surface of the object, the camera captures the image of the
three spots (one from the laser and the others from the
LEDs), and triangulation is then carried out using the
camera's viewing direction and the optical axis of the laser.
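The geometry of such laser triangulation can be illustrated in its simplest form. This sketch assumes a simplified setup (camera axis parallel to the laser beam, offset by a known baseline), which is an assumption for illustration rather than the configuration of the device in [5]:

```python
import math

def depth_from_laser_spot(baseline, view_angle):
    """Single-spot laser triangulation: with the camera axis parallel to
    the laser beam and offset by `baseline`, a spot seen at `view_angle`
    (radians) off the camera axis lies at depth Z = b / tan(theta)."""
    return baseline / math.tan(view_angle)

# Example: 5 cm baseline; the spot of a target 2 m away appears at
# atan(0.05 / 2.0) off the camera axis.
z = depth_from_laser_spot(0.05, math.atan2(0.05, 2.0))
print(z)  # 2.0
```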