Active optical devices are based on an emitter, which produces 
some sort of structured illumination on the object to be 
measured, and a sensor, which is typically a CCD camera and 
acquires images of the projected pattern reflected by the object 
surface (Rochini et al., 2001). In most cases the depth information is found by triangulation, given the relative positions of the emitter and the sensor. Passive measuring methods, in contrast, function like the human vision system. In MEDPHOS, a combination of active and passive methods has been used.
5. SYSTEM CONFIGURATION 
Considering the error propagation from the imaging component 
only, the positioning accuracy of object point coordinates is 
mainly determined by four factors: the object distance, the base 
line, the focal length, and the mean square error of image 
coordinate measurements. 
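As an illustration of how these factors interact, the standard normal-case stereo error propagation, sigma_Z ≈ (Z²/(c·B))·sigma_x, can be evaluated numerically; the values in the sketch below are hypothetical and serve only to indicate orders of magnitude, not the actual MEDPHOS design parameters.

# A minimal sketch of normal-case stereo error propagation.
# All values below are hypothetical and for illustration only.
Z = 0.5          # object distance (m)
B = 0.2          # base line (m)
c = 0.016        # focal length (m), i.e. 16 mm
sigma_x = 2e-6   # mean square error of image coordinate measurement (m)

# Depth precision degrades with the square of the object distance and
# improves with a longer base line and a longer focal length.
sigma_Z = (Z ** 2 / (c * B)) * sigma_x
print(f"depth precision: {sigma_Z * 1000:.3f} mm")   # about 0.16 mm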
The selection of the CCD cameras is critical in terms of pixel 
spacing and sensing area specifications. Pixel spacing affects 
the accuracy of image coordinate measurements, while the sensing area of the camera ultimately constrains the settings of base line and focal length, since the positioning accuracy depends on both. The larger the focal length, the larger the image scale.
On the other hand, the use of a shorter focal length will increase 
the lens distortion errors. In addition, using a larger focal length 
instead of a larger base line will reduce the ambiguities of 
image point matching and improve the point measurement 
accuracy. However, a tradeoff should be made in order to 
maintain a certain field of view (Tao, 1999). If a large sensing 
area is used, either the focal length can be increased or the field 
of view can be extended. Thus, the base line can also be 
extended, provided the same image overlap is maintained. Large-sensor cameras clearly offer better performance in terms of sensing area, pixel spacing and number of pixels.
Consequently, if large sensor cameras are employed, the 
settings of imaging parameters will be more flexible and the 
total performance of the system can be improved. However, the 
problematic aspects of the low image capture rate and high 
storage requirements have to be taken into account. In addition, 
the geometric performance of different types of CCD cameras 
may vary in terms of electronic noise. 
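The influence of pixel spacing, focal length and sensing area can be made concrete with a short sketch; the sensor and lens values below are assumptions chosen for illustration, not the cameras actually selected for the system.

import math

# Assumed sensor and lens parameters (illustrative only).
pixel_pitch = 6.7e-6     # pixel spacing (m)
sensor_width = 8.8e-3    # width of the sensing area (m)
c = 0.016                # focal length (m)
Z = 0.5                  # object distance (m)

image_scale = c / Z                              # larger focal length -> larger image scale
pixel_footprint = pixel_pitch / image_scale      # object-space size of one pixel
fov = 2 * math.degrees(math.atan(sensor_width / (2 * c)))   # horizontal field of view

print(f"image scale 1:{Z / c:.0f}")
print(f"pixel footprint on the object: {pixel_footprint * 1000:.2f} mm")
print(f"horizontal field of view: {fov:.1f} deg")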
An analysis of the optimal configuration shows that the maximum length of the base line is restricted by the desired overlap percentage, the overlap percentage is affected by the field of view angle, and the field of view angle is determined by the focal length and the sensing area of the camera. Regarding the above considerations and the average size of the medical objects to be measured, an overall compromise was achieved to reach an optimal configuration.
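The chain of constraints just described (base line limited by overlap, overlap by the field of view, field of view by focal length and sensing area) can be written down directly. The sketch below uses the same hypothetical parameters as above and assumes a parallel-axis (normal case) configuration.

# Assumed parameters (illustrative only), normal-case configuration.
sensor_width = 8.8e-3   # width of the sensing area (m)
c = 0.016               # focal length (m)
Z = 0.5                 # object distance (m)
overlap = 0.8           # desired overlap percentage

coverage = sensor_width * Z / c              # object-space coverage of one camera
max_base_line = (1.0 - overlap) * coverage   # base line that still keeps the desired overlap

print(f"coverage at the object distance: {coverage * 1000:.0f} mm")
print(f"maximum base line for {overlap:.0%} overlap: {max_base_line * 1000:.0f} mm")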
5.1 Multi Camera Concept 
Multi-image geometric configurations have recently been proposed to overcome the limitations of binocular vision (Faugeras, 2001). These limitations mainly include the relatively poor reliability and low accuracy of the reconstruction process. If third and fourth cameras are added, the geometry becomes much richer than that of a two-camera system. By applying the trifocal and quadrifocal constraints, point correspondences can be found robustly. If the camera geometries are known, transfer is done in a straightforward fashion by three-dimensional reconstruction and reprojection (Malian et al., 2002).
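A minimal sketch of this reconstruction-and-reprojection transfer, assuming known projection matrices (the toy calibration below is invented purely for illustration): a point matched in the first two images is triangulated linearly and then reprojected into the third and fourth images.

import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one point from two calibrated views.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]                      # homogeneous 3-D point

def reproject(P, X):
    # Project a homogeneous 3-D point with camera matrix P.
    x = P @ X
    return x[:2] / x[2]

# Toy calibration: four cameras shifted along the base line (illustrative only).
K = np.diag([1600.0, 1600.0, 1.0])
P1, P2, P3, P4 = [K @ np.hstack([np.eye(3), [[-0.1 * i], [0.0], [0.0]]]) for i in range(4)]

X_true = np.array([0.05, 0.02, 0.6, 1.0])             # a point on the object surface
x1, x2 = reproject(P1, X_true), reproject(P2, X_true)

X = triangulate(P1, P2, x1, x2)                          # reconstruct from cameras 1 and 2
x3_pred, x4_pred = reproject(P3, X), reproject(P4, X)    # transfer into cameras 3 and 4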
5.2 Structured Light 
Surface measurement of skin involves some problems regarding 
targeting, especially if the object is dynamic or alive. Structured
light means the projection of patterns onto an object surface. It 
provides in some cases the only reasonable approach for surface 
reconstruction. Different technologies can be used to produce 
structured light pattern: laser emitters, light projectors, slide 
projectors, and video projectors. In particular, target projection 
is used for any object surface that does not lend itself to target
placement or does not have rich texture. This also avoids the 
time consuming placement of retro reflective targets and the 
placement and alignment of the corresponding light source 
required for retro targets. These targets are detected and 
localized by specific image processing techniques. The pattern 
used should satisfy certain characteristics: it should not be drastically altered by small variations in photometric and geometric conditions, and the detection, localization and discrimination of its constituent features in the reflected image should be easy and accurate, among other requirements. The size of the
object is limited by the possibilities of the projector and the 
environment, that is, by the strength of the projector 
illumination and environmental light. Project planning must 
therefore take into consideration not only the characteristics of 
the camera, such as depth of field and field of view, but also those of the projector. In MEDPHOS, this idea is applied as dot target projection using a slide projector that also serves as an active camera with known calibration parameters.
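As a simple illustration, a regular dot-grid slide of the kind implied here can be generated as follows; the grid spacing and dot size are arbitrary assumptions, not the pattern actually used in MEDPHOS.

import numpy as np

def dot_pattern(height=480, width=640, spacing=32, radius=3):
    # Binary slide image with bright circular dots on a regular grid.
    pattern = np.zeros((height, width), dtype=np.uint8)
    yy, xx = np.mgrid[0:height, 0:width]
    for cy in range(spacing // 2, height, spacing):
        for cx in range(spacing // 2, width, spacing):
            pattern[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 255
    return pattern

slide = dot_pattern()   # 640 x 480 slide carrying a 32-pixel dot grid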
5.3 Prediction 
By going beyond classical binocular vision, the problem we now address is to predict how the scene would look from a third and a fourth camera. In other words, given the calibration
information of third and fourth cameras and image coordinates 
of an object point in one image, predict the locations of 
corresponding image points in the other images. The transfer concept is used in MEDPHOS: the homologous point of a
selected image point lies on the epipolar bands of the other 
images. In other words, for one or more image points in a given 
image set, the corresponding points in other image sets can be 
predicted using Essential and Fundamental matrices. Width and 
length of the epipolar band can be restricted with information on 
the error budget and approximate depth; the latter is estimated by applying the MEDPHOS algorithm to a few non-ambiguous
dots distributed within the projected pattern. 
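A minimal sketch of this prediction step, assuming a fundamental matrix F obtained from the system calibration: the epipolar line of a measured point is computed, and candidate matches are accepted only inside a band around it whose half-width reflects the error budget; the approximate depth would additionally bound the search segment along the line. The matrix used below is arbitrary and purely illustrative.

import numpy as np

def epipolar_band(F, x, band_half_width=2.0):
    # Epipolar line l' = F x of an image point x in the other image,
    # normalised so that point-line distances come out in pixels.
    l = F @ np.array([x[0], x[1], 1.0])
    l = l / np.hypot(l[0], l[1])

    def inside(x_other):
        # True if a candidate point lies within the epipolar band.
        return abs(l[0] * x_other[0] + l[1] * x_other[1] + l[2]) <= band_half_width

    return l, inside

# Arbitrary fundamental matrix, for illustration only.
F = np.array([[0.0, -1e-4, 0.02],
              [1e-4,  0.0, -0.03],
              [-0.02, 0.03,  1.0]])
line, in_band = epipolar_band(F, (320.0, 240.0), band_half_width=2.0)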
6. PROCEDURES 
MEDPHOS consists of four digital and calibrated cameras 
mounted on a rig that allows the required rotations. The cameras are
activated in a synchronized manner. The base lines of the 
cameras can be set in different lengths. The pattern projector is 
fixed at the center of the system (Figure 6). It can accept various 
pattern types. A total calibration of the system by series of 
convergent photography and self-calibration bundle adjustment 
(Malian, 2000) provides the relative position and attitude of the
cameras and the projector as well as the epipolar geometry for 
any image point in any camera. The captured images are 
directly fed to the computer where the related software 
processes the data in real time. The designed dot pattern is 
projected onto the object and recorded by the four camera 
system. To reduce the effects of specular reflectance, 
homomorphic filtering is applied to the images (Malian et al.,
2002). The observed light pattern is then used to detect the 
image coordinates of the dots using an optimal thresholding technique.
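A compact sketch of this pre-processing and detection chain, assuming grey-level input images; the filter cut-off, gains and threshold are illustrative assumptions and not the actual MEDPHOS settings.

import numpy as np
from scipy import ndimage

def homomorphic_filter(img, cutoff=30.0, gain_low=0.5, gain_high=1.5):
    # Suppress specular and illumination components: log transform,
    # high-emphasis Gaussian filter in the frequency domain, exponential.
    log_img = np.log1p(img.astype(np.float64))
    rows, cols = img.shape
    u = np.fft.fftfreq(rows).reshape(-1, 1) * rows
    v = np.fft.fftfreq(cols).reshape(1, -1) * cols
    H = (gain_high - gain_low) * (1.0 - np.exp(-(u ** 2 + v ** 2) / (2.0 * cutoff ** 2))) + gain_low
    return np.expm1(np.real(np.fft.ifft2(H * np.fft.fft2(log_img))))

def detect_dots(img, threshold=0.5):
    # Thresholding followed by connected-component centroids.
    mask = img > threshold * img.max()
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(img, labels, range(1, n + 1))   # dot image coordinates

In practice the threshold would be chosen adaptively (for example by Otsu's method) to realise the optimal thresholding mentioned above.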