Close-range imaging, long-range vision

DETECTION OF THE OVERLAPPING AREA
[The left half of this column is illegible in the scan. The surviving fragments concern detection of the overlapping area: point correspondences derived from spin-images are ranked and grouped; for each group a rototranslation (rotation matrix R and translation, in quaternion representation, Eq. 5) is estimated; the average of the residual distances between corresponding points is then used to rank the groups, so that groups with a low average distance provide the transformation estimate. The procedure was implemented with the Horn method; for each view pair the transformation is applied and, if corresponding points fall within a threshold tied to the mesh resolution, they are assigned to the common area.]
By comparing Euclidean distances, the above procedure allows the best rototranslation to be selected from the whole set of transformations obtained by surface matching based on spin-images. Given this single transformation, a further refinement of the view-pair registration can be undertaken with a global alignment method such as the ICP algorithm. Since ICP belongs to the class of greedy algorithms, it needs a good initial approximation of the rototranslation parameters in order to avoid convergence toward a local minimum rather than the global one. Therefore, in the presented work, another global alignment method was applied before the refinement with ICP: the Frequency Domain algorithm can profitably be used to provide such a good initial estimate. This algorithm also works on common areas rather than on point correspondences, so the overlapping area between the two views must be determined. To that end, the original range data that make up the two polygonal meshes are considered at this step, and a procedure like the one previously described for overlapping-area detection is applied. In this case the point clouds represented by the range data of the two views are registered to each other using the previously estimated transformation. Although this transformation is not very accurate, one can expect that the applied rototranslation will bring the points of the common area between the two views very close to each other. Again, this zone can be identified as the set of corresponding points whose Euclidean distance is less than a certain threshold: a point on view A that does not belong to the overlapping area will have its closest corresponding point on view B at a distance greater than the chosen threshold. Also in this case, the threshold value was set to 1.5 times the mesh resolution.
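The two distance-based steps just described, selecting the best candidate rototranslation and extracting the overlapping zone with a threshold of 1.5 times the mesh resolution, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function names are hypothetical and `scipy` is assumed available for nearest-neighbour queries.

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_mask(view_a, view_b, threshold):
    """Flag points of view_a whose nearest neighbour in view_b
    lies within `threshold`: these form the common area."""
    dists, _ = cKDTree(view_b).query(view_a)
    return dists < threshold

def best_rototranslation(view_a, view_b, candidates):
    """Among candidate (R, t) pairs, keep the one giving the lowest
    mean nearest-neighbour distance after moving view_a onto view_b."""
    tree = cKDTree(view_b)
    best, best_score = None, np.inf
    for R, t in candidates:
        score = tree.query(view_a @ R.T + t)[0].mean()
        if score < best_score:
            best, best_score = (R, t), score
    return best
```

In practice the threshold would be set to `1.5 * mesh_resolution`, as stated in the text.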
Figures 7a-7b show an example of common areas detected with the described algorithm. The point cloud of fig. 7b represents the data source used to create the mesh of figure 5. The two meshes were aligned with the rototranslation parameters estimated by the Horn method.
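The Horn method referenced here is the well-known closed-form solution to absolute orientation. A minimal sketch of the standard quaternion-based formulation (not necessarily the authors' exact implementation) for paired point sets could look like:

```python
import numpy as np

def horn_absolute_orientation(A, B):
    """Closed-form rigid alignment (Horn's quaternion method):
    returns R, t such that B ~= A @ R.T + t for paired points."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    S = (A - ca).T @ (B - cb)                      # cross-covariance matrix
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    # Symmetric 4x4 matrix whose dominant eigenvector is the optimal quaternion
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    w, x, y, z = np.linalg.eigh(N)[1][:, -1]       # eigenvector of largest eigenvalue
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
    return R, cb - R @ ca
```

The quaternion formulation avoids iterative optimization entirely, which is why it is suitable for producing the per-group transformation estimates discussed above.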
  
Figure 7a-7b: example of common area detection between two views
  
Figure 8: alignment of the two views of figure 7, based on the
detected common areas.
CONCLUSIONS
Surface matching is the process that compares surfaces in order to detect similarity among them. It plays a prominent role in the 3D modeling of real objects, as it is used for the registration of view pairs. In this work an alternative procedure for the automatic alignment of view pairs has been presented. The method is based on an innovative solution in the research field of automatic registration: spin-images. Spin-images provide a new kind of object shape encoding, in which the global properties of the object are retained regardless of the specific position of surface points. Since spin-images are constructed with respect to specific surface points, they are coordinate-system independent and highly discriminating with respect to their base points. Furthermore, as they are constructed without surface fitting, spin-images can be easily implemented. Given these features, spin-images represent a tool that can be profitably employed in registration algorithms.
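Since spin-images are central to the method, a minimal sketch of their construction may help. The bin layout and parameter names below are illustrative, not the authors': each surface point is projected into the (alpha, beta) coordinates of an oriented point (p, n), where beta is the signed distance along the normal and alpha the radial distance from the normal line, and the pairs are accumulated into a 2-D histogram.

```python
import numpy as np

def spin_image(points, p, n, bin_size, n_bins):
    """Accumulate the (alpha, beta) coordinates of `points`, relative
    to the oriented point (p, n), into a 2-D histogram (the spin-image).
    Rows index beta (signed, normal direction), columns index alpha (radial)."""
    d = points - p
    beta = d @ n                                           # distance along normal
    alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta**2, 0.0))
    img = np.zeros((2 * n_bins, n_bins))
    i = np.floor(n_bins - beta / bin_size).astype(int)     # beta row (centred)
    j = np.floor(alpha / bin_size).astype(int)             # alpha column
    ok = (i >= 0) & (i < 2 * n_bins) & (j >= 0) & (j < n_bins)
    np.add.at(img, (i[ok], j[ok]), 1)
    return img
```

Because only (p, n) and relative point positions enter the computation, the resulting image is independent of the object's coordinate system, which is exactly the property exploited for registration.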
Thus, on the ground of the spin-image concept, a registration method has been developed. First, point correspondences are determined by ranking the similarity measure between spin-images of the polygonal meshes of the view pairs. Next, the common area between these views is estimated from the result of the previous step. The method was tested on several real objects, such as statues. Although these objects were characterized by very complex and irregular shapes, the proposed registration system was successful in all tests. These results could be achieved by adopting specific processing strategies, such as the analysis of the similarity-measure histogram and point correspondences
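The correspondence-ranking step summarized above can be sketched as follows, using the linear correlation coefficient as the similarity measure between spin-images. This is a common choice for comparing spin-images, offered here as an assumption rather than a reproduction of the authors' exact measure; the function names are hypothetical.

```python
import numpy as np

def spin_image_similarity(p, q):
    """Linear correlation coefficient between two flattened spin-images,
    used to score candidate point correspondences."""
    return np.corrcoef(p.ravel(), q.ravel())[0, 1]

def rank_correspondences(spins_a, spins_b):
    """For every spin-image of view A, find the most similar spin-image
    of view B, and return (index_a, index_b, score) triples sorted by
    decreasing similarity."""
    pairs = []
    for i, sa in enumerate(spins_a):
        scores = [spin_image_similarity(sa, sb) for sb in spins_b]
        j = int(np.argmax(scores))
        pairs.append((i, j, scores[j]))
    return sorted(pairs, key=lambda x: -x[2])
```

The ranked list is the input to the grouping and group-wise transformation estimation discussed earlier; analysing the histogram of these similarity scores is one of the processing strategies mentioned in the conclusions.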
 
	        