object recognition errors but also 3-D model generation errors. The error in Approach 1 includes the uncertainty of the model geometry introduced during CAD model generation, whereas the errors in Approaches 2 and 3 depend on the accuracy of the stereo matching procedure.
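For the normal stereo case, the dependence on matching accuracy can be made explicit with the standard first-order error propagation (general photogrammetric practice, not a formula stated in this paper), where f is the focal length and \sigma_d the disparity (matching) precision, both in pixels, B the stereo baseline and Z the object distance:

    \sigma_Z \approx \frac{Z^{2}}{f\,B}\,\sigma_d

i.e. the depth error grows quadratically with object distance and linearly with the stereo matching error.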
5.4.2 Object recognition in gaze tracking
Because a complete 3-D model is used as the reference data in object recognition, the object can be recognized even from its opposite side, as shown in Figures 18 and 19.
Figure 18. Acquired scene    Figure 19. Opposite side
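Conceptually, recognition against a full 3-D reference model amounts to registering the measured (and possibly partial) point cloud to that model; because the model covers every viewpoint, a view of the far side of the object still yields valid correspondences. The following is a minimal NumPy/SciPy sketch using generic point-to-point ICP as a stand-in for the model-based matching actually used in VVV; all function names are illustrative and none of them come from the paper.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    # Least-squares R, t such that R @ src[i] + t ~= dst[i] (Kabsch / SVD).
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def register_view_to_model(view_pts, model_pts, iters=50, tol=1e-6):
    # Point-to-point ICP: pose of a partial (occluded or opposite-side) view
    # with respect to the complete reference model.
    tree = cKDTree(model_pts)
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur, prev_err = view_pts.copy(), np.inf
    for _ in range(iters):
        dist, idx = tree.query(cur)                    # closest model points
        R, t = best_rigid_transform(cur, model_pts[idx])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_tot, t_tot, err

Because the reference covers the whole object, the same routine works whichever side of the object is visible, provided a rough initial alignment (the role of the rough matching stage) keeps the iteration out of poor local minima.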
The object can also be recognized when occlusion exists, as shown in Figure 20; the corresponding recognition result is shown in Figure 21.
Figure 20. Occlusion
Figure 21. Object recognition with occlusion
When the occluded area is large, object recognition fails and the object is lost, as shown in Figure 22. However, the 3-D object recognition system recovers from this failure as soon as the object appears again in the stereo images, as shown in Figure 23.
Figure 22. Tracking failure    Figure 23. Tracking recovery
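The failure-and-recovery behaviour described above can be summarised as a small control loop. The sketch below is schematic only: detect, refine and score are placeholder callables standing in, loosely, for the rough matching, precise matching and match evaluation stages; none of these names or signatures come from the paper.

def gaze_track(stereo_frames, detect, refine, score, min_score=0.5):
    # Yields (is_tracking, pose) for each stereo frame.  While the object is
    # lost, every frame runs a full model-based search (detect); while it is
    # tracked, only a local pose update (refine) runs, and a low match score
    # (e.g. under heavy occlusion) drops the track so that the global search
    # takes over again as soon as the object reappears.
    pose = None
    for left, right in stereo_frames:
        if pose is None:
            pose = detect(left, right)                 # recovery path
        else:
            pose = refine(left, right, pose)           # normal tracking path
            if pose is not None and score(left, right, pose) < min_score:
                pose = None                            # declare tracking failure
        yield pose is not None, pose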
6. CONCLUSION
We have presented a real-time gaze tracking system based on VVV that enables an active stereo camera to recognize 3-D objects without markers. The combination of rough matching and precise matching in camera position estimation allows the system to gaze at and track objects continuously, and Hyper Frame Vision provides stable gaze tracking of moving objects in real time.
In this research we described three approaches and conducted corresponding experiments. Two approaches used a gaze tracking procedure with a known 3-D model; the third used a gaze tracking procedure without a known 3-D model. The results confirm that our methodology can gaze at and track objects successfully. Moreover, the proposed system achieves high-resolution 3-D spatial data acquisition and recognition, detection of relative object behavior, and wide-area coverage. We plan to improve the automation of actual GIS operations for 3-D map generation and 3-D map reference, and to mount the active stereo camera on autonomous navigation systems.