International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 
A VISION SYSTEM WITH MULTIPLE SENSORS IN INTELLIGENT ROBOT 
AND PHOTOGRAMMETRIC CONTRIBUTIONS 
Dr. Guoqing Zhou 
Department of Civil & Environmental Engineering and Geodetic Science 
The Ohio State University 
470 Hitchcock Hall, 2070 Neil Avenue, Columbus, OH 43210-1275 
Fax: +1 (0) 614 292-2957, Tel: +1 (0) 614 292-6683, Email: zhou.77@osu.edu 
ISPRS Commission V, Working Group WG V/1 
KEY WORDS: Vision system, Space robot, Multiple sensors, Camera calibration, Natural landmarks, 3D reconstruction, CAD, Line photogrammetry.
ABSTRACT 
For the purpose of providing sufficient and reliable vision information for Space Intelligent Robotic Manipulators, a vision system called the Space Intelligent Vision Equipment (SIVE) was developed. The paper first outlines SIVE, including the system design, the functions of SIVE, the hardware environment, the software processing pipeline, and the characteristics of the algorithms (software); it then focuses on the photogrammetric contributions to SIVE, namely camera calibration using line features and CAD-based object reconstruction using line photogrammetry.
A camera calibration approach based on natural landmarks is used in SIVE. In the proposed scheme, three pairs of parallel straight lines are used to solve for the rotation parameters and the internal parameters. The 3-D and 2-D coordinates of a distinct feature point, together with the length of a line segment, are then used to solve for the translation parameters of the camera.
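To illustrate how such line features constrain the calibration, a minimal sketch of the standard vanishing-point formulation is given below. The assumption that the three line-pair directions are mutually orthogonal is introduced here only for illustration and is not stated in the abstract; the sketch is not necessarily the exact functional model used in SIVE.

Let $\mathbf{v}_i$ ($i = 1, 2, 3$) denote the vanishing point of the $i$-th pair of parallel image lines, $K$ the matrix of internal parameters, $R$ the rotation matrix, and $\mathbf{d}_i$ the corresponding 3-D direction, so that
\[
\mathbf{v}_i \;\simeq\; K R \,\mathbf{d}_i .
\]
If the three directions are mutually orthogonal, $\mathbf{d}_i^{\mathsf T}\mathbf{d}_j = 0$ for $i \neq j$ yields
\[
\mathbf{v}_i^{\mathsf T} \,(K K^{\mathsf T})^{-1}\, \mathbf{v}_j \;=\; 0 , \qquad i \neq j ,
\]
i.e. three constraints on the internal parameters, and the columns of the rotation then follow as
\[
\mathbf{r}_i \;=\; \frac{K^{-1}\mathbf{v}_i}{\lVert K^{-1}\mathbf{v}_i \rVert} .
\]
Finally, a single 3-D/2-D correspondence $\mathbf{x} \simeq K(R\mathbf{X} + \mathbf{t})$ of a distinct feature point constrains the translation $\mathbf{t}$ up to a shift along the viewing ray, and the known length of a line segment removes this remaining ambiguity.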
CAD-based object reconstruction using line photogrammetry is also used in SIVE. The algorithm assumes that an object in the CAD model is constructed from primitives by Boolean set operations (CSG) and that each face is described by a boundary representation (B-rep). Straight and curved lines, as well as planar and curved surfaces, in 3D space are described by parametric equations. In the mathematical model of the reconstruction, the geometric elements are treated as unknown parameters, and the images (2D) are matched to the objects (3D) directly. Numerous simulations and practical experiments were performed.
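To illustrate how such a parametric description leads to a direct 2-D to 3-D match, a minimal sketch of a common line-photogrammetry observation equation is given below, assuming a projective camera matrix $P$; it is not necessarily the exact functional model used in SIVE.

A straight edge of a CSG primitive can be written parametrically as $\mathbf{X}(t) = \mathbf{A} + t\,\mathbf{D}$, where $\mathbf{A}$ and $\mathbf{D}$ depend on the unknown primitive parameters. Requiring that its projection lies on the observed image line $\mathbf{l}$ gives
\[
\mathbf{l}^{\mathsf T} P \begin{pmatrix}\mathbf{X}(t)\\ 1\end{pmatrix} = 0 \;\;\text{for all } t
\;\Longleftrightarrow\;
\mathbf{l}^{\mathsf T} P \begin{pmatrix}\mathbf{A}\\ 1\end{pmatrix} = 0
\;\;\text{and}\;\;
\mathbf{l}^{\mathsf T} P \begin{pmatrix}\mathbf{D}\\ 0\end{pmatrix} = 0 .
\]
Each observed image line thus contributes two observation equations in which the primitive parameters appear as unknowns, so the 2-D images are matched to the 3-D object directly, without intermediate point correspondences.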
1. INTRODUCTION 
Our vision research group (Dept. of Computer Science and Technology, Tsinghua University) has been carrying out a project to design a vision system for the Chinese Aerospace Industrial Department. The aim of the system is to provide sufficient and reliable vision information for an experimental testbed of Space Intelligent Robotic Manipulators. We call this system the Space Intelligent Vision Equipment (SIVE).
SIVE is required to 1) recognize and locate CAD-based objects and obstacles in space; 2) correctly guide the robot through various operations, including autonomous operation, master-slave operation, shared & traded operation, and coordinated two-arm operation; 3) provide enough vision information for the operator to interact with the robot's control system; and 4) support tele-operation with virtual reality.
Owing to the space environment, SIVE has the following characteristics in addition to those common to machine vision systems:
1. It must cope with micro-gravity, long time delays, vacuum, non-uniform lighting, and drifting (unstable) objects.
2. It supports autonomous operation, master-slave operation, shared & traded operation, and coordinated two-arm operation.
3. It assists tele-operation by means of virtual reality.
4. It must be highly robust and reliable, since it is almost impossible (very difficult) to repair or replace equipment because of the great distance between the spaceborne system and the Earth.
As the only photogrammetrist taking part in such a large vision project, I would like to present it to photogrammetrists, especially close-range photogrammetrists, so that colleagues may see that photogrammetry can play a large role in robot vision and computer vision.
This paper first describes the outline of SIVE and then focuses on the photogrammetric contributions, including camera calibration using line features and CAD-based object reconstruction using line photogrammetry.
2. OUTLINE OF THE SIVE SYSTEM
SIVE consists of two PUMA/560 robot arms, six CCD cameras, and three programmable structured-light projectors. Each arm is mounted on a moving platform; each platform can tilt and rotate and can move along a two-rail linear track. Two of the six CCD cameras are wide-angle cameras mounted on the ceiling so as to obtain an overall view. A pair of cameras is mounted on each robotic arm, and a computer-controlled programmable structured-light projector is mounted between the stereo cameras of each pair in order to obtain accurate positions and to increase the robustness of the three-dimensional information by combining passive and active vision, as illustrated in Fig. 1. The proposed hardware of SIVE is composed of the following components (see Fig. 2):
1. Cameras and structured-light sources;
2. Image grabber and low-level image processor;
3. Visual information fusion and image understanding; 
4. Visual information output and display. 
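Purely as an illustrative sketch of the data flow through these four stages (the names below are hypothetical and do not come from the SIVE implementation), the processing chain can be read as a simple pipeline:

# Hypothetical sketch of the four-stage processing chain described above:
# grab -> low-level processing -> fusion/understanding -> output/display.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """A raw image grabbed from one of the six CCD cameras (stages 1-2)."""
    camera_id: int
    pixels: bytes

@dataclass
class Features:
    """Low-level results, e.g. extracted line segments, for one frame (stage 2)."""
    camera_id: int
    line_segments: List[tuple]

def grab(camera_id: int) -> Frame:
    # Stage 1/2: camera and image grabber (stub returning an empty frame).
    return Frame(camera_id=camera_id, pixels=b"")

def low_level_processing(frame: Frame) -> Features:
    # Stage 2: low-level image processing, e.g. edge and line extraction (stub).
    return Features(camera_id=frame.camera_id, line_segments=[])

def fuse_and_understand(all_features: List[Features]) -> dict:
    # Stage 3: fusion of multi-camera information and image understanding (stub).
    return {"objects": [], "obstacles": []}

def output_and_display(scene: dict) -> None:
    # Stage 4: visual information output and display.
    print("Recognized objects:", scene["objects"])

if __name__ == "__main__":
    features = [low_level_processing(grab(cid)) for cid in range(6)]
    output_and_display(fuse_and_understand(features))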
2.1 Cameras 
How to select the camera types (such as focal length and field angle) and how to arrange the camera positions, considering the precision of object location, the space environment, system reliability, robotic operations, information fusion, and so on, were discussed at length by our research group. The final decision was: 1) two wide-angle video cameras (model JE2362 black-and-white Javelin cameras) are mounted on the ceiling above (see cameras 1 and 2 in Fig. 1) so as to provide a general view of the robotic
Thank you.