International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998
3D-NET - A NEW REAL-TIME PHOTOGRAMMETRIC SYSTEM
T. Clarke, L. Li, and X. Wang.
Optical Metrology Centre
City University, United Kingdom
E-mail: t.a.clarke@city.ac.uk
Commission V, Working Group V/1
KEY WORDS: real-time, 3-D, photogrammetry, metrology, digital signal processor
ABSTRACT
The move from computer/frame-grabber/camera combinations to networked cameras with onboard processing is progressing
rapidly. This development is long overdue and will produce a significant change in the way in which embedded close-range
photogrammetric systems operate and what they are capable of. It will become feasible to track the 3-D position of multiple objects
over large areas with high accuracy and reliability. This will be increasingly important for applications such as: virtual reality
environments, tracking surgical instruments during surgery, or monitoring assembly processes in the manufacturing environment.
This paper describes the development of a number of intelligent camera nodes designed for photogrammetric measurement purposes.
Each node consists of a video processor board, which performs real-time extraction of target locations from images, and a digital
signal processor, which recognises targets and calculates their sub-pixel locations. The target locations are then transferred to a host
computer for 3-D estimation. Each camera system is capable of producing 2-D estimations of target image locations at a sustained
rate of over 170 targets every 1/25 of a second.
1. INTRODUCTION
A programme of development of a network based real-time
measurement system began at City University in 1994. Some
initial results were published (Pushpakumara, 1995; Gooch et
al., 1996a, b; Pushpakumara et al., 1996; Wang & Clarke,
1996) concerning this work, and an overview paper was
presented (Clarke et al., 1997). This paper discusses the
ongoing development of this system that uses a number of
networked intelligent cameras.
2. 2-D PROCESSING
2.1 Hardware
The 2-D processing hardware is based on the Analog Devices
ADSP-21xx family of processors. This modular system consists
of a DSP module (DSP-90), a general I/O (GPIO-90) module, a
video feature extractor (VFE-90) module, an Ethernet
communications (ETH-90) module and a power supply unit
(PSU-90) module. Each camera contains an embedded DSP-90
system where images of retro-reflective targets are processed
and sub-pixel 2-D co-ordinates of the targets are calculated
(Figure 1).
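The paper does not specify the estimator used for the sub-pixel co-ordinates; a common choice for retro-reflective target images is an intensity-weighted centroid. The sketch below is illustrative only, with a hypothetical row-major intensity buffer:

```c
#include <stddef.h>

typedef struct { double x, y; } Point2D;

/* Intensity-weighted centroid over a w x h window of a target image.
   This is a generic sub-pixel estimator, not necessarily the exact
   algorithm run on the DSP-90; it assumes the window contains at
   least one non-zero intensity (sum > 0). */
static Point2D centroid(const double *intensity, size_t w, size_t h)
{
    double sum = 0.0, sx = 0.0, sy = 0.0;
    for (size_t y = 0; y < h; ++y) {
        for (size_t x = 0; x < w; ++x) {
            double v = intensity[y * w + x];
            sum += v;
            sx  += v * (double)x;
            sy  += v * (double)y;
        }
    }
    Point2D p = { sx / sum, sy / sum };
    return p;
}
```

Because the weighting is linear in intensity, a symmetric target image yields a location at its geometric centre, with resolution well below one pixel.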
The VFE-90 module is a hybrid circuit comprising both
analogue and digital circuitry. By performing this processing at
the hardware level, the amount of data requiring further
processing is reduced considerably. This makes it possible to
achieve real-time photogrammetry at a reasonable cost. After
processing by the VFE-90 module, only the line-by-line video
signal (A-D converted into 16-bit words) which is above the
threshold level is stored in a First In, First Out (FIFO) buffer. If
there is no object above the threshold, a value denoting the end
pixel location is placed into the FIFO. For a line with a target, the pixel
location, together with the intensity of the first edge and all
subsequent contiguous pixel intensities of each target image, is
also stored in the FIFO. A bit flagging the beginning of
each new frame is encoded into the pixel location word (the
intensity is a 10-bit quantity, leaving 6 bits free for other uses).
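The exact bit assignment within the 16-bit FIFO word is not given in the text; the following sketch assumes one plausible layout, with the 10-bit intensity in the low bits and the new-frame flag in the top bit, purely to illustrate the packing described above:

```c
#include <stdint.h>

/* Hypothetical layout of a 16-bit VFE-90 FIFO word: intensity in the
   low 10 bits, leaving 6 spare bits, one of which (bit 15 here) is
   assumed to carry the new-frame flag. The real hardware's bit
   positions may differ. */
#define INTENSITY_MASK  0x03FFu   /* 10-bit intensity field */
#define FRAME_START_BIT 0x8000u   /* assumed new-frame flag position */

static uint16_t pack_word(uint16_t intensity, int frame_start)
{
    uint16_t w = intensity & INTENSITY_MASK;
    if (frame_start)
        w |= FRAME_START_BIT;
    return w;
}

static uint16_t intensity_of(uint16_t w)
{
    return (uint16_t)(w & INTENSITY_MASK);
}

static int is_frame_start(uint16_t w)
{
    return (w & FRAME_START_BIT) != 0;
}
```

Packing flags into otherwise unused bits in this way keeps every FIFO entry a single 16-bit write, which matters at video pixel rates.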
For interlaced imagery, the odd-even field output of the
synchronisation stripper is used to direct the data to one of two
FIFOs (A and B), one for odd-line data and the other for
even-line data. For a camera imaging a number of targets
evenly distributed throughout the image, the FIFOs are filled in
the following way. The FIFOs are reset, which empties them.
Data is read from the FIFOs until data starts going into FIFO B.
A new frame can be extracted at the point when FIFO B has
been filled with one field's data and FIFO A is just filling up.
Data from FIFOs A and B must be combined to produce image
data corresponding to a frame. Odd and even lines can be taken
from both FIFOs by reading them alternately. This means that
there is a delay of 1/50 of a second before processing can begin
on the frame (Figure 2).
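The alternating read of the two field buffers can be sketched as below. This is a minimal illustration, not the DSP-90 firmware: the line widths, array types, and the assumption that the odd field supplies the top line of the frame are all hypothetical.

```c
#include <stddef.h>

#define PIXELS_PER_LINE 4  /* toy line width for the sketch */

/* Recombine two de-interlaced fields into one frame by reading lines
   from FIFO A (odd field) and FIFO B (even field) alternately.  Here
   the fields are modelled as arrays of complete scan lines; it is
   assumed the odd field carries the top line of the frame. */
static void merge_fields(int a[][PIXELS_PER_LINE],
                         int b[][PIXELS_PER_LINE],
                         int frame[][PIXELS_PER_LINE],
                         size_t lines_per_field)
{
    for (size_t i = 0; i < lines_per_field; ++i) {
        for (size_t x = 0; x < PIXELS_PER_LINE; ++x) {
            frame[2 * i][x]     = a[i][x];  /* line from the odd field  */
            frame[2 * i + 1][x] = b[i][x];  /* line from the even field */
        }
    }
}
```

Since a full frame needs both fields, the second field's 1/50 s acquisition time is the source of the one-field latency noted in the text.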
Figure 1. Image of DSP-90 networked camera system.