5.2.2 Case 2: Space Exploration
Canada plays an important role in the Space Program and the
National Research Council of Canada (NRCC) has been a key
player in Canada's contribution to the space initiatives. A key
technology is the Space Vision System (SVS) used on board the
Space Shuttle. It tracks the small black-dot targets visible in
Figure 12. Because the locations of these features on the object
are known, object position is computed from their relative
positions in the video images using spatial resection. Accuracy
and ruggedness under harsh lighting conditions (sun illumination
and Earth albedo) and harsh environments became important issues
during several missions.
Figure 12. The existing Space Vision System uses the known
locations of the B/W targets to compute the pose of objects
(Photo reproduced with permission from NASA).
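To make the spatial-resection step concrete, the short Python sketch below recovers an object pose from the known target coordinates and their measured image coordinates using OpenCV's solvePnP. The target layout, camera matrix and pixel measurements are illustrative assumptions, not the actual SVS configuration.

```python
# Illustrative spatial resection (pose from known targets); not the SVS algorithm.
import numpy as np
import cv2

# Hypothetical B/W target locations on the object (object frame, metres)
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.5, 0.0, 0.0],
                          [0.0, 0.5, 0.0],
                          [0.5, 0.5, 0.0]], dtype=np.float64)

# Target centroids measured in the video image (pixels); assumed values
image_points = np.array([[270.0, 190.0],
                         [370.0, 190.0],
                         [270.0, 290.0],
                         [370.0, 290.0]], dtype=np.float64)

# Simple pinhole model: focal length 800 px, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # lens distortion assumed already corrected

# Spatial resection (PnP): pose of the object frame relative to the camera
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix from rotation vector
print("rotation:\n", R)
print("translation (m):", tvec.ravel())
```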
In the summer of 1999, an NRCC-built laser scanner prototype
(triangulation) addressing those issues was successfully
interfaced to the current SVS. The scanner uses two high-speed
galvanometers and a collimated laser beam to address
individual targets on an object. Very high resolution and
excellent tracking accuracy are obtained using Lissajous
scanning patterns (as opposed to raster scanning). Laser
wavelengths of 1.5 µm (eye-safe), 0.82 µm (infrared), and 0.523
µm (green) have been tested. The system automatically
searches and tracks, in 3D, multiple retro-targets attached to an
object. Stability of the photo-solution is equivalent to the results
obtained using existing video cameras but with the added
feature of generating robust pose solutions in the presence of
strong background illumination. With this success, Neptec
Design Group and the NRCC, in collaboration with the
Canadian Space Agency, built a space-qualified version of the
laser scanner prototype, which was flown in the payload bay of
the shuttle Discovery during mission STS-105 (English et al.,
2002). Obviously, compatibility with the current B/W targets and
the processing unit is a key aspect in adopting a laser
scanner system solution on board the Space Shuttle. The
combined solution can operate in a triangulation-based mode
at short to medium distances (< 5 m) and in a photogrammetry-
based mode (spatial resection). An advantage of the laser
scanner is that it can also operate in an imaging mode to produce
dense 3D images of objects for inspection and maintenance. For
longer ranges (above 30 m), the optical design can accommodate
a TOF unit for an improved photo-solution.
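As a rough sketch of the Lissajous addressing mentioned above, the two galvanometer axes can be driven with sinusoids of different frequencies; the amplitudes, frequencies and phase below are arbitrary illustrative values, not the parameters of the NRCC scanner.

```python
# Illustrative Lissajous drive signals for a two-galvanometer scanner.
# Different x/y frequencies trace a smooth, dense pattern over the field
# of view, avoiding the abrupt retraces of a raster scan.
import numpy as np

def lissajous(t, amp_x=1.0, amp_y=1.0, freq_x=7.0, freq_y=11.0, phase=np.pi / 2):
    """Return mirror deflection angles (x, y), arbitrary units, at times t."""
    x = amp_x * np.sin(2.0 * np.pi * freq_x * t + phase)
    y = amp_y * np.sin(2.0 * np.pi * freq_y * t)
    return x, y

t = np.linspace(0.0, 1.0, 5000)  # one second of scanning, 5 kHz sampling
x, y = lissajous(t)              # deflection commands sent to the galvanometers
```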
The system works as follows. The coordinates (X, Y, Z) from
the laser scanner are transformed to pseudo-angular values, i.e.,
the ratios X/Z and Y/Z, where Z is the range. These new photo
coordinates are fed to the processing unit as if they were
coming from a video camera. This ensures compatibility with
the on-board SVS. The main reason why the triangulation-based
laser scanner achieves impressive results for pose computations
even at medium-range distances is that the ratiometric
computation almost completely removes the dependencies on
the laser spot position uncertainty (see Eqn. 1). An
improvement close to an order of magnitude was obtained
compared to computing the pose with the standard (X, Y, Z)
coordinates. Details can be found in (Blais et al., 2000).
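A minimal sketch of the transformation described above is given below; the function name and the sample coordinates are assumptions for illustration only.

```python
# Convert laser-scanner coordinates (X, Y, Z) into the pseudo-angular
# values X/Z and Y/Z so that the SVS processing unit can treat them
# like photo coordinates coming from a video camera.
import numpy as np

def to_pseudo_angles(points_xyz):
    """Map an Nx3 array of (X, Y, Z) scanner coordinates to Nx2 ratios (X/Z, Y/Z)."""
    p = np.asarray(points_xyz, dtype=float)
    z = p[:, 2]
    return np.column_stack((p[:, 0] / z, p[:, 1] / z))

# Example retro-target measurements (metres); values are illustrative
targets = np.array([[0.80, -0.30, 4.00],
                    [1.10,  0.20, 4.20]])
pseudo = to_pseudo_angles(targets)  # fed to the pose (spatial resection) computation
print(pseudo)
```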
6. DISCUSSION: ABOUT STANDARDIZED TESTING
The issue of standardized testing is very important but at the
same time a sensitive one. Surely, no manufacturer of 3D
scanners (laser or not), or of modelling and inspection software tools,
wants to be seen in a category with a bad connotation. Industry,
academia and user groups will have to find a way to generate
these standardized tests in order to create user confidence and
market acceptance in using, for instance, laser scanning alone or
in combination with other techniques. Barber et al. (2001) discuss
the current state of laser scanning, associated practical issues, and the
need for standardized testing of laser scanners, data processing
and integration with other sources of information. Though
photogrammetry is seen as a mature technology, let us not
forget that the high-quality non-metric digital cameras made with
CCD and CMOS sensors now appearing on the market pose
their own set of challenges in terms of resolution, accuracy and
reliability (an important topic at many ISPRS-sponsored
conferences).
7. CONCLUSIONS
This paper addressed the topic of integration of laser scanning
and close-range photogrammetry from a multi-sensor and
information fusion point of view. The literature surveyed,
though not exhaustive, shows the interest in this topic from
different research communities. A summary of the basic theory
and best practices associated with laser range scanners, digital
photogrammetry, processing and modelling was reviewed. We
emphasized laser scanning because one specific laser scanner
cannot be used for volumes of different sizes and, therefore, the
performance aspects of the different laser scanning solutions
must be understood. One of the critical aspects of sensor fusion
is to deal with and manage the uncertainties linked to the sensing
devices, the environment and a priori information (e.g., a
particular user). To justify the increased cost and complexity of
a multi-sensor solution, one has to minimize the impact of those
uncertainties in order to get the most out of the multi-sensor
platform. Two categories of applications were covered, i.e.,
information augmentation and uncertainty management.
Three-dimensional laser scanning, like many new technologies
in the past where novelty is often enough to attract interest, has
been used in many projects as a way to produce models for
visualization only. As the novelty effect diminishes, more
people are looking at using that technology in practical
applications and exploring new business models. This is a
natural trend as a new technology, like laser scanning, shifts
from its early developers and users towards mainstream users
and service providers. This latter group can benefit from the
knowledge generated in all the projects initiated in the last 20
years, e.g., scanner and software developments, but also what
works and what doesn't. In this process, sensor fusion becomes
important as this relatively recent technology is integrated with
more mature ones, like photogrammetry, CAD, etc.
ACKNOWLEDGEMENTS
The author wants to acknowledge the various collaborators who
have participated in the results discussed in this paper: F. Blais,
L. Cournoyer, S.F. El-Hakim, M. Picard, M. Rioux from the