———
AUTOMATION IN MARS LANDING-SITE MAPPING AND ROVER LOCALIZATION
Fengliang Xu
Dept. of Civil & Environmental Engineering & Geodetic Science, The Ohio State University
470 Hitchcock Hall, 2070 Neil Ave., Columbus, OH 43210 - xu.101@osu.edu
Commission VI, WG VI/4
KEY WORDS: Extraterrestrial, Planetary, Exploration, Close Range Photogrammetry, Mapping, Block Adjustment, Automation,
DEM/DTM, Triangulation, Rectification
ABSTRACT:
Our project aims to automate Mars mapping and rover localization using robotic stereo and descent imagery. Stereo vision is a well-
studied domain. However, most efforts address general scenes; little work has targeted a natural, extraterrestrial environment by
exploiting its special geometry and features. Our methodology utilizes the piecewise continuity of natural scenes and the
monotonically decreasing parallax between horizontally looking stereo cameras. In outline, our automation process for robotic
stereo imagery is: 1) interest points are extracted as features and matched within stereo pairs (intra-stereo) and between adjacent
pairs (inter-stereo); 2) tie points are selected; 3) images with varying illumination are balanced using the tie points; 4) DEMs are
interpolated from the 3-D interest points using Kriging and TIN; 5) orthophotos are generated from the DEMs; and 6) landmarks
(i.e., rocks) are extracted and occlusions are marked. Then, with the help of the orthophotos, landmarks observed from different
locations can be identified. Finally, rover localization is accomplished through a rigid transformation and bundle adjustment of the
matched landmarks. For descent imagery, lower-altitude images are resampled and registered to higher-altitude images, and
elevation is then estimated from the multiple observations. This methodology has been used in the NASA 2003 Mars Exploration
Rover (MER) mission for precise rover navigation and mapping in support of the MER 2003 science and engineering team objectives.
1. INTRODUCTION
1.1 Background
Mars is currently the focus of extraterrestrial exploration. Of
the planets in the solar system it is the most similar to Earth, so
it has become the first stop in the search for extraterrestrial
life. Water and life are closely related, and the search for
evidence of water on Mars is therefore an important task in
Mars exploration. In 2004, the twin rovers Spirit and
Opportunity landed near the Martian equator, on opposite sides
of the planet, and began their long journey examining rocks and
craters for traces of water.
The distance between Mars and Earth ranges from 5.57×10^7 to
40.13×10^7 km; a radio signal takes up to around 20 minutes to
make a one-way trip. This delay, plus the limited communication
window between Mars and Earth (the rovers rely on solar panels
for power), makes it very hard to control the rovers directly
from Earth. Autonomous navigation is the most feasible way to
make efficient use of the Mars rovers. Currently, instructions,
including the target location and an approximate route, are sent
to the rovers; the rovers then use hazard-avoidance techniques
and navigation instruments to approach their targets. Route
planning and localization require detailed maps of higher
precision and resolution than satellite imagery can provide.
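As a rough check on the quoted delay, the one-way light travel time follows directly from the distance range above; the short Python sketch below (illustrative only, not part of any mission software) computes it.

    # One-way radio signal delay over the Earth-Mars distance range
    # quoted above (5.57e7 km to 40.13e7 km).
    C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

    def one_way_delay_minutes(distance_km):
        """Return the one-way light travel time in minutes."""
        return distance_km / C_KM_PER_S / 60.0

    print(one_way_delay_minutes(5.57e7))   # ~3.1 minutes at closest approach
    print(one_way_delay_minutes(40.13e7))  # ~22.3 minutes near maximum distance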
The MER rovers are equipped with four pairs of stereo cameras:
Navcam, Pancam, front Hazcam, and rear Hazcam. Navcam
parameters are: 1024×1024 pixels, 0.82 mrad/pixel, 45° FOV,
and a 15 cm baseline (Maki et al., 2003). Pancam parameters are:
1024×1024 pixels, 0.27 mrad/pixel, 16° FOV, and a 25 cm
baseline (Bell et al., 2003). The valid measurement range (defined
by a 1 m distance error at a matching error of 1/3 pixel in
parallax) is around 27 m for the Navcam and around 52 m for the
Pancam. Thus the Navcam is used for close-range mapping and
the Pancam is used for long-range mapping.
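One common way to arrive at such a range limit (our assumption here; the paper does not give the derivation) is to propagate the parallax matching error through the standard stereo range equation. For baseline B, angular resolution IFOV (rad/pixel), matching error \Delta p (pixels), and allowable range error \Delta Z,

    \Delta Z \approx \frac{Z^{2}\,\mathrm{IFOV}\,\Delta p}{B}
    \qquad\Longrightarrow\qquad
    Z_{\max} \approx \sqrt{\frac{B\,\Delta Z}{\mathrm{IFOV}\,\Delta p}}

With \Delta Z = 1 m and \Delta p = 1/3 pixel, the camera parameters above give values on the order of the 27 m and 52 m quoted above.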
Each rover takes images at different viewing angles. Camera
rotations around the mast (azimuth) and around the camera bar
(tilt) are recorded and serve as the initial orientation parameters.
These images can form a panorama, which normally consists of
10 Navcam pairs or 27 Pancam pairs. The overlap between
neighboring image pairs (inter-stereo) is around ten percent;
overlap between the left and right images of a pair (intra-stereo) is
around ninety percent for the Navcam and seventy percent for the
Pancam. These images can be linked through tie points.
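As an illustration of how the recorded pointing angles can seed the orientation, the hypothetical Python sketch below builds an initial camera rotation matrix from the mast azimuth and camera-bar tilt alone; the actual MER pointing model also involves mounting and calibration offsets that are omitted here.

    import numpy as np

    def initial_rotation(azimuth_rad, tilt_rad):
        """Hypothetical initial camera rotation from mast azimuth and camera-bar tilt.

        Assumes a simplified model: rotate about the vertical (z) axis by the
        azimuth, then about the horizontal (y) axis by the tilt.  The real MER
        pointing model includes additional mounting and calibration terms.
        """
        ca, sa = np.cos(azimuth_rad), np.sin(azimuth_rad)
        ct, st = np.cos(tilt_rad), np.sin(tilt_rad)
        R_az = np.array([[ca, -sa, 0.0],
                         [sa,  ca, 0.0],
                         [0.0, 0.0, 1.0]])
        R_tilt = np.array([[ ct, 0.0, st],
                           [0.0, 1.0, 0.0],
                           [-st, 0.0, ct]])
        return R_az @ R_tilt  # used only as a starting value for adjustment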
1.2 Brief review
Stereo vision is a well-studied problem. Existing methods can
be classified into local methods, global methods, and occlusion
detection (Brown, 2003), or described by three steps: matching-cost
calculation, aggregation, and optimization (Scharstein, 2002).
Most methods aim at producing a parallax (disparity) image for the
overlapping area of two images in a general scene. The
global methods include dynamic programming (Ohta,
1985), which considers constraints only along the scanline direction,
and graph cuts (Roy, 1998), which apply constraints in
both the scanline and the inter-scanline directions. The first
approach is limited; the second gives better results but is very
time consuming.
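To make the contrast concrete, the minimal Python sketch below (not taken from the cited works) implements scanline dynamic programming for a single image row: a Viterbi-style pass penalizes disparity jumps along the scanline only, which is exactly the limitation noted above, since nothing couples neighbouring scanlines as a graph-cut formulation would.

    import numpy as np

    def scanline_dp_disparity(left_row, right_row, max_disp=16, smooth=4.0):
        """Disparity for one scanline via dynamic programming (Viterbi-style).

        Matching cost: absolute intensity difference.  A penalty proportional
        to the disparity change between neighbouring pixels enforces
        smoothness along the scanline only; adjacent scanlines are not coupled.
        """
        left_row = np.asarray(left_row, dtype=float)
        right_row = np.asarray(right_row, dtype=float)
        n = len(left_row)
        disps = np.arange(max_disp + 1)

        # cost[x, d] = |L(x) - R(x - d)|; large cost where x - d falls off-image
        cost = np.full((n, max_disp + 1), 1e6)
        for d in disps:
            cost[d:, d] = np.abs(left_row[d:] - right_row[:n - d])

        # Forward pass: accumulate the best path cost for every (pixel, disparity).
        acc = cost.copy()
        back = np.zeros((n, max_disp + 1), dtype=int)
        trans = smooth * np.abs(disps[:, None] - disps[None, :])  # jump penalty
        for x in range(1, n):
            total = acc[x - 1][:, None] + trans   # previous cost + transition
            back[x] = np.argmin(total, axis=0)
            acc[x] += total[back[x], disps]

        # Backward pass: trace the minimum-cost disparity profile.
        out = np.zeros(n, dtype=int)
        out[-1] = int(np.argmin(acc[-1]))
        for x in range(n - 2, -1, -1):
            out[x] = back[x + 1][out[x + 1]]
        return out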
For robot navigation, there are both local and global methods of
stereo vision, as well as active vision (Desouza, 2002; Jensfelt,
2001; Leonard, 1991). Most of these are used for indoor
applications (Kriegman, 1989); some are used for unstructured