(PDS) Geosciences node and at the USGS in Flagstaff, Arizona. 
For ease of use, mosaics of BIDRs with various scales and 
formats covering areas ranging from 5°x5° to 120°x120° were
made both by the Magellan mission and, later, by the USGS 
(Batson et al., 1994). The mosaic series (known as MIDRs and 
FMAPs, respectively) have received wide distribution through
the PDS and are available online (http://pds-geosciences.wustl. 
edu/missions/magellan/index.htm, ftp://pdsimage2.wr.usgs.gov/ 
cdroms/magellan/). Both the BIDRs and the various mosaics
were prepared in Sinusoidal projection for most of Venus, with 
additional projections used to represent the poles. 
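For reference, the Sinusoidal projection used for these products has, in its standard spherical form, a very simple mapping between latitude/longitude and projected coordinates. The sketch below is illustrative only: the radius and center longitude are assumed values rather than parameters taken from any particular Magellan product, and the inverse breaks down at the poles, which is one reason the polar products use different projections.

```python
import math

# Illustrative mean radius of Venus in metres (assumed value; the radius
# actually used by a given product is recorded in its labels).
VENUS_RADIUS_M = 6_051_800.0

def sinusoidal_forward(lat_deg, lon_deg, center_lon_deg=0.0, radius=VENUS_RADIUS_M):
    """Spherical Sinusoidal projection: (lat, lon) in degrees -> (x, y) in metres."""
    lat = math.radians(lat_deg)
    dlon = math.radians(lon_deg - center_lon_deg)
    x = radius * dlon * math.cos(lat)  # east-west scale shrinks toward the poles
    y = radius * lat                   # north-south scale is constant
    return x, y

def sinusoidal_inverse(x, y, center_lon_deg=0.0, radius=VENUS_RADIUS_M):
    """Inverse mapping: (x, y) in metres -> (lat, lon) in degrees.
    Undefined exactly at the poles, where cos(lat) = 0."""
    lat = y / radius
    lon = math.radians(center_lon_deg) + x / (radius * math.cos(lat))
    return math.degrees(lat), math.degrees(lon)
```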
2.2 Radargrammetry Implementation 
Numerous approaches to radargrammetric processing of the 
Magellan images have been proposed (e.g., Hensley and 
Schafer, 1994; Herrick and Sharpton, 2000). Although the 
USGS briefly considered using an analytic stereoplotter to work 
with hard copies of Magellan BIDRs (Wu et al., 1987), the 
large volume (30 GBytes) of same-side stereo data makes a 
digital or “softcopy” approach desirable if not essential. After 
working with two such systems, the VEXCEL Magellan Stereo 
Toolkit (MST; Leberl et al., 1992; Curlander and Maurice, 
1993), and the SAIC Digital SAR Workstation-Venus (DSW-V; 
Wu and Howington-Kraus, 1994), we set out to develop a
processing capability that would combine the best features of
each: the automated image-matching capability of the MST and
the geometrically rigorous sensor model of the DSW-V. To do
so, we made use of both the USGS digital cartography system 
ISIS (Eliason, 1997; Gaddis et al., 1997; Torson and Becker, 
1997; see also http://isis.astrogeology.usgs.gov), and the
commercial digital photogrammetric software SOCET SET (® 
BAE Systems) (Miller and Walker, 1993; 1995). We use ISIS 
to ingest the raw images, prepare them for use (e.g., by 
decompression, radiometric calibration, geometric distortion 
correction, as needed for a particular sensor), and export them 
and their a priori orientation metadata in formats that can be 
ingested by SOCET SET. SOCET SET then provides tools for
bundle adjustment to improve the geodetic control of the
images; production of digital terrain models (DTMs) by means
of flexible and continuously evolving algorithms for automatic
image matching (Zhang and Miller, 1997; Zhang, 2006);
display of the images and overlaid DTM data on a stereoscopic
monitor for interactive quality control and editing with point,
line, and area tools; and production of orthoimages and
orthomosaics. We normally export the DTMs and orthoimages 
back into ISIS for final processing and analysis. This workflow 
draws on the strengths of both systems (rapid in-house 
adaptation to new planetary missions for ISIS; rigorous 
stereogrammetric calculations and 3D display and user input 
with special hardware in SOCET SET) and forms the basis for 
our processing of numerous types of optical images from lander 
cameras (Kirk et al., 1999) to orbit (Kirk et al., 2008a). It is 
also the basis of our approach to processing SAR data from 
multiple missions described here. The main difference is that 
the three “generic” sensor models for different camera types 
(frame, pushbroom, or panoramic) that are provided with 
SOCET SET suffice to process the full variety of optical images 
we have encountered so far. In contrast, each of the radar 
systems described here has unique characteristics that require 
the development of a separate sensor model. Although the 
geometry of SAR image formation is the same in each case, 
differences in how the data have been projected, combined, and 
catalogued make it necessary to handle each case individually. 
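As a high-level illustration of this division of labor, the stub below outlines the ISIS-to-SOCET SET-to-ISIS hand-offs in the order described above. It is an outline only: the function names are hypothetical placeholders and do not correspond to actual ISIS programs or SOCET SET DevKit calls.

```python
from pathlib import Path

# Outline of the processing hand-offs only; the real work is performed by
# ISIS programs and the SOCET SET workstation. All function names below are
# hypothetical placeholders.

def isis_ingest_and_prepare(raw_products: list[Path]) -> list[Path]:
    """Ingest raw images into ISIS and prepare them (decompression,
    radiometric calibration, geometric correction) as the sensor requires."""
    raise NotImplementedError

def export_to_socet(prepared: list[Path]) -> list[Path]:
    """Export images plus a priori orientation metadata in SOCET SET formats."""
    raise NotImplementedError

def socet_photogrammetry(images: list[Path]) -> tuple[Path, Path]:
    """Bundle-adjust, extract a DTM by automatic matching, edit it
    interactively on a stereo display, and produce orthoimages."""
    raise NotImplementedError

def isis_finalize(dtm: Path, ortho: Path) -> None:
    """Bring the DTM and orthoimages back into ISIS for final analysis."""
    raise NotImplementedError

def run_pipeline(raw_products: list[Path]) -> None:
    prepared = isis_ingest_and_prepare(raw_products)
    exported = export_to_socet(prepared)
    dtm, ortho = socet_photogrammetry(exported)
    isis_finalize(dtm, ortho)
```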
Sensor Model. Mathematically, a sensor model is a function 
that specifies the transformation between image space (lines, 
samples) and object or ground coordinates (latitude, longitude, 
elevation). As implemented in software, a sensor model must 
also include “bookkeeping” functions to obtain all the 
information needed to carry out the mathematical transformation
and to communicate with the rest of its software
environment. The Developers’ Toolkit (DevKit) makes it 
relatively straightforward to implement new sensor models as 
“plug-ins” to extend the native capabilities of SOCET SET.
Our goal in creating a SOCET sensor model for the Magellan 
SAR (Howington-Kraus et al., 2000) was to make it both 
physically rigorous and flexible enough to work with all types 
of Magellan data. 
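In practice, such a plug-in must expose the forward (ground-to-image) and inverse (image-to-ground) mappings along with the bookkeeping needed to support them. The minimal sketch below is conceptual only; the class and method names are our own shorthand and do not reflect the actual DevKit interface.

```python
from abc import ABC, abstractmethod

class SensorModel(ABC):
    """Conceptual sensor-model interface (illustrative names only)."""

    @abstractmethod
    def ground_to_image(self, lat: float, lon: float, height: float) -> tuple[float, float]:
        """Map ground coordinates (latitude, longitude, elevation)
        to image coordinates (line, sample)."""

    @abstractmethod
    def image_to_ground(self, line: float, sample: float, height: float) -> tuple[float, float]:
        """Map image coordinates back to (latitude, longitude); the inverse
        is only unique once an elevation or surface is assumed."""
```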
The variety of data formats, including multiple types of mosaics 
as well as single orbit strips, is only one obstacle to working 
with the Magellan images. This can be handled by defining a 
Magellan data set for use in SOCET SET as a collection of one 
or more BIDR strips in Sinusoidal projection, with no 
restrictions on scale, extent, or center longitude. An additional 
complication arises because all of the images have been map- 
projected based on whatever spacecraft trajectory data were 
available at the time of processing and partially orthorectified 
based on a low-resolution, pre-Magellan model of Venus’s 
topography. Our sensor model, based on the one we helped 
develop for the DSW-V, deals with this processing by using a 
database containing metadata obtained partly from the mosaic 
being used and partly from the BIDRs in that mosaic. 
Specifically, for a given ground point, the sensor model first 
determines which orbit strip (BIDR) the ground point is 
contained in, and then which radar burst from that BIDR, by 
comparing the lat-lon coordinates to strip and burst outlines in 
the database. Once the radar burst is identified, the burst 
resampling coefficients and spacecraft position and velocity at 
the time of observation are obtained from the database. Next, 
the spacecraft position and velocity are used to calculate the 
range and Doppler coordinates at which the ground point would 
be observed. This is the physical process of image formation 
that we must model, and, unlike the approximate rectification 
that was done in the original processing, it can incorporate 
adjustments to the spacecraft trajectory. In this way, we allow 
for bundle-adjustment of the BIDR strips to improve the 
positional accuracy of the resulting DTM, even when using 
images that have been combined in an uncontrolled mosaic. 
The geometric range just calculated is next corrected for 
atmospheric refraction. Finally, the resampling coefficients 
associated with the burst are applied to the range and Doppler 
coordinates to determine the image coordinates at which this 
range and Doppler point would have been put into the image. 
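As a rough sketch of the core calculation, the example below computes the range and Doppler at which a stationary, body-fixed ground point would be observed from a given spacecraft state. The burst lookup, the refraction correction, and the exact form of the resampling coefficients are product-specific and are represented here only by comments.

```python
import numpy as np

MGN_WAVELENGTH_M = 0.126  # Magellan S-band radar wavelength, ~12.6 cm

def range_doppler(ground_xyz, sc_pos, sc_vel, wavelength=MGN_WAVELENGTH_M):
    """Range (m) and Doppler (Hz) at which a stationary, body-fixed ground
    point would be observed from the given spacecraft position and velocity
    (all vectors in the same body-fixed Cartesian frame, metres and m/s)."""
    los = np.asarray(ground_xyz, float) - np.asarray(sc_pos, float)  # sensor -> target
    slant_range = np.linalg.norm(los)
    # For a stationary target, dR/dt = -(v . los) / R, and the two-way
    # Doppler shift is f_D = -2 (dR/dt) / wavelength.
    doppler = 2.0 * np.dot(np.asarray(sc_vel, float), los) / (wavelength * slant_range)
    return slant_range, doppler

# In the full sensor model, the geometric range would next be corrected for
# atmospheric refraction, and the burst's resampling coefficients (retrieved
# from the database by strip and burst) would map (range, Doppler) to the
# (line, sample) at which that point was placed in the BIDR.
```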
Procedures. Topographic mapping with Magellan data begins 
with ingestion of the BIDR, MIDR, or FMAP images into ISIS. 
The full-resolution FMAP mosaics can be used for most DTM 
production, but in potential problem areas within a mosaic, 
where pixels are lost at F-BIDR seams, it is necessary to collect 
DTMs from the unmosaicked F-BIDRs. The BIDRs are also 
essential if strip-to-strip ties are to be collected for bundle 
adjustment, and they must be read in the first time a new area is 
mapped, because they contain the auxiliary data needed to 
populate the database described above. Only the image data for 
the latitudes being mapped needs to be retained from the pole- 
to-pole BIDR strips. Information about the spacecraft position 
and velocity can be taken either from the BIDR headers or from 
separate NAIF SPICE kernels (Acton, 1999; data are available 
from ftp://naif.jpl.nasa.gov/pub/naif/MGN/kernels/), letting us