The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B4. Beijing 2008
One possible way to discriminate between secondary and
primary craters is through depth-diameter information on each
individual crater. Crater cataloguing with 3D data has never
been attempted before, owing to the lack of sufficiently high
quality information, but the authors consider it most worthwhile
for this purpose as well as for other research areas, such as the
physical mechanisms underlying impact crater formation. The
release of HRSC stereo imagery makes this idea feasible.
However, it is not easily achievable given the relatively
poor vertical accuracy of stereo DTMs, the limited resolution of
the raw stereo images and the contamination of HRSC images by
compression errors. We address these problems by
implementing a 3D stereo matching system that incorporates
reconstructed 2D crater boundaries derived from HRSC images.
2. ALGORITHMS
2.1 Algorithms for 2D crater data base extraction
A relatively small number of fundamental techniques
underlie all of the automated crater detection systems which
have been developed to date. These can be classified into two
broad groups, namely supervised and unsupervised. Supervised
methods rely on machine learning from human input, whereas
unsupervised methods perform the detection process autonomously.
The unsupervised methods usually comprise three main
algorithmic steps: 1) focusing, 2) feature extraction, 3)
classification.
The focusing stage aims to reduce the computational cost
by differentiating between areas to be eliminated and
areas to be processed further. Techniques for achieving this
include edge detection, edge direction analysis and texture
analysis. During feature extraction, all possible features are
identified and selected from the results of the focusing stage.
Most algorithms employ some form of Hough Transform to
achieve this, although genetic algorithms are another possibility
used by Cheng (2003), whilst Kim et al (2005) used conic
section fitting. The final stage is to classify the output from the
selection of candidate features. This involves a trade-off
between over- and under-detection, the goal being to maximise
detections while minimising false positives.
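As a concrete illustration of these three steps, the sketch below takes a binary edge map (the focusing output) and votes for circular rim candidates with a simple Hough transform, with classification reduced to keeping the strongest accumulator peak. This is a minimal toy, not any of the published systems, and all parameters are illustrative.

```python
import numpy as np

def hough_circles(edge_map, radii, n_angles=180):
    """Vote for circle centres over a set of candidate radii.

    edge_map : 2D boolean array of edge pixels (focusing-stage output).
    radii    : iterable of candidate rim radii in pixels.
    Returns (row, col, radius, votes) for the strongest candidate.
    """
    h, w = edge_map.shape
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    best = (0, 0, 0, -1)
    for r in radii:
        acc = np.zeros((h, w), dtype=np.int32)
        # each edge pixel votes for every centre lying r pixels away from it
        cy = np.rint(ys[:, None] - r * np.sin(thetas)).astype(int).ravel()
        cx = np.rint(xs[:, None] - r * np.cos(thetas)).astype(int).ravel()
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
        peak = int(np.argmax(acc))
        if acc.flat[peak] > best[3]:
            best = (peak // w, peak % w, r, int(acc.flat[peak]))
    return best
```

On a synthetic rim, the accumulator peak lands at or immediately next to the true centre; a real system would keep all peaks above a vote threshold rather than only the maximum.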
Supervised systems begin with a learning or training phase
where examples of features being sought are fed to the
algorithm. The algorithm is then applied to an unlabelled data
set.
This section re-organises and summarises material from Kim et
al (2005a) and Kim (2005), both of which describe the
algorithm in greater detail.
The three stages of the overall algorithm are as follows:
• a focusing stage to define target edge segments in
regions of interest (ROIs)
• an organization stage to find optimal ellipses
• a refinement and verification stage to remove false
detections
The focusing stage reduces the search space by extracting edge
magnitude and direction using a Canny operator and then
extracts regions of interest (ROIs) using Grey Level
Co-occurrence Matrix (GLCM) texture classification.
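The GLCM step can be sketched as below: grey levels are quantised, co-occurrences counted for a single pixel offset, and a contrast statistic derived. The quantisation, offset and statistic here are toy choices, not those of the actual system.

```python
import numpy as np

def glcm(patch, dy=0, dx=1, levels=8):
    """Normalised Grey Level Co-occurrence Matrix for one non-negative
    offset (dy, dx): entry (i, j) is the frequency with which level i
    co-occurs with level j at that displacement."""
    q = np.minimum((patch.astype(float) * levels /
                    (patch.max() + 1e-9)).astype(int), levels - 1)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)
    return m / m.sum()

def contrast(m):
    """GLCM contrast: zero for uniform areas, high for rough texture."""
    i, j = np.indices(m.shape)
    return float(np.sum(m * (i - j) ** 2))
```

Thresholding such statistics over image tiles separates smooth areas, which are eliminated, from textured crater-bearing ones, which are processed further.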
The organisation stage takes the preliminary crater rims created
in the focusing stage and organises them into optimal ellipses
using graph-based conic section fitting methods. First, the
Direct Least Squares (DLS) fitting method (Fitzgibbon et al,
1996) is applied, as it is the most computationally efficient. In
the final fitting stage the osculating ellipse (OE) detection
algorithm (Kanatani and Ohta, 2004) is used to find the best
conic section from candidate craters.
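The DLS step can be sketched with a compact reimplementation of the standard Fitzgibbon formulation (scatter matrix, ellipse constraint 4AC - B^2 > 0, generalised eigenproblem); this is not the authors' code, and numerical refinements such as data normalisation are omitted.

```python
import numpy as np

def fit_ellipse_dls(x, y):
    """Direct least-squares conic fit with the ellipse constraint
    a^T C a = 4AC - B^2 > 0 (after Fitzgibbon et al).

    Returns unit-norm conic coefficients (A, B, C, D, E, F) of
    A x^2 + B x y + C y^2 + D x + E y + F = 0."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                      # scatter matrix
    C = np.zeros((6, 6))             # constraint matrix
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # generalised eigenproblem, solved via the eigenvectors of inv(S) C
    w, v = np.linalg.eig(np.linalg.solve(S, C))
    # the elliptical solution is the eigenvector with a^T C a > 0
    for i in np.argsort(-np.real(w)):
        a = np.real(v[:, i])
        if a @ C @ a > 0:
            return a / np.linalg.norm(a)
    raise ValueError("no elliptical solution found")
```

The ellipse centre follows from the fitted coefficients as ((B*E - 2*C*D) / (4*A*C - B^2), (B*D - 2*A*E) / (4*A*C - B^2)).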
Verification and refinement, the final stage, first refines the
feature selections by employing a template matching scheme
in which the maximum correlation of each crater against
pre-specified templates is chosen, provided it exceeds a given
threshold. The final operation is to remove false detections
using eigencrater construction (Turk and Pentland, 1991) and
neural network template recognition involving training vectors
representing a set of craters and a set of non-craters.
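The template-correlation part of this stage can be illustrated with normalised cross-correlation; the threshold value and template contents below are assumptions for the sketch, not figures from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d > 0 else 0.0

def verify_candidate(chip, templates, threshold=0.5):
    """Keep a detection only if its best correlation against the template
    set beats the threshold. Returns (accepted, best_score)."""
    score = max(ncc(chip, t) for t in templates)
    return score >= threshold, score
```

A perfect match scores 1.0, while a featureless chip correlates with nothing and is rejected.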
However, application of such a set of impact crater detection
algorithms over a single image strip does not provide a 2D crater
GIS over extensive areas. For this, multiple image strips are
required, together with a merging process to compile the
detection results from the individual images.
Adjacent images invariably have areas of overlap which means
that most, if not all, craters detected in the overlap region will
appear duplicated in the set of results for each image. These
duplicates must be identified and resolved to a single crater.
This could be done manually but the ultimate aim is total
automation of the entire process, so a simple version of
automated duplicate detection was needed. It is likely that the
actual crater descriptions for duplicate craters will not be
absolutely identical, due to variations between the images,
algorithmic artefacts or co-registration issues. This means that
two definitions of the same crater may differ in terms of the
centre locations, the radii or both.
To incorporate craters from different catalogues, Salamuniccar
and Loncaric (2007) proposed a simple function, very similar to
that used in Vinogradova et al (2002), which relates the distance
between crater centres and the radii such that the more significant
of these variations can be compared against a critical value and used to
determine whether the definitions in each catalogue are likely to
refer to the same crater. Their method consists of two stages:
firstly, obtain a quantifiable measure of the differences between
crater radii and centre point location and then compare this
against some critical value. If the measure is less than some
critical value then it is likely that a duplicate has been identified.
This is formalised in the two equations below (Salamuniccar
and Loncaric, 2007), both conditions of which must be satisfied.
fm = max(r1, d) / r2 (1)

fm < fc (2)

where fm is the difference measurement factor; r1 and r2 are
the radii, chosen such that r1 > r2; d is the distance between the
crater centres; and fc is the critical factor value.
The selection of a suitable critical factor is important. If the
value is too low, true duplicates will be missed; if it is too high,
spurious duplicates will be found. The authors of that method
suggest that the critical value should be less than the smallest
value of the difference measurement factor fm found within
either dataset. In practice, however, the distance or overlap
between genuine duplicates, although small, often exceeded that
found internally within the data sets, which would leave some
duplicates unidentified. A critical factor of 2.0
was found to be suitable. After a set of potential duplicates is
identified, all possible combinations are tested and the two with
the smallest distance/radius difference measure are resolved into
a single crater using the mean centre and radius.
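The duplicate test and merge can be sketched as follows. The exact algebraic form of the difference measurement factor fm is assumed here (the larger of the bigger radius and the centre distance, taken relative to the smaller radius); the critical factor fc = 2.0 and the mean-based resolution are taken from the text.

```python
import math

def difference_factor(c1, c2):
    """Difference measurement factor between two crater records (x, y, r).

    Assumed form: the larger of the bigger radius and the centre distance,
    relative to the smaller radius (a reconstruction, not a verbatim copy
    of the published measure)."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    r_big, r_small = max(r1, r2), min(r1, r2)
    d = math.hypot(x1 - x2, y1 - y2)
    return max(r_big, d) / r_small

def merge_duplicates(craters, f_c=2.0):
    """Resolve each crater with its closest match below f_c into a single
    crater using the mean centre and radius."""
    craters = list(craters)
    merged, used = [], set()
    for i, a in enumerate(craters):
        if i in used:
            continue
        best_j, best_f = None, f_c
        for j in range(i + 1, len(craters)):
            if j not in used:
                f = difference_factor(a, craters[j])
                if f < best_f:
                    best_j, best_f = j, f
        if best_j is None:
            merged.append(a)
        else:
            used.add(best_j)
            b = craters[best_j]
            merged.append(tuple((u + v) / 2 for u, v in zip(a, b)))
    return merged
```

Two overlapping detections of the same crater collapse to their mean, while well-separated craters pass through untouched.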
2.2 3D crater DTM extraction
For this purpose, we use the 2D crater boundary as a priori
knowledge for stereo matching. The usual area-based stereo
image matchers are not robust over such distorted surfaces. One
possible solution is to estimate the disparity surface from
the approximate 3D shape of the impact crater. Image
matching is then performed along the surface of the pre-