The weight changes turn out to be significant only for the observations directly affected by outliers and for a small group around them (roughly speaking, all the observations closely connected by the functional model
to erroneous ones). This means that, apart from patholog-
ical situations, only a small percentage of the weights will
change between two successive iterations. In this framework, sequential updating again becomes an attractive approach to outlier removal: a weight change is obtained by removing from the equation system the same (normal) equation, with a weight equal to the given weight change.
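As a minimal sketch of this updating step, assuming a linear Gauss-Markov model l = Ax + v with a diagonal weight matrix (the function and variable names below are only illustrative), a weight change for a single observation amounts to a rank-one update of the normal equations:

```python
import numpy as np

def apply_weight_change(N, b, a_i, l_i, delta_p):
    """Apply the weight change delta_p of one observation to the normal
    equations N x = b, where a_i is the observation's row of the design
    matrix and l_i its observed value.

    delta_p < 0 downweights the observation (e.g. to remove a suspected
    outlier); the same normal equation is simply added again, with the
    weight change acting as its weight."""
    a_i = np.asarray(a_i, dtype=float)
    N_new = N + delta_p * np.outer(a_i, a_i)
    b_new = b + delta_p * a_i * l_i
    return N_new, b_new

# toy usage: downweight observation 0 from weight 1.0 to 0.2
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
l = np.array([0.1, 1.0, 2.1])
N, b = A.T @ A, A.T @ l                     # initial weights all equal to 1
N, b = apply_weight_change(N, b, A[0], l[0], delta_p=-0.8)
x_hat = np.linalg.solve(N, b)               # re-solve with the updated weight
```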
Sequentially building an equation system is a widespread
technique in many areas of scientific computing: this is for
instance the case in all dynamic measurement processes,
where on-line data acquisition is often required to control the process evolution in real (or near real) time. In re-
gression analysis (Draper, Smith '61), when testing for the
significance of the parameters involved in determining the
observed quantity, the obvious way of modifying the func-
tional model is by using sequential algorithms.
In photogrammetry this approach became interesting with
the advent of on-line triangulation, where the possibility of
direct data acquisition on the computer and the opportu-
nity of having a quick check and repair of measurement and
identification errors strongly suggested the use of such a tool.
Many algorithms have been presented and investigated for this purpose in the last decade, designed to meet the specific requirements of on-line triangulation; among them, Givens transformations (Golub, van Loan '86) are perhaps the most popular.
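A minimal sketch of such a sequential update, assuming an observation equation a x = l (pre-scaled by the square root of its weight) being merged into an existing triangular factor R and transformed right-hand side d by Givens rotations, could look as follows; the function name and interface are illustrative only:

```python
import numpy as np

def givens_add_row(R, d, a, l):
    """Merge one new observation row (a, l) into the upper-triangular
    factor R and transformed right-hand side d of a least-squares
    problem, using a sequence of Givens rotations."""
    R = np.array(R, dtype=float)
    d = np.array(d, dtype=float)
    a = np.array(a, dtype=float)
    l = float(l)
    for k in range(R.shape[0]):
        if a[k] == 0.0:
            continue
        r = np.hypot(R[k, k], a[k])
        c, s = R[k, k] / r, a[k] / r
        Rk = R[k, k:].copy()
        R[k, k:] = c * Rk + s * a[k:]          # rotated factor row
        a[k:] = -s * Rk + c * a[k:]            # annihilates a[k]
        d[k], l = c * d[k] + s * l, -s * d[k] + c * l
    return R, d

# usage sketch: start from R = np.zeros((n, n)), d = np.zeros(n), merge the
# observation rows one at a time as they are measured, and (once R has full
# rank) obtain the current estimate by x_hat = np.linalg.solve(R, d)
```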
Robust procedures and sequential strategies are also very useful when data collection is itself sequential (even in a kinematic way); moreover, the same procedures and strategies may be profitable for adjustment (that is, for densification of an already existing network or in optimization methods) and for interpolation and approximation (for example, in progressive or selective sampling).
Digital photogrammetry and image processing offer further interesting opportunities for this approach; in fact many steps (e.g. image quality control and assessment, feature extraction and parsing, image/map/object matching, surface reconstruction, form descriptors) call for robust procedures and sequential strategies as important tools, useful in the whole process from data acquisition to data representation, taking into account data processing (including pre-processing and post-processing), testing and archiving too.
There are many reasons which confirm the current trend: the power of electronics and computer science improves the use of softcopy images (remotely sensed or acquired by digital scanners, as well as obtained by hardcopy scanning), emphasizing mathematical treatments instead of analog methodologies.
Finally, a more refined and conservative procedure has recently been presented by statisticians (Rousseeuw, Leroy '87); it goes beyond the capabilities of classical robust estimators, because it has a very high breakdown point. This means that outliers of larger size and in larger number can be handled. Therefore, since, as already said, blunders, leverage points and small outliers often occur in the observations and must be identified and eliminated in order to obtain the expected results, the application of robust estimators with a very high breakdown point is welcome in photogrammetry and, more generally, in the many fields of surveying and mapping.
2. The method
The most promising robust estimators are, among the
downweighting methods, the redescending estimators, especially when their breakdown point is very high. In fact, outliers of larger size and/or in larger number can be handled; moreover, different explanations can be set up when the anomalous data, after rejection, show a homogeneous behaviour.
The basic idea follows some suggestions of Hampel for in-
troducing a rejection point in the loss function, so that the
data outside the interval get, automatically, weights equal
to zero. On the contrary, the data inside get weights equal
to one, if they belong to the inner core of the data, or weights ranging from one to zero, if they lie in the intermediate region of doubt.
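A minimal sketch of such a weight function, with 1 in the inner core, 0 beyond the rejection point, and a linear transition in the region of doubt (the tuning constants a and c are illustrative, not values prescribed here), is:

```python
def redescending_weight(v, a=2.0, c=4.0):
    """Weight of a standardized residual v: 1 in the inner core
    (|v| <= a), 0 beyond the rejection point (|v| >= c), and a linear
    descent from 1 to 0 in the intermediate region of doubt."""
    u = abs(v)
    if u <= a:
        return 1.0
    if u >= c:
        return 0.0
    return (c - u) / (c - a)
```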
There are many ways to put Hampel's suggestions into practice. The easiest is represented by the Generalized M-estimators, where suitable weight functions correct the behaviour of the M-estimators, as defined by Huber. Unfortunately this strategy (called minimax by some authoritative authors), although it increases the breakdown point, is unable to raise it substantially.
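Such downweighting estimators are usually computed by iteratively reweighted least squares; the following generic loop (with illustrative names, standardizing the residuals by the median absolute deviation) sketches the mechanism:

```python
import numpy as np

def irls(A, l, weight_fn, n_iter=20, eps=1e-8):
    """Generic iteratively reweighted least squares: residuals are
    standardized by a robust scale, passed through a weight function,
    and the weighted normal equations are re-solved until the weights
    stabilize."""
    m, n = A.shape
    w = np.ones(m)
    x = np.zeros(n)
    for _ in range(n_iter):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)
        v = l - A @ x                                  # residuals
        scale = 1.4826 * np.median(np.abs(v)) + eps    # MAD-based scale
        w_new = np.array([weight_fn(vi / scale) for vi in v])
        if np.allclose(w_new, w, atol=1e-3):
            break
        w = w_new
    return x, w

# e.g. x_hat, w = irls(A, l, redescending_weight), using the weight
# function sketched above
```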
The best approach is represented by the least median of squares, where the median of the squared residuals is minimized in order to obtain the expected results:
\[
\hat{x} = \min_{x} \; \underset{i=1,\dots,m}{\mathrm{med}} \; v_i^{2}
\]
where m is the number of observations. Unfortunately this strategy is, at present, computationally too expensive, because no efficient algorithm is known apart from solving the $\binom{m}{n}$ systems obtained by selecting the n observations forming the sample among the m available ones, where n is the number of unknown parameters.
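A direct, purely combinatorial sketch of this scheme, feasible only for small problems and with illustrative names, is the following; in practice a random subsample of the combinations is examined instead:

```python
import numpy as np
from itertools import combinations

def lms_exhaustive(A, l):
    """Least median of squares by elemental subsets: solve one n x n
    system for every choice of n observations out of m and keep the
    solution with the smallest median squared residual."""
    m, n = A.shape
    best_x, best_med = None, np.inf
    for idx in combinations(range(m), n):
        A_s, l_s = A[list(idx)], l[list(idx)]
        if abs(np.linalg.det(A_s)) < 1e-12:    # skip degenerate subsets
            continue
        x = np.linalg.solve(A_s, l_s)
        med = np.median((l - A @ x) ** 2)
        if med < best_med:
            best_x, best_med = x, med
    return best_x, best_med
```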
An advantageous alternative is represented by the least
trimmed squares, where the average of the squares of the