(the arrangement of spectral bands can be chosen at will), a_s are unknown model parameters and E_t is the white noise component. I_t is a neighbour index shift set excluding unknown data, i.e. (0, j, 0), (m^* - \hat{m}, j, 0) \notin I_t, where m^* is the approximation line (see (18)) and \hat{m} is the reconstructed line, respectively.
Note that although the model reconstructs a mono-spectral corrupted line, it can use information from all other spectral bands of the image (d > 1) as well. For mono-spectral images (e.g. radiospectrograph data) d = 1.
Let us denote another multi-index t = (m, n, d) and choose a direction of movement on the image plane to track the bad line: t - 1 = (m, n-1, d), t - 2 = (m, n-2, d), .... E_t is the white noise component with zero mean and constant but unknown dispersion \sigma^2. We assume that the probability density of E_t is normal, independent of the previous data, and the same for every time t. Let us formally assume knowledge of the bad data; then the task consists in finding the conditional prediction density p(Y_t \mid Y^{(t-1)}) given the known process history (2) and taking its conditional mean estimate \hat{Y}_t as the reconstructed data.
Y^{(t-1)} = \{ Y_{t-1}, Z_t, Y_{t-2}, Z_{t-1}, \ldots, Y_1, Z_1 \},    (2)

where Z_t is defined by (7). We have chosen the conditional mean estimator for data reconstruction because of its optimal properties [Broemeling, 1985]:

\hat{Y}_t = E\left[ Y_t \mid Y^{(t-1)} \right].    (3)
Let us rewrite the regressive model (1) in matrix form:

Y_t = P^T Z_t + E_t,    (4)

where

P^T = [a_1, \ldots, a_\beta]    (5)

is the 1 \times \beta unknown parameter vector,

\beta = \mathrm{card}\, I_t,    (6)

and we denote the \beta \times 1 data vector

Z_t = [Y_{t-s} : \; s \in I_t]^T.    (7)
Data arrangement in (7) corresponds to the arrangement of
parameters in (5).
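To make the construction of (7) concrete, the following minimal Python sketch assembles Z_t from a neighbour index shift set; the particular shift set, the array layout image[row, column, band] and the helper names are our assumptions for illustration only, not part of the paper.

```python
import numpy as np

# Hypothetical neighbour index shift set I_t of (row, column, band) offsets;
# it must exclude shifts pointing into the unknown (corrupted) line.
I_t = [(1, -1, 0), (1, 0, 0), (1, 1, 0), (2, 0, 0)]

def data_vector(image, t, shifts):
    """Assemble the beta x 1 data vector Z_t of (7) for the multi-index t = (m, n, d)."""
    m, n, d = t
    return np.array([image[m - dm, n - dn, d - dd] for (dm, dn, dd) in shifts])
```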
Assuming normality of the white noise component E_t, conditional independence between pixels, and an a priori probability density of the unknown model parameters chosen in the form (this normal form of the a priori probability results in an analytically manageable form of the a posteriori probability density)
p(P, \sigma^{-2} \mid Y^{(0)}) = (2\pi)^{-\frac{\beta}{2}} \, |\sigma^{2}|^{-\frac{\varphi_0}{2}} \exp\left\{ -\frac{1}{2\sigma^{2}} \begin{pmatrix} -1 \\ P \end{pmatrix}^{T} V_0 \begin{pmatrix} -1 \\ P \end{pmatrix} \right\},    (8)

where V_0 is a positive definite (\beta + 1) \times (\beta + 1) matrix and

\varphi_0 > \beta - 2,    (9)
we have shown [Haindl, 1992] that the conditional mean value
is:
\hat{Y}_t = \hat{P}_{t-1}^{T} Z_t.    (10)
The following notation is used in (8) and (10):

\hat{P}_{t-1} = V_{z(t-1)}^{-1} V_{zy(t-1)},    (11)

V_{t-1} = \tilde{V}_{t-1} + V_0,    (12)

\tilde{V}_{t-1} = \begin{pmatrix} \tilde{V}_{y(t-1)} & \tilde{V}_{zy(t-1)}^{T} \\ \tilde{V}_{zy(t-1)} & \tilde{V}_{z(t-1)} \end{pmatrix},    (13)

\tilde{V}_{y(t-1)} = \sum_{k=1}^{t-1} Y_k Y_k^{T},    (14)

\tilde{V}_{zy(t-1)} = \sum_{k=1}^{t-1} Z_k Y_k^{T},    (15)

\tilde{V}_{z(t-1)} = \sum_{k=1}^{t-1} Z_k Z_k^{T}.    (16)
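As an illustration only, the statistics (15), (16) and the estimator (11) can be accumulated as sketched below for a scalar Y_t; the prior block V_0 of (12) is omitted for brevity and all names are ours.

```python
import numpy as np

def parameter_estimate(Z_history, Y_history):
    """Return P_hat of (11) from past pairs (Z_k, Y_k); V_0 of (12) is omitted here."""
    beta = len(Z_history[0])
    V_z = np.zeros((beta, beta))
    V_zy = np.zeros(beta)
    for Z_k, Y_k in zip(Z_history, Y_history):
        V_z += np.outer(Z_k, Z_k)      # eq. (16)
        V_zy += Z_k * Y_k              # eq. (15), scalar Y_k
    return np.linalg.solve(V_z, V_zy)  # eq. (11): V_z^{-1} V_zy
```

The predictor (10) is then simply the inner product of the returned estimate with Z_t.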
It is easy to check [Haindl, 1992] also the validity of the recursive parameter estimator (17):

\hat{P}_t = \hat{P}_{t-1} + \left( 1 + Z_t^{T} V_{z(t-1)}^{-1} Z_t \right)^{-1} V_{z(t-1)}^{-1} Z_t \left( Y_t - \hat{P}_{t-1}^{T} Z_t \right).    (17)
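One recursive step (17) can be sketched as below; keeping the inverse of V_{z(t-1)} up to date via the Sherman-Morrison identity is our implementation choice, not something prescribed by the paper.

```python
import numpy as np

def recursive_update(P_hat, V_z_inv, Z_t, Y_t):
    """One step of (17): refine P_hat after observing the pair (Z_t, Y_t)."""
    V_Z = V_z_inv @ Z_t
    gain = V_Z / (1.0 + Z_t @ V_Z)              # (1 + Z^T V^{-1} Z)^{-1} V^{-1} Z
    P_hat = P_hat + gain * (Y_t - P_hat @ Z_t)  # eq. (17)
    V_z_inv = V_z_inv - np.outer(gain, V_Z)     # Sherman-Morrison update of V_z^{-1}
    return P_hat, V_z_inv
```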
To evaluate the predictor (10) we need to compute the parameter estimator (11) or (17), but we do not know the necessary past data Y_t, because they are exactly the data to be reconstructed. On the other hand, the data in Z_t in (10) are known: we can select the contextual support of the model in such a way as to exclude unknown data. This problem is solved using an approximation based on the spatial correlation between close lines:

\hat{Y}_t = \hat{P}_{m^*}^{T} Z_t,    (18)

where \hat{P}_{m^*} is the corresponding parameter estimator (11), (17) for the nearest known line (including its known contextual neighbours (7)) to the reconstructed one in the spectral band d. Note the different Z_t (7) in (18) and in (15), (16), (17).
This approximation assumes similar directional correlations on
both lines, but not necessarily a mutual correlation of these
lines themselves.
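A hedged sketch of the whole reconstruction step (18) follows; it reuses the hypothetical helpers data_vector and parameter_estimate from the sketches above, assumes a float-valued image array, and for brevity uses the same shift set for estimation and prediction, whereas the remark after (18) allows the two supports Z_t to differ.

```python
def reconstruct_line(image, m_bad, m_known, d, shifts):
    """Eq. (18): reconstruct line m_bad in band d with parameters estimated
    on the nearest known line m_known via (11) (or recursively via (17))."""
    n_cols = image.shape[1]
    margin = max(abs(dn) for (_, dn, _) in shifts) + 1
    cols = range(margin, n_cols - margin)
    # Estimate P_hat on the known line.
    Z_hist = [data_vector(image, (m_known, n, d), shifts) for n in cols]
    Y_hist = [image[m_known, n, d] for n in cols]
    P_hat = parameter_estimate(Z_hist, Y_hist)
    # Predict along the corrupted line; Z_t contains only known data.
    for n in cols:
        image[m_bad, n, d] = P_hat @ data_vector(image, (m_bad, n, d), shifts)
    return image
```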
3 OPTIMAL MODEL SELECTION
Let us assume two regression models (4), M_1 and M_2, with the same number of unknown parameters (\beta_1 = \beta_2 = \beta) and mutually symmetrical neighbour index shift sets I_{1,t}, I_{2,t}, with the missing line being their symmetry axis. According to Bayesian theory, the optimal decision rule minimizing the average probability of decision error chooses the maximum a posteriori probability model, i.e. the model whose conditional probability given the past data is the highest. The presented algorithm can therefore be completed as in (19):
\tilde{M} = \arg\max_{i \in \{1,2\}} p(M_i \mid Y^{(t-1)}).    (19)
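Purely as an illustration of the decision rule (19): under equal model priors and Gaussian prediction errors, the MAP choice can be approximated by comparing how well the two symmetric models predict the known data near the missing line. The criterion below (summed squared one-step prediction errors) is our simplification, not the paper's exact statistic.

```python
import numpy as np

def select_model(errors_M1, errors_M2):
    """Proxy for the MAP rule (19) under equal priors: prefer the model with the
    smaller accumulated squared one-step prediction error on known data."""
    return 1 if np.sum(np.square(errors_M1)) <= np.sum(np.square(errors_M2)) else 2
```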