and estimate u from the likelihood function:
L = \prod_i N_a(Z_i, u) \to \max; \hat{u} estimator for u   (2)
This formulation is - within the classical theory of Maximum Likelihood -
only justified if Z_i \sim N_a. This latter condition is obviously not true,
considering the outliers in Z_i. So we have to reinterpret the Maximum
Likelihood method as follows:
We estimate \hat{u} only from those measurements Z_i which are sufficiently
clustered around a central value E(Z):

Z_i \sim N_a(E(Z), \sigma)   (3)
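As an illustration of this reinterpretation (not taken from the paper), the following sketch selects the measurements clustered around a central value. The median is used here as a robust stand-in for E(Z), and the function name and data are hypothetical:

```python
import numpy as np

def select_clustered(z, a):
    """Keep the measurements within distance a of a central value."""
    center = np.median(z)          # robust stand-in for E(Z)
    mask = np.abs(z - center) < a  # "sufficiently clustered" condition
    return z[mask], mask

# Illustrative data: 25.0 plays the role of an outlier.
z = np.array([9.8, 10.1, 10.0, 9.9, 25.0, 10.2])
inliers, mask = select_clustered(z, a=1.0)
```

The outlying observation is excluded from the estimation, while the clustered measurements are retained.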
4. DERIVATION OF SOLUTION
Taking the negative logarithm of (2), we obtain:

F(\hat{u}) = -\ln L = -\sum_i \ln N_a(Z_i, \hat{u}) \to \min,   (4)

or with \varphi(Z_i) = -\ln N_a(Z_i):

F(\hat{u}) = \sum_i \varphi(Z_i) \to \min,   (5)

with \varphi(Z_i) = (Z_i - E(Z_i))^2  for |Z_i - E(Z_i)| < a,
     \varphi(Z_i) = R  else; R = const.   (6)
The function (5) is a semi-convex function, and if the constant a is chosen
sufficiently large there exists a unique vector \hat{u}, for which it holds:

F(\hat{u}) \le F(v)  for all v.
The estimation problem (5), (6) can now be solved by conventional methods
of nonlinear programming/optimization.
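A minimal sketch of such a conventional optimization, for the simple location model E(Z_i) = u. The choice R = a^2 (which makes \varphi continuous at the truncation point), the data, and the use of the derivative-free Nelder-Mead method are assumptions for illustration, since the truncated loss is flat outside the band:

```python
import numpy as np
from scipy.optimize import minimize

def F(u, z, a):
    """Objective (5)-(6): truncated quadratic loss, R = a^2 assumed."""
    r = z - u
    return np.sum(np.where(np.abs(r) < a, r**2, a**2))

# Illustrative data: 25.0 plays the role of an outlier.
z = np.array([9.8, 10.1, 10.0, 9.9, 25.0, 10.2])

# Derivative-free method, since F is non-smooth at the truncation points.
res = minimize(lambda u: F(u[0], z, a=1.0),
               x0=[np.median(z)], method="Nelder-Mead")
u_hat = res.x[0]
```

Because the outlier falls outside the band |Z_i - E(Z_i)| < a, it contributes only the constant R and does not pull the estimate away from the clustered measurements.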
The information matrix (Wilks, 1962) for \hat{u} is equal to:

I(i,j) = -E\left( \frac{\partial^2 \ln L}{\partial u_i \, \partial u_j} \right)
Assuming validity of our argument (3), this information matrix is equal to
(Kubik, 1970):

I = B'PB

with P being a diagonal matrix:
P(i,i) = 1/\sigma^2  for |Z_i - E(Z_i)| < a,
P(i,i) = 0           for |Z_i - E(Z_i)| > a.
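The construction of I = B'PB can be sketched as follows, again for the hypothetical location model E(Z_i) = u, so that B reduces to a column of ones; the data and the value of \sigma are illustrative assumptions:

```python
import numpy as np

def information_matrix(z, u_hat, a, sigma):
    """I = B'PB with diagonal P: weight 1/sigma^2 inside the band, 0 outside."""
    B = np.ones((z.size, 1))  # design matrix of the location model E(Z_i) = u
    w = np.where(np.abs(z - u_hat) < a, 1.0 / sigma**2, 0.0)
    P = np.diag(w)
    return B.T @ P @ B

# Illustrative data: 25.0 plays the role of an outlier.
z = np.array([9.8, 10.1, 10.0, 9.9, 25.0, 10.2])
I = information_matrix(z, u_hat=10.0, a=1.0, sigma=0.15)
```

The outlying measurement receives weight zero, so only the clustered measurements contribute information about \hat{u}.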