( 1(p)1  2(p)2  3(p)3  4(p)4 )
( 1(p)2  2(p)3  3(p)4  4(p)0 )
( 1(p)3  2(p)4  3(p)0 )
( 1(p)4  2(p)0 )
( 1(p)0 )
and Σ I(p)J = 1 (summed over all I and J).
In this expression, the sub-composition form ( 1(p)1 2(p)2 3(p)3 4(p)4 ) represents the probability distribution subspace of first-class enhanced results, which can fully reflect the grade of the actuality signals hidden in the original image. ( 1(p)2 2(p)3 3(p)4 4(p)0 ), and so on, respectively represent the various probability distribution subspaces corresponding to enhanced results reduced, in order, to a lower class; the enhanced result corresponding to ( 1(p)0 ) is the lowest class.
3. THE CALCULATION OF ENTROPY H(α) AND INFORMATION LEVEL IL
Entropy H(α) is an indeterminateness measure of a random experiment α. If α has n mutually incompatible results with respective probabilities Pi, then according to the definition of entropy, the entropy value, whose unit is the bit, can be obtained by the following formula:
H(α) = -Σ Pi·log Pi  (i = 1, ..., n),   Σ Pi = 1.   (1)
Here the base of the logarithm is 2. For results of equal probability (the Pi distribution is uniform), the indeterminateness is maximal and its value is log n; when some Pi equals 1 (the Pi distribution is highly concentrated), the decisiveness is maximal and the entropy H(α) = 0.
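As a minimal numerical sketch (not part of the original paper; the helper name entropy and the choice of Python are illustrative assumptions), formula (1) can be evaluated directly:

```python
import math

def entropy(probs):
    """Entropy in bits, formula (1): H = -sum(Pi * log2 Pi), with sum(Pi) = 1."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h if h > 0 else 0.0   # avoid returning -0.0 for concentrated distributions

# Uniform Pi over n = 4 results: indeterminateness is maximal, log2(4) = 2 bits.
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0
# Highly concentrated Pi (one result certain): decisiveness is maximal, H = 0.
print(entropy([1.0, 0.0, 0.0, 0.0]))       # 0.0
```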
Placing the composite probabilities I(p)J into formula (1) respectively, the combined entropy H(Y,X) of experiments Y and X can be obtained.
With respect to the actuality signal Y at the input end, its probability distribution composition form can be written by its grade I as follows:
  1       2       3       4
Py(1)   Py(2)   Py(3)   Py(4)
The Py(I) are the probabilities corresponding to the mutually incompatible results I (I = 1, 2, 3, 4), which can be obtained from the composite probabilities I(p)J:
Py(1)=1(p)1+1(p)2+1(p)3+1(p)4+1(p)0
Py(2)=2(p)2+2(p)3+2(p)4+2(p)0 (2)
Py(3)=3(p)3+3(p)4+3(p)0
Py(4)=4(p)4+4(p)0
Placing Py(I) into formula (1) respectively, the a-priori indeterminateness of experiment Y, i.e. the entropy H(Y), can be obtained.
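A sketch of formula (2) under an assumed data layout: the composite probabilities I(p)J are kept in a dict keyed by (I, J), where I = 1..4 is the input grade and J ∈ {I, ..., 4, 0} is the enhanced grade; the numerical values below are made-up placeholders, not measured data.

```python
# Placeholder composite probabilities I(p)J, keyed by (I, J); values sum to 1.
pIJ = {(1, 1): 0.30, (1, 2): 0.05, (1, 3): 0.02, (1, 4): 0.01, (1, 0): 0.02,
       (2, 2): 0.25, (2, 3): 0.04, (2, 4): 0.02, (2, 0): 0.02,
       (3, 3): 0.15, (3, 4): 0.03, (3, 0): 0.02,
       (4, 4): 0.05, (4, 0): 0.02}

# Formula (2): Py(I) sums I(p)J over all output grades J for a fixed input grade I.
Py = {I: sum(p for (i, _), p in pIJ.items() if i == I) for I in (1, 2, 3, 4)}
H_Y = entropy(Py.values())   # a-priori indeterminateness H(Y), using entropy() above
```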
With respect to the gain signal X transformed by a certain function at the output end, the following probability composition form can be written by grade J:
  1       2       3       4       0
Px(1)   Px(2)   Px(3)   Px(4)   Px(0)
The probabilities Px(J) can be obtained from the following set of formulas:
Px(1)=1(p)1
Px(2)=1(p)2+2(p)2
Px(3)=1(p)3+2(p)3+3(p)3 (3)
Px(4)=1(p)4+2(p)4+3(p)4+4(p)4
Px(0)=1(p)0+2(p)0+3(p)0+4(p)0
Placing Px(J) into formula (1), the entropy H(X) of experiment X can be obtained.
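Continuing the same sketch, formula (3) is the marginalization in the other direction, over the input grade I for each output grade J (including the lowest class J = 0):

```python
# Formula (3): Px(J) sums I(p)J over all input grades I for a fixed output grade J.
Px = {J: sum(p for (_, j), p in pIJ.items() if j == J) for J in (1, 2, 3, 4, 0)}
H_X = entropy(Px.values())   # entropy H(X) of experiment X at the output end
```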
Under the condition of knowing the random experiment X at the output end, the posterior indeterminateness of experiment Y, i.e. the conditional entropy H(Y/X), can be obtained. The indeterminateness H(Y,X) combined from random experiments Y and X should be the sum of the X-experiment indeterminateness H(X) and the posterior Y-experiment indeterminateness H(Y/X); thus:
H(Y/X) = H(Y,X) - H(X)
The transition from H(Y) to H(Y/X) shows that the indeterminateness of the Y signals is reduced as a result of the function transform. By the information theory definition, this absolute reduction is exactly the information content (I) about the signals Y that is contained in the signals X; thus:
(I) = H(Y) - H(Y/X) = H(X) + H(Y) - H(Y,X)
The Function Information Level IL, i.e. the proportion of the obtained information within the a-priori indeterminateness of the Y signals, can be obtained by the following formula:
IL = (I)/H(Y) = [H(X) + H(Y) - H(Y,X)] / H(Y)   (4)
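Putting formulas (1)-(4) together, the whole calculation might be sketched as follows; the function name information_level is an assumption, and the code reuses the entropy() helper and the placeholder pIJ dict from the sketches above:

```python
def information_level(pIJ):
    """Compute IL = (I)/H(Y) from the composite probabilities I(p)J, formulas (1)-(4)."""
    Py = {I: sum(p for (i, _), p in pIJ.items() if i == I) for I in (1, 2, 3, 4)}
    Px = {J: sum(p for (_, j), p in pIJ.items() if j == J) for J in (1, 2, 3, 4, 0)}
    H_Y  = entropy(Py.values())        # a-priori indeterminateness of Y
    H_X  = entropy(Px.values())        # entropy of X
    H_YX = entropy(pIJ.values())       # combined entropy H(Y,X)
    H_Y_given_X = H_YX - H_X           # posterior indeterminateness H(Y/X)
    info = H_Y - H_Y_given_X           # information content (I)
    return info / H_Y                  # formula (4)

print(information_level(pIJ))          # IL for the placeholder distribution above
```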
IL reflects the total enhancement benefit of the function and does not vary with different information sources; it therefore represents the reliability of the function and should be used as a reliable basis for evaluating the enhancement effect.
4. ANALYSIS OF THE IL CALCULATION FORMULA
The theoretical basis of the IL calculation formulas (1)-(4) is reliable. From analysis of formula (4), H(Y) represents the a-priori indeterminateness, which does not depend on the function transform; any function having a higher IL value must therefore have a small H(Y,X) and a large H(X). An I(p)J distribution concentrated in any range of the probability space can make H(Y,X) small, but cannot necessarily make H(X) large. The composition of formula (3) limits the concentrative ranges of the I(p)J distribution: only an I(p)J distribution concentrated mostly in the ( 1(p)1 2(p)2 3(p)3 4(p)4 ) and ( 1(p)2 2(p)3 3(p)4 4(p)0 ) subspaces can make the Px(J) distribution tend toward uniformity and cause H(X) to become large. It may be seen, then, that any function having a higher IL value can actually or approximately reflect the attainable enhanced grade of the signal Y hidden in the original image.
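To put a number on this point (continuing the earlier sketch with two made-up composite distributions), IL can be compared for a distribution concentrated in the first-class subspace and one concentrated in the low-class column:

```python
# Concentrated in ( 1(p)1 2(p)2 3(p)3 4(p)4 ): Px spreads over the grades, H(X) is large.
good = {(1, 1): 0.40, (2, 2): 0.30, (3, 3): 0.20, (4, 4): 0.10}
# Concentrated in the I(p)0 column: Px piles up on J = 0, H(X) is small.
poor = {(1, 0): 0.40, (2, 0): 0.30, (3, 0): 0.20, (4, 0): 0.10}

print(information_level(good))   # 1.0: the attainable grades of Y are fully reflected
print(information_level(poor))   # 0.0: almost no information about Y survives the transform
```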
5. THE CALCULATION PATTERN FOR FIRM STATISTICS
The I(p)J distribution is concentrated in the probability space owing to the enhancement characteristics of the function itself. When the statistical number N (the probability denominator) reaches a certain value, adding further I(C)J results into N no longer changes the overall range of the I(p)J distribution; at the same time the rate of change of I(p)J is small, and the IL calculation is in the firm state. Theoretically N should be infinite; when N is finite, reducing the number of signal grades or applying the Feedback Dynamic Recognition Pattern can achieve the purpose of firm statistics.
This pattern adds the statistical number N progressively and quantitatively, observing the dynamic variation range of the IL value calculated after each addition; if and when the calculated values oscillate within a smaller dynamic range time after time, the statistics can be considered firm.
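A sketch of this pattern under assumed names: sample_batches yields successive batches of (I, J) outcomes (the counts I(C)J), tolerance and window are hypothetical convergence parameters, and information_level() is the helper sketched in section 3.

```python
def feedback_dynamic_recognition(sample_batches, tolerance=0.01, window=3):
    """Add the statistical number N progressively; stop when the IL values
    calculated after successive additions oscillate within `tolerance`."""
    counts, history = {}, []
    for batch in sample_batches:                 # each batch adds new (I, J) outcomes
        for key in batch:
            counts[key] = counts.get(key, 0) + 1
        N = sum(counts.values())                 # statistical number (probability denominator)
        pIJ = {k: c / N for k, c in counts.items()}   # I(p)J = I(C)J / N
        history.append(information_level(pIJ))
        # Firm statistics: the last few IL values stay within a small dynamic range.
        if len(history) >= window and max(history[-window:]) - min(history[-window:]) <= tolerance:
            break
    return history
```

The tolerance and window values are placeholders; in practice they would be chosen to match the IL error acceptable for the application.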
Through
errors <
value Ce
[FIG. 2: vertical axis IL%]
The fig
with fu
enhance
N is
progres
zin,
distrib
is 6296 ,
3n, 4n--
349% rar
with 1.'
When ca
area co
program
in 2n--
range i
with 1.
When N
oscille
fuzzils
thus wt
That ce
N is mc
for ref
value
statist
6.7
Any aı
actual.
to not!
enhanc:
connec
establ
to say
operat
respec
and si
can ot
for on
fully
image.
Suppos