Full text: XVIIth ISPRS Congress (Part B3)

  
lim_{N -> ∞} G_N = H.
Once the entropy has been computed, the 
redundancy is defined as one minus the 
ratio of the measured entropy to its 
maximum (Shannon, 1949c), 
Redundancy = 1 - (G_N / MAX).
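As a minimal sketch of these two definitions, the following computes the first-order entropy of a pixel sequence from its measured symbol probabilities and the corresponding redundancy relative to the maximum possible entropy. The function and variable names are illustrative, not from the paper.

```python
import math
from collections import Counter

def entropy_and_redundancy(pixels, levels=256):
    """First-order entropy (bits/symbol) and redundancy of a pixel sequence.

    Entropy is computed from the measured symbol probabilities; redundancy
    is one minus the ratio of that entropy to its maximum, log2(levels),
    which is attained by a uniform gray-level distribution.
    """
    counts = Counter(pixels)
    n = len(pixels)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_entropy = math.log2(levels)
    return entropy, 1.0 - entropy / max_entropy

# A constant image carries no information: zero entropy, full redundancy.
h0, r0 = entropy_and_redundancy([7] * 1000)
# A band cycling through all 256 gray levels equally has maximal entropy
# (8 bits/symbol) and zero redundancy.
h1, r1 = entropy_and_redundancy(list(range(256)) * 4)
```

Note that this is the N = 1 case; the paper's G_N generalizes the estimate to blocks of N pixels.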
Computation of the conditional entropy or 
"equivocation" across spectral channels is 
defined by Theorem 6 (Shannon, 1949d). 
The conditional entropy assumes that 
probabilities representing uncertainties 
are conditional upon co-occurring events. 
THEOREM 6: Let p(B_i, S_j) be the 
probability of block B_i followed by 
symbol S_j, and let p_{B_i}(S_j) = p(B_i, S_j) / p(B_i) 
be the conditional probability of S_j after 
B_i. Let 
F_N = - Σ_{i,j} p(B_i, S_j) log p_{B_i}(S_j), 
where the sum is over all blocks B_i of N-1 
symbols and over all symbols S_j. Then F_N 
is a monotonic decreasing function of N, 
F_N = N G_N - (N - 1) G_{N-1}, 
G_N = (1/N) Σ_{n=1}^{N} F_n, 
F_N <= G_N, 
and lim_{N -> ∞} F_N = H. 
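Theorem 6 can be sketched numerically. The function below estimates F_N from a single symbol sequence by counting blocks of N symbols and their (N-1)-symbol prefixes; the names and the estimation-from-one-sequence approach are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import Counter

def F(sequence, N):
    """Estimate Shannon's F_N: the conditional entropy (bits) of a symbol
    given the preceding block of N-1 symbols.

    p(B_i, S_j) is estimated by block counts, and p_{B_i}(S_j) by the ratio
    of block count to prefix count.  F(sequence, 1) reduces to the ordinary
    first-order entropy.
    """
    positions = range(len(sequence) - N + 1)
    blocks = Counter(tuple(sequence[i:i + N]) for i in positions)
    prefixes = Counter(tuple(sequence[i:i + N - 1]) for i in positions)
    total = sum(blocks.values())
    f = 0.0
    for block, c in blocks.items():
        p_joint = c / total                 # p(B_i, S_j)
        p_cond = c / prefixes[block[:-1]]   # p_{B_i}(S_j)
        f -= p_joint * math.log2(p_cond)
    return f

# A strict alternation: each symbol alone has 1 bit of entropy (F_1 = 1),
# but knowing the previous symbol removes all uncertainty (F_2 = 0),
# illustrating that F_N decreases with N.
seq = [0, 1] * 500
```

The drop from F_1 to F_2 here is exactly the "information pre-determined by context" that the redundancy measures.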
What Are We Seeing? 
Perhaps the most remarkable aspect of 
visualizations of the metrics of image 
information is the extent to which the 
visualizations are easily recognizable 
images of the natural world. This is in 
spite of the fact that the metrics of 
information are mathematical abstractions 
calculated from tables of uncertainties. 
Concepts such as "information," 
"redundant" or "equivocal" do not 
effectively describe what we are seeing 
since these concepts imply complex 
unconscious value judgements which have 
little meaning relative to the 
mathematical abstractions visualized. 
Paradox in the use of familiar language 
and concepts is inevitable. Probably the 
best approach on casual inspection is to 
just view the visualizations as "pictures" 
which reveal the world in a different 
light. 
In visualizations of image entropy, bright 
areas represent the "mathematically most 
interesting" parts of an image, while dark 
areas represent the "mathematically least 
interesting" parts. Another 
interpretation is that bright areas 
delineate regions conveying the most 
information about the image field, and 
dark areas convey the least. 
In visualizations of image redundancy, 
more subtlety of interpretation is 
involved. Redundancy provides a measure 
of the amount of information which is 
pre-determined by context. For example, 
the redundancy of written English is 
approximately fifty percent, meaning that 
almost half of what we write is 
pre-determined by the structure of the 
language. In a redundancy visualization, 
bright areas represent regions which 
convey the least information relative to 
the image field. In satellite imagery, 
the redundancy is dominated by extremely 
low probability events and is ideal for 
locating point events which may be 
spectrally subtle yet statistically 
prominent, such as fires or hot spots. 
The redundancy is a logarithmic rather 
than a linear inverse of the entropy. 
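A small sketch of why extremely low probability events dominate such visualizations: the per-pixel self-information -log2 p(value) makes a statistically rare "hot spot" stand out sharply against a uniform background, even when its gray value alone is unremarkable. The example data and function name are hypothetical.

```python
import math
from collections import Counter

def surprisal_map(pixels):
    """Per-pixel self-information -log2 p(value), in bits.

    Rare gray values receive large values, common ones receive values near
    zero -- the logarithmic weighting that makes spectrally subtle but
    statistically prominent point events easy to locate.
    """
    counts = Counter(pixels)
    n = len(pixels)
    return [-math.log2(counts[v] / n) for v in pixels]

# 999 background pixels at one level and a single anomalous pixel: the
# anomaly carries ~10 bits of surprise, the background almost none.
scene = [40] * 999 + [255]
info = surprisal_map(scene)
```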
The equivocation (or conditional entropy) 
requires the most subtlety in 
interpretation. In information theory, 
equivocation is a measure of information 
lost during transmission as a function of 
channel capacity and noise. It represents 
the ambiguity or residual uncertainty 
associated with our measurement. Shannon 
calls it "...the uncertainty when we have 
received a signal of what was actually 
sent." In an image visualization, the 
equivocation seems to represent the 
coherence of statistical structures. 
Isolated clouds, for example, generally 
have very low to zero equivocation values 
since their multi-spectral statistics are 
generally very specific (unequivocal!) 
relative to those of other image 
components. The equivocation can probably 
best be thought of as an inverse measure 
of "clustering" relative to the 
statistical background. The equivocation 
occasionally reveals structures absent in 
both the entropy and the redundancy. 
Conversely, structures visible in the 
entropy or redundancy may be absent in the 
equivocation. Noise in the imaging 
process is a definite component of the 
equivocation. 
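The cross-channel equivocation discussed above can be sketched as the conditional entropy H(X|Y) estimated from co-registered pixel pairs; the function below is an illustrative implementation, not the paper's.

```python
import math
from collections import Counter

def equivocation(channel_x, channel_y):
    """Conditional entropy H(X|Y) in bits: the residual uncertainty about
    channel X once the co-occurring value in channel Y is known.

    Estimated from the joint histogram of co-registered pixel pairs.
    """
    joint = Counter(zip(channel_x, channel_y))
    marg_y = Counter(channel_y)
    n = len(channel_x)
    h = 0.0
    for (x, y), c in joint.items():
        # p(x, y) * log2 of the conditional probability p(x | y)
        h -= (c / n) * math.log2(c / marg_y[y])
    return h

# When the second channel determines the first -- a statistically specific,
# "unequivocal" feature like the isolated clouds described above -- the
# equivocation is zero.  Against an uninformative constant channel, it
# equals the full entropy of x.
x = [0, 1, 2, 3] * 100
y = [v * 10 for v in x]   # deterministic mapping between channels
```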
The Entropy Function and Biological Visual 
Systems 
An interesting analog to visualizing the 
metrics of information in digital 
multispectral imagery is the processing of 
the inverted image projected upon the 
retina of biological visual systems. It 
must be emphasized that information theory 
is not a model of retinal processing. 
Nevertheless, visualizations of the metrics of 
information theory do share enough 
characteristics with known retinal 
operations at the cellular level to make a 
discussion of retinal processing relevant 
to their visual interpretation. 
In the human visual system, information 
which reaches the visual cortex in the 
brain has already been subjected to 
extensive information processing by the 
time it leaves the eye. This processing 
occurs in the retina, a thin membrane 
lining the back of the eyeball. The 
retina is an extension of the neural 
architecture of the brain and consists of 
two types of photoreceptors (rods and 
cones) and several layers of specialized 
neurons. This layering of distinct neural 