
otherwise it may generate a great deal of redundant data, which increases the burden on computer storage and processing. In addition, the shortest sampling interval of an image is restricted by the grain size of the light-sensitive emulsion on the film base and by the resolution of the available digitizer.
The continuous imagery is transformed into a series of grey values of discrete pixels through the sampling operation, but the range of grey variation remains continuous. Consequently we need to quantize those values in order to turn the infinite possibilities taken by the grey variable into a finite set of grey levels with a constant interval. The method of quantization is based on the fidelity requirements of receiving the digital signals; namely, one must select an appropriate quantizing unit under the condition of decreasing the quantization errors, so as to ensure that the discrete message has enough levels to reflect the details of amplitude variation of the continuous message. As stated before, we choose a power of 2 as the range of grey levels and round the grey values off to the nearest levels. Currently many scanning digitizers use a scale quantized up to 256 levels. Because this division fits the representation of one byte in computer storage, it is very useful for digital image processing.
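As a minimal sketch of this rounding step, assuming grey values normalized to [0, 1) and the 256-level, one-byte scale mentioned above (the function name is illustrative, not from the original):

import numpy as np

def quantize(grey, levels=256):
    # Map continuous grey values in [0, 1) onto `levels` discrete
    # steps of constant interval, rounding down to the nearest level.
    step = 1.0 / levels                         # constant quantizing unit
    q = np.floor(grey / step).astype(np.uint8)  # 256 levels fit one byte
    return np.clip(q, 0, levels - 1)

print(quantize(np.array([0.0, 0.5004, 0.999])))  # -> [  0 128 255]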
4. THE OPTIMUM CODING OF IMAGE DATA 
The volume of data generated from photo digitization is huge. For instance, scanning an aerial photo of size 23×23 cm with 64 grey levels and a pixel of 0.1×0.1 mm makes the information capacity per unit area amount to 6×10⁴ bits/cm², and there are about 3.2×10⁷ bits in the whole photo. With 256 grey levels and a pixel of 0.025×0.025 mm the whole photo can contain 6.8×10⁸ bits. This causes practical difficulties for the transmission efficiency and the storage of data.
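The quoted figures can be verified with a few lines of arithmetic. The sketch below assumes the standard 23×23 cm aerial photo format, which reproduces the totals given in the text:

import math

photo_area_cm2 = 23 * 23                        # 529 cm^2

# Case 1: 0.1 mm pixels, 64 grey levels -> log2(64) = 6 bits per pixel
pixels_per_cm2 = (10 / 0.1) ** 2                # 10 000 pixels per cm^2
bits_per_cm2 = pixels_per_cm2 * math.log2(64)
print(f"{bits_per_cm2:.1e}")                    # 6.0e+04 bits/cm^2
print(f"{bits_per_cm2 * photo_area_cm2:.1e}")   # 3.2e+07 bits per photo

# Case 2: 0.025 mm pixels, 256 grey levels -> 8 bits per pixel
bits_case2 = (10 / 0.025) ** 2 * 8 * photo_area_cm2
print(f"{bits_case2:.1e}")                      # 6.8e+08 bits per photo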
It is also one of the major concerns of digital mapping to utilize optimal image encoding for the effective storage and transmission of data. Information theory and some related considerations can provide a method of optimum coding and serve as theoretical guidance for the best storage and transmission of digital messages.
The ultimate goal of image encoding is to compress the data volume and improve the transmission efficiency. Usually a frame of imagery involves a lot of redundant data. The tactics of data compression are commonly based on the probability distribution of the source signals or their grey levels and on the tolerable distortion approved by the information receivers or users. In most cases compression will incur some information loss. However, an approach to data compression devised according to information theory enables that loss to be reduced to a minimum; the lost information is almost insignificant.
In the following we briefly describe a few kinds of digital image encoding which are widely used in various image processing systems.
Every line of a digital image is actually an arrangement of pixel grey levels such as x1, x2, ..., xm. When one carefully examines any row of that grey-level array, one can find some pixel strings, long or short, in which each string is composed of the same grey level. Those sequential pixels with an equal grey value are known as a run length; thus a line of successive grey levels may be split up into several run lengths, for example k lengths (k < m). Run-length coding means to map a pixel series in a scan line x1, x2, ..., xm onto a sequence of integer pairs (g1, l1), (g2, l2), ..., (gk, lk), where gi is a certain grey level and li indicates the number of successive occurrences of that grey level, i.e. the number of identical pixels. As a result the message made of m pixels on a scan line can be conveyed by only k integer pairs. When k is much less than m, a remarkable data compression is accomplished.
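A minimal sketch of this mapping (the function names are illustrative, not from the original):

def rle_encode(line):
    # Map a scan line of grey levels onto (grey, run-length) pairs.
    pairs = []
    for g in line:
        if pairs and pairs[-1][0] == g:
            pairs[-1][1] += 1           # extend the current run
        else:
            pairs.append([g, 1])        # start a new run
    return [tuple(p) for p in pairs]

def rle_decode(pairs):
    # Recover the original scan line from its run-length pairs.
    return [g for g, l in pairs for _ in range(l)]

line = [7, 7, 7, 7, 3, 3, 9, 9, 9, 9, 9, 9]   # m = 12 pixels
pairs = rle_encode(line)                       # k = 3 integer pairs
print(pairs)                                   # [(7, 4), (3, 2), (9, 6)]
assert rle_decode(pairs) == line               # the mapping is lossless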
Differential coding holds the grey-level difference between two successive pixels instead of the original quantity of each pixel. Since the range of possible differences is smaller than that of the originals, the encoding may use fewer bits per value. Usually it realizes some reduction of the data.
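A minimal sketch, assuming integer grey levels (function names are illustrative):

def diff_encode(line):
    # Store the first pixel, then successive grey-level differences.
    return [line[0]] + [b - a for a, b in zip(line, line[1:])]

def diff_decode(codes):
    # Cumulative sums restore the original grey levels exactly.
    out = [codes[0]]
    for d in codes[1:]:
        out.append(out[-1] + d)
    return out

line = [120, 122, 121, 121, 125, 124]
codes = diff_encode(line)           # [120, 2, -1, 0, 4, -1]
assert diff_decode(codes) == line   # small differences need fewer bits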
Fractal data compression is one of the most efficient image coding methods of recent years. For an image which consists of a single well-defined object, one can hopefully recognize a large degree of deterministic self-similarity, so that constituent parts of the representation can be formed from transformations of the object itself. To ensure stability, such transformations have to be contractive, and they consist of translations, rotations and scalings whose parameters need to be found. It is said that a data reduction of more than 1000 times may be arrived at.
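The text gives no algorithm, so the sketch below only illustrates the stability property mentioned above: iterating a set of contractive affine maps converges to a unique attractor regardless of the starting point. The three maps are a textbook example (the Sierpinski gasket), not transformations fitted to a photograph; finding such parameters is the hard encoding step.

import random

maps = [lambda x, y: (0.5 * x,        0.5 * y),
        lambda x, y: (0.5 * x + 0.5,  0.5 * y),
        lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5)]

x, y = random.random(), random.random()   # arbitrary starting point
points = []
for i in range(20000):
    x, y = random.choice(maps)(x, y)      # apply a random contractive map
    if i > 100:                           # discard the initial transient
        points.append((x, y))
# `points` now samples the attractor that the three maps encode.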
Huffman coding is a well-known compact code among the variable-length codes. The Huffman encoding procedure for binary data is as follows:
(1) Arrange the N messages in the order of their probabilities, from large to small.
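The remainder of the procedure is lost to the page break; for completeness, the sketch below follows the standard binary Huffman construction (repeatedly merge the two least probable messages, then read the code words off the merge tree):

import heapq

def huffman_code(probabilities):
    # Heap entries: (probability, unique tie-breaker, symbol-or-subtree).
    heap = [(p, i, sym) for i, (sym, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)    # two least probable entries
        p2, _, right = heapq.heappop(heap)
        counter += 1
        heapq.heappush(heap, (p1 + p2, counter, (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: branch 0/1
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"      # lone-message edge case
    walk(heap[0][2], "")
    return codes

print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}))
# -> {'a': '0', 'b': '10', 'd': '110', 'c': '111'}: the more probable
#    the message, the shorter its code word, which is why step (1)
#    orders the probabilities from large to small.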