to visual quality criteria and desired frame rate to range between
10 and 50 times. The effect of the method on the geometric
quality of the imaged targets, both for the 2-D centrifuge case
and for 3-D photogrammetric measurement, is discussed and
evaluated below.
3.1 The JPEG Baseline Method
The most commonly used lossy image-compression method in the
JPEG standard is the Baseline method, which is based on the
Discrete Cosine Transform (DCT). The method also contains two
further compression procedures: quantization of the
spatial-frequency amplitude components produced by the DCT, and
Huffman run-length encoding of the quantized spatial-frequency
amplitudes. The flowchart for the JPEG Baseline method is
shown in figure 7.
DCT-based encoder: source image data → 8x8 blocks → FDCT →
quantizer (Q-tables) → entropy encoder (Huffman table) →
compressed image data.
Figure 7 JPEG image compression flowchart
The working principle of the JPEG Baseline method can be
described as follows. First, the greyscale image is divided into
8x8 pixel blocks. This reduces the complexity of the subsequent
processing steps and enables faster implementation of the
algorithm. Each sub-image block is processed individually by
input to the Forward DCT (FDCT). The FDCT converts each 8x8
block of greyscale image information, a function of the two
spatial dimensions x and y, into an 8x8 block in the frequency
domain. The output of the FDCT is a set of 64 coefficients for
the original 8x8 matrix, each representing the magnitude of the
cosine basis function at a particular spatial frequency. For
colour images the process can be regarded as the compression of
multiple greyscale images, which are either compressed entirely
one at a time, or by alternately interleaving 8x8 sample blocks
from each image band in turn.
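By way of illustration, a minimal sketch of the FDCT for one 8x8 block is given below, written directly from the defining two-dimensional DCT-II sum; the function name and the NumPy dependency are illustrative choices, not part of the system described in this paper.

import numpy as np

def fdct_8x8(block):
    """Forward 8x8 DCT-II as used by JPEG Baseline (illustrative sketch).

    block: 8x8 array of greyscale samples, typically level-shifted
    by subtracting 128 so values are centred on zero.
    Returns an 8x8 array of spatial-frequency coefficients F(u, v).
    """
    block = np.asarray(block, dtype=np.float64)
    F = np.zeros((8, 8))
    for u in range(8):
        for v in range(8):
            cu = 1.0 / np.sqrt(2.0) if u == 0 else 1.0
            cv = 1.0 / np.sqrt(2.0) if v == 0 else 1.0
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x, y]
                          * np.cos((2 * x + 1) * u * np.pi / 16)
                          * np.cos((2 * y + 1) * v * np.pi / 16))
            F[u, v] = 0.25 * cu * cv * s
    return F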
After the FDCT, each of the 64 DCT coefficients is quantized
against a corresponding value in a predetermined quantization
table (Q table). This is carried out by dividing each DCT
coefficient by the corresponding quantization element and
rounding the result to the nearest integer. This quantization
process constitutes the major lossy part of the JPEG compression
procedure. The choice of quantization parameter (Q factor) is
therefore crucial in achieving the best trade-off between data
storage and information loss in the image.
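A minimal sketch of this quantization step is shown below; the Q table used here is a made-up example whose entries simply grow with spatial frequency, since the actual tables, and their scaling by the Q factor, are application-defined.

import numpy as np

# Illustrative Q table: values grow with spatial frequency, so
# higher frequencies are quantized more coarsely (real JPEG
# tables differ and are scaled according to the Q factor).
q_table = 1 + 4 * (np.arange(8)[:, None] + np.arange(8)[None, :])

def quantize(F, q_table):
    """Quantize an 8x8 block of DCT coefficients: divide each
    coefficient by its Q-table entry and round to the nearest
    integer. The rounding error is the irreversible, lossy step."""
    return np.rint(F / q_table).astype(np.int32)

def dequantize(Fq, q_table):
    """Approximate reconstruction used by the decoder."""
    return Fq * q_table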
The final DCT-based encoder processing step is entropy coding.
This step achieves additional compression losslessly by
encoding the quantized DCT coefficients according to their
statistical characteristics. Huffman coding techniques are used
in the JPEG Baseline proposal. Huffman coding requires that
one or more sets of Huffman code tables be specified by the
application. The same tables used to compress an image are
needed to decompress it. Huffman tables may be predefined or
computed specifically during an initial statistics-gathering pass
through the data prior to compression.
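As a sketch of the statistics-gathering case, the following builds a Huffman code table from symbol frequencies using a heap; this is a generic textbook construction for illustration, not the canonical-code procedure specified by JPEG, which stores only code lengths.

import heapq
from collections import Counter

def huffman_table(symbols):
    """Build a Huffman code table from a sequence of symbols.

    Returns a dict mapping each symbol to its bit string: frequent
    symbols receive short codes, rare symbols long ones."""
    freq = Counter(symbols)
    # Heap entries: (frequency, unique tie-breaker, {symbol: code}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: one distinct symbol
        return {sym: "0" for sym in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two least-frequent subtrees, prefixing their
        # existing codes with 0 and 1 respectively.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]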
The JPEG algorithm can be implemented in either hardware or
software. For this evaluation, a standard TIFF (Tag Image File
Format) software library was taken from the public domain
(FTP site: ftp.sgi.com//graphics/tiff) and integrated into an
in-house PC-based photogrammetric measuring system. The library
supports various image compression schemes, including JPEG via
the standard public JPEG software library (FTP site:
ftp.uu.net//graphics/jpeg).
3.2 Analysis of Single Images
The two main applications of image compression are in image
transmission and storage. In the centrifuge application the
storage of many long image sequences is currently the major
concern. Typical centrifuge images have a high information
content so that conventional lossless compression has a very
low compression ratio. For example, the lossless LZW method
achieves a compression ratio of only 1.8. The influence of the
JPEG method on target location has been tested in a series of
laboratory experiments using both retro-reflective and
conventional targets under different conditions. Experimental
results have been analysed according to target location quality,
rather than the conventional visual quality for which JPEG is
optimised.
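A minimal sketch of such a compression-ratio test is given below, assuming the Pillow library; the filename is hypothetical, and Pillow's quality setting is only an analogue of the Q factor used here, not necessarily the same scale.

import io

from PIL import Image

def compression_ratio(path, quality):
    """Compress a greyscale image in memory at a given JPEG quality
    setting and return the achieved compression ratio relative to
    the raw 8-bit data."""
    img = Image.open(path).convert("L")
    raw_bytes = img.width * img.height  # one byte per pixel
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return raw_bytes / buf.tell()

# 'centrifuge.tif' is a hypothetical filename for illustration.
for q in range(20, 101, 10):
    print(f"Q={q}: {compression_ratio('centrifuge.tif', q):.1f}x")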
Q factor  | Compression ratio | RMS image discrepancy (pixels) | Max. image discrepancy (pixels)
20        | 20.4              | 0.083                          | 0.952
30        | 16.8              | 0.069                          | 0.643
40        | 14.6              | 0.063                          | 0.247
50        | 13.0              | 0.056                          | 0.165
60        | 11.6              | 0.048                          | 0.133
70        | 9.9               | 0.041                          | 0.125
80        | 8.1               | 0.032                          | 0.102
90        | 5.5               | 0.024                          | 0.096
100       | 1.9               | 0.002                          | 0.020
lossless  | 1.8               | -                              | -
Table 1 Geometric performance of JPEG with different Q
factors for a typical centrifuge image
An image similar to that in figure 2 was used to provide a
conventional target image for compression analysis. The image
was compressed using Q factors ranging from 20 (high
compression) to 100 (low compression). Target image
measurements for each Q factor were computed and compared
with those from the uncompressed original image. Table 1
demonstrates that the achieved compression ratio is closely
related to the mean RMS image discrepancy. Even at image
compression ratios of 10:1, the mean RMS image discrepancy is
of the order of 1/20th of a pixel. Figure 8 illustrates the
discrepancy vectors between the original image measurements and
those from the image compressed at a Q factor of 70.
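A minimal sketch of how such discrepancy statistics might be computed is given below, assuming NumPy and matched lists of target centroids; the function is illustrative, not the measuring system's own code.

import numpy as np

def discrepancy_stats(original_xy, compressed_xy):
    """RMS and maximum discrepancy (in pixels) between target image
    coordinates measured in the original and decompressed images.

    Both arguments are N x 2 arrays of (x, y) target centroids,
    matched row for row."""
    d = np.linalg.norm(np.asarray(original_xy, dtype=float)
                       - np.asarray(compressed_xy, dtype=float), axis=1)
    return float(np.sqrt(np.mean(d ** 2))), float(d.max())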
Figure 8 Discrepancy vectors produced by compressing the
image in figure 2 at a Q factor of 70
When compared with figure 2, the target locations from the
compressed image show discrepancies smaller than those between
successive images of the same scene. The main change made by the
JPEG method is to the intensity values from which the target
locations are computed, but the effect is not as great as might
be expected: JPEG achieves most of its compression in the
low-frequency areas of the background, which has little
influence on target location. Figures 9a and 9b show a typical
target image before and after compression; the target image
contrast has been largely preserved.
Figure 9 a) target image before and b) after compression
3.3 JPEG in 3-D Photogrammetric Measurement
For a photogrammetric evaluation, testfields of retro-reflective
targets, approximately 230mm across, were imaged with cameras of
differing focal lengths. Targets of known diameter were
distributed over each testfield.
Figure 10 a)
In each case a Pulnix TM CCD camera was used, and the imagery in
each experiment was compressed at Q factors ranging over the
values used in the single-image tests.
A free network adjustment was computed for the lossless imagery
and for each level of compression, so that the results could be
directly compared, under the same configuration, in terms of the
object coordinates and RMS photogrammetric measures.