3. MEASUREMENT OF LATERAL PIXEL (SUBPIXEL) SENSITIVITY - PIXEL GEOMETRY
Lateral single-pixel sensitivity is investigated with monochromatic light irradiation (for example: 436 nm; 559 nm; 626 nm) by imaging a diffraction-limited light spot (diameter: 1.1 µm; 1.3 µm; 1.5 µm) onto a pixel of the CCD to be investigated.
During the measuring procedure the light spot is moved step by step over the chosen pixel and its surroundings in the column and row direction. The characteristic curve of lateral sensitivity is obtained by recording the video signal at every approached position. With an extremely high spot-positioning accuracy (≤ 520 nm) it is possible to determine the real pixel geometry from the lateral sensitivity characteristic curve [1].
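A minimal sketch of this scanning procedure is given below; the functions move_spot_to and read_pixel_signal are hypothetical placeholders for the spot-positioning stage and the video signal read-out, not part of any real instrument interface.

    import numpy as np

    def scan_lateral_sensitivity(move_spot_to, read_pixel_signal,
                                 start_um, stop_um, step_um):
        """Move the diffraction-limited spot across the chosen pixel in one
        direction (row or column) and record the video signal at every
        approached position."""
        positions = np.arange(start_um, stop_um + step_um, step_um)
        signal = np.empty_like(positions)
        for i, x in enumerate(positions):
            move_spot_to(x)                  # position the spot (hypothetical stage call)
            signal[i] = read_pixel_signal()  # video signal of the pixel under test
        # Normalising to the maximum gives the lateral sensitivity curve,
        # from which the effective pixel geometry can be estimated.
        return positions, signal / signal.max()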
It can be stated that the expected trapezoidal
shape of the lateral sensitivity characteristic
curve will often not appear.
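The trapezoidal expectation follows from a simple model: for an ideal rectangular pixel aperture, the recorded curve is the convolution of the aperture with the spot intensity profile. The sketch below illustrates this with purely illustrative assumed numbers (15 µm pixel, 1.1 µm top-hat spot).

    import numpy as np

    dx = 0.05                                     # scan step in µm (assumed)
    x = np.arange(-20, 20, dx)
    pixel = (np.abs(x) <= 15 / 2).astype(float)   # ideal rectangular aperture
    spot = (np.abs(x) <= 1.1 / 2).astype(float)   # top-hat spot profile
    spot /= spot.sum()

    curve = np.convolve(pixel, spot, mode="same")  # ideal lateral sensitivity
    # 'curve' is a trapezoid: flat over roughly (15 - 1.1) µm with linear
    # flanks of about 1.1 µm width; deviations from this shape therefore
    # point to a non-ideal pixel aperture.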
In the following I would like to demonstrate a few examples in Figures 2 to 9.
It can be established that no significant differences could be found in the shape of the lateral sensitivity characteristics of the measured pixels within one CCD component under constant conditions of measurement.
But obvious differences in the lateral sensitivity curve shape appear under constant measuring conditions on different CCD components:
- components of one and the same type but of different producers (WF, F79/F89),
- components of one and the same producer but of different batches (F79, F89),
- different component types (WF, F79, F89, matrix),
and also within the scope of one CCD when the sampling direction or the wavelength is changed (WF, F79, F99, matrix).
Analogous measurements were also carried out on sensors with different operating modes (for example CID arrays). The presentation of the specific results caused by their operating principle would go beyond the scope of this discussion.
4. SPECIAL INVESTIGATIONS ON FRAME GRABBERS
For tasks in measuring image processing it is of essential importance how the imaging of the light-sensitive pixels onto the frame grabber is realized. Such a measurement is carried out by moving the spot image mentioned in item 3 over the whole matrix along the rows or columns and by interpreting the CCD video signal and the occupation of the frame grabber memory simultaneously (see Figures 10 and 11).
In these figures it can be seen that not all light-sensitive pixels of the matrix were imaged onto the frame grabber memory.
In addition, a displacement of the geometric centre of the matrix relative to the memory centre occurs in such an image.
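A simplified evaluation of such a scan could look as follows; it assumes that each spot position yields the index of the responding CCD pixel (taken from the video signal) and the index of the occupied memory cell. These inputs are hypothetical, not real measurement data.

    import numpy as np

    def evaluate_row_scan(ccd_pixel_idx, memory_cell_idx, memory_row_length):
        """Compare the CCD pixels that respond during the scan with the
        frame grabber memory cells that are actually occupied."""
        ccd_used = np.unique(ccd_pixel_idx)
        mem_used = np.unique(memory_cell_idx)
        # Fewer occupied memory cells than responding CCD pixels means that
        # not every light-sensitive pixel is imaged onto the memory.
        n_lost = len(ccd_used) - len(mem_used)
        # Displacement of the imaged matrix centre relative to the centre
        # of the memory row (in memory cell units).
        centre_shift = mem_used.mean() - (memory_row_length - 1) / 2
        return n_lost, centre_shift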
When imaging pixels onto the frame grabber memory, the operating mode (pixel-synchronous or non-pixel-synchronous) is of great importance.
In the non-pixel-synchronous operating mode a single pixel, irradiated in accordance with the method described in item 3, can statistically occupy several adjacent memory locations of a row in the frame grabber memory.
Furthermore, in the non-pixel-synchronous mode the space between two matrix pixels can be imaged differently within a row of the frame grabber memory.
This means that in the extreme case a circle imaged optically onto the matrix can appear in the frame grabber as an ellipse. The pixel-synchronous mode excludes effects of this kind and thus ensures an image pick-up true to the geometry. This is an indispensable condition for measuring image processing [2].
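A toy model of non-pixel-synchronous sampling, with an arbitrarily assumed clock ratio, illustrates both effects (spreading of a single pixel over adjacent memory cells and the change of the horizontal scale):

    import numpy as np

    def grab_line_non_synchronous(ccd_line, clock_ratio=0.9, phase=0.0):
        """Resample one CCD video line as a free-running grabber clock would.
        clock_ratio and phase are illustrative assumptions, not device data."""
        n_mem = int(len(ccd_line) * clock_ratio)
        # Sampling instants of the grabber clock, expressed in CCD pixel units;
        # 'phase' models the statistically varying start of sampling per line.
        t = (np.arange(n_mem) + phase) / clock_ratio
        return np.interp(t, np.arange(len(ccd_line)), ccd_line)

    # With clock_ratio != 1 a circle on the CCD is stored with a different
    # horizontal than vertical extent, i.e. it appears as an ellipse; a phase
    # varying from line to line spreads a single irradiated pixel over
    # adjacent memory cells.  Pixel-synchronous sampling corresponds to
    # clock_ratio == 1 with fixed phase, which preserves the geometry.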
During the investigations of pixel synchronism the following essential feature of frame grabbers became obvious.
The analog channel located in the input unit of the frame grabber causes a type-specific smearing of spot or edge images over several pixels within a row of the frame grabber memory (Figure 12, Figure 13).
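This smearing can be modelled as a convolution of the ideal row with the impulse response of the analog channel; the exponential response and its time constant in the sketch below are assumptions for illustration only, not measured values.

    import numpy as np

    def smear_row(ideal_row, tau_pixels=1.5, length=8):
        """Convolve one ideal row with an assumed one-sided exponential
        impulse response of the analog input channel."""
        k = np.arange(length)
        h = np.exp(-k / tau_pixels)
        h /= h.sum()                                   # unity gain
        return np.convolve(ideal_row, h)[:len(ideal_row)]

    # A single bright pixel (spot image) or a sharp edge is thereby spread
    # over several adjacent memory locations within a row, corresponding to
    # the type-specific smearing shown in Figures 12 and 13.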