It must be pointed out that the distributed implementation of MCTM and
RCTM is possible thanks to the HSPP ring bus protocol. As explained in
(2), each ring bus slot is statically allocated to a group of memory
chips. This realisation of the protocol provides high-rate sequential,
random or deterministic data communication via time-multiplexed subflows,
thus avoiding memory access collisions at chip level. Hence medium-speed,
low-power-consumption CMOS random access memories can be used.
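As an illustration, the following minimal C sketch shows how a static
slot table excludes collisions by construction. The frame length, group
count and all identifiers are assumptions chosen for the example, not
part of the HSPP specification.

    #include <stdio.h>

    /* Static slot allocation on the ring bus: each slot of the
     * recurring frame is bound at design time to one group of memory
     * chips, so two transfers can never address the same chip group
     * in the same slot. All numbers and names here are illustrative. */

    #define SLOTS_PER_FRAME 8
    #define CHIP_GROUPS     8

    /* slot s is owned by chip group slot_owner[s] */
    static const int slot_owner[SLOTS_PER_FRAME] = {0, 1, 2, 3, 4, 5, 6, 7};

    /* A transfer towards chip group g may only use the slot that
     * group owns. */
    static int slot_for_group(int g)
    {
        int s;
        for (s = 0; s < SLOTS_PER_FRAME; s++)
            if (slot_owner[s] == g)
                return s;
        return -1;
    }

    int main(void)
    {
        int g;
        /* Every group gets exactly one slot per frame: a
         * time-multiplexed subflow, collision-free by construction. */
        for (g = 0; g < CHIP_GROUPS; g++)
            printf("chip group %d <- slot %d\n", g, slot_for_group(g));
        return 0;
    }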
Synchronisation of the entire process is performed in a very simple way.
Every time a set of lines is entered into MCTM by the input ABI's, they
set a flag in each PBI. When this flag is set, the azimuth processors can
begin processing the corresponding data and then reset the flag. Since
the input instrument has a fixed rate, the input ABI's never test the
flag: the azimuth processors must have finished their job in time.
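A minimal C sketch of this one-way handshake is given below; the
structure and function names are hypothetical, chosen only to mirror the
description above.

    #include <stdio.h>

    /* One-way handshake between an input ABI and an azimuth
     * processor. All identifiers are hypothetical. */

    struct pbi {
        volatile int lines_ready;   /* flag located in the PBI */
    };

    /* Input ABI: called once per set of lines entered into MCTM.
     * It sets the flag but, running at the fixed instrument rate,
     * never reads it back. */
    static void input_abi_store_lines(struct pbi *p)
    {
        /* ... write the set of lines into MCTM over the ring bus ... */
        p->lines_ready = 1;
    }

    /* Azimuth processor: start processing when the flag is set,
     * then reset it. It must finish before the next set arrives. */
    static void azimuth_processor_step(struct pbi *p)
    {
        if (p->lines_ready) {
            /* ... azimuth processing of the corresponding data ... */
            p->lines_ready = 0;
        }
    }

    int main(void)
    {
        struct pbi p = { 0 };
        input_abi_store_lines(&p);
        azimuth_processor_step(&p);
        printf("flag after processing: %d\n", p.lines_ready);
        return 0;
    }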
Similarly, every time a block of data is ready in RCTM, the azimuth
processor sets a flag in its PBI. When the deskew processors have
finished a set of tasks, they test this flag looking for another set of
tasks. In doing so, they create negligible extra traffic on the ring
busses.
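The corresponding deskew-side discipline, in which the flag is polled
only between task sets, can be sketched as follows (again with
hypothetical identifiers):

    #include <stdio.h>

    /* Deskew-side handshake: the flag is polled only between task
     * sets, so it costs at most one ring-bus read per completed set.
     * All identifiers are hypothetical. */

    struct pbi {
        volatile int block_ready;   /* set by the azimuth processor */
    };

    static void azimuth_announce_block(struct pbi *p)
    {
        p->block_ready = 1;         /* a block of data is ready in RCTM */
    }

    static void deskew_processor(struct pbi *p, int task_sets)
    {
        while (task_sets-- > 0) {
            /* ... execute the current set of deskew tasks ... */
            if (p->block_ready) {   /* single poll between task sets */
                p->block_ready = 0;
                printf("deskew: next block claimed\n");
            }
        }
    }

    int main(void)
    {
        struct pbi p = { 0 };
        azimuth_announce_block(&p);
        deskew_processor(&p, 1);
        return 0;
    }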
The total ring bus throughput is about 18 MB/s: 16.7 MB/s from the input
ABI's to MCTM and the mean Doppler processor, plus 1.5 MB/s from RCTM to
the deskew processors' ABI. Such a throughput is compatible with the
20 MB/s system performance, essentially because the chosen ring bus
protocol wastes no bus traffic (see (1)). For higher throughputs,
multiple-bus HSPP configurations such as those described in (2) would
have been needed. The total throughput on all local busses is about
50 MB/s, but this does not constitute a bottleneck since it is divided
over 28 abonnee local busses.
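These figures can be checked with a short back-of-the-envelope
computation (throughputs taken from the text; the per-bus figure is a
simple average assumed here for illustration):

    #include <stdio.h>

    int main(void)
    {
        double input_to_mctm  = 16.7;  /* MB/s, input ABI's -> MCTM and
                                          mean Doppler processor */
        double rctm_to_deskew =  1.5;  /* MB/s, RCTM -> deskew ABI */
        double ring_capacity  = 20.0;  /* MB/s system performance */
        double local_total    = 50.0;  /* MB/s over all local busses */
        int    local_busses   = 28;

        double ring_total = input_to_mctm + rctm_to_deskew;  /* ~18.2 */

        printf("ring bus load: %.1f of %.1f MB/s (%.0f%%)\n",
               ring_total, ring_capacity,
               100.0 * ring_total / ring_capacity);
        printf("mean load per abonnee local bus: %.1f MB/s\n",
               local_total / local_busses);
        return 0;
    }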
To protect the processing system against failures, spare modules should
be added.
A main advantage of the HSPP system as described in (3) is that the ring
busses are software reconfigurable with no loss in performance if one
abonnee fails. Hence a reliable architecture in which every function is
provided with a spare module includes 11 input ABI's instead of the
minimum of nine necessary, 23 PBI's, 21 clusters of 8 azimuth processors
instead of 20, and 2 deskew ABI's and 2 clusters of 6 deskew processors
instead of one of each.
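The resulting processor counts can be multiplied out as follows (all
counts from the text; the sketch merely totals them):

    #include <stdio.h>

    int main(void)
    {
        /* Module counts of the reliable configuration (from the text). */
        int azimuth_clusters = 21, azimuth_per_cluster = 8;  /* 20 minimum */
        int deskew_clusters  =  2, deskew_per_cluster  = 6;  /*  1 minimum */

        printf("azimuth processors: %d (minimum %d)\n",
               azimuth_clusters * azimuth_per_cluster,
               20 * azimuth_per_cluster);                    /* 168 vs 160 */
        printf("deskew processors:  %d (minimum %d)\n",
               deskew_clusters * deskew_per_cluster,
               1 * deskew_per_cluster);                      /* 12 vs 6 */
        return 0;
    }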
Mass and power consumption estimates for the entire processing system are
given in Table 3, assuming semi-custom implementations of the ABI and PBI
modules. 8K x 8-bit CMOS static RAMs have been considered. Owing to the
low access frequency of these memories, their power consumption is close
to their standby power consumption (0.5 mW). Only switched-on processors
have been taken into account in the power consumption estimate.
Nevertheless, a large part (70%) of the system power consumption is due
to the digital signal processors. Two remarks can be made concerning this
point. Firstly, the presently available NMOS Texas Instruments TMS-320
has been assumed. In fact, a CMOS version of the TMS-320 has been
announced which runs at one instruction every 120 ns instead of one every
200 ns. Hence, even if chip power consumption is not reduced, total
system power consumption will be reduced by about 40%, since 96 azimuth
processors will be needed instead of 160.
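The scaling behind these numbers is straightforward, as the following
sketch verifies (cycle times from the text; per-chip power is taken as
unchanged, as assumed above):

    #include <stdio.h>

    int main(void)
    {
        double t_nmos = 200.0;   /* ns per instruction, NMOS TMS-320 */
        double t_cmos = 120.0;   /* ns per instruction, announced CMOS part */
        int    n_nmos = 160;     /* azimuth processors with the NMOS part */

        /* Throughput per chip scales with 1/t, so the required number
         * of processors scales with t. */
        double n_cmos = n_nmos * t_cmos / t_nmos;            /* 96 */

        printf("CMOS azimuth processors needed: %.0f\n", n_cmos);
        printf("reduction in processor count: %.0f%%\n",
               100.0 * (1.0 - n_cmos / n_nmos));             /* 40 */
        return 0;
    }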
The second observation is that the TMS-320 has been assumed for every
type of computation, including FFT's. Compared to presently available
alternatives, it is quite attractive even for FFT's and has the advantage
of being a standard HSPP component. However, it is likely that
in the near future efficient FFT chips like the one described in (4) will
be available.