We have to streamline these standards through mass production and build more and more similar or identical hardware to bring unit costs down. We should also develop more space hardware through open research institutions and universities, enabling the free flow of information and flight hardware to peaceful and scientific missions. Once that is done, the use of satellites for business will become as common as the use of computers. And we should not forget that the computer industry was once in the same place the space industry is today. The quote attributed to IBM CEO Thomas Watson from 1943, "I think there is a world market for maybe five computers," reminds us that many industries have fallen into the same pitfalls, yet later established themselves as industrial and economic strongholds.
2. THE COST OF A SPACE MISSION TODAY
The cost of a space mission today is the sum of many factors: the cost of R&D for non-standard payloads and subsystems, the cost of acquiring the hardware, the integration costs, the cost of the launch system and the cost of operations. In many cases the potential industry customer can control the costs of development, integration and testing, yet has almost no control over the cost of operations or launch. These costs depend on spacecraft size, mass and the degree of on-board autonomy, and can only be reduced by changing the design of the satellite itself. In most cases the cost-per-weight ratio of the launch stays the same for the customer. Therefore the first area of improvement is the development and the hardware procurement itself.
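As a rough illustration of how these factors add up, the short sketch below sums the main cost components for a hypothetical small satellite; every figure in it, including the cost-per-kilogram launch rate, is a placeholder assumption and not data from any real mission.

    # Rough mission-cost sketch; every number here is a placeholder assumption.
    def mission_cost(rd_cost, hardware_cost, integration_cost,
                     dry_mass_kg, launch_cost_per_kg,
                     ops_cost_per_year, mission_years):
        """Total mission cost in USD as the sum of its main components."""
        launch_cost = dry_mass_kg * launch_cost_per_kg        # launch scales with mass
        operations_cost = ops_cost_per_year * mission_years   # ops scale with duration
        return (rd_cost + hardware_cost + integration_cost
                + launch_cost + operations_cost)

    # Hypothetical 150 kg satellite, 20000 USD/kg to orbit, 5-year mission.
    total = mission_cost(rd_cost=5e6, hardware_cost=8e6, integration_cost=2e6,
                         dry_mass_kg=150, launch_cost_per_kg=20000,
                         ops_cost_per_year=1e6, mission_years=5)
    print(f"total mission cost: {total / 1e6:.1f} MUSD")      # 23.0 MUSD here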
3. A STANDARD ARCHITECTURE
3.1 Solutions for Structures
The structures subsystem supports all other spacecraft
subsystems and its design must satisfy all strength and
stiffness requirements imposed on it. Traditionally, the design of the structures subsystem follows an iterative procedure (Wertz and Larson, 1999), a simplified sketch of which follows the list:
— Identify requirements
— Develop packaging configurations
— Consider design options
— Choose test and analysis criteria
— Size members
— Check whether requirements are met and iterate as needed
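A minimal sketch of the "size members, check requirements, iterate" part of this loop is given below. It sizes one of four hypothetical payload support struts against an assumed allowable stress and Euler buckling; the material properties, loads and factor of safety are placeholder assumptions, not values from any real launch vehicle manual.

    import math

    # Sizing sketch for one of four payload support struts; all numbers below
    # are placeholder assumptions for illustration only.
    E = 70e9                          # aluminium Young's modulus, Pa
    SIGMA_ALLOW = 270e6               # assumed allowable stress, Pa
    LENGTH = 0.8                      # strut length, m
    LOAD = 0.25 * 500 * 9.81 * 6.0    # one strut's share of a 500 kg payload at 6 g, N
    FS = 1.25                         # factor of safety

    radius = 2e-3                     # start with a 2 mm solid rod and grow it
    while True:
        area = math.pi * radius ** 2
        inertia = math.pi * radius ** 4 / 4
        stress = LOAD / area                               # axial stress in the member
        p_crit = math.pi ** 2 * E * inertia / LENGTH ** 2  # Euler buckling load
        if stress * FS <= SIGMA_ALLOW and LOAD * FS <= p_crit:
            break                                          # both requirements met
        radius += 0.5e-3                                   # size up and iterate
    print(f"required strut radius: {radius * 1e3:.1f} mm")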
The structure design must account for loads exerted in all
mission phases: manufacturing and assembly, transportation
and handling, testing, pre-launch, launch and ascent, and
mission operations. In most cases, the critical loads that drive
the primary structure design are those found during the launch
phase of the mission:
— Steady-state booster acceleration
— Vibration and acoustic noise during launch and
transonic phase
— Vibrations from the propulsion system engines
— Transient loads during booster ignition and burn-out,
vehicle manoeuvres, propellant slosh and stage and
payload separation
— Pyrotechnic shock from separation events
For a given set of satellites with comparable masses, orbital altitudes and launch vehicles, the requirements imposed on the structures subsystem are very similar, and a set of enveloping conditions and loads can be defined. A standard structure that meets these enveloping requirements can be designed and tested. Such a structure would incorporate a “best-practice” approach and would also include interfaces to different launch vehicles. Its use would reduce the number of design iterations needed for the satellite design, not only for the structures group but also for the other subsystems. The result would be a reduction in design time and cost. On the other hand, the resulting spacecraft would have a structure that is not optimal for the mission and carries more mass than actually needed, leaving less mass for the other subsystems.
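One simple way to construct such an envelope is to take, for each load parameter, the worst case over all candidate launch vehicles. The sketch below does this for a few illustrative parameters; the launcher names and all figures are placeholder assumptions, not values taken from real user manuals.

    # Build an enveloping load set as the worst case over candidate launchers.
    # Launcher names and figures are illustrative placeholders only.
    candidate_launchers = {
        "launcher_A": {"axial_g": 6.0, "lateral_g": 2.0, "min_axial_freq_hz": 31.0},
        "launcher_B": {"axial_g": 7.5, "lateral_g": 1.5, "min_axial_freq_hz": 35.0},
        "launcher_C": {"axial_g": 5.5, "lateral_g": 2.5, "min_axial_freq_hz": 27.0},
    }

    envelope = {
        # the highest acceleration on any candidate drives the strength design
        "axial_g": max(v["axial_g"] for v in candidate_launchers.values()),
        "lateral_g": max(v["lateral_g"] for v in candidate_launchers.values()),
        # the strictest stiffness requirement drives the frequency design
        "min_axial_freq_hz": max(v["min_axial_freq_hz"]
                                 for v in candidate_launchers.values()),
    }
    print(envelope)   # {'axial_g': 7.5, 'lateral_g': 2.5, 'min_axial_freq_hz': 35.0}

A structure qualified against such an envelope can then fly on any of the candidate launchers with the primary load cases already covered.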
3.2 Solutions in On-Board Data Handling
In the area of on-board data handling, big savings in design can
be made. Not only does cheaper hardware lower overall
satellite costs, but well-written software and new technologies
in computer science and electronics engineering make it
possible to operate the spacecraft more autonomously, thus
reducing the cost of operations.
First of all, we have to realize that, electronically speaking, a satellite is not the most complex system in the world. The amount of work the command system of a satellite has to do in a given time frame does not come close to the amount of work done by commonplace consumer devices such as a game console or a high-end PDA; yet, compared in cost, the systems flown on satellites are far more expensive than the $200 system sitting right underneath the television.
Terrestrial computer systems and electronics have it easy on our planet. They do not have to deal with the harsh environment outside of our atmosphere. As a result they are not usable right out of the box for space programs, where radiation, vacuum and atomic oxygen might affect their reliability and lifetime.
So what is the way to shield our satellite computer against these environmental hazards? In the past, the approach was to put a big (huge in satellite terms), heavy shield around the computing system of the satellite and to keep the board voltage and the energy density on the board high. These measures quickly added to the overall mass of the satellite as well as to its power consumption, leading to more solar cells, bigger batteries, more heat and, therefore, active thermal management.
Today many other critical industries take a far better approach for computer systems that are exposed to hazards capable of bringing the system down. The key concept in this case is "tolerance" as opposed to "shielding". It is far easier to build systems today that are tolerant to the effects of the space environment than to shield them completely against those effects. For a mere fraction of the weight of a shielding system around the on-board computer, two more CPUs and two more memory chips can be installed, creating a voting system in which the results of all three units are compared and, if one deviates from the other two, its result is discarded. Such systems are easily implemented, and a lot of research has been done in computer science, in the area of parallel and redundant processing, on algorithms that ensure data quality in case of a single-event upset in one of the computers.
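A minimal sketch of such a two-out-of-three voter is given below; it assumes that the three redundant units each return a result for the same computation and is only meant to illustrate the majority-vote idea, not any specific flight implementation.

    # Two-out-of-three majority voter over triple-redundant computation results.
    # Illustrative sketch only; a real flight voter would also scrub and restore
    # the state of the disagreeing unit.
    def vote(results):
        """Return the majority value of three redundant results and the odd unit out."""
        a, b, c = results
        if a == b or a == c:
            majority = a
        elif b == c:
            majority = b
        else:
            raise RuntimeError("all three units disagree - no majority")
        faulty = [i for i, r in enumerate(results) if r != majority]
        return majority, faulty

    # Example: unit 1 suffers a single-event upset and returns a corrupted value.
    value, faulty_units = vote([42, 1066, 42])
    print(value, faulty_units)   # 42 [1]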
With the added tolerance, the energy density needed on the printed circuit boards can be reduced, actually decreasing the overall power usage. This tolerance can also enable the use of more