Interchange of Digital Information Between Different CAMS
Digital information interchange is one of the major problems of current GIS. Many private and commercial CAMS have been used for several years to collect and manipulate digital data, each with its own database structure and its own limited capability to exchange data stored in other systems. This makes the exchange of digital information among many potential users expensive and impractical.
In 1982, the National Committee for Digital Cartographic Data Standards (NCDCDS) was organized under the leadership of the U.S. Geological Survey with the patronage of the American Congress on Surveying and Mapping. This committee published standards in the January 1988 issue of The American Cartographer, covering three subjects:
• spatial data structure
• cartographic features
• digital cartographic data quality
The basic unit for these standards is the feature, which can be graphic or non-graphic. For graphic features, the primitive elements are the line, curve, symbol, and graphic text; both 2-D and 3-D features can be transferred through this format. Non-graphic features are attribute data, which in general are text.
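To make the feature model concrete, the sketch below models a transfer feature as the standard describes it: graphic primitives (line, curve, symbol, graphic text) plus non-graphic text attributes. The class and field names are our own illustration, not part of the NCDCDS specification.

```python
from dataclasses import dataclass, field

@dataclass
class GraphicPrimitive:
    kind: str          # "line", "curve", "symbol", or "graphic text"
    coordinates: list  # 2-D (x, y) or 3-D (x, y, z) tuples

@dataclass
class Feature:
    """Basic transfer unit: graphic primitives plus non-graphic attributes."""
    primitives: list = field(default_factory=list)  # graphic component
    attributes: dict = field(default_factory=dict)  # non-graphic (text) component

# A 3-D contour line carrying its elevation as a text attribute:
contour = Feature(
    primitives=[GraphicPrimitive("line", [(0.0, 0.0, 100.0), (5.0, 2.0, 100.0)])],
    attributes={"feature_type": "contour", "elevation": "100 m"},
)
```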
The Canadian Council on Surveying and Mapping (CCSM) has also developed standards for digital data exchange, providing a national format for the exchange of topographic data. The basic unit of this format is again the feature, which has graphic components, optional attributes, and information about spatial relationships with other features. The graphic primitives are points, nodes, lines, and areas. Each feature carries a CCSM feature classification code and a unique identification number.
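As a hedged illustration of how such a feature-based format can be serialized for exchange, the function below flattens one feature, with its classification code, unique identifier, primitives, optional attributes, and relationships, into a line-oriented record. The record layout is invented for this sketch and is not the actual CCSM format.

```python
def encode_feature(code, fid, primitives, attributes, related):
    """Flatten one feature into a hypothetical line-oriented exchange record."""
    lines = [f"FEAT {code} {fid}"]             # classification code + unique id
    for kind, coords in primitives:            # points, nodes, lines, areas
        pts = " ".join(f"{x},{y}" for x, y in coords)
        lines.append(f"GEOM {kind} {pts}")
    for key, value in attributes.items():      # optional attributes
        lines.append(f"ATTR {key}={value}")
    for other in related:                      # spatial relationships
        lines.append(f"REL {other}")
    return "\n".join(lines)

print(encode_feature("RD1", 42, [("line", [(0, 0), (10, 5)])],
                     {"surface": "paved"}, [43]))
```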
In 1992 we expect the Spatial Data Transfer Standard (SDTS) to be adopted for the exchange of spatial data among U.S. agencies.
SYSTEM COMPONENTS
It is widely recognized that GIS developed out of the need to retrieve and manipulate spatial information more effectively with the help of computers. The main hardware factors that influence the performance and capacity of a computer system are word length, main memory size, processing speed, size of external storage, and the data transfer rate between external storage and main memory. Traditionally, computers were classified as [Lee, 1989]:
• large mainframe supercomputers, e.g., the Cray X-MP or Cray Y-MP
• mainframe computers, e.g., the IBM 370
• superminicomputers, e.g., the VAX 11/780 and the MicroVAX II
• minisupercomputers, e.g., the NPL and the VAX 8978
• minicomputers, e.g., the PDP 11/70
• microcomputers, e.g., the IBM PC and other personal computers
Recent developments have blurred the traditional distinctions among these classes. Annual mainframe performance/cost ratios have grown by 16%, while those of minicomputers have grown by 34%. New CPU chips for desktop workstations have 64-bit word lengths (like mainframes) and can address huge amounts of memory. Main memory (RAM) size continues to grow: sixteen megabytes (MB) in microcomputers is not uncommon, and many desktop workstations have over 64 MB. Processing speeds also continue to grow, on average by 1.5 times per year. Several single-processor CPUs deliver over 70 MIPS of processing power, and 200 MIPS workstations have recently been announced. By 1993 we will see 1,000 MIPS of processing power and 400 megaflops of floating-point performance; by 1995 we will see the adoption of much faster parallel processing technology. All this means that polygon cleaning operations that took 8 hours two years ago will be done in ten minutes this year and within three minutes by 1995.
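The per-chip trend alone can be checked with simple compound-growth arithmetic; the short sketch below does so using the figures quoted above (70 MIPS today, 1.5x per year, with 1992 assumed as the base year). The gap between this single-chip projection and the 1,000 MIPS figure reflects new architectures and parallel processing rather than clock speed alone.

```python
def projected_mips(base_mips, base_year, target_year, rate=1.5):
    """Compound growth at `rate` per year from a base-year figure."""
    return base_mips * rate ** (target_year - base_year)

# From ~70 MIPS in 1992, the 1.5x/year trend alone gives:
for year in (1993, 1995):
    print(year, round(projected_mips(70, 1992, year)))  # 105 and 236 MIPS
```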
Storage capacities are also rising, with 2.5 GB 5.25-inch disks now available. Transfer speeds from disk to main memory are rising rapidly as well, reaching 10 MB/second. Even so, disk transfer speed continues to be the bottleneck for many applications. Evolving techniques such as disk striping and parallel disk arrays will help transfer speeds keep pace with related technologies.
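A back-of-the-envelope model shows why striping helps: with data interleaved across n disks, reads and writes proceed in parallel, so the aggregate rate scales roughly with n until the controller or bus saturates. The figures and the efficiency factor below are illustrative assumptions, not vendor specifications.

```python
def striped_rate_mb_s(single_disk_mb_s, n_disks, efficiency=0.9):
    """Aggregate transfer rate of an n-disk stripe set (idealized)."""
    return single_disk_mb_s * n_disks * efficiency

print(striped_rate_mb_s(10, 4))  # four 10 MB/s disks -> 36.0 MB/s
```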
Backup tape devices are also maturing. Digital D2 technology allows 165 GB per tape, with transfer rates of 16 MB/second. A 27-terabyte robotic jukebox is available that will store 100 million 300-page books in compressed format.
As an example, a project to establish a 1 point per square meter digital terrain model (DTM) for the land portion of the earth, covering 1.0 × 10⁸ km², would contain 10¹⁴ points. It requires approximately 16 bits to document each elevation, so the total database would be 1.6 × 10¹⁵ bits (equivalent to about 1,200 digital D2 tapes or two DD2 robotic jukeboxes). As a further example, a sustainable development database for South America would contain 3 to 15 terabytes (3–15 × 10¹² bytes), or one DD2 jukebox. Storage is thus vastly improved over what we could envision in 1990, and the technology is finally reaching a level where hemispheric and global GIS databases of sufficient detail for planning are feasible.
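The DTM estimate is straightforward arithmetic; the sketch below reproduces it from the figures in the text (1 point per square meter, 16 bits per elevation, 165 GB per D2 tape).

```python
land_area_km2 = 1.0e8            # land portion of the earth
points = land_area_km2 * 1.0e6   # 1 point per square meter -> 1e14 points
bits = points * 16               # 16 bits per elevation -> 1.6e15 bits
bytes_total = bits / 8           # 2e14 bytes, i.e. 200 terabytes

d2_tape_bytes = 165e9            # 165 GB per digital D2 tape
print(round(bytes_total / d2_tape_bytes))  # ~1,212 tapes
```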
Database management systems are also improving in efficiency. Distributed databases with two-phase commit are finally here, allowing data to be stored at multiple user locations yet accessed over networks; fiber optic networks will be needed for large data volumes. Most GIS keep their graphics database separate from the associated attributes. This is unfortunate (though understandable for performance reasons), since the two can easily fall out of synchronization, and with large databases it will be difficult to maintain database integrity. We hope that more GIS will integrate graphics and attribute data in a single database in the future.
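To illustrate the two-phase commit protocol mentioned above, here is a minimal sketch: a coordinator first asks every site to prepare (stage the update and vote), and only if all vote yes does it issue the commit; any refusal rolls everything back. The Participant interface and the graphics/attribute site names are invented for this example.

```python
class Participant:
    """One site in a distributed database, e.g. a graphics or attribute store."""
    def __init__(self, name):
        self.name = name
        self.staged = None

    def prepare(self, update):
        """Phase 1: stage the update and vote on whether it can be applied."""
        self.staged = update
        return True  # a real site could vote "no" (disk full, lock conflict, ...)

    def commit(self):
        print(f"{self.name}: committed {self.staged!r}")

    def rollback(self):
        self.staged = None
        print(f"{self.name}: rolled back")

def two_phase_commit(participants, update):
    # Phase 1: nothing becomes permanent until every site votes yes.
    if all(p.prepare(update) for p in participants):
        for p in participants:   # Phase 2: make the change durable everywhere
            p.commit()
        return True
    for p in participants:       # any "no" vote aborts the update at every site
        p.rollback()
    return False

two_phase_commit([Participant("graphics_db"), Participant("attribute_db")],
                 "update parcel 1017")
```

Running graphics and attribute updates under one such transaction is what keeps the two stores from drifting out of synchronization.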
A conceptual and fundamental GIS operation is shown in Figure 1. The primary objective of any GIS is to collect information...