International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B5. Istanbul 2004
are initialized for the Kalman filter using the result of bundle block
adjustment. That is, after every bundle block adjustment, i.e. every
10 seconds, the GPS and IMU data and their errors are complemented.
Figure 5 shows the strapdown navigation algorithm for integrating IMU
and GPS data with the result of bundle block adjustment.
[Figure 5 block diagram: the IMU (A: acceleration, G: gyro, 200 Hz) and
RTK-GPS (P: position, V: velocity) feed a Kalman filter (1 Hz), which is
complemented by the image bundle adjustment (0.1 Hz) to output corrected
position, velocity, and attitude.]
Figure 5. Strapdown navigation algorithm
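The complementary role of the bundle adjustment fix can be illustrated with a minimal scalar Kalman update. This is a hedged sketch, not the paper's implementation: the noise values, the biased velocity, and the 10 s fix interval are illustrative assumptions chosen to show how a drifting IMU dead-reckoning estimate is pulled back toward the bundle-block-adjustment result.

```python
# Sketch only: scalar Kalman filter complementing IMU drift with a
# bundle-block-adjustment position fix every 10 s. All numeric values
# (noise, bias) are illustrative assumptions, not the paper's parameters.

def kalman_update(x_pred, p_pred, z, r):
    """One measurement update: blend the prediction with external fix z."""
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)  # corrected state
    p_new = (1.0 - k) * p_pred         # reduced uncertainty
    return x_new, p_new

def propagate(x, p, v, dt, q):
    """IMU dead-reckoning prediction; variance grows with process noise q."""
    return x + v * dt, p + q * dt

# Simulate 10 s of 200 Hz IMU propagation, then one bundle-adjustment fix.
x, p = 0.0, 0.01          # initial position (m) and variance
v_imu = 1.001             # biased IMU-derived velocity (true v = 1.0 m/s)
dt, q = 1.0 / 200, 1e-4
for _ in range(2000):     # 10 s at 200 Hz
    x, p = propagate(x, p, v_imu, dt, q)

z_ba = 10.0               # bundle adjustment reports the true position
x, p = kalman_update(x, p, z_ba, r=0.05**2)
print(round(x, 3))        # estimate pulled back toward the fix
```

In the real system the state is of course multi-dimensional (position, velocity, attitude, and sensor biases), but the structure is the same: predict at the IMU rate, correct whenever an external fix arrives.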
3.2 Geo-referencing
While measuring, the platform, including all sensors, continuously
changes its position and attitude with respect to time. For direct
geo-referencing of the laser range data, the corrected position and
attitude data are used. Geo-referencing of the range data is carried
out by a 3D Helmert transformation, computing a rotation matrix and a
shift vector from the time-referenced IMU data and the calibration
parameters. All the points scanned by the laser scanner, x_l, and by
the digital camera, (x_u, y_u), in terms of the local coordinate
system are converted to the world coordinate system as given by
Eq. (5) and Eq. (6), where t denotes time. The rotation matrix, R(t),
and the shift vector, S(t), change with time because of the drift of
the IMU. However, in this research the IMU is corrected over time by
the Kalman filter and the bundle block adjustment.
X_w = (R_w(t) * R_c) * x_l + (S_w(t) + S_c)    (5)

(x_u, y_u) = f(R_w(t), S_w(t), X_w)            (6)

where f( ): collinearity condition equation, R_w(t), S_w(t): time-dependent
platform rotation and shift, and R_c, S_c: calibration parameters.
Therefore, geo-referencing of the range data and the CCD images is
done directly, so that they overlap exactly with high accuracy and
high resolution.
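The transformation of Eq. (5) can be sketched as follows. This is a minimal illustration under assumed values: the heading angle, shift vectors, and calibration parameters are invented for the example and are not the paper's calibration.

```python
# Sketch of Eq. (5): geo-referencing one laser range point.
# R_w(t), S_w(t): time-interpolated platform rotation and shift;
# R_c, S_c: fixed calibration (boresight / lever-arm) parameters.
# All numeric values below are illustrative assumptions.
import math

def rot_z(a):
    """3x3 rotation about the vertical axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def georeference(x_laser, r_w, s_w, r_c, s_c):
    """Eq. (5): X_w = (R_w(t) * R_c) * x_laser + (S_w(t) + S_c)."""
    r = mat_mul(r_w, r_c)
    x = mat_vec(r, x_laser)
    return [x[i] + s_w[i] + s_c[i] for i in range(3)]

# Example: platform heading 90 deg, laser point 5 m ahead in sensor frame.
Xw = georeference([5.0, 0.0, 0.0],
                  rot_z(math.pi / 2), [100.0, 200.0, 30.0],
                  rot_z(0.0), [0.1, 0.0, 0.5])
print([round(c, 3) for c in Xw])   # -> [100.1, 205.0, 30.5]
```

In practice R_w(t) and S_w(t) are interpolated from the corrected IMU trajectory at each laser point's time stamp, so every range measurement gets its own rotation and shift.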
3.3 Construction of DSM
The point cloud data acquired by the laser scanner is geo-referenced
using the corrected IMU data and is thus expressed in the world
coordinate system. The image data are overlaid on the geo-referenced
point cloud data. The integrated point cloud data matches the image
data well because the IMU data was corrected against the image data
using the result of bundle block adjustment.
The DSM is a 3D model of the object surface that can be manipulated
using a computer. It consists of 3D measurements laid out on a grid;
these measurements are the 3D point cloud data derived from the laser
scanner.
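The gridding step can be sketched as follows. The paper does not specify its gridding rule, so this is an assumed, minimal scheme: points are binned by their (x, y) coordinates and the highest z in each cell is kept as the surface height; the cell size is an illustrative choice.

```python
# Sketch only: building a simple DSM grid from geo-referenced points by
# keeping the highest z value falling in each cell. The highest-return
# rule and the cell size are assumptions, not the paper's method.

def build_dsm(points, cell=1.0):
    """Map (x, y, z) points onto a grid; one surface height per cell."""
    dsm = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))   # grid cell index
        if key not in dsm or z > dsm[key]:       # keep the highest return
            dsm[key] = z
    return dsm

pts = [(0.2, 0.3, 10.0), (0.8, 0.1, 12.5),   # same cell: higher z wins
       (1.4, 0.5, 3.0)]
grid = build_dsm(pts, cell=1.0)
print(grid)   # {(0, 0): 12.5, (1, 0): 3.0}
```

Keeping the maximum height per cell yields a surface model (roofs, tree tops) rather than a terrain model; a minimum-height rule would approximate the bare ground instead.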
4. FEATURE EXTRACTION
Feature extraction is conducted using both range data and image data.
The geometric shape acquired by the laser scanner detects features,
while the texture information acquired by the digital camera details
those features. That is, more detailed extraction is possible using
both the 3D shapes and the colors and textures.
4.1 Feature Extraction Procedure
Features can be detected from both range data and image data. In the
range data, there is basic information for dividing the points into
several groups: ground surface, horizontal plane, vertical plane,
scatter points, and line features (Manandhar, D., Shibasaki, R.,
2002). Usually, natural features like trees or weeds have scattered
range points, whereas man-made features follow geometric patterns such
as horizontally and vertically aligned points. However, for detailed
extraction this is not always true: some man-made features with very
complicated shapes, such as bicycles or decorated objects, also have
scattered range points. From this point of view, image data is used to
complement the range data for further detail. Figure 6 shows the
feature extraction procedure for the acquired data. First, the range
data is used to group all the features by common characteristics.
Then, segmentation is conducted using the color and edges of the
images; the image data has a higher resolution than the range data.
Finally, the range data and image data are integrated for detailed
extraction.
Data: Detected Features

Range Data:
1. Extraction of ground surface (height data frequency analysis)
2. Grouping of remaining range points (horizontally or vertically
   aligned points, scatter points, line features, individual points)

Image Data:
1. Segmentation (edge and color)
2. Color or texture information

Integrated Data:
1. Colored ground surface (asphalt, soil, etc.)
2. Colored planes (buildings, etc.)
3. Colored scatter points (vegetation or man-made features)
4. 3D segmentation
Figure 6. Feature extraction procedure
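The range-data grouping step of this procedure can be sketched with simple height and alignment heuristics. This is a hedged illustration: the paper does not give its grouping rules, and the thresholds, cell size, and column-count criterion below are invented assumptions.

```python
# Sketch only: classify range points as ground, vertically aligned
# (wall-like, man-made), or scatter. The thresholds and the column
# heuristic are illustrative assumptions, not the paper's algorithm.

def classify_points(points, ground_tol=0.3, min_column=3, cell=0.5):
    """Label each (x, y, z) point 'ground', 'vertical', or 'scatter'."""
    zmin = min(z for _, _, z in points)
    # Count points per (x, y) cell: many returns stacked in one small
    # footprint suggest a vertically aligned structure such as a wall.
    columns = {}
    for x, y, _ in points:
        key = (int(x // cell), int(y // cell))
        columns[key] = columns.get(key, 0) + 1
    labels = []
    for x, y, z in points:
        if z - zmin < ground_tol:
            labels.append("ground")          # close to the lowest surface
        elif columns[(int(x // cell), int(y // cell))] >= min_column:
            labels.append("vertical")        # dense vertical column
        else:
            labels.append("scatter")         # isolated / irregular point
    return labels

pts = [(0.0, 0.0, 0.0), (5.0, 0.1, 0.1),                   # ground
       (2.0, 2.0, 1.0), (2.1, 2.0, 2.0), (2.0, 2.1, 3.0),  # wall column
       (8.0, 8.0, 1.5)]                                    # lone scatter
print(classify_points(pts))
```

The remaining groups in Figure 6 (line features, individual points) would need further tests, e.g. fitting lines to neighbouring points, and the scatter class is then refined using the image color and texture, as described above.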
4.2 Feature Extraction Results
Finally, feature extraction is conducted. Range data and image
data are used for feature extraction. List of extracted feature 1s
shown in Table 5. In this research, only several main features
are extracted, because the test site of this research is limited.