Figure 6. Example of OCM
2.4 Robust Common Feature Point Tracking
Figure 7 shows the flow of the robust tracking procedure for common
feature points. First, common feature points are detected on the first
high-resolution still image by OCR. The image coordinates of each
detected common feature point are converted to image coordinates on the
synchronized video frame. The common feature points are then tracked on
the video image sequence until the next still image is detected. On
that still image, the image coordinates of the tracked common feature
points are converted to the next high-resolution still image, and
common feature points are detected again. This procedure is repeated
until the final still image.
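As a minimal, self-contained sketch of this tracking loop, the search for each feature around its previous position can be illustrated on synthetic images, with normalized cross-correlation standing in for the paper's OCM similarity (the blob data, search radius, and all function names below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Sketch: track one feature point across video frames by searching a
# small window around its previous position. NCC stands in for OCM.

def ncc(a, b):
    """Normalized cross-correlation between two equal-shape patches."""
    a = a - a.mean(); b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d > 0 else 0.0

def track(frame, template, prev, radius=5):
    """Search a (2*radius+1)^2 window around `prev` for the best match."""
    h, w = template.shape
    best, best_pt = -2.0, prev
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = prev[0] + dy, prev[1] + dx
            patch = frame[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue                  # window fell off the image
            s = ncc(patch, template)
            if s > best:
                best, best_pt = s, (y, x)
    return best_pt

# Synthetic test: a bright blob shifts by (1, 2) px per frame.
rng = np.random.default_rng(0)
blob = rng.random((7, 7)) + np.eye(7) * 3
def make_frame(y, x):
    f = rng.random((60, 60)) * 0.1       # low-amplitude background noise
    f[y:y + 7, x:x + 7] += blob
    return f

pt = (20, 20)
template = make_frame(*pt)[20:27, 20:27]
for step in range(1, 4):
    frame = make_frame(20 + step, 20 + 2 * step)
    pt = track(frame, template, pt)
print(pt)  # → (23, 26)
```

In the paper's procedure this search is repeated frame by frame, so each feature only needs to be found near its previous position rather than over the whole image.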
[Figure 7 flowchart:
Detection of common feature points on still image (OCR)
→ Convert image coordinates onto the synchronized video frame
→ Matching of common feature points on video images (OCM)
→ Detection of outliers in the matching result
  (computation of the transform between frames using LMedS)
→ if not the final still image: convert image coordinates onto the
  next still image and repeat the detection and matching loop
→ Sub-pixel matching on still images (LSM)
→ Candidates of pass points]

Figure 7. Flow of Robust Tracking Procedure
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B7, 2012
XXII ISPRS Congress, 25 August — 01 September 2012, Melbourne, Australia
In this procedure, the matching of common feature points between video
frames is performed by OCM. Most points are therefore matched
correctly, but the results generally include some outliers with respect
to the dominant transformation between video frames. Detection of
outliers among the common feature points is therefore necessary.
In this procedure, the authors assume that the transformation between
consecutive video frames is approximated by an affine transformation,
as in the following equation:

u_{i+1} = a1*u_i + a2*v_i + a3
v_{i+1} = a4*u_i + a5*v_i + a6        (5)

where a1–a6 are the affine parameters and (u_i, v_i) are the image
coordinates at frame i. The affine parameters a1–a6 are calculated
using the LMedS method, and matching outliers are eliminated
automatically.
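As an illustrative sketch of this step (not the paper's implementation), LMedS estimation of the six affine parameters of Eq. (5) and the subsequent outlier elimination can be written as follows; the trial count and the residual threshold are assumed values:

```python
import numpy as np

# Sketch of LMedS-based outlier rejection for the affine model of
# Eq. (5): repeatedly fit the six affine parameters to 3 random
# correspondences, keep the fit with the least median of squared
# residuals, then flag points whose residual exceeds a threshold.

def fit_affine(src, dst):
    """Solve dst = A @ [u, v, 1] for the 2x3 affine matrix A."""
    X = np.hstack([src, np.ones((len(src), 1))])   # n x 3
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3 x 2
    return A.T                                     # 2 x 3

def lmeds_affine(src, dst, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    best_A, best_med = None, np.inf
    for _ in range(trials):
        idx = rng.choice(len(src), 3, replace=False)
        A = fit_affine(src[idx], dst[idx])
        pred = src @ A[:, :2].T + A[:, 2]
        med = np.median(((dst - pred) ** 2).sum(axis=1))
        if med < best_med:
            best_med, best_A = med, A
    pred = src @ best_A[:, :2].T + best_A[:, 2]
    resid = np.sqrt(((dst - pred) ** 2).sum(axis=1))
    inliers = resid < 2.5 * max(np.sqrt(best_med), 1e-6)
    return best_A, inliers

# Synthetic check: 20 points under a known affine map, 3 gross outliers.
rng = np.random.default_rng(1)
src = rng.random((20, 2)) * 100
A_true = np.array([[1.01, 0.02, 5.0], [-0.02, 0.99, -3.0]])
dst = src @ A_true[:, :2].T + A_true[:, 2]
dst[:3] += 50.0                                    # corrupt three matches
A, inliers = lmeds_affine(src, dst)
print(inliers[:3], inliers[3:].all())  # → [False False False] True
```

Because the median is robust up to 50% contamination, the recovered parameters match the true map even though three correspondences are grossly wrong.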
Finally, in order to achieve sub-pixel matching, Least Squares Matching
(LSM) of each common feature point is performed on each still image. In
this LSM processing, both forward and backward matching are performed
to check the LSM result: if the result of forward matching (e.g. from
the left image to the right image) and backward matching (e.g. from the
right image to the left image) differ largely, the point is removed as
an erroneous correspondence. The points that pass this sub-pixel
matching procedure are used as candidate pass points in the following
procedures.
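The forward-backward consistency check can be sketched as follows; `match_fwd` and `match_bwd` are toy stand-ins for the paper's LSM, and the 0.5 px tolerance is an assumed value:

```python
import numpy as np

# Sketch of the forward-backward check described above: a point is kept
# only if matching left->right and then right->left returns close to
# where it started.

def cross_check(points, match_fwd, match_bwd, tol=0.5):
    kept = []
    for p in points:
        q = match_fwd(p)                 # left image -> right image
        p_back = match_bwd(q)            # right image -> left image
        if np.hypot(p_back[0] - p[0], p_back[1] - p[1]) < tol:
            kept.append((p, q))          # consistent round trip
    return kept

# Toy stand-ins: a true disparity of (+4, 0) px; one point "drifts".
true_shift = np.array([4.0, 0.0])
def match_fwd(p):
    return np.asarray(p) + true_shift
def match_bwd(q):
    q = np.asarray(q)
    if q[0] > 100:                       # simulate a bad backward match
        return q - true_shift + np.array([3.0, 0.0])
    return q - true_shift

pts = [np.array([10.0, 10.0]), np.array([120.0, 40.0])]
pairs = cross_check(pts, match_fwd, match_bwd)
print(len(pairs))  # → 1
```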
2.5 Tie Point Matching
In order to perform the adjustment between flight lines, tie point
matching using the candidate pass points is performed. Figure 8 shows
the flow of the tie point matching procedure. First, exterior
orientation using only the candidate pass points is performed.
Candidate pass points with large image-coordinate residuals are removed
in this step. As a result of this exterior orientation, the 3D
coordinates of each candidate pass point and the exterior orientation
parameters of each still image are obtained in an arbitrary position
and scale. Using this 3D geometry, the 3D coordinates of each candidate
pass point can be back-projected to 2D image coordinates on each still
image. Tie point matching is then performed by LSM within an ROI
located around the back-projected 2D image coordinates of the candidate
pass points on the still image. Finally, exterior orientation is
computed again using the pass points and tie points.
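As a sketch of the back-projection step, the collinearity equations map a 3D candidate pass point to 2D image coordinates, around which an ROI is set for LSM; the camera parameters, sign convention, and ROI size below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sketch: back-project a 3D candidate pass point into a still image via
# the collinearity equations, then set a search ROI around the pixel.

def back_project(X, X0, R, f, pixel_size, principal_point):
    """Collinearity: project world point X (m) into image coords (px)."""
    d = R @ (X - X0)                     # point in the camera frame
    x = -f * d[0] / d[2]                 # image-plane coordinates (mm)
    y = -f * d[1] / d[2]
    u = principal_point[0] + x / pixel_size
    v = principal_point[1] - y / pixel_size
    return np.array([u, v])

def roi_around(uv, half=15):
    """Square search window (ROI) centered on the projected point."""
    u, v = (int(round(float(c))) for c in uv)
    return (u - half, v - half, u + half, v + half)

# Illustrative camera: nadir view, 50 mm lens, 6 um pixels, 4000x3000 px.
R = np.eye(3)                            # level attitude (identity rotation)
X0 = np.array([0.0, 0.0, 500.0])         # camera 500 m above the ground
uv = back_project(np.array([10.0, 20.0, 0.0]), X0, R, f=50.0,
                  pixel_size=0.006, principal_point=(2000.0, 1500.0))
print(roi_around(uv))  # → (2152, 1152, 2182, 1182)
```

Restricting the LSM search to this small ROI is what makes the tie point matching tractable: the 3D geometry from the first exterior orientation predicts where each pass point must appear in the overlapping image.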
[Figure 8 flowchart:
Exterior orientation using candidate pass points
→ Set ROI around back-projected coordinates
→ Tie point matching by LSM
→ Exterior orientation using pass points and tie points]

Figure 8. Tie Point Matching Procedure