This camera is also used for the hand-held terrestrial photography. A Leica 1200 real-time kinematic dual-frequency differential GPS (RTK DGPS) was used to capture ground control.
2.3 Data Collection
To generate the UAV-MVS point cloud, 89 photographs were taken at nadir and 64 oblique photographs were taken at an angle of ~45°. The above-ground-level (AGL) flying height was approximately 30-40 m. Prior to acquiring the UAV imagery, 42 small 10 cm orange disks were distributed throughout the focus area. These GCP disks were surveyed using RTK DGPS to an accuracy of ~1.5-2.5 cm and were placed so that they could be seen from above and from the water's edge. The UAV imagery captured these GCPs in ~380 overlapping aerial photographs, and a further 179 terrestrial photographs of the focus area were taken by hand. The UAV and terrestrial image datasets were carefully screened, and any blurred photographs or photographs beyond the study area were rejected.
2.4 Multi-View Stereopsis
The MVS process relies on matching features in multiple pho-
tographs of a subject, in this case a section of coastline. The
Bundler software is used to perform a least-squares bundle adjustment on the matched features. These features are detected and described using scale-invariant feature transform (SIFT) descriptor vectors. Once defined, the SIFT features (in our implementation, SIFTFast features; http://sourceforge.net/projects/libsift/) can be matched, and the MVS process produces
a sparse 3D point cloud along with the position and orientation
of the camera for each image. Radial distortion parameters are
also derived. The imagery used in this first step is down-sampled from 5184x3456 pixels to 2000x1333 pixels. The point cloud produced is in an arbitrary coordinate space. The next stage is to densify the point cloud using PMVS2, usually with down-sampled images. The improvement made by our UAV-MVS process is that we transform the output of the Bundler bundle adjustment so that PMVS2 can run on the full-resolution imagery. The resulting set of 3D coordinates also includes point normals; however, it is still in an arbitrary coordinate reference frame.
2.5 Georeferencing
The ground control points must be identified in the imagery and matched to their GPS positions in the local UTM coordinate system (GDA94 Zone 55). This “semi-automatic GCP georeferencing” is done by analysing the colour attributes of a random selection of orange GCP disks found in the imagery. The point cloud is then filtered based on the derived colour thresholds, i.e. the Red, Green, Blue (RGB) range for GCP orange: the filter keeps points in the cloud whose Euclidean distance to GCP orange in RGB colour space is small enough. The extracted orange point cloud contains a cluster of points for each GCP, and the bounding box of each cluster is used to calculate a centroid for that GCP in the arbitrary coordinate space.
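A minimal sketch of this filtering and clustering step, assuming numpy, is given below. The reference colour, distance threshold and grid cell size are illustrative values, and a production implementation would likely use a proper clustering algorithm such as DBSCAN.

```python
import numpy as np

# Illustrative values: the reference "GCP orange" and the RGB distance
# threshold would be derived from sampled disk pixels in the imagery.
GCP_ORANGE = np.array([230.0, 120.0, 30.0])
RGB_THRESHOLD = 60.0

def extract_gcp_centroids(xyz, rgb, cell=0.5):
    """Filter a coloured point cloud to GCP-orange points and return one
    bounding-box centroid per cluster (arbitrary coordinate units)."""
    # Keep points close to GCP orange in RGB Euclidean distance.
    orange = xyz[np.linalg.norm(rgb - GCP_ORANGE, axis=1) < RGB_THRESHOLD]
    # Coarse grid grouping as a stand-in for a real clustering step
    # (e.g. DBSCAN): points sharing a grid cell form one GCP cluster.
    keys = np.floor(orange / cell).astype(int)
    centroids = []
    for key in np.unique(keys, axis=0):
        members = orange[(keys == key).all(axis=1)]
        lo, hi = members.min(axis=0), members.max(axis=0)
        centroids.append((lo + hi) / 2.0)  # bounding-box centre
    return np.array(centroids)
```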
To match these cluster centres to the equivalent surveyed GPS positions, the navigation-grade on-board GPS positions for the time-synchronised camera locations are matched to the Bundler-derived camera positions, and a Helmert Transformation is derived that, when applied, locates the point cloud at real-world scale to an accuracy of ~5-10 m. Because the cloud is now at real-world scale, each GCP cluster centroid can be matched to a GPS position by manually finding the closest GCP position to each cluster (when GCPs are more dispersed this process is usually automated). The resulting list of GCP disk cluster centres matched to GCP GPS points is then used to derive new Helmert Transformation parameters for
transforming from arbitrary coordinate space to the UTM coordi-
nate space.
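For illustration, the Helmert transformation parameters (scale, rotation, translation) can be derived from matched point pairs in closed form. The sketch below uses Umeyama's least-squares similarity solution with numpy; the function name is illustrative.

```python
import numpy as np

def helmert_parameters(src, dst):
    """Least-squares 7-parameter (similarity) transform mapping src -> dst.

    src: Nx3 points in the arbitrary MVS frame (e.g. GCP cluster centroids).
    dst: Nx3 matched positions in UTM (GDA94 Zone 55).
    Returns scale s, rotation R (3x3) and translation t such that
    dst ~= s * R @ src_i + t (Umeyama's closed-form solution).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance of the centred point sets; its SVD yields the rotation.
    H = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t
```

The same routine serves both the coarse alignment from camera positions and the final GCP-based transformation; only the matched point pairs differ.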
The terrestrial photography does not have an equivalent set of camera positions, as the photographs were taken by hand. A “manual GCP georeferencing” technique must therefore be undertaken. This involves extracting and labelling GCP disk cluster centres from the point cloud and then comparing their distribution to the GPS survey. GPS points can then be matched to their associated cluster centres, and a Helmert Transformation can be derived and applied to the point clouds and derived surfaces. Once the data was georeferenced, it could be clipped into profiles and smaller point clouds using LASTools (http://www.cs.unc.edu/~isenburg/lastools/).
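The clipping was done with the LASTools command-line utilities; as a rough Python equivalent, assuming the georeferenced cloud has been converted to LAS and using the laspy library as a stand-in, a bounding-box clip might look like this (the function name and arguments are illustrative):

```python
import laspy  # used here as a stand-in for the LASTools command line

def clip_to_bbox(in_path, out_path, xmin, ymin, xmax, ymax):
    """Clip a georeferenced LAS point cloud to a UTM bounding box."""
    las = laspy.read(in_path)
    mask = (las.x >= xmin) & (las.x <= xmax) & \
           (las.y >= ymin) & (las.y <= ymax)
    las.points = las.points[mask]  # keep only points inside the box
    las.write(out_path)
```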
2.6 Surface Generation
Triangulated meshes join the points in the dataset to their nearest neighbours. For this study the focus is on the points (or vertices) before and after Poisson Surface Reconstruction, since the vertex locations remain the same when a dense triangulated mesh is created. Poisson surface reconstruction was done using Version 3 of the PoissonRecon software (http://www.cs.jhu.edu/~misha/Code/PoissonRecon/) provided by Michael Kazhdan and Matthew Bolitho. Default settings were used for all parameters except octree depth and solver divide, for which the values of 12 and 8, respectively, were chosen based on experimentation.
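A minimal sketch of the corresponding PoissonRecon invocation is shown below; the file names are illustrative, and the exact flags and input format vary between versions of the software.

```python
import subprocess

# Surface reconstruction from the oriented (point + normal) MVS cloud.
# --depth 12 and --solverDivide 8 are the octree depth and solver
# divide values reported above as chosen by experimentation.
subprocess.run([
    "PoissonRecon",
    "--in", "uav_mvs_oriented.npts",   # illustrative file names; input
    "--out", "uav_mvs_surface.ply",    # format depends on the version
    "--depth", "12",
    "--solverDivide", "8",
], check=True)
```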
MeshLab (http://meshlab.sourceforge.net/) and Eonfusion (http://www.eonfusion.com/) were used to visualise point clouds and surfaces and to clean the data. Edge face removal using length thresholds was applied, as well as isolated piece removal (automated and manual). The mesh vertices were then extracted by clipping out the profiles (using LASTools) for comparison with the original MVS-derived vertex profiles.
2.7 Point Cloud and Surface Comparison
Future studies will investigate the best methods for quantitatively
comparing point clouds and derived surfaces. For this study the
method chosen was a qualitative comparison of point cloud pro-
files and strips along lines of interest within the focus area (see
Figure 4).
Figure 4: The two profiles within the focus area (see Figure 2).
Profile strips 1, 2 and 6 cm wide were extracted from the geo-
referenced MVS point clouds and from the Poisson vertex point clouds. The points and derived surfaces were then overlaid and
visually compared to evaluate how well the Poisson vertices rep-
resent the surface and how well the UAV-MVS point cloud coin-
cides with the T-MVS point clouds and derived Poisson vertices
and surface meshes.
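As a sketch of the strip extraction, points can be selected by their perpendicular distance from a profile line (numpy assumed; the function and variable names are illustrative):

```python
import numpy as np

def extract_profile_strip(xyz, p0, p1, half_width):
    """Return the points of xyz (Nx3: easting, northing, height) lying
    within half_width metres of the 2D profile line from p0 to p1.

    For the 1, 2 and 6 cm strips, half_width would be 0.005, 0.01
    and 0.03 respectively.
    """
    length = np.linalg.norm(p1 - p0)
    direction = (p1 - p0) / length
    rel = xyz[:, :2] - p0
    # Perpendicular (cross-track) distance from the profile line.
    perp = np.abs(rel[:, 0] * direction[1] - rel[:, 1] * direction[0])
    # Along-track position, used to trim points beyond the endpoints.
    along = rel @ direction
    mask = (perp <= half_width) & (along >= 0.0) & (along <= length)
    return xyz[mask]
```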