2.2.2 Experimental Results: R.M.S.E. were investigated for the coplanarity condition and for the combined adjustment, which combines the 3D adjustment with the coplanarity condition and the bundle adjustment.

Table 1 shows the R.M.S.E. at the check points for each method. The results of this experiment show that the 2D accuracy of each method is high, and that the Z-coordinate accuracy of the combined adjustment is almost equal to its 2D accuracy. This means that the combined adjustment is sufficient for 3D object modeling; therefore, the combined adjustment is a useful method for 3D spatial data acquisition.

Table 1. RMSE of each method at the check points

    Method                   XY (mm)     Z (mm)
    Coplanarity Condition      9.030    110.752
    Combined Adjustment       31.206     31.196
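(For reference, the R.M.S.E. values in Table 1 are presumably computed in the standard way over the $n$ check points; this definition is an assumption here, not stated in this section. For the Z coordinate, for example,

$$ \mathrm{RMSE}_Z \;=\; \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Z_i^{\mathrm{adjusted}} - Z_i^{\mathrm{check}}\right)^{2}}, $$

and analogously for X and Y.)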
2.3 3D Object Modeling

The 3D spatial data of the house model could be acquired using the combined adjustment described in this paper. A wire frame model of the house could then be reconstructed automatically; Figure 8 shows the wire frame model. Furthermore, texture mapping was performed by the following procedure: first, one square surface was selected manually on the wire frame model; second, the color information corresponding to that square surface was obtained from an image; finally, the color information was mapped onto the square surface. This procedure was repeated for every surface. Figure 9 shows the texture-mapped model.
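As a rough sketch of this texture mapping step (not the author's implementation; OpenCV is used here only for illustration, and the image and surface corner positions are hypothetical inputs), the color information belonging to one square surface can be cut out of an image and rectified as follows:

```python
import numpy as np
import cv2


def extract_surface_texture(image, surface_corners_px, tex_size=256):
    """Rectify the image region belonging to one (manually selected) square
    surface of the wire frame model into a square texture patch.

    image              -- the photograph the color information is taken from
    surface_corners_px -- the four projected corners of the surface in the
                          image, in order (top-left, top-right, bottom-right,
                          bottom-left); hypothetical input, in the paper it
                          follows from the wire frame model and the estimated
                          camera parameters
    """
    src = np.asarray(surface_corners_px, dtype=np.float32)          # 4 x 2
    dst = np.float32([[0, 0], [tex_size - 1, 0],
                      [tex_size - 1, tex_size - 1], [0, tex_size - 1]])
    H = cv2.getPerspectiveTransform(src, dst)                       # homography
    return cv2.warpPerspective(image, H, (tex_size, tex_size))      # texture patch
```

Repeating this for every square surface and pasting the patches onto the corresponding faces gives a texture-mapped model of the kind shown in Figure 9.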
Figure 8. Wire frame model
Figure 9. Texture mapping model
The detailed procedure of the automatic 3D object modeling method is as follows:
1. Line extraction is performed on the first image of the sequential images.
2. The image coordinates of both ends of each extracted line are calculated, and line tracking starts.
3. The optical flow is estimated using the first image and the next image.
4. Similarly, line extraction is performed on the next image, and the image coordinates of both ends of each extracted line are calculated.
5. Each line position in the first image is moved by the amount of the optical flow.
6. The moved lines and the lines in the next image are matched by a similarity function, and only the corresponding lines are retained.
7. The above procedure is repeated successively up to the last image, so that the matching points between the first image and the last image are acquired automatically.
8. 3D spatial data are acquired using the combined adjustment.
9. 3D modeling of the object is performed.
These procedures are shown in Figure 10.

Figure 10. Automatic 3D modeling method
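As a minimal illustration of steps 1 to 7 (a sketch only, not the author's implementation: OpenCV's Hough line detector and Lucas-Kanade optical flow stand in for the paper's own line extraction and optical-flow estimation, and a simple nearest-endpoint distance test replaces the similarity function):

```python
import numpy as np
import cv2


def detect_endpoints(gray):
    """Steps 1-2 / 4: extract line segments and return their endpoints (N, 2)."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                            minLineLength=30, maxLineGap=5)
    if lines is None:
        return np.empty((0, 2), np.float32)
    return lines.reshape(-1, 2).astype(np.float32)


def track_lines(images, max_dist=5.0):
    """Carry the endpoints of lines found in the first image through the
    sequence (8-bit grayscale frames) and return corresponding endpoints
    in the first and last image (steps 1-7)."""
    first_pts = detect_endpoints(images[0])        # endpoints in the first image
    pts = first_pts.copy()                         # their current positions
    alive = np.arange(len(first_pts))              # indices that survived so far

    for prev, nxt in zip(images[:-1], images[1:]):
        if len(pts) == 0:
            break
        # Step 3: optical flow of the tracked endpoints
        moved, status, _ = cv2.calcOpticalFlowPyrLK(
            prev, nxt, pts.reshape(-1, 1, 2), None)
        moved = moved.reshape(-1, 2)
        # Step 4: endpoints of lines extracted in the next image
        next_pts = detect_endpoints(nxt)
        # Steps 5-6: keep a moved endpoint only if a detected endpoint lies
        # nearby, and snap it to that endpoint (crude stand-in for the
        # similarity function)
        keep, new_pos = [], []
        for i, (p, ok) in enumerate(zip(moved, status.ravel())):
            if ok and len(next_pts):
                d = np.linalg.norm(next_pts - p, axis=1)
                if d.min() < max_dist:
                    keep.append(i)
                    new_pos.append(next_pts[d.argmin()])
        pts = np.float32(new_pos).reshape(-1, 2)
        alive = alive[keep]

    # Step 7: surviving endpoints give matches between the first and last
    # image, which then feed the combined adjustment (step 8).
    return first_pts[alive], pts
```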