existing status of the area. Therefore, human supervision is required in this step.
4.5 Reverse registration 
To assist the process of detecting and extracting buildings from the image, the buildings extracted from the point clouds were transformed and registered on the image. In this step, the point clouds, which are in vector format, are converted to raster format (an image) before registration. Initialisation and orientation are implemented beforehand. The goals of the reverse registration are (i) to define the area for searching, (ii) to define the size of the search window on the image, and (iii) to provide a guide for segmenting and detecting the roof of a building. Template-based matching is carried out to extract the buildings (a sketch is given below). Reverse registration improves the object detection and extraction. When a building has been detected, it is tested and matched against the building extracted from the point clouds. This last part of the process may look like the process of registering the building extracted from the image to the 3D model or the point clouds, but it is a very important part of the process because it assures the operator that only the roof of the corresponding building of interest is extracted.
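As an illustration of this step, the following minimal sketch shows template-based matching restricted to a search window, assuming OpenCV; the names match_roof, roof_template, and window are illustrative and not taken from the paper.

    # Hypothetical sketch: template-based roof matching inside a search
    # window provided by the reverse registration (not the authors' code).
    import cv2

    def match_roof(image, roof_template, window):
        """image: grayscale aerial image; roof_template: roof footprint
        rasterised from the point clouds; window: (x, y, w, h) search
        area defined by the reverse registration."""
        x, y, w, h = window
        region = image[y:y + h, x:x + w]          # restrict the search
        scores = cv2.matchTemplate(region, roof_template,
                                   cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_loc = cv2.minMaxLoc(scores)
        # Shift the match position back into full-image coordinates.
        return best_score, (x + best_loc[0], y + best_loc[1])

A score threshold on best_score would then decide whether a detected roof is accepted as matching the building extracted from the point clouds.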
4.6 Registering objects from image to the 3D model 
This is the final step of the approach. In this step, the objects extracted from the image are transformed and registered on the 3D model developed from the point clouds, as Homainejad (2012) explained, and a new 3D model is created. Basically, two different processes are implemented in this step. For both processes, a number of control points are initially defined on the image and on the point clouds for orientation and initialisation. In the first process, besides the initialisation and orientation, the algorithm calculates the mapping parameters for each individual pixel. An image is always distorted during the acquisition process, and a correction is always applied to the image to reduce the effects of the distortions. However, the distortions are never completely removed from the image and always remain to some extent. If one looks at the image as a whole, the remaining distortions can probably be disregarded. However, if the image is split into small areas, the remaining distortions become a serious issue and must be removed entirely. Therefore, the following equations were developed for mapping each individual pixel on the 3D model.
X_i = f(X_c, \theta_1, \theta_2, D_1, D_2, S_1, S_2)
Y_i = f(Y_c, \theta_1, \theta_2, D_1, D_2, S_1, S_2)        (Eq. 5)
where X_i and Y_i are the coordinates of the pixel on the 3D model, X_c and Y_c are the coordinates of the control point, \theta_1 and \theta_2 are the angles of point i with two defined directions, S_1 and S_2 are scale factors along the X and Y directions, and D_1 and D_2 are the distances of point i to two defined base lines, which are defined from the following equation:
D = f(X_{c1}, Y_{c1}, X_{c2}, Y_{c2}, X_i, Y_i) / \sqrt{\Delta X^2 + \Delta Y^2}        (Eq. 6)

where \Delta X = X_{c1} - X_{c2} and \Delta Y = Y_{c1} - Y_{c2}, and the scale factors are calculated separately for each point in the two directions. By applying the above equations, each pixel of the image is transformed precisely on the 3D model and all distortions are removed.
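The paper leaves the function f in Eq. 6 unspecified; its denominator \sqrt{\Delta X^2 + \Delta Y^2}, however, matches the normalisation of the standard point-to-line distance, so the following sketch assumes that form for illustration only.

    # Hedged sketch of Eq. 6: distance of point i to a base line through
    # two control points, assuming f is the usual point-to-line numerator
    # (an assumption; the paper does not spell f out).
    import math

    def baseline_distance(xc1, yc1, xc2, yc2, xi, yi):
        dx = xc1 - xc2                  # Delta X in Eq. 6
        dy = yc1 - yc2                  # Delta Y in Eq. 6
        # |cross product of (point - control point 2) with the base-line
        # direction|, divided by the base-line length.
        num = abs(dy * (xi - xc2) - dx * (yi - yc2))
        return num / math.hypot(dx, dy)

Evaluated against the two defined base lines, this yields the distances D_1 and D_2 used in Eq. 5.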
In the second process, the image is transformed and registered on each object extracted from the point clouds. Since the algorithm transforms and registers only the part of the image corresponding to a part of the point clouds that has already been extracted, there is no requirement for the reverse registration in this step. The algorithm defines a search window on the image using information about the data extracted from the point clouds (see the sketch below). The second process was implemented for registering trees and the crown land from the image to the point clouds.
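One plausible way to derive such a search window, sketched here under the assumption that the orientation step supplies a ground-to-image mapping (project_to_image is a placeholder, not the authors' function), is to project the extracted object's footprint and pad its bounding box.

    # Illustrative only: search window from the bounding box of an
    # object extracted from the point clouds.
    import numpy as np

    def search_window(object_points_xy, project_to_image, pad=20):
        """object_points_xy: (N, 2) ground coordinates of the object;
        project_to_image: assumed callable mapping ground XY to pixels;
        pad: margin in pixels around the projected footprint."""
        pix = np.asarray([project_to_image(x, y)
                          for x, y in object_points_xy])
        top_left = pix.min(axis=0) - pad
        bottom_right = pix.max(axis=0) + pad
        w, h = bottom_right - top_left
        return int(top_left[0]), int(top_left[1]), int(w), int(h)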
4.7 Analysing the result 
In this research study, the point clouds were defined as the main and only reference for checking and controlling the result; therefore, no attempt was made to correct or improve the point cloud data, and it was assumed that the corrections had been implemented in advance. For analysing the result, each point was individually controlled, visually and manually. For example, it was checked that the corners of the roofs were mapped in the correct location and had the correct elevation, and that the tip of a building was mapped correctly with no distortion remaining. The analysis shows that the image was correctly mapped on the point cloud data. The standard deviation of the points in the X, Y, and Z directions is in the range of a fraction of a centimetre in comparison with the point cloud data. The focus of the analysis was on the 3D reconstruction of the buildings, since the roof of each building has special characteristics. Therefore, the roof of every building was individually checked in order to verify that the algorithm was able to precisely reconstruct the roof of a building in the 3D model and that the 3D model shows all details. Figures 4, 5, 6, and 7 show the results after transforming and registering the image on the point clouds. Studying these figures, we realised that the algorithm precisely developed a 3D model by registering an image on the point clouds, and all details are shown.
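The per-axis accuracy quoted above can be reproduced by a check of the following kind; the variable names are illustrative, and the residual computation is a standard one rather than code from the paper.

    # Sketch of the accuracy check: per-axis standard deviation of the
    # residuals between mapped points (e.g. roof corners) and their
    # point-cloud reference.
    import numpy as np

    def residual_std(mapped_xyz, reference_xyz):
        """Both arguments are (N, 3) arrays of X, Y, Z coordinates."""
        residuals = np.asarray(mapped_xyz) - np.asarray(reference_xyz)
        return residuals.std(axis=0)    # sigma_X, sigma_Y, sigma_Z

The paper reports these deviations in the sub-centimetre range.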
  
   
Figure 4. The figure shows the result from developing a 3D model for area 1.
Figure 5. The figure shows the isometric view for area 1. 