[Figures 13-16: estimated angles and positions for each matching point. Figures 13 and 14 show the 3-DOF (wide range) results; Figures 15 and 16 show the 3-DOF (narrow range) results. The horizontal axes show the image number; the vertical axes show the differences from the true values.]
4. DISCUSSION 
From the results of our experiments, we have confirmed that 
our approach can detect locations using a camera and a point 
cloud via a fully automated procedure. There are three kinds of 
parameter estimation results to be discussed. First, the azimuth 
angle estimation for given position parameters was achieved 
reliably to within 1.0°, as shown in Figure 10. We have 
therefore demonstrated that our approach can be used in an indoor environment containing iron frames when accurate positional data exist.
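To make this one-parameter case concrete, the sketch below sweeps azimuth candidates over a rendered panorama and keeps the best-matching window. It is a minimal illustration rather than the exact implementation used in our experiments: it assumes an equirectangular grayscale panorama, a camera image already resampled to the window size, and zero-mean normalized cross-correlation (ZNCC) as a stand-in similarity measure, none of which are specified in this section.

import numpy as np

def estimate_azimuth(camera_img, panorama, hfov_deg=60.0, step_deg=0.2):
    # camera_img: grayscale array with the same height as the panorama and
    # the same width as one field-of-view window cut from it.
    w = panorama.shape[1]
    win = int(round(hfov_deg / 360.0 * w))        # window width in pixels
    cam = (camera_img - camera_img.mean()) / (camera_img.std() + 1e-12)
    best_deg, best_score = 0.0, -np.inf
    for deg in np.arange(0.0, 360.0, step_deg):   # 1,800 candidates at 0.2 deg
        x0 = int(round(deg / 360.0 * w))
        cols = np.arange(x0, x0 + win) % w        # wrap across the 360 deg seam
        window = panorama[:, cols]
        ref = (window - window.mean()) / (window.std() + 1e-12)
        score = float((cam * ref).mean())         # ZNCC score in [-1, 1]
        if score > best_score:
            best_deg, best_score = deg, score
    return best_deg, best_score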
Second, the X and Y camera position estimations for given 
azimuth angles achieved nearly 50 cm accuracy for the wide 
spatial range, as shown in Figure 11. The narrow spatial range 
result also achieved almost 30 cm accuracy, as shown in Figure 
12. From these results, we suggest that our approach will assist 
stand-alone positioning using a GPS receiver and existing 
indoor positioning techniques to achieve higher positional 
accuracy when accurate azimuth data exist. 
Finally, both the camera positions and azimuth angles (3-DOF) 
were estimated together. These results were less stable than the 
independent results because of the increase in estimated 
parameters. However, we have also confirmed that our 
approach can assist existing indoor positioning techniques to 
achieve higher positioning accuracy. For example, if we have 
indoor positioning services such as RFID tags and wireless 
LAN at 10 m spatial resolution, our proposed approach can 
improve the positional data to sub-meter accuracy, as illustrated below. In addition, the positional data are accompanied by azimuth angles of degree-order accuracy.
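The following sketch illustrates this refinement step by exhaustively scoring every pose candidate inside a coarse cell. The search radius, the step sizes, and the match_score callback (standing in for the panoramic image matching of our approach) are illustrative assumptions, not values from our experiments.

import numpy as np

def refine_pose(coarse_xy, match_score, radius_m=5.0, xy_step_m=0.5,
                az_step_deg=0.2):
    # Score every (x, y, azimuth) candidate inside the coarse cell;
    # match_score(x, y, az) stands in for the panoramic matching above.
    xs = np.arange(coarse_xy[0] - radius_m, coarse_xy[0] + radius_m + 1e-9, xy_step_m)
    ys = np.arange(coarse_xy[1] - radius_m, coarse_xy[1] + radius_m + 1e-9, xy_step_m)
    azs = np.arange(0.0, 360.0, az_step_deg)
    # 21 x 21 x 1,800 candidates here; sensor priors (see below) would
    # shrink the azimuth axis considerably.
    return max(((x, y, az) for x in xs for y in ys for az in azs),
               key=lambda p: match_score(*p))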
Analyzing the results in detail, Figure 14 shows that image numbers 9 and 11 gave large matching errors. Figure 15 also shows that image number 2 gave large matching errors.
We assume that color differences between the camera images 
and the rendered panoramic images caused the matching errors, 
because the window objects in the gymnasium were regions for 
which the laser scanner failed to measure 3-D points. Where such scanning failures exist, the failed points appear as missing pixels in the panoramic image rendered from the camera position. In this experiment, a new pixel value (color) was therefore estimated at each missing point in the panoramic image from neighboring pixel values. The estimated color can then differ from the corresponding pixel value in the camera image. Specular reflection on the floor also caused matching errors for the same reason.
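This neighbor-based color estimation can be sketched as a simple iterative fill. It is only an illustration of the idea: the interpolation actually used in the experiment is not detailed here, and the 4-neighbour averaging and iteration count are assumptions.

import numpy as np

def fill_missing_pixels(pano_rgb, valid_mask, iterations=10):
    # pano_rgb: (H, W, 3) rendered panorama; valid_mask: (H, W) booleans,
    # False where the laser scanner produced no 3-D point to project.
    img = pano_rgb.astype(np.float64).copy()
    filled = valid_mask.copy()
    for _ in range(iterations):
        todo = ~filled
        if not todo.any():
            break
        acc = np.zeros_like(img)
        cnt = np.zeros(filled.shape, dtype=np.int32)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            # np.roll wraps at the borders, which suits the horizontal
            # 360-degree seam of a panorama.
            nb_ok = np.roll(filled, (dy, dx), (0, 1))
            acc += np.roll(img, (dy, dx), (0, 1)) * nb_ok[..., None]
            cnt += nb_ok
        grow = todo & (cnt > 0)                     # holes with a filled neighbour
        img[grow] = acc[grow] / cnt[grow][:, None]  # average of filled neighbours
        filled |= grow                              # the hole shrinks each pass
    return img.astype(pano_rgb.dtype)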
Although we detected matching points from 73,800 candidates, other sensor data could be used in the location detection. Reducing the number of matching candidates by taking initial values from the various sensors in a mobile device would be an effective way to achieve more stable matching. For example, gyro-sensor data could be used as initial values for azimuth angle estimation, as sketched below.
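A minimal sketch of this candidate reduction, assuming a gyro reading with an illustrative +/- 10° tolerance (the tolerance and function name are not from our experiments):

import numpy as np

def azimuth_candidates(gyro_deg, tol_deg=10.0, step_deg=0.2):
    # Candidates within +/- tol_deg of the gyro reading, wrapped to [0, 360).
    return np.arange(gyro_deg - tol_deg, gyro_deg + tol_deg + step_deg,
                     step_deg) % 360.0

# A full sweep at 0.2 deg steps evaluates 360 / 0.2 = 1,800 azimuth
# candidates; with a +/- 10 deg gyro prior, only 101 remain, roughly
# an 18-fold reduction for the azimuth axis alone.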
Although the spatial resolution of the panoramic images was 0.20°, we could process at approximately 0.01° resolution by using the massive point cloud before data reduction in the current state. In addition, we could apply sub-pixel image processing to achieve higher spatial resolutions for positions and azimuth angles.
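A back-of-the-envelope check of what these two resolutions imply for panorama size (the full 360° by 180° equirectangular coverage is an assumption) makes clear why the point cloud is reduced before rendering:

# Panorama size at each angular resolution (assuming full 360 x 180 deg
# equirectangular coverage):
for res_deg in (0.20, 0.01):
    w = round(360 / res_deg)   # horizontal pixels
    h = round(180 / res_deg)   # vertical pixels
    print(f"{res_deg:.2f} deg/pixel -> {w} x {h} ({w * h / 1e6:.1f} Mpixels)")
# 0.20 deg/pixel -> 1800 x 900 (1.6 Mpixels)
# 0.01 deg/pixel -> 36000 x 18000 (648.0 Mpixels)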
Currently, there are many challenges to making our approach 
useful in practice. Processing-time reduction is one technical issue. Our proposed approach has reduced 3-D location matching from a 3-D data-processing problem to simple 2-D image processing. This means that graphics-processor-based computing might be an effective and low-cost solution for our procedure. We can identify three additional challenges as follows. The first challenge is location detection using a handheld camera, which requires roll, pitch, and yaw angle estimation. The second challenge is robust estimation in a changing environment. The third challenge is robust estimation when occlusions caused by moving objects such as pedestrians occur.
5. CONCLUSIONS
First, we have focused on the fact that the camera installed in a mobile device has the potential to act as a location sensor, assisting other location sensors to improve positional accuracy.
We have also observed that massive point-cloud data can be 
used as a reliable map. Our proposed location-matching 
methodology is based on image matching using images from a 
digital camera and panoramic images generated from a massive 
point cloud in an image-based GIS. When facility information 
for construction and maintenance is geocoded onto maps, 
higher accuracy and higher spatial resolutions are required. 
In this paper, therefore, we have described fine location 
matching aiming for 10 cm accuracy to assist indoor positioning 
techniques such as RFID and wireless LAN. We have then 
developed a matching system to confirm that our location 
application can provide location information using a camera 
and a point cloud via a fully automated procedure. Although the 
current success rate for location detection was below 100%, we have confirmed that our approach can detect a location using a horizontally held digital camera. We are currently improving the reliability of our location-matching procedure.
Acknowledgement 
This work was supported by the Strategic Information and Communications R&D Promotion Programme (SCOPE) of the Ministry of Internal Affairs and Communications, Japan.