VISION-AIDED CONTEXT-AWARE FRAMEWORK FOR PERSONAL NAVIGATION SERVICES
S. Saeedi*, A. Moussa, Dr. N. El-Sheimy
Dept. of Geomatic Engineering, The University of Calgary, 2500 University Dr NW, Calgary, AB, T2N 1N4, Canada
(ssaeedi, amelsaye, elsheimy)@ucalgary.ca
Commission IV, WG IV/5
KEY WORDS: Navigation, Vision, Data mining, Recognition, Fusion, Video, IMU
ABSTRACT:
The ubiquity of mobile devices (such as smartphones and tablet PCs) has encouraged the use of location-based services (LBS) that are relevant to the current location and context of a mobile user. The main challenge of LBS is to provide a pervasive and accurate personal navigation system (PNS) across the different situations of a mobile user. In this paper, we propose a personal navigation method for pedestrians that allows a user to move freely in outdoor environments. The system aims at detecting context information that is useful for improving personal navigation. The context information for a PNS consists of the user's activity mode (e.g. walking, stationary, driving, etc.) and the mobile device's orientation and placement with respect to the user. After detecting the context information, a low-cost integrated positioning algorithm is employed to estimate the pedestrian navigation parameters. The method is based on the integration of the user's relative motion (changes of velocity and heading angle), estimated from video image matching, with absolute position information provided by GPS. A Kalman filter (KF) is used to improve the navigation solution when the user is walking and the phone is in his/her hand. The experimental results demonstrate the capabilities of this method for outdoor personal navigation systems.
1. INTRODUCTION
Due to the rapid developments in mobile computing, wireless
communications and positioning technologies, using
smartphones as a PNS is becoming increasingly popular. This evolution has
facilitated the development of applications that use the position
of the user, often known as LBS. Using various sensors on
smartphones provides a vast amount of information; however,
finding a ubiquitous and accurate pedestrian navigation solution
is a very challenging topic in ubiquitous positioning (Lee &
Gerla, 2010; Mokbel & Levandoski, 2009). Position estimation
in outdoor environments is mainly based on the Global Positioning System (GPS) or assisted GPS (AGPS); however, it is a challenging task in indoor environments or urban canyons, especially when GPS signals are unavailable or degraded due to multipath effects. In such cases, other navigation sensors and
solutions are applied for pedestrians. The first alternative is
wireless radio sensors, such as Bluetooth, RFID (Radio Frequency IDentification), or WLAN (Wireless Local Area Network). These systems have limited availability and need a
pre-installed infrastructure that restricts their applicability. The
second alternative is IMU (Inertial Measurement Unit) sensors, which provide a relative position based on the distance travelled and the device's orientation. The distance and orientation information can be measured with gyroscope and accelerometer sensors. The main drawback of IMUs is that they rely on relative position estimation techniques that use the previous states of the system; therefore, after a short period of time, low-cost MEMS (Micro Electro-Mechanical Systems) sensor measurements typically result in large cumulative drift errors unless the errors are bounded by measurements from other systems (Aggarwal et al., 2010).
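To make the drift problem concrete, the following minimal Python sketch (purely illustrative; the step length and gyroscope bias values are assumptions, not measurements from this work) shows how a small constant heading bias corrupts a simple pedestrian dead-reckoning update:

```python
import math

def dead_reckoning_step(x, y, heading_rad, step_length_m):
    """Advance a 2-D position by one detected step along the current heading."""
    x += step_length_m * math.sin(heading_rad)
    y += step_length_m * math.cos(heading_rad)
    return x, y

# A small constant gyroscope bias corrupts every heading estimate, so the
# position error grows with every step: the cumulative drift noted above.
x, y, heading = 0.0, 0.0, 0.0
gyro_bias_per_step = math.radians(0.5)   # hypothetical 0.5 deg/step bias
for _ in range(100):
    heading += gyro_bias_per_step        # bias accumulates in the heading
    x, y = dead_reckoning_step(x, y, heading, step_length_m=0.7)
print(x, y)  # the walker has veered ~50 deg off the intended straight line
```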
Another solution is vision-based navigation using video camera sensors. These systems are based on two main strategies: estimating absolute position information using a priori formed databases, which depends heavily on the availability of an image database for the area (Zhang and Kosecka, 2006), and estimating relative position information from the motion of the camera calculated between consecutive images, which suffers from cumulative drift errors (Ruotsalainen et al., 2011; Hide et al., 2011). Since there is no single comprehensive sensor for indoor navigation, it is necessary to integrate the measurements from different sensors to improve the position information.
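As an illustration of the second strategy, the sketch below estimates inter-frame camera rotation and (scale-free) translation with standard OpenCV calls; the feature detector choice and parameter values are assumptions for illustration, not the exact pipeline used in this paper:

```python
import cv2
import numpy as np

def relative_motion(frame1, frame2, K):
    """Estimate rotation R and unit translation t between consecutive frames.

    K is the 3x3 camera intrinsic matrix. Translation is recovered only up
    to scale, which is one reason vision-only navigation accumulates drift.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    # Match binary ORB descriptors; cross-checking removes many outliers.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC on the essential matrix rejects the remaining mismatches.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # heading change from R; direction of travel from t
```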
Modern smartphones contain a number of low-cost MEMS sensors (e.g. magnetometer, accelerometer, and gyroscope) that can be used for integrated ubiquitous navigation even when GPS signals are unavailable. Vision sensors are ideal for a PNS since they are available in good resolution on almost all smartphones. Therefore, in this research a vision sensor is used to capture the user's motion parameters from consecutive image frames and to provide a navigation aid when measurements from other systems such as GPS are not available. This approach needs no special infrastructure and makes use of the camera as an ideal aiding system. Since mobile users carry the device with different orientations and placements, almost everywhere (in indoor and outdoor environments), while doing various activities (such as walking, running, and driving), specific customized and context-aware algorithms are necessary for the different user modes. Therefore, a mobile navigation application must be aware of the user and device context in order to use the appropriate algorithm for each case (a minimal dispatch sketch follows below). For example, when the context information shows that the device is in "texting" or "talking" mode, the observations from the camera can be integrated with the GPS sensor to improve and validate the pedestrian dead-reckoning algorithm. The main issue in context-aware PNSs is detecting the relevant context information using the embedded mobile sensors in an implicit way. The contribution of this paper is to develop a visually-aided personal navigation solution using the smartphone's embedded sensors that takes into account various user contexts.
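A minimal sketch of such context-dependent dispatch is shown below; the mode names, strategy labels, and rules are hypothetical placeholders, not the context classifier developed in this paper:

```python
def select_navigation_algorithm(activity, device_placement, gps_available):
    """Pick a positioning strategy from the detected context.

    The mode names and dispatch rules are illustrative assumptions; the
    paper's actual context detection is described in later sections.
    """
    if activity == "driving":
        return "gps_only"                # vehicle speeds, open-sky conditions
    if activity == "stationary":
        return "hold_last_fix"           # suppress sensor drift while still
    if activity == "walking" and device_placement in ("texting", "talking"):
        # The camera views a usable scene, so vision-derived velocity and
        # heading changes can aid and validate GPS through a Kalman filter.
        return "vision_gps_kalman" if gps_available else "vision_dead_reckoning"
    return "inertial_dead_reckoning"     # fallback for pocket/bag placements
```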