Speed and Velocity Estimation Using a Smartphone Camera

2015 
In the absence of information from Global Navigation Satellite Systems (GNSS), inertial sensors can be used to provide a relative navigation solution. For pedestrians, a Pedestrian Dead Reckoning (PDR) approach is used to obtain the navigation solution based on the estimated device-user misalignment angle, step length, and step frequency. Micro Electro Mechanical Systems (MEMS) based sensors, such as accelerometers and gyroscopes, can now be found in most recent smartphones. The main drawback of using MEMS-based inertial sensors for navigation applications is their sensor errors (i.e., "sensor drift"), which rapidly degrade the standalone inertial navigation solution over time: the position error resulting from accelerometer and gyroscope errors grows quadratically and cubically with time, respectively. To overcome this drawback, the inertial solution must be updated from an absolute source of data. GNSS position and velocity and Wi-Fi positions are examples of such update sources. However, both depend on an external signal that may be unavailable in some environments (e.g., GNSS-denied environments) or require special infrastructure.

This paper proposes a new method that uses the drift-free vision sensor, i.e., the device camera, to estimate pedestrian speed/velocity from parameters extracted from the captured images, thereby improving the final positional accuracy. Most smartphones today have at least one camera, which is a rich source of information about the outside world. The algorithm proceeds in eight steps; minimal illustrative sketches of these steps follow the abstract.

1. Images are pre-processed: resized, converted to grayscale, histogram-equalized, and smoothed.
2. The algorithm checks whether enough features are available. If not, the vision sensor cannot be used to estimate the user's speed/velocity.
3. The algorithm checks whether the user is in motion. If not, the user speed is assumed to be zero.
4. The optical flow, i.e., the translational motion undergone by features from one frame to the next, is calculated.
5. Based on the quantities calculated in the previous step, the device misalignment angle is computed. This angle is crucial for accurately estimating the user velocity.
6. Using the calculated misalignment angle and the device roll and pitch angles, the algorithm checks whether the user motion is meaningful. If not, a fidgeting scenario is assumed and the user speed is set to zero.
7. Using the frequency of the calculated optical flow and the device pitch and roll angles, a use-case classifier distinguishes whether the user is texting, calling, or dangling the device.
8. Based on the device use case, dedicated algorithms estimate the user's speed/velocity.

The speed estimation methodology and results are presented in this paper. The obtained results demonstrate that vision sensors can be a major source of information for enhancing the accuracy of the final navigation solution. This work is patent pending.
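
As a concrete illustration of step 1, the sketch below pre-processes a frame with OpenCV. The abstract does not specify the resize factor or the smoothing filter; the 0.5 scale and the Gaussian kernel are assumptions.

```python
import cv2

def preprocess(frame, scale=0.5):
    """Step 1: shrink, grayscale-convert, histogram-equalize, and smooth a frame."""
    small = cv2.resize(frame, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)   # reduce image size
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)     # grayscale conversion
    equalized = cv2.equalizeHist(gray)                 # histogram equalization
    return cv2.GaussianBlur(equalized, (5, 5), 0)      # smoothing (assumed Gaussian)
```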
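
Step 2 can be sketched as a simple count of trackable corners; the detector choice and the MIN_FEATURES threshold are assumptions, not values from the paper.

```python
import cv2

MIN_FEATURES = 30  # assumed threshold; the paper does not state one

def enough_features(gray):
    """Step 2: if too few corners are detected, the vision update is skipped."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
    return corners is not None and len(corners) >= MIN_FEATURES
```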
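
For step 4, one standard way to obtain the per-frame translation of features is pyramidal Lucas-Kanade tracking; the abstract does not name the paper's flow method, so this is a hedged stand-in. Taking the median makes the estimate robust to outlier tracks.

```python
import cv2
import numpy as np

def median_flow(prev_gray, curr_gray):
    """Step 4: median (dx, dy) translation of tracked features, in pixels/frame."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return None
    # Track the detected corners into the next frame.
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good = status.ravel() == 1
    if not good.any():
        return None
    return np.median((p1[good] - p0[good]).reshape(-1, 2), axis=0)
```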
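
Steps 3 and 6 both reduce to a zero-speed decision. A minimal sketch, assuming a fixed pixel threshold (the abstract gives no numeric values):

```python
import numpy as np

STATIC_PX = 0.3  # assumed per-frame flow magnitude below which the user is static

def is_moving(flow_dxdy):
    """Steps 3/6: negligible dominant flow (or non-meaningful, fidgeting motion)
    implies the user speed is assumed to be zero."""
    return flow_dxdy is not None and float(np.hypot(*flow_dxdy)) > STATIC_PX
```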
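
The abstract does not give the paper's misalignment formulation; a plausible proxy for step 5 is the direction of the dominant image-plane flow relative to the device's image axes, sketched below purely for illustration.

```python
import numpy as np

def misalignment_proxy_deg(flow_dxdy):
    """Step 5 (assumed proxy, not the paper's formulation): angle of the
    dominant image-plane motion, measured from the image's vertical axis."""
    dx, dy = flow_dxdy
    return float(np.degrees(np.arctan2(dx, dy)))
```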
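
Step 7 uses "the frequency of the calculated optical flow". One way to obtain it (an assumption, since the abstract omits the method) is the dominant FFT bin of the flow magnitude over a sliding window of recent frames:

```python
import numpy as np

def dominant_flow_frequency(flow_magnitudes, fps):
    """Step 7 input (assumed): dominant oscillation frequency (Hz) of the
    optical-flow magnitude over a window of recent frames."""
    x = np.asarray(flow_magnitudes, dtype=float)
    if len(x) < 4:
        return None
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return float(freqs[1:][np.argmax(spectrum[1:])])  # skip the DC bin
```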
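
The classifier itself is sketched below as a bare threshold heuristic. The three labels come from the abstract, but every threshold is an illustrative assumption; the paper's actual classifier is not described here.

```python
def classify_use_case(pitch_deg, roll_deg, flow_freq_hz):
    """Step 7 (illustrative heuristic only; all thresholds are assumptions).
    Dangling: flow oscillates near the arm-swing frequency.
    Calling: device held vertically against the ear (large roll/pitch).
    Texting: device pitched toward the user's face."""
    if flow_freq_hz is not None and 0.5 <= flow_freq_hz <= 2.5:  # assumed band
        return "dangling"
    if abs(roll_deg) > 60 or abs(pitch_deg) > 70:                # assumed limits
        return "calling"
    return "texting"
```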
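
Finally, step 8 converts pixel flow to metres per second. The abstract does not detail the per-use-case models; the sketch below assumes a pinhole camera with known height above ground and focal length in pixels, which only approximates one use case (e.g., a roughly ground-facing dangling camera).

```python
import numpy as np

def speed_from_flow(flow_dxdy, fps, camera_height_m=1.0, focal_px=1000.0):
    """Step 8 (minimal sketch): user speed in m/s from per-frame pixel flow.
    camera_height_m and focal_px are assumed calibration inputs."""
    px_per_frame = float(np.hypot(*flow_dxdy))
    metres_per_px = camera_height_m / focal_px   # ground sampling distance
    return px_per_frame * fps * metres_per_px
```

A per-frame loop would then chain these sketches: preprocess the frame, check features and motion, compute the flow, classify the use case, and estimate the speed.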