A Fog Computing Model for Implementing Motion Guide to Visually Impaired

2019 
Abstract A guide dog robot system for the visually impaired often needs to process many kinds of information, such as images, voice, and other sensor data. Information processing methods based on deep neural networks can achieve better results, but they require expensive computing and communication resources to meet real-time requirements. Fog computing has emerged as a promising solution for applications that are data-intensive and delay-sensitive. We propose a fog computing framework named PEN (Phone + Embedded board + Neural compute stick) for the guide dog robot system. The robot's functions in PEN are wrapped as services and deployed on the appropriate devices. Services are combined into an application in a visual programming language environment. A neural compute stick accelerates image processing at low power consumption. A simulation environment and a prototype are built on the framework. The simulated guide dog system is developed to operate in a miniature environment, including a small robot dog, a small wheelchair, model cars, traffic lights, and traffic obstructions. The prototype is a full-sized portable guide system that can be used by a visually impaired person in a real environment. Simulation and experiments show that the framework can meet the functional and performance requirements for implementing guide systems for the visually impaired.
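The service model described in the abstract, where robot functions are wrapped as named services, pinned to a fog device (phone, embedded board, or neural compute stick), and then composed into an application, can be illustrated with a minimal sketch. All class and service names here are hypothetical; the paper's actual API is not given in the abstract.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of the PEN idea: each robot function is wrapped
# as a named service bound to one of the three fog devices, and the
# services are chained into an application pipeline, mirroring how
# they would be combined in a visual programming environment.

@dataclass
class Service:
    name: str
    device: str        # "phone" | "embedded" | "ncs" (assumed labels)
    handler: Callable  # the wrapped robot function

@dataclass
class PENApp:
    services: Dict[str, Service] = field(default_factory=dict)
    pipeline: List[str] = field(default_factory=list)

    def deploy(self, name: str, device: str, handler: Callable) -> None:
        # Register a function as a service on a specific device.
        self.services[name] = Service(name, device, handler)

    def compose(self, *names: str) -> None:
        # Wire services together in execution order.
        self.pipeline = list(names)

    def run(self, data):
        # Pass data through each service in the pipeline.
        for name in self.pipeline:
            data = self.services[name].handler(data)
        return data

app = PENApp()
app.deploy("capture", "phone", lambda x: x + ["frame"])
app.deploy("detect", "ncs", lambda x: x + ["obstacle"])      # NCS-accelerated inference
app.deploy("guide", "embedded", lambda x: x + ["turn-left"]) # motion guidance decision
app.compose("capture", "detect", "guide")
result = app.run([])
```

Running the pipeline yields `["frame", "obstacle", "turn-left"]`, showing data flowing from image capture on the phone through accelerated detection to a guidance command on the embedded board.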