This paper presents a device that helps visually impaired people navigate their surroundings using image processing and the Internet of Things (IoT). Traditional walking aids are ineffective at detecting obstacles, making it difficult for blind people to move around independently. The smart walking stick uses a camera and an ultrasonic sensor to identify obstacles and calculate the user's distance from them; when an obstacle is detected, a sound is played through the user's headphones to alert them. The camera captures images of the surrounding area, which are then analyzed with image-processing algorithms to detect obstacles and other salient details. The stick is also connected to the IoT, which enables it to communicate with other devices and offer additional functionality. For visually impaired people who need assistance navigating their environment, this technology is a useful aid.
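The distance-and-alert loop described above can be illustrated with a minimal sketch. The abstract does not specify the hardware platform, so the following assumes an HC-SR04-style ultrasonic sensor on a Raspberry Pi; the pin numbers TRIG_PIN and ECHO_PIN, the ALERT_CM threshold, and the use of the RPi.GPIO library are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: measure distance with an HC-SR04-style ultrasonic
# sensor and raise an audible alert. Pin numbers and the 100 cm threshold
# are assumptions for illustration, not values from the paper.
import time
import RPi.GPIO as GPIO

TRIG_PIN = 23   # assumed GPIO pin driving the sensor's trigger
ECHO_PIN = 24   # assumed GPIO pin reading the sensor's echo
ALERT_CM = 100  # assumed alert distance in centimetres

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def measure_distance_cm():
    """Send a 10 us trigger pulse and time the echo.
    Distance = speed of sound (~343 m/s) * round-trip time / 2."""
    GPIO.output(TRIG_PIN, True)
    time.sleep(0.00001)
    GPIO.output(TRIG_PIN, False)

    start = time.time()
    while GPIO.input(ECHO_PIN) == 0:   # wait for echo to go high
        start = time.time()
    stop = time.time()
    while GPIO.input(ECHO_PIN) == 1:   # wait for echo to go low
        stop = time.time()

    return (stop - start) * 34300 / 2  # centimetres

try:
    while True:
        distance = measure_distance_cm()
        if distance < ALERT_CM:
            # The real device plays a sound in the user's headphones;
            # a terminal bell character stands in for that here.
            print(f"\aObstacle at {distance:.0f} cm")
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```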
In this paper, we adapt an existing word-vector-based machine translation system to measure semantic relatedness across languages and apply it to the English-Hindi language pair. We also evaluate the model on human-scored word-relatedness datasets. Unlike most systems performing a similar task, ours does not rely on parallel corpora, which are cumbersome and impractical to build between all possible pairs of languages. Instead, the system learns a linear transformation between the multidimensional word vector spaces of a pair of languages using a set of known translations, called a bilingual dictionary. We also propose an approach to reduce the effort required to build a usable bilingual dictionary for the system.
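The core step, learning a linear transformation between two vector spaces from dictionary pairs, can be sketched as a least-squares problem. The snippet below uses random toy embeddings in place of real pre-trained English and Hindi vectors; the 300-dimensional spaces, variable names, and example pair are illustrative assumptions, while the least-squares formulation itself follows the linear-mapping approach the paper describes.

```python
# Sketch: learn a linear map W from the English embedding space to the
# Hindi embedding space using a bilingual dictionary, then score
# cross-lingual word relatedness with cosine similarity. The embeddings
# here are random toy data; a real system would load pre-trained vectors.
import numpy as np

rng = np.random.default_rng(0)
dim_en, dim_hi, n_pairs = 300, 300, 5000  # assumed dimensions and dictionary size

# X holds English word vectors; Z holds the Hindi vectors of their known
# translations, one dictionary pair per row.
X = rng.standard_normal((n_pairs, dim_en))
Z = rng.standard_normal((n_pairs, dim_hi))

# Solve min_W ||X W - Z||_F^2 by ordinary least squares.
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

def cross_lingual_similarity(en_vec, hi_vec):
    """Map an English vector into Hindi space, then take cosine similarity."""
    mapped = en_vec @ W
    return mapped @ hi_vec / (np.linalg.norm(mapped) * np.linalg.norm(hi_vec))

# Score one dictionary pair as a sanity check.
score = cross_lingual_similarity(X[0], Z[0])
print(f"relatedness score: {score:.3f}")
```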