Towards practical 3D ultrasound sensing on commercial-off-the-shelf mobile devices

2021 
Abstract Ultrasound-based contactless sensing has the potential to significantly extend the interactive range of mobile devices. However, existing approaches either require multiple transceiver pairs that are not available on commercial mobile devices, or can only recognize simple single-stroke actions. These drawbacks significantly limit the practicality of prior work. This article presents UltraScr, an ultrasound sensing system that can recognize sophisticated multi-stroke gestures (e.g., writing a capital letter) using just a single sound transceiver pair. To do so, UltraScr first uses the frequency attenuation profile (FAP) to capture subtle hand movements. It then employs a convolutional neural network (CNN) to extract discriminative features from these subtle movements and build an accurate ultrasound sensing system. We go further by exploiting the rejection classification method (RCM) and incremental learning to improve the robustness of our sensing system in end-user environments. We evaluate UltraScr by applying it to gesture recognition across different scenarios and users. Extensive experimental results show that, while using only one transceiver pair, the performance of UltraScr is comparable to multi-transceiver-pair implementations. We show that UltraScr is robust to changes in external conditions (i.e., different humidity and battery levels), works effectively across a wide range of locations, and requires 3× fewer training samples compared to the state-of-the-art.
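To make the first stage of the pipeline concrete, the sketch below shows one plausible way to compute a frequency attenuation profile: the device emits a fixed ultrasound tone, and each short frame of the microphone signal is converted to a magnitude spectrum around the carrier, normalized by the carrier bin so that hand-induced attenuation appears as relative change over time. The resulting 2D array (frames × frequency bins) is the kind of representation a CNN could consume. All function and parameter names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def frequency_attenuation_profile(received, fs=48000, f0=20000,
                                  band=2000, n_fft=1024, hop=512):
    """Illustrative FAP sketch (not the paper's exact method).

    received: microphone samples while a tone at f0 Hz is emitted.
    Returns an array of shape (num_frames, num_bins): per-frame
    magnitudes in [f0 - band, f0 + band], normalized by the carrier
    bin so attenuation shows up as values relative to the carrier.
    """
    window = np.hanning(n_fft)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    sel = (freqs >= f0 - band) & (freqs <= f0 + band)   # band around carrier
    carrier_bin = np.argmin(np.abs(freqs - f0))          # bin nearest the tone

    frames = []
    for start in range(0, len(received) - n_fft + 1, hop):
        spec = np.abs(np.fft.rfft(received[start:start + n_fft] * window))
        carrier = max(spec[carrier_bin], 1e-12)          # avoid divide-by-zero
        frames.append(spec[sel] / carrier)
    return np.array(frames)  # (num_frames, num_bins) -> e.g. CNN input
```

A time series of such profiles could then be stacked and fed to a small CNN for gesture classification; that stage, along with the RCM and incremental-learning components, is omitted here.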