Action recognition is essential in security monitoring, home care, and behavior analysis. Traditional solutions usually rely on dedicated devices such as smartwatches and infrared/visible-light cameras. These approaches can limit the application scope because of privacy-leakage risks, high equipment cost, and over- or under-exposure. Using wireless signals for motion recognition can effectively avoid these problems. However, current Wi-Fi-based motion recognition still suffers from defects such as low resolution caused by the narrow signal bandwidth and poor environmental adaptability caused by the multi-path effect, which hinder commercial deployment. To address these problems, we propose and implement a position-adaptive motion recognition method based on Wi-Fi feature enhancement, composed of an enhanced Wi-Fi feature module and an enhanced convolutional Transformer network. Meanwhile, we improve generalization in the signal-processing stage, which avoids building an overly complex model and reduces the hardware requirements of the system. To verify the generalization of the method, we conduct real-world experiments using 9300 network cards and the PicoScenes software platform for data acquisition and processing. Compared with the baseline method using raw channel state information (CSI) data, the average accuracy of our algorithm improves by 14% across different positions and by over 16% across different orientations. Our method also achieves the best performance among existing models on the public datasets WiAR and WiDAR, with an accuracy of 90.33%.
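As a rough illustration of the pipeline described above, the sketch below pairs a simple CSI feature-enhancement step (amplitude plus phase differences across subcarriers) with a convolutional Transformer classifier in PyTorch. The layer sizes, subcarrier count, class count, and enhancement details are placeholder assumptions, not the paper's configuration.

```python
# Minimal sketch (not the authors' implementation): CSI feature enhancement
# followed by a convolutional Transformer classifier in PyTorch.
import torch
import torch.nn as nn

NUM_SUBCARRIERS = 30   # assumed CSI subcarrier count per packet
NUM_CLASSES = 6        # assumed number of actions

def enhance_csi(csi: torch.Tensor) -> torch.Tensor:
    """Toy feature enhancement: concatenate amplitude and phase-difference
    features so downstream layers see more position-robust inputs."""
    amplitude = csi.abs()
    phase = torch.angle(csi)
    phase_diff = phase - phase.roll(1, dims=-1)   # difference across subcarriers
    return torch.cat([amplitude, phase_diff], dim=-1)

class ConvTransformer(nn.Module):
    """Convolution front end for local time-frequency patterns, Transformer
    encoder for long-range temporal dependencies."""
    def __init__(self, feat_dim=2 * NUM_SUBCARRIERS, d_model=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, d_model, kernel_size=5, padding=2),
            nn.BatchNorm1d(d_model),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, NUM_CLASSES)

    def forward(self, x):                 # x: (batch, time, feat_dim)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))   # average over time, then classify

# Usage with a synthetic complex CSI window: (batch, time, subcarriers)
csi = torch.randn(8, 100, NUM_SUBCARRIERS, dtype=torch.cfloat)
logits = ConvTransformer()(enhance_csi(csi))
print(logits.shape)                        # torch.Size([8, 6])
```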
With the wide adoption of smart devices and mobile computing, the smart home has become a hot topic in the household appliance industry. The control and interaction approach plays a key role in the user experience and has become one of the most important selling points for profit growth. Considering robustness and privacy protection, wearable devices equipped with MEMS sensors, e.g., smartphones, smartwatches, or smart wristbands, are regarded as one of the most feasible commercial solutions for interaction. However, low-cost built-in MEMS sensors do not perform well at directly capturing fine-grained human activity. In this paper, we propose a method that leverages the arm's kinematic constraints and the historical information recorded by MEMS sensors to estimate the maximum-likelihood action in a two-phase model. First, in the arm posture estimation phase, we use a kinematic model to infer the maximum-likelihood position of the user's arm. Second, in the trajectory recognition phase, we use a gesture estimation model to identify the key actions and output instructions to devices via an SVM. Substantial experiments show that the proposed solution can recognize eight kinds of postures defined for human-machine interaction in the smart home scenario, enables efficient and effective interaction using low-cost smartwatches, and achieves an interaction accuracy above 87%. The experiments also show that the proposed algorithm applies well to the perceptual control of smart household appliances and has high practical value for designing their perceptual interaction functions.
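The two-phase structure could be sketched as follows: phase one maps IMU-derived joint angles through a toy two-link kinematic model to wrist positions, and phase two classifies trajectory statistics with an SVM (scikit-learn). The link lengths, feature choices, and eight placeholder labels are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's code): phase 1 derives arm-posture
# features from a simple two-link kinematic model; phase 2 classifies the
# resulting trajectory features with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

UPPER_ARM, FOREARM = 0.30, 0.25   # assumed link lengths in metres

def wrist_position(shoulder_angles, elbow_angles):
    """Phase 1 (toy): planar two-link forward kinematics mapping joint
    angles estimated from the IMU to wrist positions."""
    theta1, theta2 = shoulder_angles, shoulder_angles + elbow_angles
    x = UPPER_ARM * np.cos(theta1) + FOREARM * np.cos(theta2)
    y = UPPER_ARM * np.sin(theta1) + FOREARM * np.sin(theta2)
    return np.stack([x, y], axis=-1)

def trajectory_features(shoulder_seq, elbow_seq):
    """Summarise a wrist trajectory with simple statistics as the
    gesture-estimation feature vector."""
    traj = wrist_position(shoulder_seq, elbow_seq)
    return np.concatenate([traj.mean(axis=0), traj.std(axis=0),
                           traj[-1] - traj[0]])

# Phase 2: train an SVM on synthetic gesture trajectories (8 posture classes).
rng = np.random.default_rng(0)
X = np.array([trajectory_features(rng.uniform(0, np.pi, 50),
                                  rng.uniform(0, np.pi / 2, 50))
              for _ in range(200)])
y = rng.integers(0, 8, size=200)          # placeholder labels
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:3]))
```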
U-Net is widely regarded as an outstanding deep neural network for medical image segmentation. However, the segmentation results of U-Net-based models tend to be overly conservative and smooth. This paper proposes MufiNet, a segmentation model built from multiple U-Net chains (i.e., multiple encoder-decoder branches) that fuses receptive fields obtained at different scales. A 1 × 1 convolution layer is introduced to form residual connections, improving the model's adaptability to network depth. The multi-scale fusion module with residuals is combined with the U-Net chain architecture to retain more information-flow paths, and multi-scale context information is used to improve the performance and robustness of the segmentation network. MufiNet is extensively evaluated on three datasets: two benchmark datasets (lung segmentation and skin cancer lesion segmentation) and a cervical cancer dataset jointly constructed with a hospital. The experimental results show that MufiNet yields better performance than U-Net and LadderNet on medical image segmentation tasks.
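To make the multi-scale fusion idea concrete, the sketch below combines 3 × 3 and 5 × 5 convolution branches and adds a 1 × 1-convolution residual path in PyTorch. This is an illustrative block only; the channel counts and exact wiring inside the published MufiNet may differ.

```python
# Minimal sketch (illustrative, not the published MufiNet code): a multi-scale
# fusion block whose 1x1 convolution forms the residual connection, as would
# sit between the encoder-decoder branches of a U-Net chain.
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Fuse receptive fields of different sizes and add a 1x1-conv residual
    so the block adapts gracefully as network depth grows."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Conv2d(in_ch, out_ch, 3, padding=1)    # small field
        self.branch5 = nn.Conv2d(in_ch, out_ch, 5, padding=2)    # larger field
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)             # combine scales
        self.residual = nn.Conv2d(in_ch, out_ch, 1)              # 1x1 residual path
        self.act = nn.ReLU()

    def forward(self, x):
        multi = self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))
        return self.act(multi + self.residual(x))

# A U-Net chain would stack several encoder-decoder branches; here we only
# check that the fusion block preserves spatial size.
x = torch.randn(1, 32, 64, 64)
print(MultiScaleFusion(32, 64)(x).shape)   # torch.Size([1, 64, 64, 64])
```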
Security for mobile Internet-of-Things (IoT) devices is critical because of the open nature of distributed wireless communication. To efficiently establish a secure connection between two communicating parties, we propose a fast mobile key extraction protocol, KEEP. KEEP quickly generates similar bit sequences from the two parties' measurements of channel state information (CSI) on different subcarriers. A distributed "verification-recombination" mechanism then generates the same encryption key from these bit sequences without public-key authentication, digital signatures, or a key distribution center. We conducted real-world experiments using commercial off-the-shelf 802.11n devices to evaluate the performance of KEEP in various scenarios. Theoretical analysis and experimental verification show that KEEP is more secure, effective, and reliable than state-of-the-art methods.
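The sketch below conveys the general flow of CSI-based key extraction: each party quantizes its channel measurements into bit blocks, then the two sides retain only blocks whose short digests agree. The quantizer, block size, and digest-comparison step are simplifying assumptions for illustration and do not reproduce KEEP's actual verification-recombination mechanism.

```python
# Minimal sketch of the general idea (assumptions, not the KEEP specification):
# quantise CSI amplitudes into bit blocks, exchange short digests, and keep
# only the blocks on which both parties agree as shared key material.
import hashlib
import numpy as np

def quantize_csi(csi_amplitudes, block_size=8):
    """Threshold CSI amplitudes around their median to obtain a bit
    sequence, grouped into fixed-size blocks."""
    bits = (csi_amplitudes > np.median(csi_amplitudes)).astype(np.uint8)
    usable = len(bits) - len(bits) % block_size
    return bits[:usable].reshape(-1, block_size)

def block_digests(blocks):
    """Short hashes exchanged for verification (a simplified stand-in for
    the protocol's real verification step)."""
    return [hashlib.sha256(b.tobytes()).hexdigest()[:8] for b in blocks]

def recombine(blocks_a, blocks_b):
    """Keep only the blocks whose digests match on both sides and
    concatenate them into the shared key material."""
    keep = [a for a, da, db in zip(blocks_a, block_digests(blocks_a),
                                   block_digests(blocks_b)) if da == db]
    return np.concatenate(keep) if keep else np.array([], dtype=np.uint8)

# Two parties measure nearly reciprocal channels (synthetic example).
rng = np.random.default_rng(1)
channel = rng.normal(size=256)
alice = quantize_csi(channel + rng.normal(scale=0.05, size=256))
bob = quantize_csi(channel + rng.normal(scale=0.05, size=256))
key_bits = recombine(alice, bob)
print(len(key_bits), "agreed key bits")
```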