Positioning Unit Cell Model Duplication with Residual Concatenation Neural Network (RCNN) and Transfer Learning for Visible Light Positioning (VLP)

2021 
Machine learning (ML) can be employed to enhance the positioning accuracy of visible light positioning (VLP) systems. To reduce training time and complexity, the whole coverage area is usually divided into several positioning unit cells. Most prior works focus only on the positioning performance within a single unit cell and assume that the unit cell can simply be duplicated to cover the whole area. In this work, we propose and demonstrate a positioning unit cell model duplication scheme, termed the spatial sequence adaptation (SSA) process. We also propose and demonstrate a residual concatenation neural network (RCNN) with transfer learning (TL) to refine the model of the target positioning unit cell. A practical test-bed with a vertical distance of 2.8 m is constructed, consisting of two unit cells with dimensions of about 1.55 m × 2 m per cell. The client side is an autonomous mobile robot (AMR) that acquires continuous training and testing data. Our experimental results reveal that high-precision positioning can be achieved in the duplicated unit cell.
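
The abstract does not give the RCNN layer structure or the TL procedure, so the following is only a minimal sketch of the general idea: a fully connected network over LED received-signal-strength (RSS) features in which each block concatenates its input with its output (a "residual concatenation" rather than an additive skip), followed by transfer learning that freezes the feature blocks trained in the source cell and refines only the regression head on target-cell data. The names (RCBlock, RCNNPositioner, fine_tune_on_target), the four-LED input, layer widths, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a residual-concatenation
# network for VLP coordinate regression plus a transfer-learning step.
import torch
import torch.nn as nn


class RCBlock(nn.Module):
    """Fully connected block whose output is concatenated with its input."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.out_dim = in_dim + hidden_dim  # width after concatenation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([x, self.fc(x)], dim=-1)


class RCNNPositioner(nn.Module):
    """Stack of RC blocks followed by a linear head regressing (x, y)."""

    def __init__(self, n_leds: int = 4, hidden_dim: int = 64, n_blocks: int = 3):
        super().__init__()
        blocks, dim = [], n_leds
        for _ in range(n_blocks):
            block = RCBlock(dim, hidden_dim)
            blocks.append(block)
            dim = block.out_dim
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Linear(dim, 2)  # 2-D coordinate output

    def forward(self, rss: torch.Tensor) -> torch.Tensor:
        return self.head(self.blocks(rss))


def fine_tune_on_target(model: RCNNPositioner,
                        target_rss: torch.Tensor,
                        target_xy: torch.Tensor,
                        epochs: int = 50,
                        lr: float = 1e-4) -> RCNNPositioner:
    """Transfer learning: freeze source-cell feature blocks, refine the head."""
    for p in model.blocks.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(model.head.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(target_rss), target_xy)
        loss.backward()
        optimizer.step()
    return model


if __name__ == "__main__":
    # Toy usage: random RSS readings from 4 LEDs, ground truth inside a
    # cell of roughly 1.55 m x 2 m (the cell size reported in the paper).
    model = RCNNPositioner(n_leds=4)
    rss = torch.rand(128, 4)
    xy = torch.rand(128, 2) * torch.tensor([1.55, 2.0])
    fine_tune_on_target(model, rss, xy)
    print(model(rss[:2]))
```

In this sketch the transfer step only retrains the output head; the paper's SSA process and its choice of which RCNN parameters to reuse or refine in the duplicated cell may differ.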