From compressed sensing to learned sensing with metasurface imagers

2021 
Reducing the latency of electromagnetic imaging is a crucial objective for applications such as security screening, autonomous driving, and touchless human-machine interaction. In that respect, a fundamental limitation of conventional compressed sensing techniques is that all information is initially multiplexed indiscriminately across a diverse set of measurement modes, and only during data processing does one begin to select the information that is relevant to the task (e.g. concealed-weapon detection). In order to acquire only the relevant information in the first place, and hence drastically reduce the number of necessary measurements, the “learned sensing” paradigm interprets reconfigurable measurement hardware (e.g. a dynamic metasurface aperture) as a trainable physical layer. This layer can be integrated directly into the machine-learning pipeline used on the data-processing side, so that the physical weights (measurement settings) and the digital weights (processing network) are optimized jointly. We discuss our recent numerical and experimental investigations of this new approach to electromagnetic imaging. Our results show that a considerable reduction of the number of scene illuminations is possible by using learned illumination patterns rather than conventional ones (random, orthogonal, or principal scene components). At the same time, we find that the learned patterns do not lend themselves to intuitive interpretation. We also clarify whether the resolution of sub-wavelength scene features is bound by the conventional diffraction limit.
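To make the joint-optimization idea concrete, the following is a minimal, hypothetical sketch in JAX, not the authors' implementation. The physical layer is modeled as a trainable measurement matrix H whose rows play the role of illumination patterns, followed by a small digital readout; both are updated by the gradient of the same task loss. All names, shapes, and data here are illustrative assumptions, and hardware constraints on realizable patterns (e.g. binary or phase-quantized metasurface states) are deliberately omitted.

```python
# Minimal sketch of joint physical/digital optimization (illustrative only).
# The unconstrained real-valued H is an assumption; a real dynamic metasurface
# aperture would restrict H to physically realizable illumination patterns.
import jax
import jax.numpy as jnp

n_pixels, n_meas, n_classes = 256, 8, 2  # scene size, no. of illuminations, task classes

def forward(params, x):
    y = params["H"] @ x                    # physical layer: measurements under learned patterns
    return params["W"] @ y + params["b"]   # digital layer: task inference from few measurements

def loss_fn(params, x, label):
    logp = jax.nn.log_softmax(forward(params, x))
    return -logp[label]                    # task loss (e.g. "threat present?" classification)

def batch_loss(params, xs, labels):
    return jax.vmap(loss_fn, in_axes=(None, 0, 0))(params, xs, labels).mean()

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
params = {
    "H": jax.random.normal(k1, (n_meas, n_pixels)) / jnp.sqrt(n_pixels),
    "W": jax.random.normal(k2, (n_classes, n_meas)) / jnp.sqrt(n_meas),
    "b": jnp.zeros(n_classes),
}

xs = jax.random.normal(k3, (64, n_pixels))  # toy scenes; real data would replace this
labels = jax.random.bernoulli(k4, 0.5, (64,)).astype(jnp.int32)

grad_fn = jax.jit(jax.grad(batch_loss))
for step in range(200):                     # joint gradient descent on H, W, b
    grads = grad_fn(params, xs, labels)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
```

Because the gradient flows through the measurement step itself, the rows of H are shaped by the downstream task rather than chosen a priori (random or orthogonal), which is the essential difference between learned sensing and conventional compressed sensing.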