Deep learning enables rapid and robust analysis of fluorescence lifetime imaging in photon-starved conditions

2020 
Fluorescence lifetime imaging microscopy (FLIM) is a powerful tool to quantify molecular compositions and study molecular states in the complex cellular environment, as the lifetime readings are not biased by fluorophore concentration or excitation power. However, current methods to generate FLIM images are either computationally intensive or unreliable when the number of photons acquired at each pixel is low. Here we introduce a new deep learning-based method termed flimGANE (fluorescence lifetime imaging based on Generative Adversarial Network Estimation) that can rapidly generate accurate and high-quality FLIM images even in photon-starved conditions. We demonstrated that our model is not only 258 times faster than the most popular time-domain least-square estimation (TD_LSE) method but also provides more accurate analysis in barcode identification, cellular structure visualization, Förster resonance energy transfer characterization, and metabolic state analysis. With its advantages in speed and reliability, flimGANE is particularly useful in fundamental biological research and clinical applications, where ultrafast analysis is critical.
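The abstract contrasts flimGANE with the time-domain least-square estimation (TD_LSE) baseline. The sketch below is a minimal, hypothetical illustration of such a fit: a mono-exponential decay is least-squares fitted to a simulated photon-starved arrival-time histogram. The lifetime, bin width, and photon count are assumed values, and the instrument response function is omitted for simplicity; this is not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# --- Simulate a photon-starved decay histogram (assumed parameters) ---
true_tau = 2.5      # ns, assumed ground-truth lifetime
n_photons = 100     # few photons per pixel -> "photon-starved"
bin_width = 0.097   # ns per TCSPC bin (assumed)
n_bins = 256

arrival_times = rng.exponential(true_tau, n_photons)
counts, edges = np.histogram(arrival_times, bins=n_bins,
                             range=(0, n_bins * bin_width))
t = 0.5 * (edges[:-1] + edges[1:])  # bin centres

# --- Mono-exponential decay model ---
def decay(t, amplitude, tau):
    return amplitude * np.exp(-t / tau)

# --- Time-domain least-squares fit of the lifetime ---
p0 = [counts.max(), 1.0]  # rough initial guess
popt, _ = curve_fit(decay, t, counts, p0=p0, maxfev=10000)
print(f"true tau = {true_tau:.2f} ns, fitted tau = {popt[1]:.2f} ns")
```

With so few photons the fitted lifetime can deviate noticeably from the true value, which illustrates the unreliability in photon-starved conditions that motivates the deep learning approach.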