Sub-2 mm Depth of Interaction Localization in PET Detectors with Prismatoid Light Guide Arrays and Single-Ended Readout using Convolutional Neural Networks.

2020 
PURPOSE: Depth of interaction (DOI) readout in PET imaging has been researched in efforts to mitigate parallax error, which would enable the development of small-diameter, high-resolution PET scanners. However, DOI PET has not yet been commercialized due to the lack of practical, cost-effective, and data-efficient DOI readout methods. The rationale for this study was to develop a supervised machine learning algorithm for DOI estimation in PET that can be trained and deployed on unique sets of crystals.

METHODS: Depth-collimated flood data were experimentally acquired using a Na-22 source with a depth-encoding single-ended readout Prism-PET module consisting of lutetium yttrium orthosilicate (LYSO) crystals coupled 4-to-1 to 3 × 3 mm² silicon photomultiplier (SiPM) pixels on one end and a prismatoid light guide array on the other end. A convolutional neural network (CNN) was trained to perform DOI estimation on data from center, edge, and corner crystals in the Prism-PET module using (a) all 64 readout pixels and (b) only the 4 highest readout signals per event. CNN testing was performed on data from crystals not included in CNN training.

RESULTS: An average DOI resolution of 1.84 mm full width at half maximum (FWHM) across all crystals was achieved when using all 64 readout signals per event with the CNN, compared to 3.04 mm FWHM DOI resolution using classical estimation. When using only the 4 highest signals per event, an average DOI resolution of 1.92 mm FWHM was achieved, representing only a 4% drop-off in CNN performance compared to using all 64 pixels per event.

CONCLUSIONS: Our CNN-based DOI estimation algorithm provides the best reported DOI resolution in a single-ended readout module and can be readily deployed on crystals not used for model training.
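The two readout schemes compared in the abstract can be illustrated with a short sketch. The helper below selects the 4 highest SiPM signals from a 64-pixel (8 × 8) event, i.e. the reduced CNN input, and computes an energy-ratio DOI proxy of the kind commonly used for classical single-ended depth encoding. The function names, the 8 × 8 event layout, and the specific ratio (primary-pixel energy over total event energy) are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def top_k_signals(event, k=4):
    """Return the k highest readout signals (descending) and their flat
    pixel indices from an 8x8 SiPM event array.

    This mimics the reduced input scheme: keep only the k strongest
    pixels per event instead of all 64 readout channels.
    """
    flat = event.ravel()
    # argpartition finds the k largest entries without a full sort,
    # then we order just those k values descending
    idx = np.argpartition(flat, -k)[-k:]
    idx = idx[np.argsort(flat[idx])[::-1]]
    return flat[idx], idx

def energy_ratio_doi_proxy(event):
    """Hypothetical classical DOI metric: primary-pixel energy divided
    by total event energy. Deeper interactions spread more light across
    neighboring pixels, lowering this ratio."""
    flat = event.ravel()
    return flat.max() / flat.sum()

# Synthetic event: one bright primary pixel with light shared to neighbors
event = np.zeros((8, 8))
event[3, 3] = 10.0   # primary pixel
event[3, 4] = 5.0
event[4, 3] = 4.0
event[4, 4] = 3.0
event[0, 0] = 1.0    # stray low-level signal

values, indices = top_k_signals(event)       # -> [10., 5., 4., 3.]
ratio = energy_ratio_doi_proxy(event)        # -> 10 / 23
```

In a full pipeline, either the 64-channel array or the 4 selected (value, index) pairs would be fed to the CNN as input features, with the collimated depth position serving as the regression target.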