The difficulty in delineating the boundary between cancerous and healthy tissue during cancer resection surgeries often leads to suboptimal surgical outcomes, due either to incomplete removal of cancerous residuals or to excess removal of healthy tissue. Labelling cancer cells with radiotracers that can be detected by intraoperative probes presents a potential solution for tumour localisation to facilitate excision. In this study, the feasibility of reconstructing the radiotracer distribution in real time from sensor array outputs (SAOs) obtained with an intraoperative probe utilising CMOS monolithic active pixel sensors is explored through the use of a convolutional encoder-decoder network. The network takes as input SAOs containing all detected event clusters from radiotracer emissions, obtained by scanning the probe over a region of interest, and outputs a reconstructed radiotracer distribution within the scanned region. This initial work demonstrates that the network is able to reconstruct simulated 2D piece-wise constant radiotracer distributions from synthesised SAOs containing beta and gamma clusters isolated from SAOs acquired experimentally with the intraoperative probe.
Conventionally, an A-mode scan (a single measurement with a single-element transducer) is used only to detect the depth of a reflector or scatterer. In this case, a single measurement reveals only one-dimensional information: the axial distance. However, if the number of scatterers in the ultrasonic field is sparse, it is possible to detect the location of a scatterer in multiple spatial dimensions. In this study, we developed a method to find the location of a scatterer in 3-D with a single-element transducer and a single measurement. The feasibility of the proposed method was verified in 2-D with experimental measurements.
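The conventional one-dimensional use of an A-mode scan described above can be sketched as a simple time-of-flight calculation. This is a toy illustration, not the authors' proposed 3-D method; the speed-of-sound value is an illustrative assumption.

```python
# Toy sketch of conventional A-mode ranging: recover only the axial
# distance of a reflector from the round-trip echo delay.
SPEED_OF_SOUND = 1540.0  # m/s, a typical soft-tissue value (assumption)

def axial_distance(echo_delay_s: float, c: float = SPEED_OF_SOUND) -> float:
    """Axial distance to a reflector from the round-trip echo delay."""
    # The pulse travels to the reflector and back, hence the factor of 2.
    return c * echo_delay_s / 2.0

# Example: an echo arriving 65 microseconds after transmission
# corresponds to a reflector roughly 50 mm deep.
d = axial_distance(65e-6)
print(f"{d * 1000:.2f} mm")
```

The proposed method instead exploits sparsity of scatterers to recover location in additional spatial dimensions from the same single measurement.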
Monte Carlo simulations are widely used in radionuclide imaging, both for modelling radionuclide imaging systems and for the development of new image reconstruction algorithms. However, discrepancies in data quality (e.g. spatial resolution) can be observed between simulated and experimental data due to an inability to fully model many bespoke effects, including charge-to-signal conversion processes and detector imperfections. In this study, a deep generative modelling framework is proposed for the enhancement of GATE simulations through the use of cycle-consistent generative adversarial networks (CycleGAN). The networks can be trained in an unsupervised manner to learn the mapping between simulated and experimentally obtained data, obviating the need for paired training data, which is difficult to obtain in radionuclide imaging studies. The feasibility of the method was assessed for sensor array outputs from a CMOS intraoperative probe intended for single photon imaging/detection for cancer surgery. Overall, the proposed network was able to learn the distribution of images in both the simulated and the experimental domains. As measured by the Fréchet inception distance (FID), the network achieved a reduction of 95% in the FID score compared to purely GATE-simulated data, indicating far greater consistency with measured data. This framework presents a method to generate training data that is impractical to obtain experimentally.
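The FID metric used above is the Fréchet distance between two Gaussians fitted to feature embeddings of the two image sets. A minimal sketch of that distance is below, restricted to diagonal covariances so the matrix square root becomes elementwise; the real FID uses full covariances of Inception-v3 features, and all values here are illustrative, not the study's data.

```python
import math

def frechet_distance(mu1, cov1, mu2, cov2):
    """d^2 = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1*C2)^(1/2)),
    for diagonal covariances given as lists of variances."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    # With diagonal covariances the trace term reduces per dimension to
    # c1 + c2 - 2*sqrt(c1*c2) = (sqrt(c1) - sqrt(c2))^2.
    cov_term = sum((math.sqrt(c1) - math.sqrt(c2)) ** 2
                   for c1, c2 in zip(cov1, cov2))
    return mean_term + cov_term

# Identical distributions give zero; a shifted mean contributes quadratically.
print(frechet_distance([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 0.0
print(frechet_distance([3.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 9.0
```

A 95% reduction in this distance therefore means the enhanced simulations' feature statistics moved much closer to those of the measured data.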
Radioguided surgery (RGS) for cancer resection is a widely performed practice for accurate localisation of cancerous tissue. During RGS, the medical radioisotope 99mTc can be detected with an intraoperative probe to locate cancerous tissue. For accurate localisation, the internal conversion (IC) electrons from 99mTc are set as the target emission due to their shorter range in tissue. However, the inability to isolate the IC electrons from the gamma emissions means that a labelled dataset is not available in practice, yet this is required for training a discriminator. In this study, an experimental method relying on evaporation is proposed to obtain ground truth information on the emissions present within the dataset, using physics-informed modelling of the IC electron and gamma signals. By experimental design, the two signals vary differently over time, allowing the ratio between the two signal sources to be estimated. This ground truth ratio information has been measured and hence used i) to assess the intrinsic response of an intraoperative probe for IC electron detection, and ii) to evaluate experimentally different discriminator algorithms. Furthermore, the data can also be used to partially label a measured dataset, allowing both training and testing of discriminators with known ratios. In summary, an experimental method was proposed to allow the evaluation of detector sensitivities and the development of discriminators for unlabelled datasets.
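The core idea, that two signal sources varying differently over time can be unmixed, can be sketched as a least-squares fit of the measured count rate to two known temporal basis functions. This is an illustrative toy, not the paper's actual physics model: here the electron basis is assumed to decay (e.g. as an evaporating layer thins) while the gamma basis is assumed constant.

```python
import math

def fit_two_components(times, rates, f_e, f_g):
    """Solve min_{a,b} sum_t (r(t) - a*f_e(t) - b*f_g(t))^2
    via the 2x2 normal equations."""
    see = sum(f_e(t) ** 2 for t in times)
    sgg = sum(f_g(t) ** 2 for t in times)
    seg = sum(f_e(t) * f_g(t) for t in times)
    sre = sum(r * f_e(t) for t, r in zip(times, rates))
    srg = sum(r * f_g(t) for t, r in zip(times, rates))
    det = see * sgg - seg ** 2
    a = (sgg * sre - seg * srg) / det
    b = (see * srg - seg * sre) / det
    return a, b

# Synthetic, noise-free data with known ground truth a=2 (electron), b=5 (gamma).
f_e = lambda t: math.exp(-t)  # decaying electron basis (assumption)
f_g = lambda t: 1.0           # constant gamma basis (assumption)
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
rs = [2.0 * f_e(t) + 5.0 * f_g(t) for t in ts]
a, b = fit_two_components(ts, rs, f_e, f_g)
print(round(a, 6), round(b, 6))  # recovers 2.0 and 5.0
```

The recovered ratio a/b then plays the role of the ground-truth electron-to-gamma ratio used to evaluate detectors and discriminators.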
Accurate delineation of the boundary between cancerous and healthy tissue during cancer resection surgeries is important to ensure complete removal of cancerous cells while preserving healthy tissue. Labeling cancer cells with radiotracers, and then using a probe during surgery to detect the radiotracer distribution, is a potential solution for accurate tumor localization and hence better surgical outcomes. This work explores the feasibility of using deep learning to reconstruct a radiotracer distribution from data acquired by an intraoperative probe. The probe’s sensor array outputs (SAOs), obtained by scanning the probe over a region of interest, are supplied to the deep network, which then outputs a reconstructed radiotracer distribution for the region of interest. This initial work demonstrates that the deep network used here, a convolutional encoder–decoder (CED), can successfully reconstruct simulated 2-D radiotracer distributions from synthesized input data. However, the network was unable to generalize reliably when tested with count levels not present in the training set. Therefore, the network must be trained with desired count levels or else should include estimation of epistemic uncertainty to avoid misleading outcomes. We also show that test-time augmentation can improve reconstructed image quality, and hence can also be used to reduce the amount of training data required.
The challenge in delineating the boundary between cancerous and healthy tissue during cancer resection surgeries can be addressed with the use of intraoperative probes to detect cancer cells labelled with radiotracers to facilitate excision. In this study, deep learning algorithms for background gamma ray signal rejection were explored for an intraoperative probe utilising CMOS monolithic active pixel sensors optimised towards the detection of internal conversion electrons from 99mTc. Two methods utilising convolutional neural networks (CNNs) were explored for beta-gamma discrimination: 1) classification of event clusters isolated from the sensor array outputs (SAOs) from the probe and 2) semantic segmentation of event clusters within an acquisition frame of an SAO. The feasibility of the methods in this study was explored with several radionuclides including 14C, 57Co and 99mTc. Overall, the classification deep network is able to achieve an improved area under the curve (AUC) of the receiver operating characteristic (ROC), giving 0.93 for 14C beta and 99mTc gamma clusters, compared to 0.88 for a more conventional feature-based discriminator. Further optimisation of the lower left region of the ROC by using a customised AUC loss function during training led to an improvement of 33% in sensitivity at low false positive rates compared to the conventional method. The segmentation deep network is able to achieve a mean Dice score of 0.93. Through the direct comparison of all methods, the classification method was found to have a better performance in terms of the AUC.
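The AUC figure of merit quoted above has a direct probabilistic reading: it is the probability that a randomly chosen beta (signal) cluster scores higher than a randomly chosen gamma (background) cluster. A hedged sketch via the Mann-Whitney statistic, using made-up toy scores rather than the study's classifier outputs:

```python
def roc_auc(signal_scores, background_scores):
    """AUC of the ROC via the Mann-Whitney statistic; ties count half."""
    wins = 0.0
    for s in signal_scores:
        for b in background_scores:
            if s > b:
                wins += 1.0
            elif s == b:
                wins += 0.5
    return wins / (len(signal_scores) * len(background_scores))

betas = [0.9, 0.8, 0.7, 0.4]   # toy discriminator scores for beta clusters
gammas = [0.6, 0.3, 0.2, 0.1]  # toy scores for gamma clusters
print(roc_auc(betas, gammas))  # 0.9375: 15 of 16 pairs ranked correctly
```

A perfect discriminator scores 1.0; the study's customised AUC loss additionally targets the lower-left (low false positive rate) region of the same curve, which the scalar AUC does not capture on its own.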
The challenge in delineating the boundary between cancerous and healthy tissue during cancer resection surgeries can be addressed with the use of intraoperative probes to detect cancer cells labeled with radiotracers to facilitate excision. In this study, deep learning algorithms for background gamma ray signal rejection were explored for an intraoperative probe utilizing CMOS monolithic active pixel sensors optimized toward the detection of internal conversion electrons from 99mTc. Two methods utilizing convolutional neural networks (CNNs) were explored for beta-gamma discrimination: 1) classification of event clusters isolated from the sensor array outputs (SAOs) from the probe and 2) semantic segmentation of event clusters within an acquisition frame of an SAO which provides spatial information on the classification. The feasibility of the methods in this study was explored with several radionuclides including 14C, 57Co, and 99mTc. Overall, the classification deep network is able to achieve an improved area under the curve (AUC) of the receiver operating characteristic (ROC), giving 0.93 for 14C beta and 99mTc gamma clusters, compared to 0.88 for a more conventional feature-based discriminator. Further optimization of the lower left region of the ROC by using a customized AUC loss function during training led to an improvement of 31% in sensitivity at low false positive rates compared to the conventional method. The segmentation deep network is able to achieve a mean Dice score of 0.93. Through the direct comparison of all methods, the classification method was found to have a better performance in terms of the AUC.
Deep neural network feed-forward architectures have recently begun exploratory adoption for image reconstruction in radionuclide imaging. However, there is a lack of analysis on the impact of the training data, in terms of object variability and count level, on the quality of the reconstructions. In this study, the effects of diversifying the training data manifold and of test-time augmentation (TTA) are explored for a convolutional encoder-decoder (CED) network intended for the reconstruction of radiotracer distributions using sensor array outputs (SAOs) from a CMOS intraoperative probe. We demonstrate that overfitting occurs when only a single object type is present in the training data, and that including more object types in the training domain improves generalisation, yielding better-quality reconstructions. The CED network was unable to generalise reliably when tested with count levels not present in the training set; therefore the network must be retrained with the desired count levels, or else should include estimation of epistemic uncertainty to avoid misleading outcomes. We show that TTA consistently provides better reconstructions, owing to the merging of multiple reconstructions after dis-augmentation, and can match the nRMSE values achieved with larger amounts of training data, thereby serving as a method to reduce the amount of training data required.