Locally Rotation Invariant (LRI) operators have shown great potential in biomedical texture analysis, where patterns appear at random positions and orientations. LRI operators can be obtained by computing the responses to discrete rotations of local descriptors such as Local Binary Patterns (LBP) or the Scale Invariant Feature Transform (SIFT). Other strategies achieve this invariance using, for instance, the Laplacian of Gaussian or steerable wavelets, which avoids the sampling errors introduced by discretizing the rotations. In this work, we obtain LRI operators via the local projection of the image onto the spherical harmonics basis, followed by the computation of the bispectrum, which shares and extends the invariance properties of the spectrum. We investigate the benefits of using the bispectrum over the spectrum in the design of an LRI layer embedded in a shallow Convolutional Neural Network (CNN) for 3D image analysis. The performance of each design is evaluated on two datasets and compared against a standard 3D CNN. The first dataset consists of 3D volumes of synthetically generated rotated patterns, while the second contains malignant and benign pulmonary nodules in Computed Tomography (CT) images. The results indicate that bispectrum CNNs allow for a significantly better characterization of 3D textures than both the spectral and the standard CNN. In addition, they learn efficiently from fewer training examples and with fewer trainable parameters than a standard convolutional layer.
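To make the two kinds of invariants concrete, the following is a minimal numerical sketch (our own illustration, not the paper's implementation): it computes the rotation-invariant spectrum and a single bispectrum component from local spherical-harmonic coefficients f_{l,m}, assuming these are stored as a dictionary mapping each degree l to a complex array of length 2l+1; the Clebsch-Gordan coefficients come from SymPy, and all function names are ours.

    import numpy as np
    from sympy.physics.quantum.cg import CG  # Clebsch-Gordan coefficients

    def spectrum(coeffs):
        # Rotation-invariant power spectrum: s_l = sum_m |f_{l,m}|^2.
        # coeffs[l] is a complex array of length 2l+1 holding f_{l,m}, m = -l..l.
        return {l: float(np.sum(np.abs(c) ** 2)) for l, c in coeffs.items()}

    def bispectrum_component(coeffs, l1, l2, l):
        # One bispectrum invariant b_{l1,l2,l}; non-zero only if |l1-l2| <= l <= l1+l2.
        b = 0.0 + 0.0j
        for m1 in range(-l1, l1 + 1):
            for m2 in range(-l2, l2 + 1):
                m = m1 + m2
                if abs(m) > l:
                    continue
                cg = float(CG(l1, m1, l2, m2, l, m).doit())
                b += cg * coeffs[l1][m1 + l1] * coeffs[l2][m2 + l2] * np.conj(coeffs[l][m + l])
        return b

Unlike the spectrum, which discards all relative phase between degrees, the bispectrum couples two degrees l1 and l2 into a third degree l, which is what allows it to retain more discriminative information while remaining rotation invariant.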
Manual segmentation of lesions, required for radiotherapy planning and follow-up, is time-consuming and error-prone. Automatic detection and segmentation can assist radiologists in these tasks. This work explores the automated detection and segmentation of brain metastases (BMs) in longitudinal MRIs. It focuses on several important aspects: identifying and segmenting new lesions for screening and treatment planning, re-segmenting lesions in successive images using prior lesion locations as an additional input channel, and performing multi-component segmentation to distinguish between enhancing tissue, edema, and necrosis. The retrospective data include 518 metastases in 184 contrast-enhanced T1-weighted MRIs originating from 49 patients (63% male, 37% female). Of these, 131 time-points (36 patients, 418 BMs) are used for cross-validation and the remaining 53 time-points (13 patients, 100 BMs) for testing. The lesions were manually delineated with label 1: enhancing lesion, label 2: edema, and label 3: necrosis. One-tailed t-tests are used to compare model performance across multiple segmentation and detection metrics, with significance considered at p < 0.05. A Dice Similarity Coefficient (DSC) of 0.786 and an F1-score of 0.804 are obtained for the segmentation of new lesions. On follow-up scans, the re-segmentation model significantly outperforms the segmentation model (DSC 0.777 vs 0.559; F1-score 0.877 vs 0.604). The re-segmentation model also significantly outperforms the simple segmentation model on the enhancing lesion (DSC 0.761 vs 0.525) and edema (0.524 vs 0.465) components, while similar scores are obtained on the necrosis component (0.622 vs 0.627). Additionally, we analyze the correlation between lesion size and segmentation performance, which various studies have highlighted as a particular challenge for small lesions. Our findings indicate that this correlation disappears when the re-segmentation approach is used and performance is evaluated with the unbiased normalized DSC. In conclusion, the automated segmentation of new lesions and subsequent re-segmentation in follow-up images was achievable, with a high level of performance obtained for single- and multi-component segmentation tasks.
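As a hedged illustration of the two ingredients this abstract relies on (our own sketch; function and variable names are not from the study), the listing below computes the Dice Similarity Coefficient between two binary masks and builds the two-channel re-segmentation input by stacking the follow-up image with the lesion mask propagated from the previous time point.

    import numpy as np

    def dice(pred, gt):
        # Dice Similarity Coefficient between two binary masks of equal shape.
        pred, gt = pred.astype(bool), gt.astype(bool)
        denom = pred.sum() + gt.sum()
        return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

    def resegmentation_input(followup_t1c, prior_lesion_mask):
        # Stack the follow-up contrast-enhanced T1 volume with the prior lesion
        # mask (propagated from the previous time point) as a second channel.
        return np.stack([followup_t1c,
                         prior_lesion_mask.astype(followup_t1c.dtype)], axis=0)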
Background: Effective follow-up of brain metastasis (BM) patients after treatment is crucial for adapting therapies and detecting new lesions. Current guidelines (Response Assessment in Neuro-Oncology-BM, RANO-BM) have limitations, such as patient-level assessment and arbitrary lesion selection, which may not reflect outcomes in cases with high tumor burden. Accurate, reproducible, and automated response assessment can improve follow-up decisions by (1) optimizing re-treatment timing, avoiding treatment of responding lesions or delays in treating progressive ones, and (2) enhancing the precision of response evaluation in clinical trials. Methods: We compared manual and automatic (deep learning-based) lesion contouring using unidimensional and volumetric criteria. The analysis focused on (1) agreement in lesion size and RANO-BM response categories, (2) stability of measurements under scanner rotations and over time, and (3) predictability of 1-year outcomes. The study included 49 BM patients, with 184 MRI studies and 448 lesions retrospectively assessed by radiologists. Results: Automatic contouring and volumetric criteria demonstrated superior stability (P < .001 for rotation; P < .05 over time) and better outcome predictability compared with manual methods, reducing observer variability and offering reliable and efficient response assessment. The best outcome predictability, defined as 1-year response, was achieved using automatic contours and volumetric measurements. These findings highlight the potential of automated tools to streamline clinical workflows and provide consistency across evaluators, regardless of expertise. Conclusion: Automatic BM contouring and volumetric measurements are promising tools to improve follow-up and treatment decisions in BM management. By enhancing precision and reproducibility, these methods can streamline clinical workflows and improve response evaluation in trials and practice.
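To illustrate the two response measures being compared, the sketch below (our hedged example, not the study's code; the RANO-BM measurement protocol is simplified here) derives a lesion volume and an approximate longest axial diameter from a binary lesion mask with known voxel spacing.

    import numpy as np
    from scipy.spatial.distance import pdist

    def lesion_volume_ml(mask, voxel_spacing_mm):
        # Volumetric measure: count foreground voxels and convert mm^3 to mL.
        return mask.sum() * np.prod(voxel_spacing_mm) / 1000.0

    def longest_axial_diameter_mm(mask, in_plane_spacing_mm):
        # Unidimensional (RANO-BM-style) measure, approximated as the largest
        # pairwise distance between lesion voxels on any axial slice
        # (axis 0 is assumed to index axial slices).
        best = 0.0
        for z in range(mask.shape[0]):
            coords = np.argwhere(mask[z])  # (row, col) voxel indices
            if len(coords) < 2:
                continue
            dists = pdist(coords * np.asarray(in_plane_spacing_mm, dtype=float))
            best = max(best, float(dists.max()))
        return best

The volumetric measure uses every segmented voxel, which is one reason it is less sensitive than a single diameter to how the scan happens to be oriented.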
Objectives: Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging, however, predominantly diverge from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are standard, as reflected in the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). A comparably large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments has so far been missing. Methods: We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, the majority of shapes are modeled directly on the imaging data of real patients. We present use cases in brain tumor classification, skull reconstruction, multi-class anatomy completion, education, and 3D printing. Results: To date, MedShapeNet includes 23 datasets with more than 100,000 shapes paired with annotations (ground truth). The data are freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks, as well as for various applications in virtual, augmented, or mixed reality and 3D printing. Conclusions: MedShapeNet contains medical shapes of anatomical structures and surgical instruments and will continue to collect data for benchmarks and applications. The project page is: https://medshapenet.ikim.nrw/ .
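As a hedged illustration of working with such shapes (this is not the MedShapeNet Python API; the file name is hypothetical and trimesh is our own choice of library), the sketch below loads a downloaded surface mesh and converts it to two of the representations mentioned above, a point cloud and a voxel grid.

    import trimesh

    # Hypothetical local file, e.g. a skull mesh downloaded via the web interface.
    mesh = trimesh.load("medshapenet_skull_0001.stl")

    print("vertices:", mesh.vertices.shape, "faces:", mesh.faces.shape)
    if mesh.is_watertight:
        print("volume (mm^3):", mesh.volume)

    # Point-cloud representation: sample 2048 points on the surface.
    points = mesh.sample(2048)

    # Voxel-grid representation at 1 mm resolution (dense boolean occupancy array).
    voxels = mesh.voxelized(pitch=1.0).matrix
    print("point cloud:", points.shape, "voxel grid:", voxels.shape)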