Background and Objective: The goal of this work is to propose a robust, fast, and fully automatic method for personalized cranial defect reconstruction and implant modeling. Methods: We propose a two-step deep learning-based method using a modified U-Net architecture to perform the defect reconstruction, and a dedicated iterative procedure to improve the implant geometry, followed by automatic generation of models ready for 3-D printing. We propose a cross-case augmentation based on imperfect image registration combining cases from different datasets. We perform ablation studies regarding different augmentation strategies and compare them to other state-of-the-art methods. Results: We evaluate the method on three datasets introduced during the AutoImplant 2021 challenge, organized jointly with the MICCAI conference. We perform the quantitative evaluation using the Dice and boundary Dice coefficients, and the Hausdorff distance. The average Dice coefficient, boundary Dice coefficient, and 95th percentile of the Hausdorff distance are 0.91, 0.94, and 1.53 mm, respectively. We perform an additional qualitative evaluation by 3-D printing and visualization in mixed reality to confirm the implants' usefulness. Conclusion: We propose a complete pipeline that enables one to create a cranial implant model ready for 3-D printing. The described method is a greatly extended version of the method that scored 1st place in all AutoImplant 2021 challenge tasks. We freely release the source code which, together with the open datasets, makes the results fully reproducible. The automatic reconstruction of cranial defects may enable manufacturing personalized implants in a significantly shorter time, possibly allowing one to perform the 3-D printing process directly during a given intervention. Moreover, we show the usability of the defect reconstruction in mixed reality, which may further reduce the surgery time.
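To make the implant-modeling step concrete, the sketch below shows one common way to derive an implant once the complete skull has been reconstructed: subtract the defective input from the reconstructed volume, then apply simple morphological cleanup. This is a minimal illustration assuming binary 3-D volumes; the function names and the cleanup details are ours and stand in for, rather than reproduce, the paper's dedicated iterative refinement procedure.

```python
# Minimal sketch of implant extraction from a reconstructed skull.
# Assumes binary 3-D numpy volumes of identical shape; names are illustrative.
import numpy as np
from scipy import ndimage

def extract_implant(reconstructed: np.ndarray, defective: np.ndarray) -> np.ndarray:
    """Approximate the implant as reconstructed minus defective, then clean up."""
    implant = np.logical_and(reconstructed, np.logical_not(defective))
    # Keep only the largest connected component to remove spurious voxels.
    labels, num = ndimage.label(implant)
    if num > 1:
        sizes = ndimage.sum(implant, labels, range(1, num + 1))
        implant = labels == (np.argmax(sizes) + 1)
    # Light morphological closing to smooth the implant boundary.
    implant = ndimage.binary_closing(implant, iterations=2)
    return implant.astype(np.uint8)
```

The resulting binary volume can then be converted to a surface mesh (e.g., via marching cubes) to obtain a model ready for 3-D printing.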
Manual segmentation of lesions, required for radiotherapy planning and follow-up, is time-consuming and error-prone. Automatic detection and segmentation can assist radiologists in these tasks. This work explores the automated detection and segmentation of brain metastases (BMs) in longitudinal MRIs. It focuses on several important aspects: identifying and segmenting new lesions for screening and treatment planning, re-segmenting lesions in successive images using prior lesion locations as an additional input channel, and performing multi-component segmentation to distinguish between enhancing tissue, edema, and necrosis. The key component of the proposed approach is to propagate the lesion mask from the previous time point to improve the detection performance, which we refer to as "re-segmentation". The retrospective data include 518 metastases in 184 contrast-enhanced T1-weighted MRIs originating from 49 patients (63% male, 37% female). 131 time points (36 patients, 418 BMs) are used for cross-validation, while the remaining 53 time points (13 patients, 100 BMs) are used for testing. The lesions were manually delineated with three labels: 1, enhancing lesion; 2, edema; and 3, necrosis. One-tailed t-tests are used to compare model performance across multiple segmentation and detection metrics, with significance defined as p < 0.05. A Dice Similarity Coefficient (DSC) of 0.79 and
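The sketch below illustrates the input construction behind the "re-segmentation" idea described above: the lesion mask from the previous time point is stacked with the current contrast-enhanced T1 image as a second input channel. The tensor shapes and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the re-segmentation input: prior mask as an extra channel.
import torch

def build_resegmentation_input(t1_current: torch.Tensor,
                               prior_mask: torch.Tensor) -> torch.Tensor:
    """t1_current: (D, H, W) intensity volume; prior_mask: (D, H, W) binary mask.
    Returns a (1, 2, D, H, W) tensor: a batch of one with two input channels."""
    x = torch.stack([t1_current, prior_mask.float()], dim=0)
    return x.unsqueeze(0)

# Any 3-D segmentation network configured for two input channels can consume
# this, e.g. a U-Net whose first layer is torch.nn.Conv3d(2, 32, 3, padding=1).
```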
This study presents an approach to Parkinson's disease detection using vowels with sustained phonation and a ResNet architecture originally dedicated to image classification. We computed spectrograms of the audio recordings and used them as image inputs to a ResNet pre-trained using the ImageNet and SVD databases. To prevent overfitting, the dataset was strongly augmented in the time domain. The Parkinson's dataset (from the PC-GITA database) consists of 100 patients (50 healthy, 50 diagnosed with Parkinson's disease), each recorded 3 times. The obtained accuracy on the validation set is above 90%, which is comparable to the current state-of-the-art methods. The results are promising because features learned on natural images turn out to transfer to artificial images representing spectrograms of the voice signal. Moreover, we showed that it is possible to successfully detect Parkinson's disease using only frequency-based features. A spectrogram is a visual representation of the frequency spectrum of a signal that allows one to follow the frequency changes of the signal over time.
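The sketch below shows the general spectrogram-to-ResNet pipeline described above, using standard torchaudio and torchvision calls. The preprocessing choices (mel scale, 224x224 resizing, the specific ResNet depth) and the file name are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch: audio -> spectrogram image -> pre-trained ResNet classifier.
import torch
import torchaudio
import torchvision

waveform, sr = torchaudio.load("sustained_vowel.wav")  # hypothetical recording
spec = torchaudio.transforms.MelSpectrogram(sample_rate=sr, n_mels=128)(waveform)
spec = torchaudio.transforms.AmplitudeToDB()(spec)

# Replicate the single spectrogram channel to the 3 channels ResNet expects
# and resize to the ImageNet input resolution.
img = spec.expand(3, -1, -1).unsqueeze(0)
img = torch.nn.functional.interpolate(img, size=(224, 224), mode="bilinear")

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # healthy vs. Parkinson's
logits = model(img)
```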
Inverting a deformation field is a crucial step in numerous image registration methods and has an important impact on the final registration results. Existing methods work well for small and relatively simple deformations. However, a problem arises when the deformation field consists of complex and large deformations, potentially including folding. For such cases, the state-of-the-art methods fail and the inversion results are unpredictable. In this article, we propose a deep network based on an encoder-decoder architecture to improve the inverse calculation. The network is trained on deformations randomly generated with various transformation models and their compositions, using a symmetric inverse consistency error as the cost function. The results are validated on synthetic deformations resembling real ones, as well as on deformation fields calculated during registration of real histology data. We show that the proposed method provides an approximate inverse with a lower error than the current state-of-the-art methods.
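To make the cost function concrete, the sketch below implements a common 2-D formulation of a symmetric inverse consistency error: composing the forward field with the predicted inverse (and vice versa) should yield the identity, i.e. zero displacement. The composition and sampling details follow a standard convention and are not necessarily the authors' exact code.

```python
# Minimal sketch of a symmetric inverse-consistency loss for displacement fields.
import torch
import torch.nn.functional as F

def compose(u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Compose displacement fields: (u o v)(x) = u(x + v(x)) + v(x).
    u, v: (N, 2, H, W) displacements in normalized [-1, 1] coordinates,
    channel 0 = x-displacement, channel 1 = y-displacement."""
    n, _, h, w = v.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Sample u at locations x + v(x); grid_sample expects (N, H, W, 2) grids.
    warped_grid = grid + v.permute(0, 2, 3, 1)
    u_at_v = F.grid_sample(u, warped_grid, align_corners=True)
    return u_at_v + v

def symmetric_inverse_consistency(u: torch.Tensor, v_pred: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of both compositions from zero displacement."""
    return compose(u, v_pred).abs().mean() + compose(v_pred, u).abs().mean()
```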
A heart-convolutional neural network (heart-CNN) was designed and tested for the automatic classification of chest radiographs in dogs affected by myxomatous mitral valve disease (MMVD) at different stages of disease severity. A retrospective, multicenter study was conducted. Lateral radiographs of dogs with concomitant X-ray and echocardiographic examinations were selected from the internal databases of two institutions. Dogs were classified as healthy, B1, B2, C, and D, based on the American College of Veterinary Internal Medicine (ACVIM) guidelines, and as healthy, mild, moderate, severe, and late stage, based on the Mitral INsufficiency Echocardiographic (MINE) score. Heart-CNN performance was evaluated using confusion matrices, receiver operating characteristic curves, and t-SNE and UMAP analyses. The area under the curve (AUC) was 0.88, 0.88, 0.79, 0.89, and 0.84 for healthy dogs and ACVIM stages B1, B2, C, and D, respectively. According to the MINE score, the AUC was 0.90, 0.86, 0.71, 0.82, and 0.82 for healthy, mild, moderate, severe, and late stage, respectively. The developed algorithm showed good accuracy in predicting MMVD stages under both classification systems, proving to be a potentially useful tool for the early diagnosis of canine MMVD.
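The per-class AUC values reported above correspond to a one-vs-rest evaluation of a multi-class classifier; the sketch below shows how such values are typically computed with standard scikit-learn calls. The placeholder labels and scores are illustrative, not the study's data.

```python
# Minimal sketch of per-class (one-vs-rest) ROC AUC evaluation.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

classes = ["healthy", "B1", "B2", "C", "D"]
# y_true: integer class indices; y_score: (n_samples, 5) class probabilities
# from the trained classifier (both placeholders here).
y_true = np.array([0, 1, 2, 3, 4, 0, 2])
y_score = np.random.rand(7, 5)
y_score /= y_score.sum(axis=1, keepdims=True)

y_bin = label_binarize(y_true, classes=range(len(classes)))
for i, name in enumerate(classes):
    auc = roc_auc_score(y_bin[:, i], y_score[:, i])
    print(f"AUC ({name} vs. rest): {auc:.2f}")
```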
This study introduces an innovative multimodal strategy for diagnosing neurodegenerative disorders by integrating Augmented/Mixed Reality. Our primary focus involves harnessing AR goggles to capture Parkinson's Disease symptoms, reshaping our comprehension of brain-related disorders. Through meticulous analysis of sensor data covering hand movements, gait, and gaze, our system offers a transformative approach to patient care. At its core, our work features an autonomous, game-like experience: a 30-minute journey encompassing 17 varied tasks that seamlessly integrate motor skill, gait, and gaze exercises, speech tasks, memory challenges, and cognitive exercises. This gamified platform, coupled with clinical tests, delivers a holistic assessment of patients' physical and cognitive capabilities. By automating tasks and leveraging AR glasses for enhanced patient comfort, our approach not only captivates patients but also emerges as a potent diagnostic tool, heralding an era of interactive neurodiagnostics where innovation converges with patient engagement.
The aim of this study was to develop and test an artificial intelligence (AI)-based algorithm for detecting common technical errors in canine thoracic radiography. The algorithm was trained using a database of thoracic radiographs from three veterinary clinics in Italy, which were evaluated for image quality by three experienced veterinary diagnostic imagers. The algorithm was designed to classify the images as correct or as having one or more of the following errors: rotation, underexposure, overexposure, incorrect limb positioning, incorrect neck positioning, blurriness, cut-off, or the presence of foreign objects or medical devices. The algorithm correctly identified errors in thoracic radiographs with an overall accuracy of 81.5% in latero-lateral and 75.7% in sagittal images. The most accurately identified errors were limb mispositioning and underexposure in both latero-lateral and sagittal images. The accuracy of the developed model in the classification of technically correct radiographs was fair in latero-lateral and good in sagittal images. The authors conclude that their AI-based algorithm is a promising tool for improving the accuracy of radiographic interpretation by identifying technical errors in canine thoracic radiographs.
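Since a radiograph may be correct or show one or more errors simultaneously, the task naturally maps to multi-label classification with an independent sigmoid output per error type. The sketch below shows this setup; the backbone, threshold, and all names are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of a multi-label technical-error classifier for radiographs.
import torch
import torch.nn as nn
import torchvision

ERRORS = ["rotation", "underexposure", "overexposure", "limb_positioning",
          "neck_positioning", "blurriness", "cut_off", "foreign_object",
          "medical_device"]

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(ERRORS))

x = torch.randn(1, 3, 224, 224)           # one preprocessed radiograph
probs = torch.sigmoid(model(x))           # independent probability per error
predicted = [e for e, p in zip(ERRORS, probs[0]) if p > 0.5]
print(predicted or ["correct"])           # no error above threshold => correct
```

Training such a model would typically use nn.BCEWithLogitsLoss, which treats each error type as an independent binary decision.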