Microscopic cellular image segmentation has become one of the most important routine procedures in modern biological applications. The segmentation task is non-trivial, however, mainly due to imaging artifacts that cause highly inhomogeneous appearances of cell nuclei and background, with large intensity variations within and across images. Such inconsistent appearance profiles cause the intensity features of cell nuclei and background pixels to overlap and hence lead to misclassification. In this paper, we present a novel method for automatic cell nucleus segmentation that focuses on tackling the intensity inhomogeneity issue. A two-level approach is designed to enhance the discriminative power of intensity features: first, a reference-based intensity normalization reduces the inter-image variations; then, a localized object discrimination overcomes the intra-image variations. The proposed method is evaluated on three different sets of 2D fluorescence microscopy images, and encouraging performance improvements over state-of-the-art results are obtained.
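The first level of this approach (reducing inter-image variation) can be illustrated with a standard histogram-matching step against a chosen reference image. The sketch below is a minimal example of that idea; the function name `match_to_reference` and the choice of plain histogram matching are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of a reference-based intensity normalization step
# (histogram matching to a chosen reference image). Names and the use of
# plain histogram matching are illustrative assumptions.
import numpy as np

def match_to_reference(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map the intensity distribution of `image` onto that of `reference`."""
    img_vals, img_idx, img_counts = np.unique(
        image.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Empirical CDFs of the source and reference images.
    img_cdf = np.cumsum(img_counts) / image.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source intensity, find the reference intensity whose CDF
    # value is closest (standard histogram matching).
    matched_vals = np.interp(img_cdf, ref_cdf, ref_vals)
    return matched_vals[img_idx].reshape(image.shape)
```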
Multi-modality positron emission tomography and computed tomography (PET-CT) imaging depicts biological and physiological functions (from PET) within a higher-resolution anatomical reference frame (from CT). The need to efficiently assimilate the information from these co-aligned volumes simultaneously has resulted in 3D visualisation methods that depict, for example, a slice of interest (SOI) from PET combined with direct volume rendering (DVR) of CT. However, because DVR renders the whole volume, regions of interest (ROIs) such as tumours that are embedded within the volume may be occluded from view. Volume clipping is typically used to remove occluding structures by 'cutting away' parts of the volume; this involves tedious trial-and-error tweaking of the clipping attempts until a satisfactory visualisation is achieved, thus restricting its application. Hence, we propose a new automated opacity-driven volume clipping method for PET-CT DVR-SOI visualisation. Our method dynamically calculates the volume clipping depth by considering the opacity information of the CT voxels in front of the PET SOI, thereby ensuring that only the relevant anatomical information from the CT is visualised without impairing the visibility of the PET SOI. We outline the improvements of our method when compared with conventional 2D and traditional DVR-SOI visualisations.
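A minimal sketch of how such an opacity-driven clipping depth could be computed is given below, assuming an axis-aligned view direction, a per-voxel CT opacity volume `alpha`, and a visibility threshold `tau`; these simplifications and the helper `clipping_depth` are assumptions for illustration, not the published implementation.

```python
# Minimal sketch: per-ray clipping depth in front of a PET SOI, assuming the
# viewer looks along axis 0 of a CT opacity volume `alpha` of shape (D, H, W)
# and the SOI sits at depth index `soi_depth` (> 0). The threshold `tau` and
# the axis-aligned simplification are assumptions for illustration.
import numpy as np

def clipping_depth(alpha: np.ndarray, soi_depth: int, tau: float = 0.6) -> np.ndarray:
    """Per-ray depth index below which CT voxels are clipped away."""
    d, h, w = alpha.shape
    depth = np.zeros((h, w), dtype=int)      # 0 means nothing is clipped on that ray
    front = alpha[:soi_depth]                 # CT voxels between the viewer and the SOI

    # Accumulated opacity of the slab adjacent to the SOI, grown towards the viewer.
    accumulated = 1.0 - np.cumprod(1.0 - front[::-1], axis=0)
    occluding = accumulated > tau             # True once the SOI would be hidden
    first = np.argmax(occluding, axis=0)      # first step (counted from the SOI) that occludes
    hit = occluding.any(axis=0)
    depth[hit] = soi_depth - first[hit]       # clip every voxel with index < depth on that ray
    return depth
```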
Dynamic medical imaging is usually limited in application due to the large radiation doses and long image scanning and reconstruction times. Existing methods attempt to shorten the acquired dynamic sequence by interpolating volumes between the acquired image volumes. However, these methods are limited to 2D images and/or are unable to support large variations in motion between the image volume sequences. In this paper, we present a spatiotemporal volumetric interpolation network (SVIN) designed for 4D dynamic medical images. SVIN introduces dual networks: the first is a spatiotemporal motion network that leverages a 3D convolutional neural network (CNN) for unsupervised parametric volumetric registration to derive a spatiotemporal motion field from two image volumes; the second is a sequential volumetric interpolation network, which uses the derived motion field to interpolate image volumes, together with a new regression-based module to characterize the periodic motion cycles in functional organ structures. We also introduce an adaptive multi-scale architecture to capture large volumetric anatomical motions. Experimental results demonstrated that our SVIN outperformed state-of-the-art temporal medical interpolation methods and natural video interpolation methods that have been extended to support volumetric images. Our ablation study further showed that our motion network better represents large functional motion than state-of-the-art unsupervised medical registration methods.
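The interpolation step hinges on warping an image volume with the derived motion field. The sketch below shows one hedged way to do that with trilinear resampling in PyTorch; scaling the field by a fraction `t` to reach an intermediate time point and the helper `warp_volume` are assumptions for illustration rather than the SVIN code.

```python
# Minimal sketch of warping a 3D image volume with a dense displacement field,
# the basic operation a volumetric interpolation network relies on. The channel
# convention (dx, dy, dz) in voxel units and the scaling by `t` are assumptions.
import torch
import torch.nn.functional as F

def warp_volume(volume: torch.Tensor, flow: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    """volume: (N, C, D, H, W); flow: (N, 3, D, H, W) displacements (dx, dy, dz) in voxels."""
    n, _, d, h, w = volume.shape
    device = volume.device

    # Identity sampling grid in voxel coordinates, stored as (x, y, z).
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, device=device),
        torch.arange(h, device=device),
        torch.arange(w, device=device),
        indexing="ij")
    grid = torch.stack((xx, yy, zz), dim=-1).float()                 # (D, H, W, 3)

    # Displace the grid by a fraction `t` of the motion field.
    grid = grid.unsqueeze(0) + t * flow.permute(0, 2, 3, 4, 1)       # (N, D, H, W, 3)

    # Normalise to [-1, 1] as required by grid_sample, then resample trilinearly.
    sizes = torch.tensor([w, h, d], dtype=torch.float32, device=device)
    grid = 2.0 * grid / (sizes - 1.0) - 1.0
    return F.grid_sample(volume, grid, mode="bilinear", align_corners=True)
```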
Objectives: To develop an accurate segmentation method for primary lung tumors, in particular when the tumor has heterogeneous uptake on PET and the boundary is difficult to discern on CT. Methods: In our MGM, the tumor-background likelihood (TBL) is calculated from CT and the topology information is extracted from PET. The model is developed in three stages. Stage 1: extraction of information, including (a) topology, to reflect the inclusion or exclusion relations of regions; the topology was extracted by representing PET as a contour tree [1]; and (b) the TBL, estimated as the joint intensity similarity and spatial distance, defined as the shortest Euclidean distance between a pixel and the tumor/background labels. The higher the distance cost between a pixel and the seeds, the lower the likelihood. Stage 2: the MGM was constructed with an intensity graph that incorporates PET SUVs for tumor identification and the TBL for anatomical boundary delineation. A topology graph, based on the contour tree, provided information for inhomogeneous region grouping, and an inter-graph was then derived to propagate the regional grouping information to the pixel level and to provide an appropriate classification of the inhomogeneous FDG distribution within the tumor. Stage 3: tumor segmentation with the MGM used a Random Walk (RW) [2] framework. We validated our method on 40 NSCLC patient datasets with manual delineation by a clinical expert. The volumetric overlap was measured by Dice's similarity coefficient (DSC). Results: Our method achieved a better average DSC of 0.842 ± 0.050 when compared with 7 other approaches: SUV-2.5 (0.671 ± 0.120), 50% SUVmax (0.603 ± 0.098), an adaptive threshold based on mean SUV (0.574 ± 0.193), FCM (0.608 ± 0.209), TCD [4] (0.723 ± 0.086), RW from CT (0.787 ± 0.072), and TBLM [5] from PET-CT (0.813 ± 0.069). Conclusions: Our MGM improved segmentation accuracy for the identification of primary lung tumors where the tumors had indistinct margins and inhomogeneous FDG uptake.
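A minimal sketch of a TBL-style term of the kind described in Stage 1(b) is shown below: it combines CT intensity similarity to the seed regions with the shortest Euclidean distance to the nearest seed. The Gaussian intensity similarity, the `1 / (1 + distance)` weighting, and the helper `tumor_background_likelihood` are assumptions used for illustration, not the exact formulation of the abstract.

```python
# Minimal sketch of a tumor-background likelihood (TBL) term combining intensity
# similarity to the seeds with the Euclidean distance to the nearest seed.
# The Gaussian kernel and the 1/(1 + dist) weighting are assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt

def tumor_background_likelihood(ct, tumor_seeds, background_seeds, sigma=30.0):
    """ct: 2D/3D CT array; *_seeds: boolean masks of tumor/background labels."""
    likelihoods = {}
    for name, seeds in (("tumor", tumor_seeds), ("background", background_seeds)):
        # Shortest Euclidean distance from every voxel to this seed set.
        dist = distance_transform_edt(~seeds)
        # Intensity similarity to the mean seed intensity (Gaussian kernel).
        mean_int = ct[seeds].mean()
        sim = np.exp(-((ct - mean_int) ** 2) / (2.0 * sigma ** 2))
        # Higher distance cost to the seeds -> lower likelihood.
        likelihoods[name] = sim / (1.0 + dist)
    return likelihoods["tumor"], likelihoods["background"]
```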