A fully automatic system for segmentation of the liver from CT scans is presented. The core of the method consists of a voxel labeling procedure where the probability that each voxel is part of the liver is estimated using a statistical classifier (k-nearest-neighbor) and a set of features. Several features encode positional information, obtained from a multi-atlas registration procedure. In addition, pre-processing steps are carried out to determine the vertical scan range of the liver and to rotate the scan so that the subject is in supine position, and post-processing is applied to the voxel classification result to smooth and improve the final segmentation. The method is evaluated on 10 test scans and performs robustly, as the volumetric overlap error is 12.5% on average and 15.3% for the worst case. A careful inspection of the results reveals, however, that locally many errors are made and the localization of the border is often not precise. The causes and possible solutions for these failures are briefly discussed.
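The core voxel-labeling step can be illustrated with a minimal sketch: a k-nearest-neighbor classifier that estimates the liver probability of a voxel as the fraction of its k nearest training samples carrying the liver label. The three-dimensional feature vectors and the class distributions below are hypothetical stand-ins; the paper's actual feature set (including the atlas-based positional features) is richer.

```python
import numpy as np

def knn_liver_probability(train_feats, train_labels, voxel_feats, k=15):
    # For each voxel, the liver probability is the fraction of its k
    # nearest training samples (Euclidean distance in feature space)
    # that carry the liver label (1 = liver, 0 = background).
    d = np.linalg.norm(voxel_feats[:, None, :] - train_feats[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return train_labels[nearest].mean(axis=1)

rng = np.random.default_rng(0)
# Hypothetical 3-D feature vectors (e.g. intensity plus two positional
# features), drawn from two synthetic clusters.
bg = rng.normal(0.0, 1.0, (200, 3))      # background samples
liver = rng.normal(3.0, 1.0, (200, 3))   # liver samples
X = np.vstack([bg, liver])
y = np.array([0] * 200 + [1] * 200)

# Unseen voxels drawn near the liver cluster get high probabilities.
p = knn_liver_probability(X, y, rng.normal(3.0, 1.0, (5, 3)))
```

In the full pipeline these soft probabilities would then be smoothed and thresholded by the post-processing stage described above.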
Removing the bias and variance of multicentre data has long been a challenge in large-scale digital healthcare studies, which require the ability to integrate clinical features extracted from data acquired by different scanners and protocols to improve stability and robustness. Previous studies have described various computational approaches for fusing single-modality multicentre datasets. However, these surveys rarely focused on evaluation metrics and lacked a checklist for computational data harmonisation studies. In this systematic review, we summarise computational data harmonisation approaches for multi-modality data in the digital healthcare field, including harmonisation strategies and evaluation metrics based on different theories. In addition, we propose a comprehensive checklist that summarises common practices for data harmonisation studies, to guide researchers in reporting their findings more effectively. Finally, we propose flowcharts presenting possible ways to select methodologies and metrics, and we survey the limitations of different methods to inform future research.
Lung segmentation is a prerequisite for automated analysis of chest CT scans. Conventional lung segmentation methods rely on large attenuation differences between lung parenchyma and surrounding tissue. These methods fail in scans where dense abnormalities are present, which often occurs in clinical data. Some methods to handle these situations have been proposed, but they are too time consuming or too specialized to be used in clinical practice. In this article, a new hybrid lung segmentation method is presented that automatically detects failures of a conventional algorithm and, when needed, resorts to a more complex algorithm, which is expected to produce better results in abnormal cases. In a large quantitative evaluation on a database of 150 scans from different sources, the hybrid method is shown to perform substantially better than a conventional approach at a relatively low increase in computational cost.
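The hybrid control flow amounts to running the cheap method first and escalating only on detected failure. A minimal sketch follows; the plausibility criterion used here (total segmented volume within a normal adult range) and the function names are illustrative assumptions, not the paper's actual failure detector.

```python
def segment_lungs_hybrid(scan, conventional, robust,
                         min_vol_ml=2000.0, max_vol_ml=12000.0):
    # Run the fast conventional method first; if a plausibility check
    # fails (hypothetical criterion: segmented lung volume outside a
    # normal adult range), fall back to the slower, more robust method.
    mask, vol_ml = conventional(scan)
    if not (min_vol_ml <= vol_ml <= max_vol_ml):
        mask, vol_ml = robust(scan)
    return mask, vol_ml

# Stub segmenters standing in for the real algorithms.
def conventional_stub(scan):
    return "conventional-mask", 800.0   # implausibly small -> failure

def robust_stub(scan):
    return "robust-mask", 5200.0

mask, vol = segment_lungs_hybrid(None, conventional_stub, robust_stub)
# falls back: mask == "robust-mask"
```

Because the robust algorithm runs only on the minority of scans where the check fires, the average computational cost stays close to that of the conventional method.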
In medical image processing, many filters have been developed to enhance certain structures in 3-D data. In this paper, we propose to use pattern recognition techniques to design better filters. The essential difference with previous approaches is that we provide a system with examples of what it should enhance and suppress. This training data is used to construct a classifier that determines the probability that a voxel in an unseen image belongs to the target structure(s). The output of a rich set of basis filters serves as input to the classifier. In a feature selection process, this set is reduced to a compact, efficient subset. We show that the output of the system can be reused to extract new features, using the same filters, that can be processed by a new classifier. Such a multistage approach further improves performance. While the approach is generally applicable, in this work the focus is on enhancing pulmonary fissures in 3-D computed tomography (CT) chest scans. A supervised fissure enhancement filter is evaluated on two data sets, one of scans with a normal clinical dose and one of ultra-low-dose scans. Results are compared with those of a recently proposed conventional fissure enhancement filter. It is demonstrated that both methods are able to enhance fissures, but the supervised approach shows better performance: the areas under the receiver operating characteristic (ROC) curve are 0.98 versus 0.90 for the normal-dose data and 0.97 versus 0.87 for the ultra-low-dose data.
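The evaluation metric used above, the area under the ROC curve, has a simple probabilistic reading: the chance that a randomly chosen target voxel receives a higher filter output than a randomly chosen background voxel. A small sketch of that computation via the Mann-Whitney statistic (the voxel scores below are made up for illustration):

```python
import numpy as np

def roc_auc(pos_scores, neg_scores):
    # AUC via the Mann-Whitney statistic: the probability that a
    # randomly chosen positive (e.g. fissure) voxel scores higher than
    # a randomly chosen negative (background) voxel; ties count half.
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Toy filter outputs: fissure voxels mostly score higher than background.
auc = roc_auc([0.9, 0.8, 0.7], [0.1, 0.2, 0.8])
```

This pairwise formulation is exact but quadratic in the number of voxels; for whole scans a rank-based implementation is the usual choice.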
Emphysema distribution is associated with chronic obstructive pulmonary disease. It is, however, unknown whether computed tomography (CT)-quantified emphysema distribution (upper/lower lobe) is associated with lung function decline in heavy (former) smokers. 587 male participants underwent lung CT and pulmonary function testing at baseline and after a median (interquartile range) follow-up of 2.9 (2.8–3.0) yrs. The lungs were automatically segmented into anatomically defined lung lobes. Severity of emphysema was automatically quantified per anatomical lung lobe and expressed as the 15th percentile (Perc15): the Hounsfield unit value below which 15% of the voxels are distributed. The CT-quantified emphysema distribution was based on principal component analysis. Linear mixed models were used to assess the association of emphysema distribution with decline in forced expiratory volume in 1 s (FEV1)/forced vital capacity (FVC), FEV1 and FVC. Mean±SD age was 60.2±5.4 yrs, mean baseline FEV1/FVC was 71.6±9.0% and overall mean Perc15 was -908.5±20.9 HU. Participants with upper lobe-predominant CT-quantified emphysema had a lower FEV1/FVC, FEV1 and FVC after follow-up than participants with lower lobe-predominant CT-quantified emphysema (p=0.001), independent of the total extent of CT-quantified emphysema. Heavy (former) smokers with upper lobe-predominant CT-quantified emphysema have a more rapid decline in lung function than those with lower lobe-predominant CT-quantified emphysema.
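The Perc15 measure is a one-line percentile computation over a lobe's attenuation values. A minimal sketch (the synthetic HU range below is illustrative, not patient data):

```python
import numpy as np

def perc15(lobe_hu):
    # Perc15: the Hounsfield unit value below which 15% of the lobe's
    # voxels are distributed. Emphysematous destruction shifts voxels
    # toward very low attenuation, making Perc15 more negative.
    return float(np.percentile(lobe_hu, 15))

# Synthetic lobe with attenuation values spread evenly over -1000..-800 HU.
value = perc15(np.arange(-1000, -799))
```

Because more emphysema pushes Perc15 downward, a lower (more negative) Perc15 indicates more severe disease in that lobe.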
Bronchoscopic lung-volume reduction with the use of one-way endobronchial valves is a potential treatment for patients with severe emphysema. To date, the benefits have been modest but have been hypothesized to be much larger in patients without interlobar collateral ventilation than in those with collateral ventilation. We randomly assigned patients with severe emphysema and a confirmed absence of collateral ventilation to bronchoscopic endobronchial-valve treatment (EBV group) or to continued standard medical care (control group). Primary outcomes were changes from baseline to 6 months in forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC), and 6-minute walk distance. Eighty-four patients were recruited, of whom 16 were excluded because they had collateral ventilation (13 patients) or because lobar segments were inaccessible to the endobronchial valves (3 patients). The remaining 68 patients (mean [±SD] age, 59±9 years; 46 were women) were randomly assigned to the EBV group (34 patients) or the control group (34). At baseline, the FEV1 and FVC were 29±7% and 77±18% of the predicted values, respectively, and the 6-minute walk distance was 374±86 m. Intention-to-treat analyses showed significantly greater improvements in the EBV group than in the control group from baseline to 6 months: the increase in FEV1 was greater in the EBV group than in the control group by 140 ml (95% confidence interval [CI], 55 to 225), the increase in FVC was greater by 347 ml (95% CI, 107 to 588), and the increase in the 6-minute walk distance was greater by 74 m (95% CI, 47 to 100) (P<0.01 for all comparisons). By 6 months, 23 serious adverse events had been reported in the EBV group, as compared with 5 in the control group (P<0.001). One patient in the EBV group died.
Serious treatment-related adverse events in this group included pneumothorax (18% of patients) and events requiring valve replacement (12%) or removal (15%). Endobronchial-valve treatment significantly improved pulmonary function and exercise capacity in patients with severe emphysema characterized by an absence of interlobar collateral ventilation. (Funded by the Netherlands Organization for Health Research and Development and the University Medical Center Groningen; Netherlands Trial Register number, NTR2876.)
Computer-Aided Detection (CAD) has been shown to be a promising tool for automatic detection of pulmonary nodules from computed tomography (CT) images. However, the vast majority of detected nodules are benign and do not require any treatment. For effective implementation of lung cancer screening programs, accurate identification of malignant nodules is key. We investigate strategies to improve the performance of a CAD system in detecting nodules with a high probability of being cancers. Two strategies were proposed: (1) combining CAD detections with a recently published lung cancer risk prediction model and (2) combining multiple CAD systems. First, CAD systems were used to detect the nodules. Each CAD system produces markers with a certain degree of suspicion. Next, the malignancy probability was automatically computed for each marker, given nodule characteristics measured by the CAD system. Last, the CAD degree of suspicion and the malignancy probability were combined using the product rule. We evaluated the method using 62 nodules that were proven to be malignant cancers, from 180 scans of the Danish Lung Cancer Screening Trial. The malignant nodules were considered positive samples, while all other findings were considered negative. Using the product rule, the best proposed system improved sensitivity, compared to the best individual CAD system, from 41.9% to 72.6% at 2 false positives (FPs)/scan and from 56.5% to 88.7% at 8 FPs/scan. Our experiment shows that combining a nodule malignancy probability with multiple CAD systems can increase the performance of computerized detection of lung cancer.
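The product-rule fusion step can be sketched in a few lines: multiply the degrees of suspicion from one or more CAD systems with the marker's malignancy probability. The scaling of all inputs to [0, 1] and the example scores are assumptions for illustration.

```python
def product_rule_score(cad_suspicions, malignancy_prob):
    # Product rule: fuse the degrees of suspicion from multiple CAD
    # systems with the marker's malignancy probability by simple
    # multiplication; all inputs assumed scaled to [0, 1].
    score = malignancy_prob
    for s in cad_suspicions:
        score *= s
    return score

# A marker flagged by two CAD systems with a high malignancy probability
# outranks one that is suspicious to CAD but likely benign.
a = product_rule_score([0.9, 0.8], 0.7)    # high suspicion, likely cancer
b = product_rule_score([0.95, 0.9], 0.05)  # high suspicion, likely benign
```

Ranking markers by this fused score is what shifts the operating points reported above toward the malignant nodules.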