We introduce LYSTO, the Lymphocyte Assessment Hackathon, which was held in conjunction with the MICCAI 2019 Conference in Shenzhen (China). The competition required participants to automatically assess the number of lymphocytes, in particular T-cells, in images of colon, breast, and prostate cancer stained with CD3 and CD8 immunohistochemistry. Unlike other challenges set up in medical image analysis, LYSTO gave participants only a few hours to address the problem. In this paper, we describe the goal and the multi-phase organization of the hackathon, the proposed methods, and the on-site results. Additionally, we present post-competition results showing how the presented methods perform on an independent set of lung cancer slides, which was not part of the initial competition, as well as a comparison of lymphocyte assessment between the presented methods and a panel of pathologists. We show that some of the participants were able to achieve pathologist-level performance at lymphocyte assessment. After the hackathon, LYSTO was kept available as a lightweight, plug-and-play benchmark dataset on the grand-challenge website, together with an automatic evaluation platform, and it has since supported a number of research studies on lymphocyte assessment in oncology. LYSTO will remain a long-lasting educational challenge for deep learning and digital pathology; it is available at https://lysto.grand-challenge.org/.
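To make the benchmark task concrete, the sketch below outlines a patch-level lymphocyte-count regressor in PyTorch. It is a hypothetical baseline, not any participant's method; the 299x299 patch size and the tiny architecture are illustrative assumptions.

```python
# A minimal sketch of a patch-level lymphocyte-count regressor.
# Hypothetical baseline; patch size and architecture are assumptions.
import torch
import torch.nn as nn

class CountRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        # Softplus keeps the predicted lymphocyte count non-negative.
        return nn.functional.softplus(self.head(h)).squeeze(1)

model = CountRegressor()
patch = torch.randn(4, 3, 299, 299)   # batch of IHC patches (assumed size)
print(model(patch).shape)             # -> torch.Size([4]) counts per patch
```

Trained with a regression loss against the reference counts, such a model could be scored directly by the challenge's automatic evaluation platform.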
The British Machine Vision Conference is the main UK conference on machine vision and related areas. Organised by the British Machine Vision Association, the 18th BMVC was held on 10-13 September 2007 at the University of Warwick, UK.
This paper describes a novel approach to tissue classification using three-dimensional (3D) derivative features in the volume rendering pipeline. In conventional tissue classification for a scalar volume, tissues of interest are characterized by an opacity transfer function defined as a one-dimensional (1D) function of the original volume intensity. To overcome the limitations inherent in conventional 1D opacity functions, we propose a tissue classification method that employs a multidimensional opacity function, which is a function of the 3D derivative features calculated from a scalar volume as well as the volume intensity. Tissues of interest are characterized by explicitly defined classification rules based on 3D filter responses highlighting local structures, such as edge, sheet, line, and blob, which typically correspond to tissue boundaries, cortices, vessels, and nodules, respectively, in medical volume data. The 3D local structure filters are formulated using the gradient vector and Hessian matrix of the volume intensity function combined with isotropic Gaussian blurring. These filter responses and the original intensity define a multidimensional feature space in which multichannel tissue classification strategies are designed. The usefulness of the proposed method is demonstrated by comparisons with conventional single-channel classification using both synthesized data and clinical data acquired with CT (computed tomography) and MRI (magnetic resonance imaging) scanners. The improvement in image quality obtained using multichannel classification is confirmed by evaluating the contrast and contrast-to-noise ratio in the resultant volume-rendered images with variable opacity values.
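The building blocks of such filters are standard Gaussian derivatives; the sketch below computes edge, sheet, line, and blob responses with SciPy. The eigenvalue combinations are crude surrogates chosen for illustration, not the paper's exact filter formulations.

```python
# A minimal sketch of 3D local-structure responses from the Gaussian-smoothed
# gradient and Hessian. The eigenvalue surrogates are assumptions, not the
# paper's exact filter design.
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_responses(vol, sigma=2.0):
    """Edge, sheet, line, and blob responses for a scalar volume."""
    vol = np.asarray(vol, dtype=np.float64)
    # First derivatives -> edge strength (gradient magnitude).
    gx, gy, gz = (gaussian_filter(vol, sigma, order=o)
                  for o in ((1, 0, 0), (0, 1, 0), (0, 0, 1)))
    edge = np.sqrt(gx**2 + gy**2 + gz**2)
    # Second derivatives -> Hessian matrix per voxel.
    H = np.empty(vol.shape + (3, 3))
    orders = {(0, 0): (2, 0, 0), (1, 1): (0, 2, 0), (2, 2): (0, 0, 2),
              (0, 1): (1, 1, 0), (0, 2): (1, 0, 1), (1, 2): (0, 1, 1)}
    for (i, j), order in orders.items():
        H[..., i, j] = H[..., j, i] = gaussian_filter(vol, sigma, order=order)
    # Eigenvalues in ascending order: l1 <= l2 <= l3.
    l1, l2, l3 = np.moveaxis(np.linalg.eigvalsh(H), -1, 0)
    # Crude surrogates for bright structures: a sheet has one strongly
    # negative eigenvalue, a line two, and a blob all three.
    return {"edge": edge,
            "sheet": np.maximum(-l1, 0),
            "line": np.maximum(-l2, 0),
            "blob": np.maximum(-l3, 0)}
```

These per-voxel responses, together with the original intensity, span the kind of multidimensional feature space over which the multichannel opacity functions are defined.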
Image segmentation is an important area in the general field of image processing and computer vision. It is a fundamental part of the `low level' aspects of computer vision and has many practical applications, such as in medical imaging, industrial automation and satellite imagery. Traditional methods for image segmentation have approached the problem either from localisation in class space, using region information, or from localisation in position space, using edge or boundary information. More recently, however, attempts have been made to combine both region and boundary information in order to overcome the inherent limitations of using either approach alone.
In this thesis, a new approach to image segmentation is presented that integrates region and boundary information within a multiresolution framework. The role of uncertainty is described, which imposes a limit on the simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows the trade-off between position and class resolution and ensures both robustness in noise and efficiency of computation.
The segmentation is based on an image model derived from a general class of multiresolution signal models, which incorporates both region and boundary features. A four-stage algorithm is described, consisting of: generation of a low-pass pyramid; separate region and boundary estimation processes; and an integration strategy. Both the region and boundary processes consist of scale selection, creation of adjacency graphs, and iterative estimation within a general framework of maximum a posteriori (MAP) estimation and decision theory. Parameter estimation is performed in situ, and the decision processes are both flexible and spatially local, thus avoiding the assumptions about global homogeneity or the size and number of regions which characterise some earlier algorithms. A method for robust estimation of edge orientation and position is described, which casts the problem as multiresolution minimum mean square error (MMSE) estimation. The method uses the spatial consistency of the outputs of small-kernel gradient operators at different scales to produce more reliable edge position and orientation estimates, and is effective at extracting boundary orientations from data with low signal-to-noise ratios.
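A minimal sketch of the first stage, low-pass pyramid generation, is given below, assuming simple Gaussian smoothing and dyadic subsampling; the thesis's own filter design may differ.

```python
# A minimal sketch of low-pass pyramid generation. Gaussian smoothing and
# dyadic subsampling are assumptions standing in for the thesis's filters.
import numpy as np
from scipy.ndimage import gaussian_filter

def lowpass_pyramid(image, levels=4, sigma=1.0):
    """Return a list of progressively smoothed, half-resolution images."""
    pyramid = [np.asarray(image, dtype=np.float64)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1], sigma)  # anti-alias filter
        pyramid.append(smoothed[::2, ::2])              # dyadic subsampling
    return pyramid
```

Each level trades position resolution for class resolution, which is exactly the uncertainty trade-off exploited by the region and boundary estimation stages that follow.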
Segmentation results are presented for a number of synthetic and natural images which show the cooperative method to give accurate segmentations at low signal-to-noise ratios (0 dB) and to be more effective than previous methods at capturing complex region shapes.
In recent years, the development of new and powerful image acquisition techniques has led to a shift from purely qualitative observation of biomedical images towards a more quantitative examination of the data, which, linked with statistical analysis and mathematical modeling, has provided more interesting and solid results than purely visual monitoring of an experiment. The resolution of imaging equipment has increased considerably, and in many cases the data provided are not just a simple image but a three-dimensional volume. Texture provides interesting information that can characterize anatomical regions or cell populations whose intensities may not be different enough to discriminate between them. This chapter presents a tutorial on volumetric texture analysis. The chapter begins with different definitions of texture, together with a literature review focused on the medical and biological applications of texture. A review of texture extraction techniques follows, with a special emphasis on the analysis of volumetric data and with examples to visualize the techniques. The chapter ends with a review of the advantages and disadvantages of all techniques, together with some important considerations regarding the classification of the measurement space.
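As one concrete example of a volumetric texture measurement, the sketch below computes a grey-level co-occurrence matrix (GLCM) for a single 3D offset and two classical statistics. The offset, quantisation level, and statistics are illustrative assumptions; the chapter surveys many more techniques.

```python
# A minimal sketch of a 3D grey-level co-occurrence matrix (GLCM) with
# contrast and energy statistics. Offset and quantisation are assumptions.
import numpy as np

def glcm_3d(vol, offset=(1, 0, 0), levels=16):
    """Texture statistics from co-occurrences of quantised voxel pairs."""
    vol = np.asarray(vol, dtype=np.float64)
    q = np.round((vol - vol.min()) / (np.ptp(vol) + 1e-12)
                 * (levels - 1)).astype(int)
    dz, dy, dx = offset  # non-negative offsets assumed for brevity
    a = q[:q.shape[0] - dz, :q.shape[1] - dy, :q.shape[2] - dx]
    b = q[dz:, dy:, dx:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)   # count voxel pairs
    glcm /= glcm.sum()                           # normalise to probabilities
    i, j = np.indices(glcm.shape)
    return {"contrast": ((i - j) ** 2 * glcm).sum(),
            "energy": (glcm ** 2).sum()}
```

Computed over local windows, such statistics form a measurement space in which regions with similar intensities but different textures become separable.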
The aim of this work is to register serial in-vivo confocal microscopy images of zebrafish to enable accurate cell tracking on the corresponding fluorescence images. The following problem arises during acquisition: the zebrafish tail may undergo a series of movements and non-linear deformations which, if not corrected, add to the apparent motion of the leukocytes being tracked, making it difficult to assess their motion accurately. We developed a correlation-based, local affine image matching method, which is well suited to the textured DIC images of the anatomy of the zebrafish and enables accurate and efficient tracking of image regions over successive frames. Experimental results of the serial registration and tracking demonstrate its accuracy in estimating local affine motions in zebrafish sequences.
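The sketch below illustrates the correlation-matching idea in its simplest form: tracking a patch between two frames with normalised cross-correlation. It recovers only the translation component, whereas the paper's method estimates full local affine motion; the patch size is an assumption.

```python
# A minimal sketch of correlation-based patch tracking between two frames.
# Translation-only surrogate for the paper's local affine estimation.
import numpy as np
from skimage.feature import match_template

def track_patch(prev_frame, next_frame, top_left, size=32):
    """Estimate where a patch from prev_frame moved to in next_frame."""
    r, c = top_left
    template = prev_frame[r:r + size, c:c + size]
    ncc = match_template(next_frame, template)   # NCC response map
    r2, c2 = np.unravel_index(np.argmax(ncc), ncc.shape)
    return (r2 - r, c2 - c)                      # estimated displacement
```

Applying such estimates over a grid of local regions, and fitting an affine transform per region, yields the kind of local motion field that can be subtracted from the fluorescence channel before cell tracking.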
An estimation method for approximating the lighting of a multi-view scene is developed. It is assumed that a set of scene patches can be obtained, with estimates of their normals and depths from a given camera. The effect of lighting on the scene is modelled as multiplicative and additive bias fields, represented by spherical harmonic (SH) basis functions. The parameters of a weighted sum of SHs up to a given order are sought by minimising the entropy of the patch colours as the bias is taken out. The method performs gradient descent using the entropy as a loss function. The entropy is estimated by sampling with a Parzen window estimator, which allows its derivative with respect to the SH weights to be calculated analytically. We illustrate our estimator on 2D retrospective shading correction and then pose Phong illumination, and its continuous generalisation, as a bias-field estimation problem. Results on simple modelled scenes lit by one or more Phong point light sources without scattering are presented. We discuss how the lighting estimation could be extended to handle shadows, and propose a model for the estimation of parametric BRDFs under arbitrary lighting using the same framework.
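A minimal sketch of the core mechanism, entropy minimisation over bias parameters with a Parzen-window density, appears below. For brevity, a 1D polynomial bias on grey values stands in for the SH basis, autograd replaces the analytic gradient, and the kernel width, optimiser, and toy data are all assumptions.

```python
# A minimal sketch of Parzen-window entropy minimisation for bias correction.
# Polynomial bias, autograd gradient, and all constants are assumptions.
import math
import torch

def parzen_entropy(x, h=0.05):
    """Parzen-window estimate of the differential entropy of 1D samples."""
    d = x[:, None] - x[None, :]                    # pairwise differences
    k = torch.exp(-0.5 * (d / h) ** 2) / (h * math.sqrt(2 * math.pi))
    return -torch.log(k.mean(dim=1)).mean()

# Toy data: a flat grey patch corrupted by a smooth multiplicative bias.
t = torch.linspace(-1, 1, 256)
observed = 0.5 * (1.0 + 0.3 * t + 0.2 * t**2) + 0.01 * torch.randn(256)

w = torch.zeros(2, requires_grad=True)             # bias coefficients
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(200):
    bias = 1.0 + w[0] * t + w[1] * t**2            # constant term fixed at 1
    loss = parzen_entropy(observed / bias)         # entropy as bias is removed
    opt.zero_grad()
    loss.backward()
    opt.step()
print(w.detach())                                  # should approach (0.3, 0.2)
```

Fixing the constant term of the bias model pins the overall scale, avoiding the degeneracy in which entropy can be reduced simply by shrinking all values.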
Histopathological examination is a crucial step in the diagnosis and treatment of many major diseases. Aiming to facilitate diagnostic decision making and reduce the workload of pathologists, we developed an artificial intelligence (AI)-based prescreening tool that analyses whole-slide images (WSIs) of large-bowel biopsies to identify typical, atypical non-neoplastic, and atypical neoplastic biopsies. This retrospective cohort study was conducted with an internal development cohort of slides acquired from a hospital in the UK and three external validation cohorts of WSIs acquired from two hospitals in the UK and one clinical laboratory in Portugal. To learn the differential histological patterns from digitised WSIs of large-bowel biopsy slides, our proposed weakly supervised deep-learning model (Colorectal AI Model for Abnormality Detection [CAIMAN]) used slide-level diagnostic labels and no detailed cell-level or region-level annotations. The method was developed with an internal development cohort of 5054 biopsy slides from 2080 patients, labelled with the corresponding diagnostic categories assigned by pathologists. The three external validation cohorts, with a total of 1536 slides, were used for independent validation of CAIMAN. Each WSI was classified into one of three classes (ie, typical, atypical non-neoplastic, and atypical neoplastic). Prediction scores of image tiles were aggregated into three prediction scores for the whole slide: one for its likelihood of being typical, one for its likelihood of being atypical non-neoplastic, and one for its likelihood of being atypical neoplastic. The assessment of the external validation cohorts was conducted by the trained and frozen CAIMAN model. To evaluate model performance, we calculated the area under the convex hull of the receiver operating characteristic curve (AUROC), the area under the precision-recall curve, and specificity compared with our previously published iterative draw and rank sampling (IDaRS) algorithm. We also generated heat maps and saliency maps to analyse and visualise the relationship between the WSI diagnostic labels and spatial features of the tissue microenvironment. The main outcome of this study was the ability of CAIMAN to accurately identify typical and atypical WSIs of colon biopsies, which could potentially facilitate the automatic removal of typical biopsies from the diagnostic workload in clinics. A randomly selected subset of all large-bowel biopsies was obtained between Jan 1, 2012, and Dec 31, 2017. The AI training, validation, and assessments were done between Jan 1, 2021, and Sept 30, 2022. WSIs with diagnostic labels were collected between Jan 1 and Sept 30, 2022. Our analysis showed no statistically significant differences across prediction scores from CAIMAN for typical and atypical classes based on the anatomical site of the biopsy. At 0·99 sensitivity, CAIMAN (specificity 0·5592) was more accurate than an IDaRS-based weakly supervised WSI-classification pipeline (0·4629) in identifying typical and atypical biopsies on cross-validation in the internal development cohort (p<0·0001). At 0·99 sensitivity, CAIMAN was also more accurate than IDaRS for two external validation cohorts (p<0·0001), but not for a third external validation cohort (p=0·10).
CAIMAN provided higher specificity than IDaRS at some high-sensitivity thresholds (0·7763 vs 0·6222 for 0·95 sensitivity, 0·7126 vs 0·5407 for 0·97 sensitivity, and 0·5615 vs 0·3970 for 0·99 sensitivity on one of the external validation cohorts) and showed high classification performance in distinguishing between neoplastic biopsies (AUROC 0·9928, 95% CI 0·9927-0·9929), inflammatory biopsies (0·9658, 0·9655-0·9661), and atypical biopsies (0·9789, 0·9786-0·9792). On the three external validation cohorts, CAIMAN had AUROC values of 0·9431 (95% CI 0·9165-0·9697), 0·9576 (0·9568-0·9584), and 0·9636 (0·9615-0·9657) for the detection of atypical biopsies. Saliency maps supported the representation of disease heterogeneity in model predictions and its association with relevant histological features. CAIMAN, with its high sensitivity in detecting atypical large-bowel biopsies, might be a promising improvement in clinical workflow efficiency and diagnostic decision making in the prescreening of typical colorectal biopsies. This study was funded by the Pathology Image Data Lake for Analytics, Knowledge and Education Centre of Excellence; the UK Government's Industrial Strategy Challenge Fund; and Innovate UK on behalf of UK Research and Innovation.
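As a schematic illustration of the slide-level decision step described above, the sketch below aggregates tile scores into three whole-slide scores and applies a high-sensitivity rule for flagging atypical slides. Mean aggregation, the threshold value, and the toy data are assumptions; this is not the CAIMAN implementation.

```python
# A minimal sketch of tile-to-slide score aggregation and a high-sensitivity
# flagging rule. Aggregation method and threshold are assumptions.
import numpy as np

def slide_scores(tile_scores):
    """tile_scores: (n_tiles, 3) class scores for
    (typical, atypical non-neoplastic, atypical neoplastic)."""
    return tile_scores.mean(axis=0)

def flag_atypical(tile_scores, threshold=0.2):
    s_typical, s_nonneo, s_neo = slide_scores(tile_scores)
    # Route the slide to a pathologist if either atypical score is high;
    # a low threshold trades specificity for a high-sensitivity target.
    return max(s_nonneo, s_neo) >= threshold

rng = np.random.default_rng(0)
tiles = rng.dirichlet([8, 1, 1], size=500)   # mostly-typical toy slide
print(flag_atypical(tiles))                  # -> False (not flagged)
```

In a prescreening workflow, only slides that fail this check would enter the pathologist's queue, which is how specificity at a fixed high sensitivity translates into workload reduction.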