660 Developing generalizable deep learning models for tumor segmentation in pathology images to enable the identification of predictive biomarkers for immunotherapies

2020 
Background
Despite recent advances in cancer immunotherapies, their efficacies vary significantly among patients. To better understand the mechanisms of drug resistance, it is essential to characterize immune responses to immunotherapies in the tumor immune microenvironment (TME) from intact patient tissues. To this end, quantitative spatial immune profiling of pathology images has been the focus of many recent studies. Such analysis often depends critically on the automated image segmentation of tumor and stromal compartments. However, current segmentation approaches, even those based on deep learning, often fail to perform well on datasets that differ from the data on which they were trained. Specifically, tissue segmentation models trained on one organ type (source) suffer a drop in performance when applied directly to images of another organ type (target), even when the regions to be segmented are highly similar in morphology between source and target. Here, we present a segmentation approach that adapts knowledge learned from labeled source data of one cancer type to unlabeled target data of a cancer of another organ via unsupervised domain adaptation (UDA) frameworks. This research will help build deep learning models that significantly reduce the need for expert manual annotations.

Methods
Annotated colorectal cancer (CRC) [1] (target domain) and prostate cancer [2] (source domain) datasets, containing image tiles from 38 and 20 whole-slide images, respectively, were used for tumor tissue segmentation model development. We compared the performance and robustness of four approaches. First, we implemented two output-space domain-adversarial UDA methods. We then implemented a self-training-based approach. Additionally, we designed a two-stage UDA approach: a first stage of self-training, followed by a second stage that aligns target-domain features with category anchors generated from the source data.
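The self-training approach referenced above typically retrains the model on its own high-confidence predictions for unlabeled target images. A minimal sketch of confidence-thresholded pseudo-label selection (the function name, the 0.9 threshold, and the ignore-index convention of -1 are illustrative assumptions, not details from the abstract):

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Select pseudo-labels from per-pixel class probabilities.

    probs: array of shape (H, W, C) holding softmax outputs of a
    source-trained segmentation model on a target-domain image tile.
    Pixels whose top-class confidence exceeds the threshold keep their
    argmax label; uncertain pixels are set to -1 so they can be ignored
    in the retraining loss.
    """
    confidence = probs.max(axis=-1)          # top-class probability per pixel
    labels = probs.argmax(axis=-1)           # tentative class per pixel
    labels[confidence < threshold] = -1      # mask out low-confidence pixels
    return labels

# Toy example: a 1x2 "image" with two classes (tumor vs. stroma).
probs = np.array([[[0.95, 0.05],   # confident tumor pixel
                   [0.60, 0.40]]]) # ambiguous pixel -> ignored
print(select_pseudo_labels(probs))  # [[ 0 -1]]
```

In practice the threshold trades pseudo-label coverage against noise: a higher threshold yields fewer but cleaner labels for the second training round.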
Results
Directly applying a tumor tissue segmentation model trained on prostate cancer images (source) to CRC images (target) resulted in an intersection-over-union (IOU) score of 62.5%, which was 19% IOU lower (the domain gap) than a model trained on target data. Methods based on output-space domain-adversarial training reduced the domain gap by up to 8% IOU, outperforming the self-training-based methods, which reduced the domain gap by only 4% IOU. Both sets of approaches improved precision by 10%.

Conclusions
We demonstrate the feasibility of designing tumor segmentation models that are robust and generalizable across multiple indications. These UDA approaches have the potential to speed our understanding of the factors influencing immunotherapy efficacy through automated annotation of the required tissue regions.

References
1. Graham S, Chen H, Gamper J, Dou Q, Heng PA, Snead D, Tsang YW, Rajpoot N. MILD-Net: minimal information loss dilated network for quantitative gland instance segmentation in colon histology images. Medical Image Analysis 2019;52:199–211.
2. Bulten W, Bándi P, Hoven J, et al. Epithelium segmentation using deep learning in H&E-stained prostate specimens. 2019;9:864.
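The domain gap reported above is quantified with intersection-over-union. A minimal sketch of the per-class IOU metric (function name and the convention of returning 1.0 for an empty union are illustrative, not from the abstract):

```python
import numpy as np

def iou(pred, target, cls=1):
    """Intersection-over-union for one class.

    pred, target: integer label arrays of equal shape (predicted and
    reference segmentation masks). Returns |P ∩ T| / |P ∪ T| for the
    pixels assigned to class `cls`; 1.0 if the class is absent from both.
    """
    p = (pred == cls)
    t = (target == cls)
    intersection = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return intersection / union if union else 1.0

# Toy example: 4 pixels, 1 overlapping tumor pixel out of 3 in the union.
pred = np.array([1, 1, 0, 0])
target = np.array([1, 0, 1, 0])
print(iou(pred, target))  # 0.3333...
```

Averaging this score over the evaluation tiles of the target domain gives the kind of aggregate IOU (e.g., the 62.5% figure) used to measure the domain gap.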