Deep Recurrent Attention Models for Histopathological Image Analysis

2018 
Histopathology defines the gold standard in oncology. Automatic analysis of pathology images could thus have a significant impact on diagnoses, prognoses, and treatment decisions for cancer patients. Recently, convolutional neural networks (CNNs) have shown strong performance in computational histopathology tasks. However, since it remains intractable to process pathology slides in their entirety, CNNs have traditionally performed inference on small individual patches extracted from the image. This often requires a significant amount of computation and can result in ignoring potentially relevant spatial and contextual information. Being able to process larger input patches and to locate discriminatory regions more efficiently could improve both computational and task-specific performance. Inspired by the recent success of Deep Recurrent Attention Models (DRAMs) in image recognition tasks, we propose a novel attention-based architecture for classification in histopathology. Similar to CNNs, DRAMs have a degree of translation invariance built in, but the amount of computation performed can be controlled independently of the input image size. The model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant areas of large input patches. We evaluate our model on histological and molecular subtype classification tasks for the glioma cohorts of The Cancer Genome Atlas (TCGA). Our results suggest that the DRAM achieves performance comparable to state-of-the-art CNNs despite processing only a select number of patches.
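The core mechanism described in the abstract is a recurrent glimpse loop: at each step the model crops a small patch at the current attention location, encodes it, updates a recurrent state, and emits the next location, with classification performed from the final state (the location policy being trained with REINFORCE). A minimal numpy sketch of this inference loop is given below; all dimensions, weight matrices, and function names are illustrative assumptions, not the paper's actual architecture, and the weights are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not taken from the paper).
IMG, GLIMPSE, HIDDEN, CLASSES, STEPS = 64, 8, 32, 2, 4

# Randomly initialised weights stand in for trained parameters.
W_g = rng.normal(0, 0.1, (GLIMPSE * GLIMPSE, HIDDEN))  # glimpse encoder
W_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))             # recurrent update
W_l = rng.normal(0, 0.1, (HIDDEN, 2))                  # location head
W_c = rng.normal(0, 0.1, (HIDDEN, CLASSES))            # classifier head

def extract_glimpse(image, loc):
    """Crop a GLIMPSE x GLIMPSE patch near `loc`, given in [-1, 1]^2."""
    cy = int(np.clip((loc[0] + 1) / 2 * (IMG - GLIMPSE), 0, IMG - GLIMPSE))
    cx = int(np.clip((loc[1] + 1) / 2 * (IMG - GLIMPSE), 0, IMG - GLIMPSE))
    return image[cy:cy + GLIMPSE, cx:cx + GLIMPSE].reshape(-1)

def dram_forward(image, steps=STEPS):
    """One forward pass: attend, encode, update state, pick next location."""
    h = np.zeros(HIDDEN)
    loc = np.zeros(2)  # start attending at the image centre
    for _ in range(steps):
        g = np.tanh(extract_glimpse(image, loc) @ W_g)  # encode the glimpse
        h = np.tanh(g + h @ W_h)                        # recurrent state update
        loc = np.tanh(h @ W_l)   # mean of the next-location policy; training
                                 # would sample around it and apply REINFORCE
    return h @ W_c               # class logits from the final state

image = rng.random((IMG, IMG))
logits = dram_forward(image)
print(logits.shape)  # one logit per class
```

The key property the abstract highlights is visible here: compute scales with the number of glimpse steps (`STEPS`), not with the input image size, so larger input patches cost nothing extra at inference beyond the crop.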