Deep learning system for true- and pseudo-invasion in colorectal polyps
Abstract:
Over 15 million colonoscopies are performed yearly in North America, during which biopsies are taken for pathological examination to identify abnormalities. Distinguishing between true- and pseudo-invasion in colon polyps is critical for treatment planning: surgical resection of the colon is often the treatment option for true invasion, whereas observation is recommended for pseudo-invasion. The task of identifying true- vs pseudo-invasion, however, can be highly challenging. There is no specialized software tool for this task, and no well-annotated dataset is available. In our work, we obtained (only) 150 whole-slide images (WSIs) from the London Health Science Centre. We built three deep neural networks representing different magnifications in WSIs, mimicking the workflow of pathologists. We also built an online tool for pathologists to annotate WSIs to train our deep neural networks. Results showed that our novel system classifies tissue types with 95.3% accuracy and differentiates true- and pseudo-invasion with 83.9% accuracy. The system's efficiency is comparable to that of an expert pathologist. Our system can also be easily adjusted to serve as a confirmatory or screening tool. Our system (available at http://ai4path.ca) will lead to better, faster patient care and reduced healthcare costs.
Keywords: Deep Neural Networks, Patient Care
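The three-network, multi-magnification design described in the abstract can be sketched as a coarse-to-fine cascade that mimics a pathologist zooming in. Everything below is an illustrative assumption: the function names, stub probabilities, and the 0.5 threshold stand in for the paper's trained CNNs, which are not public in this form.

```python
# Hypothetical sketch of a three-network, multi-magnification pipeline.
# Each "network" is stubbed as a function returning class probabilities;
# in the real system these would be CNNs applied to WSI patches at
# low, medium, and high magnification.

def classify_tissue_low_mag(patch):
    # Stub: a low-magnification network flags regions worth zooming into.
    return {"background": 0.1, "suspicious": 0.9}

def classify_tissue_mid_mag(patch):
    # Stub: a medium-magnification network identifies the tissue type.
    return {"normal": 0.2, "polyp": 0.8}

def classify_invasion_high_mag(patch):
    # Stub: a high-magnification network makes the final call.
    return {"pseudo-invasion": 0.3, "true-invasion": 0.7}

def cascade(patch, zoom_threshold=0.5):
    """Mimic a pathologist: only zoom in when the coarser view warrants it."""
    low = classify_tissue_low_mag(patch)
    if low["suspicious"] < zoom_threshold:
        return "background"
    mid = classify_tissue_mid_mag(patch)
    if mid["polyp"] < zoom_threshold:
        return "normal"
    high = classify_invasion_high_mag(patch)
    return max(high, key=high.get)

print(cascade(patch=None))  # -> true-invasion
```

Raising `zoom_threshold` makes the cascade behave more like a screening tool (fewer patches reach the final network); lowering it makes it more confirmatory, matching the adjustability the abstract mentions.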
As workflow/BPM systems and their applications prevail across a wide variety of industries, we can easily predict not only that very large-scale workflow systems (VLSW) will become more prevalent and much more needed in the market, but also that the quality of workflow (QOW) and its related topics will become an issue in the near future. Among the QOW issues, such as workflow knowledge/intelligence, workflow validation, workflow verification, workflow mining, and workflow rediscovery, the toughest and most demanding challenge is the workflow knowledge mining and discovery problem, which is based upon the workflow enactment event history information logged by workflow engines equipped with a certain logging mechanism. Therefore, an efficient event logging mechanism is the most valuable component, the A and Ω of those QOW issues and solutions. In this paper, we propose a workflow enactment event logging mechanism supporting three types of event log information (workcase events, activity events, and workitem events) and describe the implementation details of the mechanism as embedded into the e-Chautauqua system, which has recently been developed by the CTRL research group as a very large-scale workflow management system. Finally, we summarize the implications of the mechanism and its log information for workflow knowledge mining and discovery techniques.
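The three-level event log this abstract describes (workcase, activity, and workitem events) can be sketched as a minimal append-only logger. The field names and API below are illustrative assumptions, not the paper's actual e-Chautauqua schema.

```python
# Minimal sketch of a three-level workflow enactment event log.
from dataclasses import dataclass, field
import time

@dataclass
class WorkflowEvent:
    event_type: str        # "workcase" | "activity" | "workitem"
    workcase_id: str
    detail: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class EventLogger:
    """Append-only log a workflow engine could feed to mining tools."""

    def __init__(self):
        self.events = []

    def log(self, event_type, workcase_id, **detail):
        assert event_type in ("workcase", "activity", "workitem")
        self.events.append(WorkflowEvent(event_type, workcase_id, detail))

    def trace(self, workcase_id):
        # The ordered event trace of one workcase is the raw input
        # of workflow mining / rediscovery algorithms.
        return [e for e in self.events if e.workcase_id == workcase_id]

log = EventLogger()
log.log("workcase", "wc-1", state="started")
log.log("activity", "wc-1", activity="review", state="completed")
log.log("workitem", "wc-1", worker="alice", item="form-7")
print(len(log.trace("wc-1")))  # -> 3
```

The per-workcase `trace` is exactly the unit that process-mining techniques consume, which is why the abstract ties the logging mechanism to knowledge mining and rediscovery.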
e-Science usually involves a great number of data sets, computing resources, and large teams managed and developed by research laboratories, universities, or governments. Science processes, if deployed in workflow form, can be managed more effectively and executed more automatically. Scientific workflows have therefore emerged and been adopted as a paradigm to organize and orchestrate activities in e-Science processes. Differing from workflows applied in the business world, however, scientific workflows need to take into account the specific characteristics of science processes and make corresponding changes to accommodate them. A task-based scientific workflow modeling and executing approach is therefore proposed in this chapter for orchestrating e-Science with the workflow paradigm. In addition, this chapter discusses some related work in the scientific workflow field.
The use of workflows to support and realize computer simulations, experiments, and calculations is well accepted in the e-Science domain. The different tasks and parameters of a simulation are specified in workflow models. Scientists typically work in a trial-and-error manner, which means they do not know in advance what the final workflow of a simulation should look like. They therefore start from a possibly insufficient workflow model and try to improve it over multiple iterations to obtain a better approximation to the problem being solved, so in each iteration multiple trials are based on different variants of the same workflow model. Towards the goal of building variants of workflow models and enabling the reuse of existing scientific workflows in a controlled and well-defined manner, in this paper we identify how configurable workflow models can support scientists in customizing existing workflow models through configuration. We introduce possible configuration options for scientific workflows and show how scientists can specify them. Furthermore, we show how configurable workflow models are a first step towards enabling collaboration among scientists in creating scientific workflows.
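The idea of deriving workflow variants from one configurable model can be sketched as a template whose optional steps and parameters are resolved by a configuration. The template steps, parameter names, and config layout below are illustrative assumptions, not the paper's actual notation.

```python
# Sketch of a configurable workflow model: one template, many variants.

TEMPLATE = [
    {"step": "load-data",  "optional": False},
    {"step": "normalize",  "optional": True},
    {"step": "simulate",   "optional": False, "param": "timestep"},
    {"step": "visualize",  "optional": True},
]

def configure(template, config):
    """Derive a concrete workflow variant from the configurable template."""
    variant = []
    for step in template:
        # Optional steps are included only when explicitly enabled.
        if step["optional"] and not config.get("enable", {}).get(step["step"], False):
            continue
        resolved = dict(step)
        # Parameterized steps get their values from the configuration.
        if "param" in step:
            resolved["value"] = config["params"][step["param"]]
        variant.append(resolved)
    return variant

variant = configure(TEMPLATE, {"enable": {"normalize": True},
                               "params": {"timestep": 0.01}})
print([s["step"] for s in variant])  # -> ['load-data', 'normalize', 'simulate']
```

Each trial-and-error iteration then only edits the small configuration object rather than the whole workflow model, which is the reuse benefit the abstract argues for.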
Deep Learning has achieved tremendous results by pushing the frontier of automation in diverse domains. Unfortunately, current neural network architectures are not explainable by design. In this paper, we propose a novel method that trains deep hypernetworks to generate explainable linear models. Our models retain the accuracy of black-box deep networks while offering free lunch explainability by design. Specifically, our explainable approach requires the same runtime and memory resources as black-box deep models, ensuring practical feasibility. Through extensive experiments, we demonstrate that our explainable deep networks are as accurate as state-of-the-art classifiers on tabular data. On the other hand, we showcase the interpretability of our method on a recent benchmark by empirically comparing prediction explainers. The experimental results reveal that our models are not only as accurate as their black-box deep-learning counterparts but also as interpretable as state-of-the-art explanation techniques.
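The core mechanism of the hypernetwork abstract above (a deep network that emits the coefficients of a per-instance linear model, so the coefficients themselves are the explanation) can be sketched in a few lines. The toy weight rule below is an illustrative assumption standing in for a trained hypernetwork.

```python
# Sketch of "hypernetwork -> explainable linear model".

def hypernetwork(x):
    # Stands in for a trained deep net that emits one linear
    # coefficient per input feature, plus a bias.
    weights = [0.5 + 0.1 * xi for xi in x]   # toy rule, not a real model
    bias = 0.0
    return weights, bias

def predict(x):
    # The prediction is a plain linear function of the inputs,
    # so `weights` doubles as a feature-attribution explanation.
    weights, bias = hypernetwork(x)
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return score, weights

y, explanation = predict([1.0, 2.0])
print(round(y, 2), len(explanation))
```

Because the final scoring step is linear, explaining a prediction costs nothing extra at inference time, which is the "free lunch explainability" claim in the abstract.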
This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning, presenting a well-structured review of up-to-date techniques for assessing the vulnerability of various types of deep learning models to adversarial examples. It will particularly highlight state-of-the-art techniques in adversarial attacks and robustness verification of deep neural networks (DNNs). We will also introduce some effective countermeasures for improving the robustness of deep learning models, with a particular focus on adversarial training. We aim to provide a comprehensive picture of this emerging direction and to make the community aware of the urgency and importance of designing robust deep learning models for safety-critical data analytics applications, ultimately enabling end users to trust deep learning classifiers. We will also summarize potential research directions concerning the adversarial robustness of deep learning and its potential benefits for accountable and trustworthy deep learning-based data analytics systems and applications.
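The adversarial-example mechanism the tutorial abstract refers to can be illustrated in its simplest form: a fast-gradient-sign (FGSM-style) perturbation against a linear scorer, where the gradient of the score with respect to the input is just the weight vector. This is a minimal sketch, not the tutorial's material; the weights and inputs are made up.

```python
# FGSM-style attack on a linear scorer: perturb each input
# by eps in the direction of the score's gradient sign.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm(w, x, eps=0.1):
    # For a linear score w.x, the gradient w.r.t. x is just w,
    # so the attack adds eps * sign(w) componentwise.
    return [xi + eps * sign(wi) for wi, xi in zip(w, x)]

w = [2.0, -1.0]
x = [0.5, 0.5]
x_adv = fgsm(w, x)
print(score(w, x), score(w, x_adv))  # the adversarial score is higher
```

Adversarial training, the countermeasure the abstract focuses on, amounts to generating such `x_adv` points during training and fitting the model on them as well as on the clean data.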
Workflow enactment systems are a popular technology in business and e-science alike for flexibly defining and enacting complex data processing tasks. Since the construction of a workflow for a specific task can become quite complex, efforts are currently underway to increase the reuse of workflows through the implementation of specialized workflow repositories. While existing methods to exploit the knowledge in these repositories usually treat workflows as atomic entities, our work is based on the fact that workflows can naturally be viewed as graphs. Hence, in this paper we investigate the use of graph kernels for the problems of workflow discovery, workflow recommendation, and workflow pattern extraction, paying special attention to the typical situation of few labeled and many unlabeled workflows. To empirically demonstrate the feasibility of our approach, we investigate a dataset of bioinformatics workflows retrieved from the website myexperiment.org.
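A graph kernel of the kind this abstract applies to workflows can be illustrated in its most basic form: comparing two workflows by the dot product of their task-label count vectors (the Weisfeiler-Lehman kernel at iteration 0, before any neighborhood refinement). The workflow labels below are made up for illustration; real kernels would also use the edge structure.

```python
# Toy graph kernel on workflows: similarity = dot product of
# node-label count vectors (WL kernel at iteration 0).
from collections import Counter

def label_kernel(labels_a, labels_b):
    ca, cb = Counter(labels_a), Counter(labels_b)
    return sum(ca[label] * cb[label] for label in ca)

wf_a = ["fetch", "align", "align", "plot"]    # tasks of workflow A
wf_b = ["fetch", "align", "report"]           # tasks of workflow B
print(label_kernel(wf_a, wf_b))  # -> 3
```

Because a kernel only needs pairwise similarities, it supports exactly the semi-supervised setting the paper highlights: the kernel matrix can be computed over the many unlabeled workflows and combined with the few labeled ones for discovery and recommendation.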
Workflow reuse is a major benefit of workflow systems and shared workflow repositories, but there are barely any studies that quantify the degree of reuse of workflows or the practical barriers that may stand in the way of successful reuse. In our own work, we hypothesize that defining workflow fragments improves reuse, since end-to-end workflows may be very specific and only partially reusable by others. This paper reports on a study of the current use of workflows and workflow fragments in labs that use the LONI Pipeline, a popular workflow system used mainly for neuroimaging research that enables users to define and reuse workflow fragments. We present an overview of the benefits of workflows and workflow fragments reported by users in informal discussions. We also report on a survey of researchers in a lab that has the LONI Pipeline installed, asking them about their experiences with reuse of workflow fragments and the actual benefits they perceive. This leads to quantifiable indicators of the reuse of workflows and workflow fragments in practice. Finally, we discuss barriers to further adoption of workflow fragments and workflow reuse that motivate further work.