Whole-slide image (WSI) classification is challenging because 1) patches from WSIs lack annotations, and 2) WSIs exhibit unwanted variability, e.g., in staining protocol. Recently, Multiple-Instance Learning (MIL) has made significant progress, allowing classification from slide-level, rather than patch-level, annotations. However, existing MIL methods ignore the fact that all patches from normal slides are normal. Using this free annotation, we introduce a semi-supervision signal to de-bias the inter-slide variability and to capture the common factors of variation within normal patches. Because our method is orthogonal to the MIL algorithm, we evaluate it on top of recently proposed MIL algorithms and also compare its performance with other semi-supervised approaches. We evaluate our method on two public WSI datasets, Camelyon-16 and TCGA lung cancer, and demonstrate that our approach significantly improves the predictive performance of existing MIL algorithms and outperforms other semi-supervised algorithms. We release our code at https://github.com/AITRICS/pathology_mil.
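The "free annotation" idea above can be sketched as a combined objective: the usual slide-level MIL loss, plus a patch-level term that supervises every patch of a normal slide toward the normal class. This is a minimal NumPy sketch under our own assumptions (the function name, the weighting factor `lam`, and the use of plain binary cross-entropy are illustrative, not the paper's exact formulation):

```python
import numpy as np

def combined_mil_loss(bag_logit, bag_label, patch_logits, slide_is_normal, lam=0.5):
    """Slide-level MIL loss plus a patch-level 'free annotation' term.

    If the slide is normal, every patch is known to be normal, so each
    patch logit can be supervised toward the normal class (label 0).
    """
    def bce(logit, label):
        p = 1.0 / (1.0 + np.exp(-logit))  # sigmoid
        return -(label * np.log(p + 1e-12) + (1 - label) * np.log(1 - p + 1e-12))

    loss = bce(bag_logit, bag_label)  # standard slide-level MIL loss
    if slide_is_normal:
        # semi-supervision: all patches of a normal slide carry label 0
        loss += lam * np.mean([bce(z, 0.0) for z in patch_logits])
    return loss
```

Because the extra term only touches the loss, it can be layered on top of any MIL aggregator, which is what makes the method orthogonal to the choice of MIL algorithm.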
Background and Objective: To develop a reliable and accurate seizure detection method using deep learning models capable of detecting and classifying multiple seizure types in real time. Methods: We retrospectively collected electroencephalography (EEG) recordings, acquired as part of routine diagnostic tests for patients aged 3 months to 18 years with childhood absence epilepsy, infantile epileptic spasms syndrome, other generalized epilepsy, and focal epilepsy, between January 2018 and December 2022 at Severance Children's Hospital. We used EEG recordings from both seizure and non-seizure patients, downsampled to 200 Hz for real-time seizure detection and multi-class classification. Results: Of the 199 patients (620 seizures), 49 (297 seizures) belonged to the childhood absence epilepsy group, 16 (200 seizures) to the infantile epileptic spasms syndrome group, 14 (76 seizures) to the other generalized epilepsy group, 19 (47 seizures) to the focal epilepsy group, and 101 to the normal group. The best overall performance on the real-time seizure detection task was an AUROC of 0.98 and an AUPRC of 0.73, achieved by a ResNet combined with a long short-term memory (LSTM) network and a 12 s sliding window. Furthermore, ResNet50 without the frequency-band feature extractor showed the best overall weighted performance for multi-class seizure detection, with an AUROC of 0.99 and an AUPRC of 0.99. Discussion: Our approach offers robust methods, including an EEG preprocessing strategy with real-time detection and classification of multiple seizure types, which help monitor pediatric seizures. The results show that real-time seizure detection can be effectively applied to real-world clinical datasets from a pediatric epilepsy unit with realistic performance and speed.
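The real-time setup above pairs two numbers worth making concrete: 200 Hz sampling and a 12 s analysis window, i.e., each model input covers 2,400 samples per channel. This sketch shows how such windows could be cut from a continuous recording; the 1 s hop size and the 19-channel montage are our illustrative assumptions, not values stated in the abstract:

```python
import numpy as np

FS = 200       # sampling rate after downsampling (Hz)
WIN_SEC = 12   # sliding-window length used for real-time detection (s)
STEP_SEC = 1   # hypothetical hop size; the actual stride is an assumption

def sliding_windows(eeg, fs=FS, win_sec=WIN_SEC, step_sec=STEP_SEC):
    """Yield (start_time_s, window) pairs over a (channels, samples) EEG array."""
    win, step = win_sec * fs, step_sec * fs
    for start in range(0, eeg.shape[1] - win + 1, step):
        yield start / fs, eeg[:, start:start + win]

# e.g., a 19-channel, 30-second recording yields windows of shape (19, 2400)
eeg = np.zeros((19, 30 * FS))
windows = list(sliding_windows(eeg))
```

Each window would then be scored by the detector (e.g., the ResNet+LSTM model), so a detection decision is refreshed once per hop.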
The growing volume and complexity of evolving patient data have limited clinical decision making in the emergency department (ED). This study introduces an advanced deep learning algorithm designed to enhance real-time prediction accuracy for integration into a novel Clinical Decision Support System (CDSS). A retrospective study was conducted using data from a level 1 tertiary hospital. The algorithm’s predictive performance was evaluated on in-hospital cardiac arrest, inotropic circulatory support, advanced airway, and intensive care unit admission. We developed an artificial intelligence (AI) algorithm for the CDSS that integrates multiple data modalities, including vital signs, laboratory results, and imaging results from electronic health records. The AI model was trained and tested on a dataset of 237,059 ED visits. The algorithm’s predictions, based solely on triage information, significantly outperformed traditional logistic regression models, with notable improvements in the area under the precision-recall curve (AUPRC). Additionally, predictive accuracy improved with the inclusion of continuous data input at shorter intervals. This study suggests the feasibility of using AI algorithms in diverse clinical scenarios, particularly for earlier detection of clinical deterioration. Future work should focus on expanding the dataset and enhancing real-time data integration across multiple centers to further optimize its application within the novel CDSS.
Electronic Health Records (EHRs) provide abundant information through various modalities. However, learning from multi-modal EHR currently faces two major challenges, namely, 1) data embedding and 2) cases with missing modalities. The lack of a shared embedding function across modalities can discard the temporal relationships between different EHR modalities. On the other hand, most EHR studies rely only on EHR time series, and therefore missing modalities in EHR have not been well explored. Therefore, in this study, we introduce a Unified Multi-modal Set Embedding (UMSE) and Modality-Aware Attention (MAA) with Skip Bottleneck (SB). UMSE handles all EHR modalities without a separate imputation module or error-prone carry-forward imputation, whereas MAA with SB learns from EHR with missing modalities through effective modality-aware attention. Our model outperforms baseline models on mortality, vasopressor need, and intubation need prediction with the MIMIC-IV dataset.
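The set-embedding idea above can be illustrated with a toy example: every observation, regardless of modality, becomes one (time, modality, value) token embedded into a shared space, so a patient record is an unordered set of tokens and a missing modality simply contributes no tokens, with no imputation or carry-forward needed. This is a minimal sketch under our own assumptions (the embedding tables, sinusoidal time encoding, and all names are illustrative, not the paper's UMSE architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # embedding dimension (illustrative)

# Hypothetical shared embedding tables; the names are ours, not the paper's.
modality_emb = {"vitals": rng.normal(size=D),
                "labs":   rng.normal(size=D),
                "notes":  rng.normal(size=D)}
value_proj = rng.normal(size=D)

def embed_event(t, modality, value):
    """Embed one EHR event by summing time, modality, and value components."""
    time_emb = np.sin(t / 10.0 ** (2 * np.arange(D) / D))  # sinusoidal time code
    return time_emb + modality_emb[modality] + value * value_proj

# A patient record becomes an unordered set of event embeddings; here the
# "notes" modality is missing and simply contributes no tokens.
events = [(0.5, "vitals", 80.0), (1.2, "labs", 4.1)]
patient_set = np.stack([embed_event(*e) for e in events])
```

A set-attention model (such as the modality-aware attention described above) would then pool this variable-size set into a fixed-size patient representation.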
Accurate time prediction of patients' critical events is crucial in urgent scenarios where timely decision-making is important. Although many studies have proposed automatic prediction methods using Electronic Health Records (EHR), their coarse-grained time resolution limits their practical use in urgent environments such as the emergency department (ED) and intensive care unit (ICU). Therefore, in this study, we propose an hourly prediction method based on self-supervised predictive coding and multi-modal fusion for two critical tasks: mortality and vasopressor need prediction. Through extensive experiments, we demonstrate significant performance gains from both multi-modal fusion and self-supervised predictive regularization, most notably in far-future prediction, which is especially important in practice. In AUROC, our uni-modal, bi-modal, and bi-modal self-supervised models scored 0.846/0.877/0.897 (0.824/0.855/0.886) on mortality (far-future mortality) prediction and 0.817/0.820/0.858 (0.807/0.810/0.855) on vasopressor need (far-future vasopressor need) prediction, respectively.
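The self-supervised predictive regularization mentioned above can be sketched in its simplest form: given hourly patient representations, train an auxiliary head to predict the next hour's representation from the current one, which encourages the encoder to carry forward-looking information. This sketch uses a mean-squared-error stand-in for the objective and a linear predictor; both choices, and the function names, are our assumptions rather than the paper's exact formulation (predictive-coding methods often use a contrastive loss instead):

```python
import numpy as np

def predictive_coding_loss(reps, W):
    """Auxiliary regularizer: linearly predict the next hour's representation.

    reps: (T, D) array of hourly patient representations.
    W:    (D, D) prediction matrix (illustrative; learned jointly in practice).
    """
    pred = reps[:-1] @ W                       # predict hour t+1 from hour t
    return float(np.mean((pred - reps[1:]) ** 2))
```

This loss would be added to the supervised hourly mortality/vasopressor objectives, so the gain reported for far-future prediction plausibly comes from the encoder being pushed to anticipate future states.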