Medical imaging is crucial for the detection and diagnosis of breast cancer. Artificial intelligence and computer vision have rapidly gained popularity in medical image analysis thanks to technological advancements. To improve the effectiveness and efficiency of medical diagnosis and treatment, significant efforts have been made in the literature on medical image processing, segmentation, volumetric analysis, and prediction. This paper presents a prediction pipeline for breast cancer studies based on 3D computed tomography (CT) scans. Several algorithms were designed and integrated to classify the suitability of CT slices; the selected slices from patients were then further processed in the pipeline. This was followed by data generalization and volume segmentation to reduce computational complexity. The selected input data were fed into a 3D U-Net architecture in the pipeline for analysis and volumetric prediction of cancer tumors. Three U-Net models were designed and compared. The experimental results show that Model 1 obtained the highest accuracy at 91.44% with the highest memory usage; Model 2 had the lowest memory usage and the lowest accuracy at 85.18%; and Model 3 achieved a balance between accuracy and memory usage, making it the most suitable configuration for the developed pipeline.
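As a rough illustration of the kind of architecture involved, below is a minimal 3D U-Net sketch in PyTorch. The channel widths, network depth, and two-class voxel output are illustrative assumptions for this sketch, not the exact configurations of Models 1-3.

```python
# Minimal 3D U-Net sketch in PyTorch. Channel widths, depth, and the
# two-class output are illustrative, not the paper's exact models.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with batch norm and ReLU, as in typical 3D U-Nets.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # per-voxel class logits

# Example: one single-channel CT volume of 64^3 voxels.
logits = UNet3D()(torch.randn(1, 1, 64, 64, 64))  # -> (1, 2, 64, 64, 64)
```

The memory/accuracy trade-off the abstract reports would come from varying exactly these knobs (base width, depth, input volume size).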
Immunophenotyping via multi-marker assays significantly contributes to patient selection, therapeutic monitoring, biomarker discovery, and personalized treatments. Despite its potential, the multiplex immunofluorescence (mIF) technique faces adoption challenges due to technical and financial constraints. Alternatively, hematoxylin and eosin (H&E)-based prediction models of cell phenotypes can provide crucial insights into tumor-immune cell interactions and advance immunotherapy. Current methods mostly rely on manually annotated cell label ground truths, with limitations including high variability and substantial labor costs. To mitigate these issues, researchers are increasingly turning to digitized cell-level data for accurate in-situ cell type prediction. Typically, immunohistochemical (IHC) staining is applied to a tissue section serial to one stained with H&E. However, this method may introduce distortions and tissue section shifts, challenging the assumption of consistent cellular locations. Conversely, mIF overcomes these limitations by allowing for mIF and H&E staining on the same tissue section. Importantly, the multiplexing capability of mIF allows for a thorough analysis of the tumor microenvironment by quantifying multiple cell markers within the same tissue section. In this study, we introduce a Pix2Pix generative adversarial network (P2P-GAN)-based virtual staining model, using CD3+ T-cells in lung cancer as a proof-of-concept. Using an independent CD3 IHC-stained lung cohort, we demonstrate that the model trained with cell label ground truth from the same tissue section as the H&E staining performed significantly better in both CD3+ and CD3− T-cell prediction. Moreover, the model also displayed prognostic significance on a public lung cohort, demonstrating its potential clinical utility. Notably, our proposed P2P-GAN virtual staining model facilitates image-to-image translation, enabling further spatial analysis of the predicted immune cells, deepening our understanding of tumor-immune interactions, and propelling advancements in personalized immunotherapy. This concept holds potential for the prediction of other cell phenotypes, including CD4+, CD8+, and CD20+ cells.
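To illustrate the Pix2Pix objective underlying such a virtual staining model, here is a minimal training-step sketch in PyTorch. The paired adversarial-plus-L1 loss is the standard Pix2Pix formulation; the `G`, `D`, and `lambda_l1` choices are placeholders rather than the study's exact setup.

```python
# Minimal Pix2Pix training-step sketch in PyTorch for H&E -> virtual CD3
# translation. G (generator) and D (discriminator) are any suitable paired
# networks; lambda_l1 = 100 is the common Pix2Pix default, assumed here.
import torch
import torch.nn as nn

def pix2pix_step(G, D, opt_g, opt_d, he, cd3, lambda_l1=100.0):
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # --- Discriminator: real (H&E, mIF) pairs vs fake (H&E, G(H&E)) pairs ---
    fake = G(he).detach()
    d_real = D(torch.cat([he, cd3], dim=1))
    d_fake = D(torch.cat([he, fake], dim=1))
    loss_d = 0.5 * (bce(d_real, torch.ones_like(d_real))
                    + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- Generator: fool D while staying close to the true stain (L1) ---
    fake = G(he)
    d_fake = D(torch.cat([he, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, cd3)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

In the standard Pix2Pix design, G is a U-Net generator and D a PatchGAN discriminator operating on concatenated (input, output) image pairs, which is why both calls to D receive the H&E image alongside the real or generated stain.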
In routine cancer diagnosis, pathologists manually characterize tumor cells based on hematoxylin and eosin (H&E)-stained images. On the other hand, transcriptomic-based tumor molecular subtypes have been shown to be associated with important clinical features, including tumorigenesis and prognosis. Leveraging the recent development of spatial transcriptomics (ST), which allows in-situ transcriptomic profiling of tissues [1], we aim to develop a first-of-its-kind machine learning (ML)-enabled approach that integrates morphology and transcriptome for tumor single-cell phenotyping.
Methods
Two tissue sections each from tumor and adjacent-normal areas collected from a hepatocellular carcinoma (HCC) patient were profiled using the 10× Visium ST platform. Using the companion H&E image, individual epithelial cells were segmented (StarDist algorithm) and 53 morphological and staining features were extracted (QuPath v0.3.2). These cells were clustered in an unsupervised manner using an encoder-based ensemble method, where the optimal clustering solution was determined by a consensus score over three clustering metrics. Phenotypic gene signatures of the cell clusters were determined by deconvoluting the ST data. Gene ontology (GO) analysis was performed using single-sample gene set enrichment based on the Molecular Signatures Database.
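A minimal sketch of the consensus-scoring idea is shown below, assuming scikit-learn. Plain KMeans stands in for the encoder-based ensemble, and the metric set and range of cluster counts are illustrative assumptions.

```python
# Sketch: pick the cluster count whose solution wins a consensus of three
# internal clustering metrics. KMeans is a stand-in for the encoder-based
# ensemble actually used; the k range and metrics are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)
from sklearn.preprocessing import StandardScaler

def consensus_best_k(features, k_range=range(2, 9), seed=0):
    X = StandardScaler().fit_transform(features)  # e.g. 53 QuPath features/cell
    rows = []
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        rows.append([silhouette_score(X, labels),
                     calinski_harabasz_score(X, labels),
                     -davies_bouldin_score(X, labels)])  # negate: lower is better
    scores = np.asarray(rows)
    # Rank each metric across k values; consensus score = mean rank.
    ranks = scores.argsort(axis=0).argsort(axis=0)
    return list(k_range)[int(ranks.mean(axis=1).argmax())]

# Example with random stand-in data: 500 cells x 53 features.
best_k = consensus_best_k(np.random.rand(500, 53))
```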
Results
At the optimal clustering setting, four epithelial cell clusters, characterized by differential nuclear size, were detected individually in each HCC tissue (figure 1). Manual inspection by a pathologist (YZ) confirmed that the tumor epithelial cells demonstrated different nuclear sizes and revealed that the two smaller cell clusters appeared relatively more well-differentiated, with ~1% found outside the tumor nest, suggesting potential epithelial-to-mesenchymal transition (EMT) activity. In contrast, the two larger clusters were moderately differentiated and demonstrated hyperchromatic nuclei and pleomorphism. GO analysis confirmed the upregulation of EMT in the smallest cluster in both tumor tissues. While epithelial cells in the two adjacent-normal tissues appeared morphologically non-cancerous, the corresponding cell clusters contributed cell fractions similar to those of the tumor tissues; the two smaller clusters contributed ~70% of the total cells across all tissues (figure 2). Cell clusters with similar nuclear size shared 30-65% of the top 20 pathways across tissues, indicating inter-tissue phenotypic consistency. Cells were most often found near cells of their own type, followed by cells of similar size, suggesting preferential clustering of cells with similar phenotypes (figure 3).
Conclusions
Our ML approach revealed four morphologically and transcriptomically distinct tumor cell subsets in the HCC tissues, with the smallest cells appearing EMT-like. We revealed intra-patient tumor cell heterogeneity yet phenotypic consistency across tissue sampling sites. Altogether, our proposed approach would enable more refined tumor cell phenotyping, advancing our understanding of tumor biology.
Acknowledgements
I would like to thank the NTU Undergraduate Research Experience on Campus (URECA) program for giving me the opportunity to work on this project for the past year.
Reference
Nerurkar SN, Goh D, Cheung CCL, Nga PQY, Lim JCT, Yeong JPS. Transcriptional spatial profiling of cancer tissues in the era of immunotherapy: the potential and promise. Cancers. 2020;12:2572.
Ethics Approval
This study was approved by the SingHealth Centralized Institutional Review Board (reference numbers: 2018/3045 and 2019/2653).
Consent
The patients provided their written informed consent to participate in this study.
Therapy has long been the established method for treating different phobias. In recent years, virtual reality (VR) therapy has begun to supplement traditional counseling and has proved very helpful in this regard. However, VR therapies still lack a measure of the rate and degree of improvement in the patient's condition. In this chapter, a system is proposed that fills this gap by embedding a heart rate (HR) monitoring feature in the VR experience. First, 3D immersive VR environments with eye-gaze interaction were designed for claustrophobia and nyctophobia. A heart rate monitoring sensor is then attached to the user's fingertip, providing real-time data during therapy sessions; a higher heart rate during a session indicates higher levels of stress or fear. Participants were further administered the State-Trait Anxiety Inventory (STAI-Y1) survey after VR exposure therapy. Our experiments showed that HR variation during and after a VR session provided better insight into the degree of discomfort experienced by the patient and allowed the patient's progress to be tracked across therapy sessions. Moreover, the results indicate a decrease in participants' anxiety levels from high to moderate in the second session of exposure therapy. The positive change in the patients' conditions motivates future research into the design and development of VR exposure therapy (VRET) for treating other phobias as well.
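As a sketch of the HR-monitoring logic described above, the following Python snippet flags elevated stress when the smoothed heart rate exceeds a resting baseline. The `read_bpm()` sensor hook, the 30-sample baseline window, and the 1.2x threshold factor are all assumptions for illustration.

```python
# Sketch: flag elevated stress from a fingertip heart-rate stream during a
# VR exposure session. read_bpm(), the baseline window, and the threshold
# factor are illustrative assumptions, not the chapter's exact parameters.
from collections import deque
from statistics import mean

def monitor_session(read_bpm, n_baseline=30, factor=1.2, smooth=5):
    """read_bpm: callable returning one beats-per-minute sample per call."""
    resting = mean(read_bpm() for _ in range(n_baseline))  # pre-session baseline
    recent = deque(maxlen=smooth)                          # moving-average window
    while True:
        recent.append(read_bpm())
        level = mean(recent)
        if len(recent) == smooth and level > factor * resting:
            yield "elevated", level                        # likely fear/stress
        else:
            yield "normal", level

# Example with a fake sensor: resting ~70 bpm, then a sustained spike to 95.
import itertools, random
samples = itertools.chain((random.gauss(70, 2) for _ in range(30)),
                          itertools.cycle([95.0]))
states = monitor_session(lambda: next(samples))
for _ in range(6):
    print(next(states))   # switches to "elevated" once the window fills
```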
Virtual reality (VR) has good potential to promote technology-enhanced learning. Students can benefit from immersive visualization and intuitive interaction when learning abstract concepts, complex structures, and dynamic processes. This paper evaluates the effects of VR learning games in a Virtual and Augmented Reality Technology-Enhanced Learning (VARTeL) environment within an engineering education setting. A VARTeL flipped classroom was established in the HIVE learning hub at Nanyang Technological University (NTU) Singapore for immersive and interactive learning. Experiments were designed for university students using three interactive and immersive VR games related to science, technology, engineering, and mathematics (STEM): virtual cells, a virtual F1 racing car, and vector geometry. These VR games are part of the VARTeL apps designed in-house at NTU for STEM education. Quantitative and qualitative analyses were performed. A total of 156 students from Mechanical Engineering participated in the experiment, and 15 participants were selected for interviews afterwards. Pre-tests and post-tests were conducted using two instruments, the developed VARTeL test and the modified Technology-Rich Outcome-Focused Learning Environment Inventory (TROFLEI), to measure the efficiency of the VARTeL environment in higher education. Significant improvements of about 24.8% were observed in the post-tests over the pre-tests, illustrating the effectiveness of VARTeL for engineering education. Details of the VR simulation games, methods of data collection, data analyses, and experiment results are discussed. All the underlying scales of the modified TROFLEI are above the threshold for the 'Good' category, indicating that a highly reliable questionnaire was designed in this research. The mean 'Ideal' values are about 0.7-2.6% higher than the mean 'Actual' values. The limitations of the experiment and future work, with recommendations, are also presented in this paper.
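For illustration, a pre-/post-test comparison of this kind is commonly analyzed with a paired t-test. The Python sketch below, assuming NumPy and SciPy, uses random stand-in scores (not the study's data) scaled to mimic the ~24.8% mean improvement reported.

```python
# Sketch of a paired pre-/post-test analysis with SciPy. The scores are
# random stand-ins, not the study's data; only the analysis pattern is real.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.uniform(40, 80, size=156)             # 156 participants
post = pre * 1.248 + rng.normal(0, 5, 156)      # ~24.8% mean improvement

t_stat, p_value = stats.ttest_rel(post, pre)    # paired (repeated-measures) test
improvement = 100 * (post.mean() - pre.mean()) / pre.mean()
print(f"t = {t_stat:.2f}, p = {p_value:.3g}, mean gain = {improvement:.1f}%")
```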
Shipping is one of the most important transportation methods in global trade. Underwater hull cleaning increases efficiency and decreases fuel consumption by 9.6% through the removal of marine fouling, which reduces friction and improves vessel hydrodynamics. With the advancement of technology, underwater cleaning robots are used to reduce the need for human divers to clean the hull. While cleaning Remotely Operated Vehicles (ROVs) have their merits, training people to skilfully control a cleaning ROV is difficult. This work develops a novel simulation technology combining VR and underwater robotics for the training of hull cleaning operators. Specifically, the work investigates the integration of Unity and the Robot Operating System (ROS). A proof-of-concept VR simulator is implemented to control an underwater ROV and to model the underwater environment and the ship hull with the aid of key sensors such as sonar and inertial measurement units (IMUs).
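A minimal sketch of the ROS side of such a Unity-ROS integration is given below, using rospy. The topic names and the `get_vr_command()` hook are assumptions; in practice Unity would connect through a bridge such as rosbridge or the ROS-TCP-Connector.

```python
# Minimal rospy node sketch for a VR-driven ROV teleoperation loop:
# subscribe to the ROV's IMU, publish velocity commands from VR input.
# Topic names and get_vr_command() are illustrative assumptions.
import rospy
from sensor_msgs.msg import Imu
from geometry_msgs.msg import Twist

def on_imu(msg):
    # Forward the ROV's orientation to the VR scene (here we just log it).
    q = msg.orientation
    rospy.loginfo("ROV orientation: %.2f %.2f %.2f %.2f", q.x, q.y, q.z, q.w)

def main(get_vr_command):
    rospy.init_node("vr_rov_teleop")
    rospy.Subscriber("/rov/imu/data", Imu, on_imu)
    cmd_pub = rospy.Publisher("/rov/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(20)                       # 20 Hz control loop
    while not rospy.is_shutdown():
        surge, yaw = get_vr_command()           # e.g. from VR controller axes
        cmd = Twist()
        cmd.linear.x, cmd.angular.z = surge, yaw
        cmd_pub.publish(cmd)
        rate.sleep()
```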
Background: Classification of disease response is an essential task in cancer research and needs to be done at scale. Automating this process can improve efficiency in the generation of real-world evidence, potentially leading to better patient outcomes. We aim to develop and evaluate natural language processing (NLP) models for this task. Methods: Using 6203 computed tomography (CT) and 1358 magnetic resonance imaging (MRI) reports from 587 patients with lung cancer of all stages seen at the National Cancer Centre Singapore (NCCS), we trained four NLP models (BioBERT, RadBERT-RoBERTa, BioClinicalBERT, GatorTron) to classify each report into one of four categories: no evidence of disease, stable disease, partial response, or disease progression. Model output was compared against a human-curated ground truth, and performance was evaluated by accuracy. Results: Of the four models, GatorTron performed best (accuracy = 97.1%), followed by RadBERT-RoBERTa (accuracy = 96.2%) and BioBERT (accuracy = 94.2%), with BioClinicalBERT last (accuracy = 90.4%). Model runtimes for the dataset were relatively short: BioBERT and BioClinicalBERT took 3 minutes per epoch, RadBERT-RoBERTa 6 minutes per epoch, and GatorTron 10 minutes per epoch on a single central processing unit (CPU). Conclusions: We have demonstrated the effectiveness of NLP models for classifying disease response in radiology reports of lung cancer patients. This has the potential to help derive progression-free survival for real-world evidence generation.
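For reference, fine-tuning one of these encoder models for four-class response classification typically looks like the sketch below, assuming the Hugging Face transformers library. The BioBERT checkpoint name is a real public checkpoint, but the hyperparameters and the example report/label are stand-ins for the curated NCCS dataset.

```python
# Sketch: fine-tune a BERT-family encoder for four-class disease-response
# classification of radiology reports. Checkpoint is real (BioBERT v1.1);
# the learning rate and example report/label are illustrative stand-ins.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["no evidence of disease", "stable disease",
          "partial response", "disease progression"]

tok = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
model = AutoModelForSequenceClassification.from_pretrained(
    "dmis-lab/biobert-v1.1", num_labels=len(LABELS))
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

reports = ["Stable appearance of the right upper lobe mass ..."]  # stand-in text
labels = torch.tensor([1])                        # index of "stable disease"

batch = tok(reports, truncation=True, padding=True, return_tensors="pt")
out = model(**batch, labels=labels)               # cross-entropy loss included
out.loss.backward()                               # one training step
optimizer.step()

pred = out.logits.argmax(dim=-1).item()
print(LABELS[pred])
```

The same loop applies to the other three models by swapping the checkpoint name, which is what makes the per-epoch runtime comparison in the abstract straightforward.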