Drug-drug interactions (DDIs) trigger unexpected pharmacological effects in vivo, often with unknown causal mechanisms, and deep learning methods have been developed to better understand them. However, learning domain-invariant representations for DDI prediction remains a challenge: generalizable predictions reflect real-world use more closely than predictions confined to the source domain, yet existing methods struggle to make out-of-distribution (OOD) predictions. In this article, focusing on substructure interactions, we propose DSIL-DDI, a pluggable substructure-interaction module that learns domain-invariant representations of DDIs from the source domain. We evaluate DSIL-DDI in three scenarios: the transductive setting (all drugs in the test set appear in the training set), the inductive setting (the test set contains new drugs absent from the training set), and the OOD generalization setting (the training and test sets come from different datasets). The results demonstrate that DSIL-DDI improves the generalization and interpretability of DDI prediction models and provides valuable insights for OOD DDI prediction. DSIL-DDI can help clinicians ensure the safety of drug administration and reduce the harm caused by drug abuse.
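As a hedged illustration of what a pluggable substructure-interaction module might look like, the sketch below scores every substructure pair between two drugs with a learned bilinear form and pools the pairs into a relation prediction. The class name, bilinear scorer, and pooling scheme are our assumptions, not the published DSIL-DDI architecture.

```python
# Hypothetical sketch of a pluggable substructure-interaction module in the
# spirit of DSIL-DDI; the bilinear scorer and pooling are illustrative
# assumptions, not the authors' exact design.
import torch
import torch.nn as nn

class SubstructureInteraction(nn.Module):
    def __init__(self, dim: int, n_relations: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)  # bilinear pair scorer
        self.classifier = nn.Linear(2 * dim, n_relations)          # pooled pairs -> DDI type

    def forward(self, sub_x: torch.Tensor, sub_y: torch.Tensor) -> torch.Tensor:
        # sub_x: (n_x, dim) and sub_y: (n_y, dim) substructure embeddings of two drugs
        scores = torch.einsum('id,de,je->ij', sub_x, self.W, sub_y)    # (n_x, n_y) pair scores
        attn = torch.softmax(scores.flatten(), dim=0).view_as(scores)  # joint attention over pairs
        pooled_x = attn.sum(dim=1) @ sub_x  # drug-x substructures, weighted by interaction strength
        pooled_y = attn.sum(dim=0) @ sub_y  # drug-y substructures, weighted by interaction strength
        return self.classifier(torch.cat([pooled_x, pooled_y]))        # logits over DDI relations
```

Because such a module consumes only substructure embeddings, it could in principle be plugged on top of any backbone that produces them, and the pair-attention map is what makes the prediction inspectable.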
Measurement techniques often introduce domain gaps among batches of cellular data from a given modality. The effectiveness of cross-batch annotation methods is influenced by inductive bias, the set of assumptions that govern how a model makes predictions. Different annotation methods carry distinct inductive biases, leading to varying degrees of generalizability and interpretability. Given that certain cell types exhibit unique functional patterns, we hypothesize that the inductive biases of cell annotation methods should align with these biological patterns to produce meaningful predictions. In this study, we propose KIDA, Knowledge-based Inductive bias and Domain Adaptation. The knowledge-based inductive bias constrains the prediction rules learned from the reference dataset, which is composed of multiple batches, to biologically relevant functional patterns, thereby enhancing the model's generalization to unseen batches. Since the query dataset also contains gaps across multiple batches, KIDA's domain adaptation employs pseudo labels for self-knowledge distillation, effectively narrowing the distribution gap between model predictions and the query dataset. Benchmark experiments demonstrate that KIDA achieves accurate cross-batch cell type annotation. Knowledge-based inductive bias and domain adaptation can enhance the cell type annotation accuracy of deep learning models.
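To make the domain-adaptation step concrete, here is a minimal sketch of pseudo-label self-knowledge distillation on an unlabeled query batch. The temperature, confidence threshold, and KL formulation are our assumptions, not KIDA's published objective.

```python
# Minimal sketch of self-knowledge distillation with pseudo labels; T, the
# confidence threshold, and the KL form are illustrative assumptions.
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, T=2.0, conf_threshold=0.9):
    # Pseudo labels: the model's own softened predictions on the unlabeled query batch
    with torch.no_grad():
        teacher_probs = F.softmax(teacher_logits / T, dim=-1)
        keep = teacher_probs.max(dim=-1).values >= conf_threshold  # trust confident cells only
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL(teacher || student) per cell, averaged over confident cells, scaled by T^2 as usual
    kl = F.kl_div(log_student, teacher_probs, reduction='none').sum(dim=-1)
    return (kl * keep).sum() / keep.sum().clamp(min=1) * (T * T)
```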
Motivation: Advances in single-cell measurement techniques provide rich multimodal data, helping us explore the state of cells more deeply. However, multimodal integration, i.e., learning joint embeddings from multimodal data, remains a challenge. Integrating unpaired single-cell multimodal data is difficult because different modalities have different feature spaces, which easily leads to information loss in the joint embedding, and few existing methods fully exploit and fuse the information in single-cell multimodal data. Results: In this study, we propose CoVEL, a deep learning method for the unsupervised integration of single-cell multimodal data. CoVEL learns single-cell representations from a comprehensive view, including regulatory relationships between modalities, fine-grained representations of cells, and relationships between different cells. The comprehensive-view embedding enables CoVEL to close the gap between modalities while preserving biological heterogeneity. Experimental results on multiple public datasets show that CoVEL is accurate and robust for single-cell multimodal integration. Data availability: https://github.com/shapsider/scintegration.
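The three ingredients of the comprehensive view can be pictured as three loss terms. The sketch below is a minimal composition under our own assumptions (an alignment term on cells linked across modalities, e.g., via regulatory anchors; a per-cell reconstruction term; and a kNN-graph smoothness term), not CoVEL's actual objective.

```python
# Illustrative composition of the three "views" the abstract names; the loss
# terms, weights, and the cross-modal linkage are our assumptions.
import torch.nn.functional as F

def comprehensive_view_loss(z_rna, z_atac, x_rna, recon_rna, knn_edges,
                            w_align=1.0, w_recon=1.0, w_graph=0.1):
    # 1) Regulatory/modality view: cells linked across modalities embed nearby
    align = F.mse_loss(z_rna, z_atac)
    # 2) Fine-grained cell view: each cell's profile is reconstructed from its embedding
    recon = F.mse_loss(recon_rna, x_rna)
    # 3) Cell-cell view: neighbors in a kNN graph stay close in embedding space
    src, dst = knn_edges                                  # two (n_edges,) index tensors
    graph = (z_rna[src] - z_rna[dst]).pow(2).sum(dim=-1).mean()
    return w_align * align + w_recon * recon + w_graph * graph
```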
Single-cell analysis has revolutionized our understanding of cellular heterogeneity, yet current approaches face challenges in efficiency and interpretability. In this study, we present scKAN, a framework that leverages Kolmogorov-Arnold Networks for interpretable single-cell analysis through three key innovations: efficient knowledge transfer from large language models via a lightweight distillation strategy; systematic identification of cell-type-specific functional gene sets through KAN's learned activation curves; and precise marker gene discovery enabled by KAN's importance scores, with potential for drug repurposing applications. The model achieves superior performance on cell-type annotation with a 6.63% improvement in macro F1 score compared to state-of-the-art methods. Furthermore, scKAN's learned activation curves and importance scores provide interpretable insights into cell-type-specific gene patterns, facilitating both gene set identification and marker gene discovery. We demonstrate the practical utility of scKAN through a case study on pancreatic ductal adenocarcinoma, where it successfully identified novel therapeutic targets and potential drug candidates, including Doconexent as a promising repurposing candidate. Molecular dynamics simulations further validated the stability of the predicted drug-target complexes. Our approach offers a comprehensive framework for bridging single-cell analysis with drug discovery, accelerating the translation of single-cell insights into therapeutic applications.
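To illustrate the two KAN ingredients the abstract leans on, learned per-edge activation curves and derived gene importance scores, here is a compact KAN-style layer. The radial-basis parameterization of the curves and the importance definition are our simplifying assumptions, not scKAN's implementation.

```python
# A compact KAN-style layer: every gene-to-unit edge carries its own learnable
# univariate curve, approximated here with fixed Gaussian basis functions.
import torch
import torch.nn as nn

class MiniKANLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, n_basis: int = 8):
        super().__init__()
        self.register_buffer('centers', torch.linspace(-2.0, 2.0, n_basis))  # fixed RBF grid
        # coef[o, g, b]: weight of basis b on the learned curve from gene g to unit o
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, n_basis) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (cells, genes)
        basis = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)  # (cells, genes, n_basis)
        # Each edge applies its own learned activation curve; edges then sum into outputs
        return torch.einsum('cgb,ogb->co', basis, self.coef)

    def gene_importance(self) -> torch.Tensor:
        # A gene's importance: total magnitude of its learned activation curves
        return self.coef.abs().sum(dim=(0, 2))              # (genes,)
```

Inspecting each edge's curve (its RBF mixture) is what makes this family of models interpretable: flat curves mark irrelevant genes, while strongly shaped curves flag candidate markers.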
Because of the subtle differences between sub-categories of common visual categories such as bird species, fine-grained classification has long been regarded as a challenging task. Most previous works focus on features from a single discriminative region in isolation, neglecting the connections between different discriminative regions across the whole image. However, the relationships between discriminative regions carry rich posture information, and by incorporating this information the model can learn the behavior of the object, which helps improve classification performance. In this paper, we propose a novel fine-grained framework named PMRC (posture mining and reverse cross-entropy), which can be combined with different backbones to good effect. In PMRC, we use the Deep Navigator to generate discriminative regions from the images and then use them to construct a graph. We aggregate the graph by message passing to obtain the classification results. Specifically, to force PMRC to learn how to mine posture information, we design a novel training paradigm in which the Deep Navigator and the message-passing stage communicate and are trained together. In addition, we propose reverse cross-entropy (RCE) and demonstrate that, compared with cross-entropy (CE), RCE not only improves the accuracy of our model but also generalizes to improve the accuracy of other fine-grained classification models. Experimental results on benchmark datasets confirm that PMRC achieves state-of-the-art performance.
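The abstract does not spell out PMRC's RCE formula, so the sketch below follows the common reverse cross-entropy from the noisy-label literature, in which the roles of predictions and labels are swapped and log 0 on the one-hot labels is clamped to a constant. Treat this as an assumption about the flavor of the loss, not PMRC's definition.

```python
# Reverse cross-entropy in its common noisy-label formulation; whether PMRC's
# RCE matches this exactly is our assumption, not a claim from the paper.
import torch
import torch.nn.functional as F

def reverse_cross_entropy(logits, labels, num_classes, A=-4.0):
    pred = F.softmax(logits, dim=-1)                       # model predictions, (batch, classes)
    one_hot = F.one_hot(labels, num_classes).float()
    # Swap the roles of predictions and labels; log(0) on the labels is clamped to A
    log_labels = torch.where(one_hot > 0,
                             torch.zeros_like(one_hot),    # log 1 = 0 on the true class
                             torch.full_like(one_hot, A))  # log 0 := A elsewhere
    return -(pred * log_labels).sum(dim=-1).mean()
```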
Heterogeneous feature spaces and technical noise hinder cellular data integration and imputation. The high cost of obtaining matched data across modalities further restricts analysis. There is therefore a critical need for deep learning approaches that effectively integrate and impute unpaired multi-modality single-cell data, enabling deeper insights into cellular behaviors. To address these issues, we introduce the Modal-Nexus Auto-Encoder (Monae). Leveraging regulatory relationships between modalities and employing contrastive learning within modality-specific auto-encoders, Monae enhances cell representations in the unified space. Monae's integration capability furnishes it with modality-complementary cellular representations, enabling the generation of precise intra-modal and cross-modal imputation counts for extensive and complex downstream tasks. In addition, we develop Monae-E (Monae-Extension), a variant of Monae that converges rapidly and supports biological discoveries. Evaluations on various datasets have validated the accuracy and robustness of Monae and Monae-E in multi-modality cellular data integration and imputation.
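As a concrete picture of "contrastive learning within modality-specific auto-encoders", the sketch below is a standard InfoNCE objective pulling linked cells' RNA and ATAC embeddings together in the unified space. The symmetric formulation, temperature, and batch-level linkage are our illustrative choices rather than Monae's exact loss.

```python
# InfoNCE-style cross-modal contrastive loss; the symmetric form and
# temperature are illustrative assumptions, not Monae's published objective.
import torch
import torch.nn.functional as F

def cross_modal_info_nce(z_rna, z_atac, temperature=0.1):
    # z_rna, z_atac: (n, dim) embeddings of n linked cells from the two modality encoders
    z_rna = F.normalize(z_rna, dim=-1)
    z_atac = F.normalize(z_atac, dim=-1)
    logits = z_rna @ z_atac.t() / temperature              # (n, n) scaled cosine similarities
    targets = torch.arange(z_rna.size(0), device=z_rna.device)
    # Row i's positive is column i; every other cell in the batch is a negative
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```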