Differences among the database representations of clinical data are a major barrier to the integration of databases and to the sharing of decision-support applications across databases. Prior research on resolving data heterogeneity has not addressed specifically the types of mismatches found in various timestamping approaches for clinical data. Such temporal mismatches, which include time-unit differences among timestamps, must be overcome before many applications can use these data to reason about diagnosis, therapy, or prognosis. In this paper, we present an analysis of the types of temporal mismatches that exist in clinical databases. To formalize these various approaches to timestamping, we provide a foundational model of time. This model gives us the semantics necessary to encode the temporal dimensions of clinical data in legacy databases and to transform such heterogeneous data into a uniform temporal representation suitable for decision support. We have implemented this foundational model as an extension to our Chronus system, which provides clinical decision-support applications with the ability to match temporal patterns in clinical databases. We discuss the uniqueness of our approach in comparison with other research on representing and querying clinical data with varying timestamp representations.
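The abstract does not describe Chronus's internals; as a minimal illustrative sketch, the Python below shows one way a time-unit mismatch might be reconciled: coarsening timestamps recorded at mixed granularities to a common unit before pattern matching. The `Granularity` enum, `Timestamp` class, and `coarsen` function are hypothetical names for this sketch, not constructs from the Chronus system.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

# Hypothetical granularities; the paper's actual temporal model is not
# given in the abstract.
class Granularity(Enum):
    DAY = "day"
    MONTH = "month"
    YEAR = "year"

@dataclass
class Timestamp:
    """A time point annotated with the unit at which it was recorded."""
    instant: datetime
    unit: Granularity

def coarsen(ts: Timestamp, target: Granularity) -> Timestamp:
    """Map a timestamp to a coarser unit so that values from databases
    with different timestamping conventions become comparable."""
    if target == Granularity.YEAR:
        return Timestamp(datetime(ts.instant.year, 1, 1), target)
    if target == Granularity.MONTH:
        return Timestamp(datetime(ts.instant.year, ts.instant.month, 1), target)
    return Timestamp(datetime(ts.instant.year, ts.instant.month, ts.instant.day), target)

# Example: one database records lab results per day, another per month;
# coarsening both to MONTH yields a uniform representation for querying.
a = Timestamp(datetime(2023, 5, 17), Granularity.DAY)
b = Timestamp(datetime(2023, 5, 1), Granularity.MONTH)
assert coarsen(a, Granularity.MONTH).instant == b.instant
```

Coarsening loses precision by design: it is the conservative direction for reconciling mismatched units, since a day-level value can always be expressed at month level but not the reverse.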
The Gene Expression Omnibus (GEO) contains more than two million digital samples from functional genomics experiments amassed over almost two decades. However, individual sample metadata remain poorly described by unstructured free-text attributes, preventing large-scale reanalysis. We introduce the Search Tag Analyze Resource for GEO as a web application (http://STARGEO.org) to curate better annotations of sample phenotypes uniformly across different studies, and to use these sample annotations to define robust genomic signatures of disease pathology by meta-analysis. In this paper, we enlisted a small group of biomedical graduate students to demonstrate rapid crowd-curation of precise sample annotations across all phenotypes, and we demonstrate the biological validity of these crowd-curated annotations for breast cancer. STARGEO.org makes GEO data findable, accessible, interoperable, and reusable (i.e., FAIR) to ultimately facilitate knowledge discovery. Our work demonstrates the utility of crowd-curation and interpretation of open 'big data' under FAIR principles as a first step towards realizing an ideal paradigm of precision medicine.
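The abstract does not detail the statistical machinery behind the meta-analysis, so the following Python sketch shows only a generic, standard approach: fixed-effect inverse-variance pooling of per-study effect sizes for a gene across curated case/control annotations. This is an assumption for illustration and may differ from STARGEO's actual method.

```python
import math

def inverse_variance_meta(effects, variances):
    """Fixed-effect inverse-variance meta-analysis: pool per-study effect
    sizes (e.g., log fold changes of a gene between case and control
    samples) into one weighted estimate, and return its z-score."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    standard_error = math.sqrt(1.0 / sum(weights))
    return pooled, pooled / standard_error

# Example (made-up numbers): the same gene measured in three GEO studies.
effect, z = inverse_variance_meta([0.8, 1.1, 0.6], [0.04, 0.09, 0.05])
print(f"pooled log fold change = {effect:.2f}, z = {z:.2f}")
```

The key point the sketch conveys is why uniform sample annotations matter: pooling across studies is only meaningful once "case" and "control" mean the same thing in every study, which is exactly what the crowd-curation provides.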
Metadata play a crucial role in ensuring the findability, accessibility, interoperability, and reusability of datasets. This paper investigates the potential of large language models (LLMs), specifically GPT-4, to improve adherence to metadata standards. We conducted experiments on 200 random data records describing human samples relating to lung cancer from the NCBI BioSample repository, evaluating GPT-4's ability to suggest edits for adherence to metadata standards. We computed the adherence accuracy of field name–field value pairs through a peer review process, and we observed a marginal average improvement in adherence to the standard data dictionary, from 79% to 80% (p<0.01). We then prompted GPT-4 with domain information in the form of the textual descriptions of CEDAR templates and recorded a significant improvement, from 79% to 97% (p<0.01). These results indicate that, while LLMs may not be able to correct legacy metadata to ensure satisfactory adherence to standards when unaided, they do show promise for use in automated metadata curation when integrated with a structured knowledge base.
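The paper's exact prompts and pipeline are not given in the abstract; the Python sketch below shows the general shape of such an experiment using the official openai package: an LLM is asked to revise a metadata record, with a template's data dictionary injected into the prompt as domain knowledge. The prompt wording, the `suggest_edits` helper, and the example record are all illustrative assumptions.

```python
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_edits(record: dict, template_description: str) -> str:
    """Ask the model to revise a metadata record so that each field
    name-field value pair conforms to the supplied data dictionary.
    The prompt wording here is illustrative, not the paper's."""
    prompt = (
        "You are a biomedical metadata curator. Using the data dictionary "
        "below, correct this BioSample record.\n\n"
        f"Data dictionary:\n{template_description}\n\n"
        f"Record:\n{record}\n\n"
        "Return only the corrected field name: field value pairs."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # favor reproducible output for curation tasks
    )
    return response.choices[0].message.content

# Illustrative (made-up) noisy record and a one-line CEDAR-style description.
record = {"disease": "lung cancer", "sex": "M", "tissue": "Lung"}
dictionary = "sex: one of {male, female}; tissue: a controlled tissue term label"
print(suggest_edits(record, dictionary))
```

The contrast the paper reports (80% unaided vs. 97% with template descriptions) corresponds to running this kind of call with and without the data-dictionary portion of the prompt.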
Objective evaluation and comparison of knowledge-based tools have so far been mostly an elusive goal for researchers and developers. Objective experiments are difficult to perform and require substantial resources. The EON Ontology Alignment Contest attempts to overcome these problems by inviting tool developers to perform a series of experiments in ontology alignment and to compare their results to the reference alignments produced by the experiment authors. We used our PROMPT suite of tools in the experiment. We briefly describe PROMPT in the paper and present our results. Based on this experience, we share our thoughts on the experiment design, its positive and negative aspects, and discuss lessons learned and ideas for future experiments and contests of this kind.
Ontologies have become a critical component of many applications in biomedical informatics. However, the landscape of ontology tools today is largely fragmented, with independent tools for ontology editing, publishing, and peer review: users develop an ontology in an ontology editor, such as Protégé; publish it on a Web server or in an ontology library, such as BioPortal, to share it with the community; and collect feedback through the tools provided by the library, or through mailing lists and bug trackers. In this paper, we present a set of tools that bring ontology editing and publishing closer together, in an integrated platform for the entire ontology lifecycle. This integration streamlines the workflow for collaborative development and increases integration between the ontologies themselves through the reuse of terms.