An Intrinsic and Extrinsic Evaluation of Learned COVID-19 Concepts using Open-Source Word Embedding Sources

2021 
BACKGROUND: Scientists are developing new computational methods and prediction models to better understand COVID-19 prevalence, treatment efficacy, and patient outcomes in clinical settings. These efforts could be improved by leveraging documented, COVID-19-related symptoms, findings, and disorders from clinical text sources in the electronic health record. Word embeddings can identify terms related to these clinical concepts from both the biomedical and non-biomedical domains and are being shared with the open-source community at large. However, it is unclear how useful openly available word embeddings are for developing lexicons for COVID-19-related concepts.

OBJECTIVE: Given an initial lexicon of COVID-19-related terms, we aimed to characterize the terms returned by similarity across various open-source word embeddings and to determine common semantic and syntactic patterns between the COVID-19 query terms and the returned terms specific to each word embedding source.

METHODS: We compared seven openly available word embedding sources. Using a series of COVID-19-related terms for associated symptoms, findings, and disorders, we conducted an inter-annotator agreement study to determine how accurately the most similar returned terms could be classified according to semantic types by three annotators. We conducted a qualitative study of COVID-19 query terms and their returned terms to detect informative patterns for constructing lexicons. We demonstrated the utility of applying such learned synonyms to discharge summaries by reporting the proportion of patients identified by concept among three cohorts: pneumonia (n=6410 patients), acute respiratory distress syndrome (ARDS; n=8647 patients), and COVID-19 (n=2397 patients).

RESULTS: We observed high pairwise inter-annotator agreement (Cohen's Kappa) for symptoms (0.86 to 0.99), findings (0.93 to 0.99), and disorders (0.93 to 0.99). Character-based word embedding sources tended to return more synonyms (mean count of 7.2 synonyms) than token-based embedding sources (mean counts ranging from 2.0 to 3.4). Word embedding sources queried with a qualified term (e.g., dry cough or muscle pain) more often returned qualifiers of a similar semantic type (e.g., "dry" returns consistency qualifiers like "wet" and "runny") than single-term queries (e.g., cough or pain). A higher proportion of patients had documented fever (0.61-0.84), cough (0.41-0.55), shortness of breath (0.40-0.59), and hypoxia (0.51-0.56) retrieved than other clinical features. Terms for dry cough returned a higher proportion of COVID-19 patients (0.07) than of the pneumonia (0.05) and ARDS (0.03) populations.

CONCLUSIONS: Word embeddings are a valuable technology for learning related terms, including synonyms. When leveraging openly available word embedding sources, choices made in the construction of the word embeddings can significantly influence the terms learned.

CLINICALTRIAL: Not applicable.
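
To make the querying step concrete, the sketch below shows how a COVID-19-related term can be submitted to an openly available word embedding source to retrieve its most similar candidate synonyms. This is a minimal illustration using the gensim API and a generic non-biomedical embedding model; the specific embedding sources, query lexicon, and neighbor counts used in the study are not reproduced here.

```python
# Minimal sketch (not the authors' exact pipeline): query an openly
# available word embedding source for terms similar to a COVID-19-related
# query term, using gensim's pretrained-model downloader.
import gensim.downloader as api

# "glove-wiki-gigaword-100" is an illustrative non-biomedical source;
# the paper compares seven sources, which are not listed here.
model = api.load("glove-wiki-gigaword-100")

# Query terms drawn from the abstract's examples of symptoms and findings.
for query in ["cough", "fever", "hypoxia"]:
    if query in model.key_to_index:
        # Return the 10 nearest neighbors by cosine similarity; candidate
        # synonyms would then be reviewed and classified by annotators.
        neighbors = model.most_similar(query, topn=10)
        print(query, [(term, round(score, 2)) for term, score in neighbors])
```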
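The pairwise inter-annotator agreement reported in the results can likewise be computed with standard tooling. The sketch below assumes scikit-learn is available and uses invented, illustrative label vectors rather than the study's annotation data.

```python
# Minimal sketch of pairwise Cohen's Kappa between three annotators,
# assuming scikit-learn; the semantic-type labels below are hypothetical.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Hypothetical semantic-type labels assigned to the same returned terms.
annotations = {
    "annotator_1": ["symptom", "finding", "disorder", "symptom", "finding"],
    "annotator_2": ["symptom", "finding", "disorder", "symptom", "disorder"],
    "annotator_3": ["symptom", "finding", "disorder", "finding", "finding"],
}

# Compute agreement for every annotator pair.
for (name_a, labels_a), (name_b, labels_b) in combinations(annotations.items(), 2):
    kappa = cohen_kappa_score(labels_a, labels_b)
    print(f"{name_a} vs {name_b}: kappa = {kappa:.2f}")
```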