Oral anticoagulants used for the primary treatment of venous thromboembolism (VTE) include warfarin and the more recently introduced direct oral anticoagulants (DOACs), namely rivaroxaban, apixaban, dabigatran and edoxaban. Information on the comparative safety of these medications in routine clinical practice is lacking. We identified patients with diagnoses of VTE and prescriptions for oral anticoagulants using claims data from a large U.S. insurance database from 2012 to 2017. Marginal structural logistic models were used to examine associations between type of oral anticoagulant and risk of all-cause mortality. Of the 62,431 enrolees in this analysis, 51% were female and the mean age was 61.9 years. Initial oral anticoagulant prescriptions were for warfarin (n = 35,704), rivaroxaban (n = 21,064) and apixaban (n = 5,663). A total of 1,791 deaths occurred within 6 months of the initial oral anticoagulant prescription. Risk of all-cause mortality did not differ between warfarin and any DOAC, nor in any head-to-head DOAC comparison. Associations generally did not vary when stratified by VTE type, sex, age, co-morbidities (including renal disease) or anti-platelet medication use. In this observational study, the associations with all-cause mortality comparing DOACs versus warfarin agree with results from previous clinical trials and observational studies, while the associations for head-to-head DOAC comparisons provide new information on the comparative safety of DOACs. Our findings suggest that other criteria, such as patient preference, cost, recurrent VTE risk or bleeding risk, should guide the choice of anticoagulant for the primary treatment of VTE.
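To make the modelling approach concrete, the following is a minimal sketch of a marginal structural logistic model fit with stabilized inverse-probability-of-treatment weights, assuming a hypothetical claims extract with a binary DOAC-versus-warfarin exposure, a 6-month mortality indicator and a few baseline confounders. All file, column and variable names are illustrative assumptions, not the study's actual code or data.

```python
# Hypothetical sketch: marginal structural logistic model via stabilized
# inverse-probability-of-treatment weighting (IPTW).
# Column names (doac, death_6mo, age, sex, renal_disease, antiplatelet)
# are illustrative, not the study's variables.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

claims = pd.read_csv("vte_cohort.csv")  # hypothetical analytic file

# 1. Treatment model: probability of receiving a DOAC vs warfarin,
#    given baseline confounders.
treat = smf.logit("doac ~ age + C(sex) + renal_disease + antiplatelet",
                  data=claims).fit(disp=0)
p_doac = treat.predict(claims)

# 2. Stabilized inverse-probability weights.
p_marginal = claims["doac"].mean()
claims["ipw"] = (claims["doac"] * (p_marginal / p_doac)
                 + (1 - claims["doac"]) * ((1 - p_marginal) / (1 - p_doac)))

# 3. Weighted (marginal structural) logistic model for 6-month mortality;
#    robust standard errors approximate the MSM variance.
msm = smf.glm("death_6mo ~ doac", data=claims,
              family=sm.families.Binomial(),
              var_weights=claims["ipw"]).fit(cov_type="HC1")
print(msm.summary())
```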
Time and motion (T&M) studies provide an objective method to measure how clinicians spend their time. While some instruments for T&M studies have been designed to evaluate health information technology (HIT), these instruments have not been designed for nursing workflow. We took an existing open-source HIT T&M study application designed to evaluate physicians in the ambulatory setting, rationally adapted it through empiric observations to record nursing activities in the inpatient setting, and linked this instrument to an existing interface terminology, the Omaha System. Nursing activities involved several dimensions and could include multiple activities occurring simultaneously, requiring significant instrument redesign. Of the activities from the study instrument, 94% mapped adequately to the Omaha System. T&M study instruments require customization in their design to optimize them for different environments, such as inpatient nursing, and to enable optimal data collection. Interface terminologies show promise as a framework for recording and analyzing T&M study data.
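As an illustration of how recorded activities can be linked to an interface terminology, the sketch below maps a handful of invented nursing activities to the Omaha System's four intervention categories and computes the share that map (analogous to the 94% figure above). The activity labels and mappings are hypothetical and are not the study instrument.

```python
# Hypothetical mapping of observed nursing activities to Omaha System
# intervention categories; activities and assignments are illustrative only.
OMAHA_CATEGORIES = {
    "medication administration": "Treatments and Procedures",
    "patient education": "Teaching, Guidance, and Counseling",
    "care coordination call": "Case Management",
    "vital signs check": "Surveillance",
    "hallway conversation": None,  # example of an activity that does not map
}

def mapping_rate(observed_activities):
    """Share of recorded activities that have an Omaha System category."""
    mapped = [a for a in observed_activities
              if OMAHA_CATEGORIES.get(a) is not None]
    return len(mapped) / len(observed_activities)

observations = ["medication administration", "vital signs check",
                "patient education", "hallway conversation"]
print(f"{mapping_rate(observations):.0%} of activities mapped")
```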
The Clinical Classifications Software (CCS), by grouping International Classification of Diseases (ICD) codes, provides the capacity to better account for clinical conditions, allowing payers, policy makers, and researchers to analyze outcomes, costs, and utilization. There is a critical need for additional research on the application of CCS categories to validate their clinical condition representation and to prevent gaps in research. This study compared the event frequencies and ICD codes of CCS categories with significant changes from the first three quarters of 2015 to the first three quarters of 2016 using National Inpatient Sample data. A total of 63 of the 285 diagnostic CCS categories were identified with greater than 20% change, of which 32 had increased and 31 had decreased over time. Due to the complexity associated with the transition from ICD-9 to ICD-10, more studies are needed to identify the reasons for these changes to improve CCS use for ICD-10 and its comparability with ICD-9-based data.
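A minimal sketch of the kind of frequency comparison described above, assuming hypothetical National Inpatient Sample extracts with one row per discharge and a ccs_category column; file and column names are illustrative.

```python
# Hypothetical sketch: percent change in diagnostic CCS category counts
# between the first three quarters of 2015 and of 2016, flagging >20% change.
import pandas as pd

nis_2015 = pd.read_csv("nis_2015_q1q3.csv")  # hypothetical NIS extracts
nis_2016 = pd.read_csv("nis_2016_q1q3.csv")

counts = pd.DataFrame({
    "n_2015": nis_2015["ccs_category"].value_counts(),
    "n_2016": nis_2016["ccs_category"].value_counts(),
}).fillna(0)

counts["pct_change"] = 100 * (counts["n_2016"] - counts["n_2015"]) / counts["n_2015"]
flagged = counts[counts["pct_change"].abs() > 20]
print(f"{len(flagged)} CCS categories changed by more than 20%")
print(flagged.sort_values("pct_change"))
```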
Haemoglobin levels often decline into the anaemic range with androgen deprivation therapy (ADT). We conducted a chart review of patients receiving ADT for metastatic prostate cancer to assess anaemia-related symptoms. A total of 135 stage IV prostate cancer cases were reviewed for treatment type; haemoglobin values before and after treatment; and symptoms of anaemia. Mean haemoglobin levels before and after treatment were calculated for all treatment forms, for leuprolide alone, and for combination leuprolide/bicalutamide, and evaluated for significant differences. The numbers of patients developing symptoms were recorded and the effects of specific therapies evaluated. For all ADT-treated patients, mean haemoglobin declined by 1.11 g/dL (p<0.0001). Leuprolide-alone treated patients had a mean decline of 1.66 g/dL (p<0.0001). Leuprolide and bicalutamide combination treatment caused a mean decline of 0.78 g/dL (p=0.0426). Sixteen of 43 patients had anaemia symptoms. Contingency analysis with Fisher's exact test showed that patients receiving leuprolide therapy alone, versus other forms of ADT, were significantly less likely to have symptoms (p=0.0190). The present study confirms that ADT results in a significant drop in haemoglobin levels into the anaemic range, and a number of patients become symptomatic from this change. Practitioners should monitor haemoglobin levels and treat symptomatic patients.
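For illustration, the two analyses described above could be run on a hypothetical chart-review extract roughly as follows; the column names (hgb_before, hgb_after, leuprolide_alone, anaemia_symptoms) are assumptions, not the study's actual data dictionary.

```python
# Hypothetical sketch: paired comparison of haemoglobin before/after ADT and
# a Fisher's exact test of symptom frequency by therapy type.
import pandas as pd
from scipy import stats

chart = pd.read_csv("adt_chart_review.csv")  # hypothetical extract

# Paired t-test: haemoglobin before vs after ADT.
t_stat, p_val = stats.ttest_rel(chart["hgb_before"], chart["hgb_after"])
mean_change = (chart["hgb_after"] - chart["hgb_before"]).mean()
print(f"mean change {mean_change:.2f} g/dL, p = {p_val:.4f}")

# 2x2 contingency table: leuprolide alone vs other ADT by symptom status.
table = pd.crosstab(chart["leuprolide_alone"], chart["anaemia_symptoms"])
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact test p = {p_fisher:.4f}")
```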
Background: Logistic regression-based signal detection algorithms have benefits over disproportionality analysis due to their ability to handle potential confounders and masking factors. Feature exploration and the development of alternative machine learning algorithms can further strengthen signal detection. Objectives: Our objective was to compare the signal detection performance of logistic regression, gradient-boosted trees, random forest and support vector machine models utilizing Food and Drug Administration Adverse Event Reporting System (FAERS) data. Design: Cross-sectional study. Methods: The quarterly data extract files from 1 October 2017 through 31 December 2020 were downloaded. Due to an imbalanced outcome, two training sets were used: one stratified on the outcome variable (the balanced training set) and another using the Synthetic Minority Oversampling Technique (SMOTE). A crude model and a model with tuned hyperparameters were developed for each algorithm. Model performance was compared against a reference set using accuracy, precision, F1 score, recall, the receiver operating characteristic area under the curve (ROCAUC) and the precision-recall curve area under the curve (PRCAUC). Results: Models trained on the balanced training set had higher accuracy, F1 score and recall compared to models trained on the SMOTE training set. When using the balanced training set, the logistic regression, gradient-boosted trees, random forest and support vector machine models obtained similar performance evaluation metrics. The gradient-boosted trees hyperparameter-tuned model had the highest ROCAUC (0.646) and the random forest crude model had the highest PRCAUC (0.839) when using the balanced training set. Conclusion: All models trained on the balanced training set performed similarly. Logistic regression models had higher accuracy, precision and recall. The logistic regression, random forest and gradient-boosted trees hyperparameter-tuned models had a PRCAUC ⩾ 0.8. All models had an ROCAUC ⩾ 0.5. Including both disproportionality analysis results and additional case report information in the models resulted in higher performance evaluation metrics than disproportionality analysis alone.
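A minimal sketch of this kind of model comparison using scikit-learn and imbalanced-learn, assuming a hypothetical pre-built feature table with a binary signal label derived from a reference set; feature engineering, hyperparameter tuning and reference-set construction are omitted, and all file and column names are illustrative.

```python
# Hypothetical sketch: compare four classifiers on FAERS-derived features,
# with an alternative SMOTE-rebalanced training set.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

faers = pd.read_csv("faers_features.csv")  # hypothetical feature table
X, y = faers.drop(columns="signal"), faers["signal"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=42)

# Alternative training set rebalanced with SMOTE.
X_smote, y_smote = SMOTE(random_state=42).fit_resample(X_train, y_train)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosted_trees": GradientBoostingClassifier(),
    "random_forest": RandomForestClassifier(),
    "svm": SVC(probability=True),
}

for name, model in models.items():
    model.fit(X_smote, y_smote)  # or fit on (X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    # PRCAUC approximated here by average precision.
    print(name,
          "ROCAUC", round(roc_auc_score(y_test, scores), 3),
          "PRCAUC", round(average_precision_score(y_test, scores), 3))
```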
Objective: Theory-based research on social and behavioral determinants of health (SBDH) found SBDH-related patterns in interventions and outcomes for pregnant/birthing people. The objectives of this study were to replicate the theory-based SBDH study with a new sample and to compare these findings to a data-driven SBDH study. Materials and Methods: Using deidentified public health nurse-generated Omaha System data, 2 SBDH indices were computed separately to create groups based on SBDH (0–5+ signs/symptoms). The data-driven SBDH index used multiple linear regression with backward elimination to identify SBDH factors. Changes in Knowledge, Behavior, and Status (KBS) outcomes, numbers of interventions, and adjusted R-squared statistics were computed for both models. Results: There were 4109 clients ages 13–40 years. Outcome patterns aligned with the original research: KBS increased from admission to discharge, with Knowledge improving the most; discharge KBS decreased as SBDH increased; and interventions increased as SBDH increased. Slopes of the data-driven model were steeper, showing clearer KBS trends for data-driven SBDH groups. The theory-based model adjusted R-squared was 0.54 (SE = 0.38) versus 0.61 (SE = 0.35) for the data-driven model, with an entirely different set of SBDH factors. Conclusions: The theory-based approach provided a framework to identify patterns and relationships and may be applied consistently across studies and populations. In contrast, the data-driven approach can provide insights based on novel patterns for a given dataset and reveal relationships not predicted by existing theories. Data-driven methods may be an advantage if there are sufficiently comprehensive SBDH data upon which to build the data-driven models.
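A minimal sketch of a data-driven SBDH index built by backward elimination on a multiple linear regression, assuming hypothetical column names (kbs_change and sbdh_-prefixed candidate factors); this illustrates the general technique, not the study's code.

```python
# Hypothetical sketch: backward elimination over candidate SBDH factors,
# then a simple count-based index capped at 5+ to form groups 0-5+.
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("omaha_clients.csv")  # hypothetical extract
candidates = [c for c in data.columns if c.startswith("sbdh_")]
y = data["kbs_change"]

# Drop the least significant SBDH factor until all remaining factors are
# significant at alpha = 0.05.
while candidates:
    model = sm.OLS(y, sm.add_constant(data[candidates])).fit()
    pvals = model.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] <= 0.05:
        break
    candidates.remove(worst)

print("retained SBDH factors:", candidates)
print("adjusted R-squared:", round(model.rsquared_adj, 2))

# Index: count of retained SBDH signs/symptoms per client, capped at 5+.
data["sbdh_index"] = data[candidates].sum(axis=1).clip(upper=5)
```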
Home monitoring is an effective tool that can be used to promote the health of lung transplant recipients, but only if the recipients transmit data to the healthcare team. This study examined the relationship between the Multidimensional Health Locus of Control scale and adherence to home spirometry use. Form C of this scale was mailed to 139 eligible lung and heart-lung transplant recipients, and 83 returned the questionnaire. Respondents were on average 4 years older and had greater adherence than nonrespondents. Men tended to score higher than women on all the subscales. Subscale scores were not related to type of transplant or underlying disease. Adherence was also unrelated to the Multidimensional Health Locus of Control scale, with little difference in adherence across persons with various levels of the health locus of control subscales.