Abstract Background Providers use institutional recommendations, national guidelines, and antibiograms to decide on empiric antibiotics. Because local antibiograms are most effective after organisms are known, we sought to use local microbiology and clinical data to develop predictive models for antibiotic coverage prior to identifying the organism. We focused on Gram-negative organisms because they are common urinary pathogens and are often the cause of sepsis originating in the urinary tract. As such, they are important to cover in hospitalized patients with urinary tract infections (UTI). Methods Hospitalized patients with a diagnosis of UTI and a positive urine culture in the first 48 hours were included. Gram-positive organisms, yeast, and cultures without susceptibilities were excluded. Unknown susceptibilities were filled in using expert-derived rules. Clinical information was extracted from electronic health record (EHR) data for each patient. Penalized logistic regression with 10-fold cross-validation was used to develop final models for coverage for five antibiotics (cefazolin, ceftriaxone, ciprofloxacin, cefepime, piperacillin–tazobactam). Final models were chosen based on their discrimination, calibration, and number of predictors, and then tested on a held-out validation dataset. Results Included were 5,096 patients (80% training; 20% validation). Coverage ranged from 65% for cefazolin to 90% for cefepime. Positive blood cultures were present in 544 patients (11%), with 388 (71%) including a urinary pathogen. In the first 24 hours, 2,329 (46%) were hypotensive, 2,179 (43%) had a respiratory rate > 22, 2,049 (40%) had a WBC > 12, 1,079 (21%) were febrile, and 584 (11%) required ICU care. Final model covariates included demographics, antibiotic exposure, prior resistant pathogens, and antibiotic allergies. The five predictive models had point estimates for the area under the ROC curve on the validation set ranging from 0.70 for ciprofloxacin to 0.73 for ceftriaxone. Conclusion In this cohort of moderate to high acuity hospitalized patients with Gram-negative urinary pathogens, we used EHR data to develop five models that predict antibiotic coverage and that could be used to support empiric prescribing. These models performed well in a held-out validation set. Disclosures All authors: No reported disclosures.
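As a rough illustration of the modeling approach named in this abstract (penalized logistic regression with 10-fold cross-validation for a binary coverage outcome, evaluated on a held-out validation split), here is a minimal sketch. The file name, column names, and covariates are hypothetical placeholders, not the authors' actual pipeline or data.

```python
# Minimal sketch: penalized logistic regression with 10-fold CV for
# predicting antibiotic coverage (1 = isolate covered, 0 = not covered).
# The CSV file and feature names are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score

df = pd.read_csv("uti_cohort.csv")  # hypothetical EHR feature extract
X = df[["age", "prior_resistant_organism", "recent_antibiotic_exposure",
        "penicillin_allergy"]]      # illustrative covariates only
y = df["ceftriaxone_covered"]       # 1 if isolate susceptible to ceftriaxone

# 80/20 split mirroring the training/validation design in the abstract
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# L1-penalized logistic regression; the penalty strength is selected by
# 10-fold cross-validation on the training set.
model = LogisticRegressionCV(Cs=10, cv=10, penalty="l1",
                             solver="liblinear", scoring="roc_auc")
model.fit(X_train, y_train)

print("Validation AUC:",
      roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```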
Importance: The effect of metformin on reducing symptom duration among outpatient adults with coronavirus disease 2019 (COVID-19) has not been studied. Objective: Assess metformin compared with placebo for symptom resolution during acute infection with SARS-CoV-2. Design, Setting, and Participants: The ACTIV-6 platform evaluated repurposed medications for mild to moderate COVID-19. Between September 19, 2023, and May 1, 2024, 2991 participants aged >=30 years with confirmed SARS-CoV-2 infection and >=2 COVID-19 symptoms for <=7 days were included at 90 US sites. Interventions: Participants were randomized to receive metformin (titrated to 1500 mg daily) or placebo for 14 days. Main Outcomes and Measures: The primary outcome was time to sustained recovery (3 consecutive days without COVID-19 symptoms) within 28 days of receiving study drug. Secondary outcomes included time to hospitalization or death and time to healthcare utilization (clinic visit, emergency department visit, hospitalization, or death). Safety events of special interest were hypoglycemia and lactic acidosis. Results: Among 2991 participants who were randomized and received study drug, the median age was 47 years (IQR 38-58); 63.4% were female, 46.5% identified as Hispanic/Latino, and 68.3% reported >=2 doses of a SARS-CoV-2 vaccine. Among 1443 participants who received metformin and 1548 who received placebo, differences in time to sustained recovery were not observed (adjusted hazard ratio [aHR] 0.96; 95% credible interval [CrI] 0.89-1.03; P(efficacy)=0.11). For participants enrolled during current variants, the aHR was 1.19 (95% CrI 1.05-1.34). The median time to sustained recovery was 9 days (95% confidence interval [CI] 9-10) for metformin and 10 days (95% CI 9-10) for placebo. No deaths were reported; 111 participants reported healthcare utilization: 58 in the metformin group and 53 in the placebo group (HR 1.24; 95% CrI 0.81-1.75; P(efficacy)=0.135). Seven participants who received metformin and 3 who received placebo experienced a serious adverse event over 180 days. Five participants in each group reported hypoglycemia. Conclusions and Relevance: In this randomized controlled trial, metformin was not shown to shorten the time to symptom resolution in adults with mild to moderate COVID-19. The median number of days to symptom resolution was numerically, but not significantly, lower for metformin. Safety was not a limitation in the study population.
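The primary endpoint here, time to sustained recovery (the first of 3 consecutive symptom-free days within 28 days of starting study drug), is a computable definition; the sketch below makes it concrete. The daily-diary encoding is an assumed format for illustration, not the trial's actual data structure.

```python
# Sketch of the "time to sustained recovery" endpoint: the first study day
# that begins a run of 3 consecutive symptom-free days within 28 days.
# Diary encoding (1 = any COVID-19 symptom that day, 0 = symptom-free)
# is an assumed format, not the trial's actual data model.
from typing import Optional

def time_to_sustained_recovery(daily_symptoms: list[int],
                               window: int = 28,
                               run_length: int = 3) -> Optional[int]:
    days = daily_symptoms[:window]
    for day in range(len(days) - run_length + 1):
        if all(s == 0 for s in days[day:day + run_length]):
            return day + 1  # 1-indexed study day; None = not recovered (censored)
    return None

# Example: symptoms resolve on day 6 and remain resolved
diary = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(time_to_sustained_recovery(diary))  # -> 6
```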
The Cascade-HF protocol is a Continuous Remote Patient Monitoring (CRPM) study at a major health system in the United States that aims to reduce heart failure (HF)-related hospitalizations and readmissions by using wearable biosensors to collect physiological data over a 30-day period and determine decompensation risk among HF patients. The resulting alerts, coupled with electronic patient-reported outcomes, are used daily by the home health team and escalated to the heart failure team as needed for proactive action. Limited research has examined how to anticipate the implementation and workflow challenges of such complex CRPM studies, including resource planning and staffing decisions that leverage the recorded data to drive clinical preparedness and operational efficiency. This preliminary analysis applies discrete event simulation modeling to the Cascade-HF protocol, using pilot data from a soft launch, to assess the workload of the clinical team, evaluate escalation patterns, and provide decision support recommendations to enable scale-up to all post-discharge patients.
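To make the discrete event simulation approach concrete, here is a minimal SimPy sketch of an alert-triage workflow of the kind described: biosensor alerts arrive, home health nurses review them, and a fraction escalate to the HF team. All arrival rates, service times, escalation probabilities, and staffing levels are invented for illustration and are not Cascade-HF pilot values.

```python
# Minimal discrete event simulation sketch of an alert-triage workflow.
# Every numeric parameter below is an illustrative assumption.
import random
import simpy

ALERT_INTERARRIVAL_MIN = 20   # mean minutes between alerts (assumed)
REVIEW_MIN = 15               # mean nurse review time in minutes (assumed)
ESCALATION_PROB = 0.2         # fraction escalated to the HF team (assumed)

escalations = 0

def alert(env, nurses, hf_team):
    """One alert: nurse review, then possible escalation to the HF team."""
    global escalations
    with nurses.request() as req:
        yield req
        yield env.timeout(random.expovariate(1 / REVIEW_MIN))
    if random.random() < ESCALATION_PROB:
        escalations += 1
        with hf_team.request() as req:
            yield req
            yield env.timeout(random.expovariate(1 / 30))  # HF review (assumed)

def alert_generator(env, nurses, hf_team):
    while True:
        yield env.timeout(random.expovariate(1 / ALERT_INTERARRIVAL_MIN))
        env.process(alert(env, nurses, hf_team))

env = simpy.Environment()
nurses = simpy.Resource(env, capacity=2)   # assumed home health staffing
hf_team = simpy.Resource(env, capacity=1)  # assumed HF team capacity
env.process(alert_generator(env, nurses, hf_team))
env.run(until=8 * 60)                      # simulate one 8-hour shift
print("Alerts escalated to HF team:", escalations)
```

Varying the staffing capacities and arrival rate in such a model is what supports the resource planning and scale-up questions the analysis targets.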
Background Numerous predictive models in the literature stratify patients by risk of mortality and readmission. Few prediction models have been developed to optimize impact while sustaining sufficient performance. Objective We aimed to derive models for hospital mortality, 180-day mortality, and 30-day readmission; implement these models within our electronic health record; and prospectively validate them for use across an entire health system. Materials & methods We developed, integrated into our electronic health record, and prospectively validated three predictive models using logistic regression on data collected from patients 18 to 99 years old who had an inpatient or observation admission at NorthShore University HealthSystem, a four-hospital integrated system in the United States, from January 2012 to September 2018. We assessed model performance using the area under the receiver operating characteristic curve (AUC). Results Models were derived and validated at three time points: retrospective, prospective at discharge, and prospective at 4 hours after presentation. AUCs for hospital mortality were 0.91, 0.89, and 0.77, respectively. AUCs for 30-day readmission were 0.71, 0.71, and 0.69, respectively. The 180-day mortality model was validated only retrospectively, with an AUC of 0.85. Discussion We were able to retain good model performance while optimizing potential model impact by also valuing model derivation efficiency, usability, sensitivity, generalizability, and the ability to prescribe timely interventions to reduce underlying risk. Measuring model impact by tying prediction models to interventions that are then rapidly tested will establish a path for meaningful clinical improvement and implementation.
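Integrating a logistic regression model into an EHR, as described here, often reduces to applying stored coefficients to a small set of discrete data elements at scoring time. The sketch below shows that pattern; the coefficients and features are entirely hypothetical, since the abstract does not publish model weights.

```python
# Sketch of deploying a fitted logistic regression as a simple EHR scoring
# rule: apply stored coefficients to the patient's current data and convert
# the linear predictor to a probability via the logistic link.
# Coefficients and features are hypothetical, not the published models.
import math

COEFFICIENTS = {             # hypothetical weights
    "intercept": -4.2,
    "age_per_10y": 0.35,
    "icu_admission": 1.1,
    "albumin_lt_3": 0.8,
}

def mortality_risk(age_years: float, icu: bool, low_albumin: bool) -> float:
    lp = (COEFFICIENTS["intercept"]
          + COEFFICIENTS["age_per_10y"] * (age_years / 10)
          + COEFFICIENTS["icu_admission"] * icu
          + COEFFICIENTS["albumin_lt_3"] * low_albumin)
    return 1 / (1 + math.exp(-lp))  # predicted probability of the outcome

print(f"Predicted risk: {mortality_risk(76, icu=True, low_albumin=False):.1%}")
```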
Objectives: Bacteremia and fungemia can cause life-threatening illness with high mortality rates, which increase with delays in antimicrobial therapy. The objective of this study is to develop machine learning models to predict blood culture results at the time of the blood culture order using routine data in the electronic health record. Design: Retrospective analysis of a large, multicenter inpatient dataset. Setting: Two academic tertiary medical centers between the years 2007 and 2018. Subjects: All hospitalized patients who received a blood culture during hospitalization. Interventions: The dataset was partitioned temporally into development and validation cohorts: the logistic regression and gradient boosting machine models were trained on the earliest 80% of hospital admissions and validated on the most recent 20%. Measurements and Main Results: There were 252,569 blood culture days—defined as nonoverlapping 24-hour periods in which one or more blood cultures were ordered. In the validation cohort, there were 50,514 blood culture days, with 3,762 cases of bacteremia (7.5%) and 370 cases of fungemia (0.7%). The gradient boosting machine model for bacteremia had a significantly higher area under the receiver operating characteristic curve (0.78 [95% CI 0.77–0.78]) than the logistic regression model (0.73 [0.72–0.74]) (p < 0.001). The model identified a high-risk group with an occurrence rate of bacteremia more than 30 times that of the low-risk group (27.4% vs 0.9%; p < 0.001). Using the low-risk cut-off, the model identified bacteremia with 98.7% sensitivity. The gradient boosting machine model for fungemia had high discrimination (area under the receiver operating characteristic curve 0.88 [95% CI 0.86–0.90]). The high-risk fungemia group had 252 fungemic cultures compared with one fungemic culture in the low-risk group (5.0% vs 0.02%; p < 0.001). Further, the high-risk group had a mortality rate 60 times higher than the low-risk group (28.2% vs 0.4%; p < 0.001). Conclusions: Our novel models identified patients at low and high risk for bacteremia and fungemia using routinely collected electronic health record data. Further research is needed to evaluate the cost-effectiveness and impact of model implementation in clinical practice.
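A minimal sketch of the evaluation design named in this abstract — a temporal 80/20 split of admissions with logistic regression and gradient boosting compared by validation AUC — is shown below. The file name, sort key, and feature columns are hypothetical placeholders, not the study's actual variables.

```python
# Sketch of the temporal split and model comparison described above:
# train logistic regression and gradient boosting on the earliest 80% of
# admissions, validate on the most recent 20%, and compare AUCs.
# Column names and the data file are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("blood_culture_days.csv").sort_values("admission_time")
features = ["age", "max_temp_24h", "wbc", "lactate", "prior_bacteremia"]
X, y = df[features], df["bacteremia"]

cut = int(len(df) * 0.8)                     # temporal, not random, split
X_train, y_train = X.iloc[:cut], y.iloc[:cut]
X_val, y_val = X.iloc[cut:], y.iloc[cut:]

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier())]:
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"{name}: validation AUC = {auc:.2f}")
```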