OBJECTIVES: Nonconventional ventilators (NCVs), defined here as transport ventilators and certain noninvasive positive pressure devices, were used extensively as crisis-time ventilators for intubated patients with COVID-19. We assessed whether use of NCVs was associated with higher mortality, independent of other factors. DESIGN: Multicenter retrospective observational study. SETTING: Patients were recruited from a single healthcare system in New York between March 1, 2020, and April 30, 2020. PATIENTS: Patients who were intubated for COVID-19 acute respiratory distress syndrome (ARDS). INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The primary outcome was 28-day in-hospital mortality. Multivariable logistic regression was used to estimate the odds of mortality among patients managed exclusively with NCVs throughout their ventilation period, compared with the remainder of the sample, adjusting for other factors. In a secondary analysis, mortality among a subset of the sample exclusively ventilated with NCVs was compared with that of a propensity score-matched subset of the control group. Exclusive use of NCVs was associated with higher 28-day in-hospital mortality after adjustment for confounders in the regression analysis (odds ratio, 1.41; 95% CI, 1.07–1.86). In the propensity score matching analysis, mortality was 68.9% among patients exclusively ventilated with NCVs versus 60.7% in the matched controls (p = 0.02). CONCLUSIONS: Use of NCVs was associated with increased mortality among patients with COVID-19 ARDS. More lives may be saved during future ventilator shortages if full-feature ICU ventilators, rather than NCVs, are reserved in national and local stockpiles.
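The adjusted odds ratio with its 95% CI is the core effect measure in the abstract above. A minimal sketch of how an unadjusted odds ratio and Wald-type confidence interval are computed from a 2x2 outcome table follows; the counts are purely illustrative and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed deaths, b = exposed survivors,
    c = unexposed deaths, d = unexposed survivors."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (not the NCV study's data)
or_, lo, hi = odds_ratio_ci(200, 100, 150, 120)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The study's reported estimate came from a multivariable model rather than a raw 2x2 table, so this sketch shows only the unadjusted building block.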
Rationale: Preliminary reports of COVID-19 acute respiratory distress syndrome (COVIDARDS) suggest the existence of a subset of patients with higher lung compliance despite profound hypoxemia. Understanding the heterogeneity seen in patients with COVIDARDS, and comparing it with non-COVIDARDS, may inform tailored treatments. Objectives: To describe the trajectories of hypoxemia and respiratory compliance in COVIDARDS and their associations with outcomes. Methods: A multidisciplinary team of frontline clinicians and data scientists created the Northwell COVIDARDS dataset (NorthCARDS), leveraging over 11,542 COVID-19 hospital admissions. Data were summarized to describe differences based on clinically meaningful categories of lung compliance and compared with non-COVIDARDS reports. PaO2 was extrapolated from SpO2, and FiO2 was estimated from noninvasive oxygen delivery devices, to create meaningful trends of the derived PaO2 to FiO2 (P/F) ratio. Measurements and Main Results: Of the 1,595 COVIDARDS patients in the NorthCARDS dataset, 538 (34.6%) had very low lung compliance ( 50 mL/cmH2O). The very low compliance group had a substantially longer median time to intubation than the low-normal group (107 hours (IQR 26.3, 238.3) vs. 37.9 hours (IQR 4.8, 90.7)). Oxygenation trends improved in all groups after a nadir immediately post intubation. The P/F ratio improved from a mean of 109 to 155, with the very low compliance group showing a smaller improvement than the low compliance group. The derived P/F trends correlated closely with blood gas-derived P/F trends, except immediately post intubation, where the trends diverged. Overall, 67.5% (n = 1,049) of the patients died during the hospitalization. In comparison with non-COVIDARDS reports, there were fewer patients in the high compliance category (2.2% vs. 12%, compliance ≥50 mL/cmH2O) and more patients with P/F ≤ 150 (57.8% vs. 45.6%).
No correlation was apparent between lung compliance and P/F ratio. The Oxygenation Index was similar (11.12 (SD 5.67) vs. 12.8 (SD 10.8)). Conclusions: Heterogeneity in lung compliance is seen in COVIDARDS, without apparent correlation to the degree of hypoxemia. Notably, time to intubation was longer in the very low lung compliance category. Understanding ARDS patient heterogeneity must include consideration of treatment patterns in addition to trajectories of change in patient-level data and demographics.
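The compliance categories and P/F ratios above rest on two standard bedside formulas: static respiratory-system compliance (tidal volume over the plateau-minus-PEEP pressure difference) and the PaO2/FiO2 ratio. A minimal sketch with illustrative values, not taken from the NorthCARDS dataset:

```python
def static_compliance(vt_ml, pplat_cmh2o, peep_cmh2o):
    """Static respiratory-system compliance in mL/cmH2O:
    tidal volume / (plateau pressure - PEEP)."""
    return vt_ml / (pplat_cmh2o - peep_cmh2o)

def pf_ratio(pao2_mmhg, fio2):
    """PaO2/FiO2 ratio, with FiO2 given as a fraction (0.21-1.0)."""
    return pao2_mmhg / fio2

# Illustrative numbers only: 420 mL tidal volume, plateau 30, PEEP 10
c = static_compliance(420, 30, 10)   # -> 21 mL/cmH2O, a "very low" value
pf = pf_ratio(109, 1.0)              # 109 was the cohort's mean P/F nadir
```

The study additionally derived PaO2 from SpO2 when blood gases were unavailable; that imputation step is not reproduced here.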
Background Atrial fibrillation is associated with higher mortality. Identification of causes of death and contemporary risk factors for all-cause mortality may guide interventions. Methods and Results In the Rivaroxaban Once Daily Oral Direct Factor Xa Inhibition Compared with Vitamin K Antagonism for Prevention of Stroke and Embolism Trial in Atrial Fibrillation (ROCKET AF) study, patients with nonvalvular atrial fibrillation were randomized to rivaroxaban or dose-adjusted warfarin. Cox proportional hazards regression with backward elimination identified factors at randomization that were independently associated with all-cause mortality in the 14,171 participants in the intention-to-treat population. The median age was 73 years, and the mean CHADS2 score was 3.5. Over 1.9 years of median follow-up, 1214 (8.6%) patients died. Kaplan–Meier mortality rates were 4.2% at 1 year and 8.9% at 2 years. The majority of classified deaths (1081) were cardiovascular (72%), whereas only 6% were nonhemorrhagic stroke or systemic embolism. No significant difference in all-cause mortality was observed between the rivaroxaban and warfarin arms (P = 0.15). Heart failure (hazard ratio 1.51, 95% CI 1.33–1.70, P < 0.0001) and age ≥75 years (hazard ratio 1.69, 95% CI 1.51–1.90, P < 0.0001) were associated with higher all-cause mortality. Multiple additional characteristics were independently associated with higher mortality, with decreasing creatinine clearance, chronic obstructive pulmonary disease, male sex, peripheral vascular disease, and diabetes among the most strongly associated (model C-index 0.677). Conclusions In a large population of patients anticoagulated for nonvalvular atrial fibrillation, ≈7 in 10 deaths were cardiovascular, whereas <1 in 10 deaths were caused by nonhemorrhagic stroke or systemic embolism. Optimal prevention and treatment of heart failure, renal impairment, chronic obstructive pulmonary disease, and diabetes may improve survival.
Clinical Trial Registration URL: https://www.clinicaltrials.gov/. Unique identifier: NCT00403767.
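The 1-year and 2-year mortality rates above are Kaplan–Meier estimates, which account for patients censored before the end of follow-up. A compact sketch of the product-limit estimator on toy data (illustrative only, not ROCKET AF data):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns a list of (time, survival probability) at each death time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        n_with_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            # Multiply in the conditional survival at this death time
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n_with_t
        i += n_with_t
    return curve

# Toy data: five patients, times in years; two censored (event = 0)
curve = kaplan_meier([0.5, 1.0, 1.2, 2.0, 2.0], [1, 0, 1, 1, 0])
```

The mortality rate reported in such trials is 1 minus the survival estimate at the corresponding time point.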
Introduction: Manual pulse detection is inaccurate in cardiac arrest (CA), and Doppler ultrasound may detect blood flow without an adequate perfusion blood pressure (pseudo-pulseless electrical activity). The purpose of this study was to assess whether maximum femoral arterial velocity during a pulse check correlates with arterial line systolic blood pressure (SBP) and whether it can be used to accurately identify an SBP of ≥60 mmHg. Methods: This was a prospective study of CA patients at a quaternary care Emergency Department. During a pulse check, a linear ultrasound probe was placed at the common femoral artery, and the presence or absence of an arterial Doppler waveform, the associated maximum velocity value, and the arterial line SBP were recorded simultaneously. The correlation between SBP and maximum waveform velocity was assessed. Arterial SBPs were dichotomized as <60 mmHg or ≥60 mmHg, as the latter was deemed an adequate perfusion pressure, and a receiver operating characteristic curve analysis was performed to determine the optimal cutoff value of maximum velocity associated with SBP ≥60 mmHg. The sensitivity (Sn), specificity (Sp), and accuracy (Acc) of manual palpation and femoral artery pulse wave Doppler for detection of SBP ≥60 mmHg were calculated. Results: A total of 51 patients and 183 pulse checks were analyzed. There was a strong correlation between arterial line SBP and maximum waveform velocity (Spearman correlation coefficient: 0.92; p < 0.001). The optimal cutoff value of waveform velocity associated with an SBP ≥60 mmHg was 20 cm/s (Sn: 0.89; Sp: 0.94; area under the curve: 0.98), with an Acc of 0.92. To detect SBP ≥60 mmHg, manual palpation had an Sn of 0.45, Sp of 0.82, and Acc of 0.67. McNemar's test showed that Sn (p < 0.001), Sp (p = 0.009), and Acc (p < 0.001) were significantly higher for Doppler ultrasound ≥20 cm/s than for manual palpation.
Conclusion: In this study, during a pulse check, patients with a femoral arterial Doppler waveform with a maximum velocity greater than 20 cm/s had a high probability of having an SBP ≥60 mmHg, and Doppler assessment had higher Sn, Sp, and Acc than manual palpation. These results demonstrate that femoral arterial Doppler maximum velocity is an accurate and objective tool for determining the presence of a pulse with adequate perfusion pressure.
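The 20 cm/s threshold above comes from a receiver operating characteristic analysis; a common way to choose such a cutoff is to maximize Youden's J (sensitivity + specificity - 1) over candidate thresholds. A minimal sketch on clearly separable toy data, not the study's measurements:

```python
def best_cutoff(values, labels):
    """Pick the cutoff maximizing Youden's J = Sn + Sp - 1.
    values: continuous measurements (e.g., peak velocity in cm/s);
    labels: 1 if the reference standard is positive (e.g., SBP >= 60)."""
    best = (None, -1.0)
    for c in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= c and y)
        fn = sum(1 for v, y in zip(values, labels) if v < c and y)
        tn = sum(1 for v, y in zip(values, labels) if v < c and not y)
        fp = sum(1 for v, y in zip(values, labels) if v >= c and not y)
        sn = tp / (tp + fn)   # sensitivity at this cutoff
        sp = tn / (tn + fp)   # specificity at this cutoff
        j = sn + sp - 1
        if j > best[1]:
            best = (c, j)
    return best

# Toy, perfectly separable data for illustration
cutoff, j = best_cutoff([5, 10, 15, 25, 30, 40], [0, 0, 0, 1, 1, 1])
```

Real data would also yield the AUC and the Sn/Sp at the chosen cutoff, as reported in the abstract.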
Abstract Background Despite much evidence supporting monitoring of the divergence of transcutaneous partial pressure of carbon dioxide (tcPCO2) from arterial partial pressure of carbon dioxide (artPCO2) as an indicator of shock status, data are limited on the relationships of the gradient between tcPCO2 and artPCO2 (tc-artPCO2) with systemic oxygen metabolism and hemodynamic parameters. Our study aimed to test the hypothesis that tc-artPCO2 can detect inadequate tissue perfusion during hemorrhagic shock and resuscitation. Methods This prospective animal study was performed using female pigs at a university-based experimental laboratory. Progressive massive hemorrhagic shock was induced in mechanically ventilated pigs by stepwise blood withdrawal. All animals were then resuscitated by transfusing the stored blood in stages. A transcutaneous monitor was attached to their ears to measure tcPCO2. A pulmonary artery catheter (PAC) and pulse index continuous cardiac output (PiCCO) were used to monitor cardiac output (CO) and several hemodynamic parameters. The relationships of tc-artPCO2 with the study parameters and systemic oxygen delivery (DO2) were analyzed. Results Hemorrhage and blood transfusion affected hemodynamic and laboratory data as expected. The tc-artPCO2 level increased markedly as CO decreased. There were significant correlations of tc-artPCO2 with DO2 and CO (DO2: r = −0.83; CO by PAC: r = −0.79; CO by PiCCO: r = −0.74; all P < 0.0001). The critical level of oxygen delivery (DO2crit) was 11.72 mL/kg/min according to transcutaneous partial pressure of oxygen (threshold of 30 mmHg). Receiver operating characteristic curve analyses revealed that the value of tc-artPCO2 for discrimination of DO2crit was highest, with an area under the curve (AUC) of 0.94, followed by shock index (AUC = 0.78; P < 0.04 vs. tc-artPCO2) and lactate (AUC = 0.65; P < 0.001 vs. tc-artPCO2).
Conclusions Our observations suggest that less invasive tc-artPCO2 monitoring can sensitively detect inadequate systemic oxygen supply during hemorrhagic shock. Further evaluations are required in different forms of shock, in other large animal models, and in humans to assess its usefulness, safety, and ability to predict outcomes in critical illness.
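The DO2 values that anchor the analysis above follow the standard physiologic formula: DO2 = cardiac output × arterial oxygen content × 10, with CaO2 = 1.34 × Hb × SaO2 + 0.003 × PaO2, then indexed to body weight. A minimal sketch with illustrative values chosen for a low-output state, not the study animals' measurements:

```python
def oxygen_delivery(co_l_min, hb_g_dl, sao2, pao2_mmhg, weight_kg):
    """Systemic oxygen delivery indexed to weight (mL O2/kg/min).
    CaO2 (mL O2/dL) = 1.34 * Hb * SaO2 + 0.003 * PaO2; the factor 10
    converts dL to L so DO2 comes out in mL O2/min."""
    cao2 = 1.34 * hb_g_dl * sao2 + 0.003 * pao2_mmhg
    do2 = co_l_min * cao2 * 10
    return do2 / weight_kg

# Illustrative low-output values: CO 3.0 L/min, Hb 10 g/dL,
# SaO2 0.98, PaO2 90 mmHg, 30 kg pig
do2 = oxygen_delivery(3.0, 10.0, 0.98, 90, 30)
```

Values near or below the reported DO2crit of 11.72 mL/kg/min would mark the transition to supply-dependent oxygen consumption.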
BACKGROUND: Fever can provide an important clue to the etiology of a patient's symptoms. Non-invasive temperature sites (oral, axillary, temporal) may be insensitive due to a variety of factors. This has not been well studied in adult emergency department patients. We aimed to determine whether emergency department triage temperatures detected fever adequately when compared with a rectal temperature. METHODS: We performed a retrospective chart review of 27,130 adult patients in a high-volume, urban emergency department over an eight-year period who received first a non-rectal triage temperature and then a subsequent rectal temperature. RESULTS: The mean difference between the initial temperature and the rectal temperature was 1.3 °F (P < 0.001), with 25.9% of patients having rectal temperatures ≥2 °F higher and 5.0% having rectal temperatures ≥4 °F higher. The mean differences among patients who received oral, axillary, and temporal temperatures were 1.2 °F (P < 0.001), 1.8 °F (P < 0.001), and 1.2 °F (P < 0.001), respectively. About 18.1% of patients were initially afebrile and found to be febrile by rectal temperature, with an average difference of 2.5 °F (P < 0.001). These patients had a higher rate of admission (61.4%, P < 0.005) and were more likely to be admitted for a higher level of care, such as an intensive care unit, compared with the full cohort (12.5% vs. 5.8%, P < 0.005). CONCLUSIONS: There are significant differences between rectal temperatures and noninvasive triage temperatures in this emergency department cohort. In almost one in five patients, fever was missed by the triage temperature.
Introduction: Previous studies have shown that survival from out-of-hospital cardiac arrest (OHCA) is higher in urban settings than in rural settings. However, few studies have assessed survival differences in inner-city areas compared with suburban areas. We assessed differences in survival to admission and survival to discharge between OHCA patients treated at inner-city hospitals and those treated at suburban hospitals. Methods: We conducted a retrospective analysis of adult atraumatic OHCA patients presenting to 12 hospitals in 2018 and 2019 within a healthcare system in the downstate New York area. Hospitals were classified as either inner-city or suburban using the urban-rural classification system of the National Center for Health Statistics, according to hospital addresses. We modified the LACE score to create a measure combining the Charlson Comorbidity Index and the number of ED visits in the past 6 months (termed the "CE score") to represent comorbidities and overall health. Two multivariable logistic regression models were developed to assess the association between location (inner city vs. suburban) and survival to admission and discharge, after controlling for relevant covariates. Results: A total of 3,035 patients were included in this analysis; 1,350 patients (44%) presented to inner-city hospitals and 1,685 (56%) presented to suburban hospitals. Overall median age was 75 years, 59% were male, and 63% were White. Controlling for race, age, and CE score, inner-city hospital patients had higher odds of survival to admission (adjusted OR: 1.84; 95% CI: 1.57, 2.15) and higher odds of survival to discharge (adjusted OR: 1.35; 95% CI: 1.13, 1.60) compared with suburban hospital patients. Conclusion: Patients treated at inner-city hospitals had significantly higher odds of both survival to admission and survival to discharge compared with those treated at suburban hospitals.
Further studies are needed to determine the relevant factors and inform public health interventions to increase survival from OHCA in suburban areas.