Introduction: Severe acute renal failure (sARF) is associated with considerable morbidity, mortality, and use of healthcare resources; however, its precise epidemiology and long-term outcomes have not been well described in a non-specified population. Methods: Population-based surveillance was conducted among all adult residents of the Calgary Health Region (population 1 million) admitted to multidisciplinary and cardiovascular surgical intensive care units between May 1, 1999, and April 30, 2002. Clinical records were reviewed and outcome at 1 year was assessed. Results: sARF occurred in 240 patients (11.0 per 100,000 population/year). Rates were highest in males and older patients (≥65 years of age). Risk factors for development of sARF included previous heart disease, stroke, pulmonary disease, diabetes mellitus, cancer, connective tissue disease, chronic renal dysfunction, and alcoholism. The annual mortality rate was 7.3 per 100,000 population, with rates highest in males and those ≥65 years of age. The 28-day, 90-day, and 1-year case-fatality rates were 51%, 60%, and 64%, respectively. Increased Charlson comorbidity index, presence of liver disease, higher APACHE II score, septic shock, and need for continuous renal replacement therapy were independently associated with death at 1 year. Renal recovery occurred in 78% (68/87) of survivors at 1 year. Conclusion: sARF is common, and males, older patients, and those with underlying medical conditions are at greatest risk. Although the majority of patients with sARF will die, most survivors will become independent of renal replacement therapy within a year.
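A minimal sketch of the rate arithmetic behind the figures above, assuming the reported 11.0 per 100,000/year incidence and the three-year surveillance window; the adult denominator is back-calculated for illustration and is not stated in the abstract.

```python
# Back-of-envelope check of the reported incidence rate (assumption-laden):
cases = 240                      # sARF cases over the surveillance period
years = 3.0                      # May 1, 1999 to April 30, 2002
rate_per_100k_year = 11.0        # reported incidence

# Implied adult denominator, consistent with a region of ~1 million residents
implied_adults = cases / years / rate_per_100k_year * 100_000
print(f"implied adult population: {implied_adults:,.0f}")  # ~727,000

# 1-year case fatality of 64% applied to the cohort (approximate)
print(f"approximate 1-year deaths: {0.64 * cases:.0f}")
```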
Treatment of hypoxemic respiratory failure and acute respiratory distress syndrome is complex. Evidence-based therapies that can improve survival and guidelines advocating their use exist; however, implementation is inconsistent. Our objective was to develop and validate an evidence-based, stakeholder-informed standardized management pathway for hypoxemic respiratory failure and acute respiratory distress syndrome to improve adherence to best practice. A standardized management pathway was developed using a modified Delphi consensus process with a multidisciplinary group of ICU clinicians. The proposed pathway was then externally validated with a survey of multidisciplinary stakeholders and clinicians. The setting was an in-person meeting and web-based surveys of ICU clinicians from 17 adult ICUs in the province of Alberta, Canada. The consensus panel comprised 30 ICU clinicians (4 nurses, 10 respiratory therapists, 15 intensivists, and 1 nurse practitioner; median years of practice, 17 [interquartile range, 13-21]). Ninety-one components were serially rated and revised over two rounds of online review and one in-person review. The final pathway included 46 elements. For the validation survey, 692 responses were received (59% nurses, 33% respiratory therapists, 7% intensivists, and 1% nurse practitioners). Agreement greater than 75% was achieved on 43 of 46 pathway elements. A 46-element evidence-informed standardized management pathway for hypoxemic respiratory failure and acute respiratory distress syndrome was developed and demonstrated content validity.
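As a hedged illustration of the validation step described above, the per-element agreement check might look like the following; the response structure and the rule that ratings of 4-5 on a 5-point scale count as agreement are assumptions, not details given in the abstract.

```python
# Hypothetical scoring of validation-survey agreement per pathway element.
def element_agreement(responses: dict[str, list[int]], cutoff: float = 0.75):
    """Split pathway elements by whether agreement exceeds the cutoff.

    responses: element name -> list of Likert ratings (1-5, assumed scale),
    where ratings >= 4 are counted as agreement.
    """
    kept, flagged = [], []
    for element, ratings in responses.items():
        agree = sum(r >= 4 for r in ratings) / len(ratings)
        (kept if agree > cutoff else flagged).append((element, round(agree, 2)))
    return kept, flagged
```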
Background. A common tenet in emergency medical services (EMS) is that faster response equates to better patient outcome, translated by some EMS operations into a goal of a response time of 8 minutes or less for advanced life support (ALS) units responding to life-threatening events. Objective. To explore whether an 8-minute EMS response time was associated with mortality. Methods. This was a one-year retrospective cohort study of adults with a life-threatening event as assessed at the time of the 9-1-1 call (Medical Priority Dispatch System Echo- or Delta-level event). The study setting was an urban all-ALS EMS system serving a population of approximately 1 million. Response time was defined as 9-1-1 call receipt to ALS unit arrival on scene, and outcome was defined as all-cause mortality at hospital discharge. Potential covariates included patient acuity, age, gender, and combined scene and transport interval time. Stratified analysis and logistic regression were used to assess the response time–mortality association. Results. There were 7,760 unit responses that met the inclusion criteria; 1,865 (24%) had a response time ≥8 minutes. The average patient age was 56.7 years (standard deviation = 21.5). For patients with a response time ≥8 minutes, 7.1% died, compared with 6.4% for patients with a response time ≤7 minutes 59 seconds (risk difference 0.7%; 95% confidence interval [CI]: –0.5%, 2.0%). The adjusted odds ratio of mortality for ≥8 minutes was 1.19 (95% CI: 0.97, 1.47). An exploratory analysis suggested there may be a small beneficial effect of response ≤7 minutes 59 seconds for those who survived to become an inpatient (adjusted odds ratio = 1.30; 95% CI: 1.00, 1.69). Conclusions. These results call into question the clinical effectiveness of a dichotomous 8-minute ALS response time in decreasing mortality for the majority of adult patients identified as having a life-threatening event at the time of the 9-1-1 call. However, this study does not suggest that rapid EMS response is undesirable or unimportant for certain patients. This analysis highlights the need for further research on who may benefit from rapid EMS response, whether these individuals can be identified at the time of the 9-1-1 call, and what the optimum response time is.
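One plausible way to reproduce the adjusted analysis described above is a logistic regression in statsmodels; the dataframe and column names below are hypothetical stand-ins for the study's covariates (acuity, age, gender, and combined scene and transport interval).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df columns (assumed): died (0/1), response_ge_8min (0/1), age (years),
# male (0/1), acuity (categorical dispatch level), scene_transport_min (minutes)
def adjusted_or(df: pd.DataFrame) -> tuple[float, float, float]:
    """Adjusted odds ratio of mortality for a >=8-minute response, with 95% CI."""
    model = smf.logit(
        "died ~ response_ge_8min + age + male + C(acuity) + scene_transport_min",
        data=df,
    ).fit(disp=0)
    point = np.exp(model.params["response_ge_8min"])
    lo, hi = np.exp(model.conf_int().loc["response_ge_8min"])
    return point, lo, hi
```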
Background: The Berlin definition of acute respiratory distress syndrome (ARDS) includes only clinical characteristics. Understanding unique patient pathobiology may allow personalized treatment. We aimed to define and describe ARDS phenotypes/endotypes combining clinical and pathophysiologic parameters from a Canadian ARDS cohort. Methods: A cohort of adult ARDS patients from multiple sites in Calgary, Canada, had plasma cytokine levels and clinical parameters measured in the first 24 h of ICU admission. We used a latent class model (LCM) to group the patients into ARDS subgroups and identified the features differentiating those subgroups. We then examined the subgroup effect on 30-day mortality. Results: The LCM suggested three subgroups (n1 = 64, n2 = 86, and n3 = 30), and 23 of 69 features made these subgroups distinct. The top five discriminating features were IL-8, IL-6, IL-10, TNF-α, and serum lactate. Mortality varied distinctly between subgroups. Within subgroups, mean PaO2/FiO2 ratio, pneumonia, platelet count, and bicarbonate were negatively associated with mortality, while lactate, creatinine, shock, chronic kidney disease, vasopressor/inotrope use, low GCS at admission, and sepsis were positively associated. IL-8 and APACHE II score were the individual markers most strongly associated with mortality (area under the curve = 0.84). Perspective: Subgrouping ARDS using biomarkers and clinical characteristics is useful for categorizing a heterogeneous condition into several homogeneous patient groups. This study found three ARDS subgroups using an LCM, each with a different level of mortality. This model may also inform future trial design, prognostication, and treatment selection.
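The abstract's latent class approach can be sketched with a Gaussian mixture over standardized features, with BIC used to pick the number of classes; this is an analogous illustration rather than the authors' exact model, and `X` is a hypothetical patients-by-features matrix of the cytokine and clinical variables.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

def fit_subgroups(X: np.ndarray, max_classes: int = 6):
    """Fit mixtures with 1..max_classes components; return the BIC-best grouping."""
    Xs = StandardScaler().fit_transform(X)
    fits = [
        GaussianMixture(k, covariance_type="diag", random_state=0).fit(Xs)
        for k in range(1, max_classes + 1)
    ]
    best = min(fits, key=lambda m: m.bic(Xs))   # lowest BIC wins
    return best.n_components, best.predict(Xs)  # class count, per-patient labels
```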
Growing interest in microbial dysbiosis during critical illness has raised questions about the therapeutic potential of microbiome modification with probiotics. Prior randomized trials in this population suggest that probiotics reduce infection, particularly ventilator-associated pneumonia (VAP), although probiotic-associated infections have also been reported.
Objective
To evaluate the effect of Lactobacillus rhamnosus GG on preventing VAP, additional infections, and other clinically important outcomes in the intensive care unit (ICU).
Design, Setting, and Participants
Randomized placebo-controlled trial in 44 ICUs in Canada, the United States, and Saudi Arabia enrolling adults predicted to require mechanical ventilation for at least 72 hours. A total of 2653 patients were enrolled from October 2013 to March 2019 (final follow-up, October 2020).
Interventions
Enteral L rhamnosus GG (1 × 10¹⁰ colony-forming units) (n = 1321) or placebo (n = 1332) twice daily in the ICU.
Main Outcomes and Measures
The primary outcome was VAP determined by duplicate blinded central adjudication. Secondary outcomes were other ICU-acquired infections including Clostridioides difficile infection, diarrhea, antimicrobial use, ICU and hospital length of stay, and mortality.
Results
Among 2653 randomized patients (mean age, 59.8 years [SD, 16.5 years]; 1063 women [40.1%]; mean Acute Physiology and Chronic Health Evaluation II score, 22.0 [SD, 7.8]), 2650 (99.9%) completed the trial and received the study product for a median of 9 days (IQR, 5-15 days). VAP developed among 289 of 1318 patients (21.9%) receiving probiotics vs 284 of 1332 controls (21.3%; hazard ratio [HR], 1.03; 95% CI, 0.87-1.22; P = .73; absolute difference, 0.6% [95% CI, –2.5% to 3.7%]). None of the 20 prespecified secondary outcomes, including other ICU-acquired infections, diarrhea, antimicrobial use, mortality, and length of stay, showed a significant difference. Fifteen patients (1.1%) receiving probiotics vs 1 (0.1%) in the control group experienced the adverse event of L rhamnosus in a sterile site or as the sole or predominant organism in a nonsterile site (odds ratio, 14.02; 95% CI, 1.79-109.58; P < .001).
Conclusions and Relevance
Among critically ill patients requiring mechanical ventilation, administration of the probiotic L rhamnosus GG, compared with placebo, resulted in no significant difference in the development of ventilator-associated pneumonia. These findings do not support the use of L rhamnosus GG in critically ill patients.
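The VAP hazard ratio reported above would typically come from a time-to-event model; a minimal sketch with lifelines is shown below, with hypothetical column names (the trial's actual analysis details are not given in the abstract).

```python
import pandas as pd
from lifelines import CoxPHFitter

# df columns (assumed): days_to_vap_or_censor (follow-up time),
# vap (1 = adjudicated VAP, 0 = censored), probiotic (1 = L rhamnosus GG arm)
def vap_hazard_ratio(df: pd.DataFrame) -> pd.Series:
    """Cox model HR (with 95% CI) for probiotic vs placebo."""
    cph = CoxPHFitter()
    cph.fit(df, duration_col="days_to_vap_or_censor", event_col="vap")
    return cph.summary.loc[
        "probiotic", ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]
    ]
```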
To independently appraise the methodological quality of a sample of reports of meta-analyses that address critical care topics in the Cochrane Database of Systematic Reviews, compared with the quality of reports published in regular journals, using a validated assessment instrument, the Overview Quality Assessment Questionnaire (OQAQ). Studies were selected from a search of MEDLINE, EMBASE, and the Cochrane Database of Systematic Reviews from 1994 to 2003, using multiple search terms for critical care and sensitive filters to identify meta-analyses. Two authors independently selected meta-analyses that addressed topics pertinent to critical care medicine. Two authors independently extracted the data. The proportion of reports that met each component of the OQAQ was determined, as was the overall quality score. Meta-analyses published in the Cochrane Database of Systematic Reviews were compared with those published in regular journals. There were 36 reports of meta-analyses in the Cochrane Database of Systematic Reviews and 103 reports of meta-analyses published in regular journals; 11 of these were reports of Cochrane reviews. The meta-analyses published in the Cochrane Database of Systematic Reviews were more likely to fulfill most components of the OQAQ. The median overall OQAQ scores indicated significant methodological problems in the reports regardless of the source of publication, although the reports in the Cochrane database scored higher than those in regular journals (five compared with two, p < .001). Major methodological flaws, notably failure to appropriately assess the validity of included studies, were found in meta-analyses in both the Cochrane Database of Systematic Reviews and regular journals (44.4% and 79.3%, respectively). Although the quality of reports of meta-analyses published in the Cochrane Database of Systematic Reviews is superior to the quality of reports of meta-analyses published in regular journals, there is significant room for improvement. Clinicians should critically appraise all reports of meta-analyses before considering the results, regardless of the source of publication.
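The "five compared with two, p < .001" comparison of median OQAQ scores implies a rank-based test; a minimal sketch follows, with illustrative score vectors standing in for the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

cochrane_scores = np.array([5, 6, 4, 5, 7, 5])       # hypothetical OQAQ scores
journal_scores = np.array([2, 3, 1, 2, 4, 2, 3, 1])  # hypothetical OQAQ scores
stat, p = mannwhitneyu(cochrane_scores, journal_scores, alternative="two-sided")
print(
    f"median {np.median(cochrane_scores):.0f} vs {np.median(journal_scores):.0f}, "
    f"U = {stat:.0f}, p = {p:.4f}"
)
```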
Background: Despite the evidence that enteral feeding reduces morbidity in critically ill patients and is preferred to parenteral nutrition, the delivery of enteral nutrition (EN) is often inadequate. The purpose of this study was to determine whether implementation of an evidence-based nutrition support (NS) protocol could improve EN delivery. Methods: An NS protocol incorporating available scientific evidence; data from a retrospective survey of 30 intensive care unit (ICU) patients; and input from dietitians, intensive care physicians, surgeons, nurses, and pharmacists was developed. The impact of this protocol was evaluated prospectively in 123 consecutive adult patients admitted to a multisystem ICU who were eligible for EN. Results: The percentage of patients who received at least 80% of their estimated energy requirements during their ICU stay increased from 20% before implementation of the NS protocol to 60% after implementation (p < .001). After adjusting for confounders, the postimplementation group received significantly more kcal/kg/d than the preimplementation group (3.71 kcal/kg/d; 95% confidence interval, 1.64 to 5.78; p = .001). Use of parenteral nutrition was reduced in the postimplementation group (1.6% vs 13%, p = .02). There was no difference in time to initiation of EN between groups (1.76 days pre- vs 1.44 days post-implementation, p = .9). Conclusions: The development and use of an evidence-based NS protocol improved the proportion of enterally fed ICU patients meeting their calculated nutrition requirements.
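The pre/post jump from 20% to 60% of patients meeting at least 80% of estimated energy needs can be tested as two proportions; in the sketch below the group sizes (30 retrospective pre-protocol patients, 123 post-protocol patients) and hence the counts are inferred from the abstract, not taken from the study tables.

```python
from statsmodels.stats.proportion import proportions_ztest

# Assumed counts: 20% of 30 pre-protocol and 60% of 123 post-protocol patients
met_target = [6, 74]
group_n = [30, 123]
z, p = proportions_ztest(met_target, group_n)
print(f"two-proportion z = {z:.2f}, p = {p:.4f}")
```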