Okinawa Prefecture is at risk for emerging infectious diseases due to its subtropical climate and its location within the Indo-Pacific region. Understanding the existing vectors and infectious agents contextualizes current threats, guides treatment, and informs prevention; such threats may be of unique concern in the setting of complex emergencies.
We were pleased to read the Clinical Focus Review by Richards et al., "Damage Control Resuscitation in Traumatic Hemorrhage: It Is More Than Fixing the Holes and Filling the Tank"1 in the March 2024 issue of Anesthesiology. This review is one of the most succinct yet comprehensive discussions of evidence-based trauma resuscitation that we have encountered. As military anesthesiologists, this topic is of particular interest to us. We feel that this review will serve as a valuable reference for service members deployed to combat zones. We also look forward to using this article as a teaching guide when discussing resuscitation with our residents.

We do, however, find the term "damage control resuscitation" to be misleading in this context. The U.S. military has used this term for decades, and its use by Richards et al. here invites confusion. In 2006, the United States Army published clinical practice guidelines for damage control resuscitation.2 The term developed as an extension of the concept of damage control surgery, which emphasized hemorrhage control, decontamination, and correction of major physiologic derangements before definitive repair.3 Damage control resuscitation practice guidelines recommended a higher-than-traditional fresh frozen plasma to packed red blood cell transfusion ratio and restrictive crystalloid use. These practices likely contributed to the decrease in trauma mortality during the wars of the last two decades.4 The concepts of damage control resuscitation were further developed in the military when the Defense Health Agency's (Falls Church, Virginia) Joint Trauma System advocated for the use of whole blood when available.5 Allied military partners have similarly recommended the use of whole blood.6

Resuscitation is the act of reviving the near-dead to a state of hemodynamic stability. Generally, this involves the correction of physiologic abnormalities, including hemorrhage, acidosis, hypothermia, coagulopathy, and electrolyte disturbances. While patients with significant trauma often require rapid, large-volume resuscitation, they are hardly the only patients in need of such resuscitation. All of the techniques and considerations described by Richards et al. are likely valid for most surgical patient populations, not just those suffering traumatic hemorrhage. Our concern is that the damage control resuscitation label may imply that these well-described fundamentals of comprehensive resuscitation apply solely to the trauma patient population and not to other surgical patients requiring similar resuscitation.

Ultimately, we feel the concept of damage control resuscitation is misleading because there is no "damage control" aspect of the resuscitation, unlike with damage control surgery. No amount of damage control resuscitation will fix or correct damage, whether caused by trauma or otherwise; it is a way to buy time for definitive treatment. Furthermore, "damage" is a term not historically applied to people. Structures are damaged; machines are damaged; networks are damaged. Humans are injured.
Better terms may include "comprehensive," "goal-directed," or "balanced" resuscitation, all of which highlight an approach to correcting physiologic derangements that is broader than simply transfusing red cells.

Despite our concerns with the terminology used by Richards et al., we would again like to emphasize that their article will serve as a foundational document for anesthesiologists who are deploying into harm's way and who seek the most up-to-date knowledge on how to care for our service members. We would press the authors to consider more inclusive language to ensure that these lessons are applied beyond the trauma-specific patient population as well.

The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Department of the Navy, the Department of Defense, or the United States Government. We are military service members. This work was prepared as part of our official duties.

The authors declare no competing interests.
Plane blocks are an increasingly common regional anaesthesia technique in the perioperative period. Increased spread of local anaesthetic during plane blocks is thought to be related to an increased area of pain coverage. This study sought to assess differences in injectate spread between Tuohy needles and standard insulated stimulating block needles.

Ten Yorkshire-cross porcine cadavers were used in this study. Immediately following euthanasia, the cadavers underwent bilateral ultrasound-guided transversus abdominis plane (TAP) block injection with radiopaque contrast dye, with one side injected using a 20 gauge Tuohy needle and the other side using a 20 gauge insulated stimulating block needle. Injectate spread was assessed using plain film X-ray, and the area of spread was measured to compare differences.

All 10 animals underwent successful ultrasound-guided TAP block placement. In all 10 animals, the area of contrast spread was greater with the Tuohy needle than with the stimulating needle. The Wilcoxon signed-rank test was used to analyse the difference between the groups. The average difference between the two sides was 33.02% (p=0.002).

This is the first study to demonstrate differences in injectate spread between needle types. It suggests enhanced spread with the Tuohy needle compared with a standard block needle and may encourage its use during plane blocks.
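As a concrete illustration of the paired analysis described above, the following minimal Python sketch runs a Wilcoxon signed-rank test on hypothetical per-animal spread areas. The data values, units, and variable names are invented for illustration and are not the study's measurements.

```python
from scipy import stats

# Hypothetical contrast spread areas (cm^2), one pair per cadaver
tuohy_area = [52.1, 48.3, 61.0, 55.4, 49.8, 58.2, 62.5, 47.9, 53.6, 60.1]
stim_area = [39.4, 37.0, 44.8, 42.1, 36.5, 43.9, 47.2, 35.8, 40.3, 45.0]

# Paired (within-animal) comparison of the two needle types
statistic, p_value = stats.wilcoxon(tuohy_area, stim_area)

# Mean percentage difference between sides, analogous to the reported 33.02%
pct_diff = [100 * (t - s) / s for t, s in zip(tuohy_area, stim_area)]
print(f"W = {statistic}, p = {p_value:.4f}, "
      f"mean difference = {sum(pct_diff) / len(pct_diff):.1f}%")
```

The signed-rank test is appropriate here because each animal serves as its own control, removing between-animal variability from the comparison.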
Discharge destination impacts costs and perioperative planning for primary total knee arthroplasty (TKA) or total hip arthroplasty (THA). The purpose of this study was to create a tool to predict discharge destination in contemporary patients. Models were developed using more than 400,000 patients from the National Surgical Quality Improvement Program database and were compared with a previously published model using the area under the receiver operating characteristic curve (AUC) and decision curve analysis (DCA). AUC for patients with TKA was 0.729 (95% confidence interval [CI]: 0.719 to 0.738) and 0.688 (95% CI: 0.678 to 0.697) using the new and previous models, respectively. AUC for patients with THA was 0.768 (95% CI: 0.758 to 0.778) and 0.726 (95% CI: 0.714 to 0.737) using the new and previous models, respectively. DCA showed substantially improved net clinical benefit with the new models, which were integrated into a web-based application. This tool enhances clinical decision making for predicting discharge destination following primary TKA and THA. (Journal of Surgical Orthopaedic Advances 32(4):252-258, 2023).
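For readers who want to reproduce this style of model comparison, here is a minimal, hypothetical Python sketch of computing an AUC with a percentile bootstrap 95% CI. The function and variable names are placeholders; nothing here is the authors' actual code or data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_prob, n_boot=2000, seed=0):
    """Point-estimate AUC with a percentile bootstrap 95% CI."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample patients
        if len(np.unique(y_true[idx])) < 2:              # skip degenerate draws
            continue
        aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_prob), lo, hi
```

Calling `auc_with_ci` once per model on the same held-out outcomes mirrors the new-versus-previous comparison reported above; the decision curve analysis would be computed separately.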
Article | August 2022. Using Big Data to Analyze ASA's Twitter Presence: @ASALifeline Engagement Depends on Tweet Topic and Influencers. Gregory J. Booth, MD; Nolan Martin, BS; Henry DeYoung, MD; Trevor Elam, MD; Scott Hughey, MD; A. Steven Bradley, MD. ASA Monitor 2022; 86:32–33. https://doi.org/10.1097/01.ASM.0000855696.55126.70

Twitter is a powerful tool for ASA (@ASALifeline) and has the power to drive substantial change in our field, such as improving engagement in conferences and enhancing resident physician recruitment (Anesth Analg 2020;130:333-40; ASA Monitor 2021;85:40). It is important to analyze social media strategies to assess whether an individual or organization is providing value through their messaging. To identify topics ASA members value, who ASA's influencers are, and areas for potential ASA audience growth, we explored whether specific tweet topics and the society's social network structure impact tweet engagement. How can we analyze topics from thousands of ASA tweets? In the era of big data, we often characterize data as structured or unstructured. Structured data, such as patient risk factors, demographic variables, or outcomes, drives most traditional statistics we've learned in our evidence-based medicine curricula and is the engine for the vast majority of ongoing...
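The excerpt does not specify the authors' topic-modeling method. Purely as an illustration of one common approach to extracting topics from a tweet corpus, the sketch below fits latent Dirichlet allocation (LDA) on a bag-of-words representation with scikit-learn; the tweet text and parameter choices are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny hypothetical corpus standing in for thousands of @ASALifeline tweets
tweets = [
    "Match Day congratulations to our new anesthesiology residents!",
    "New guideline on perioperative blood management released today.",
    "Join us at ANESTHESIOLOGY 2022 for hands-on ultrasound workshops.",
]

# Bag-of-words counts, dropping common English stop words
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

# Fit a 2-topic LDA model and print the top terms per topic
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```

At scale, the per-tweet topic assignments from a model like this can be cross-tabulated with engagement metrics (likes, retweets) to ask which topics resonate with the audience.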
Hyponatremia and hypernatremia, as conventionally defined (<135 mEq/L and >145 mEq/L, respectively), are associated with increased perioperative morbidity and mortality. However, the effects of subtle deviations in serum sodium concentration within the normal range are not well characterized. The purpose of this analysis was to determine the association of borderline hyponatremia (135-137 mEq/L) and borderline hypernatremia (143-145 mEq/L) with perioperative morbidity and mortality.

A retrospective cohort study was performed using data from the American College of Surgeons National Surgical Quality Improvement Program database, a repository of surgical outcome data collected from over 600 hospitals across the United States. The database was queried to extract all patients undergoing elective, noncardiac surgery from 2015 to 2019. The primary predictor variable was preoperative serum sodium concentration, measured less than 5 days before the index surgery. The 2 primary outcomes were the odds of morbidity and mortality occurring within 30 days of surgery. The risk of both outcomes in relation to preoperative serum sodium concentration was modeled using weighted generalized additive models to minimize the effect of selection bias while controlling for covariates.

In the overall cohort, 1,003,956 of 4,551,726 available patients had a serum sodium concentration drawn within 5 days of their index surgery. The odds of morbidity and mortality across sodium levels of 130-150 mEq/L relative to a sodium level of 140 mEq/L followed a nonnormally distributed, U-shaped curve. The mean serum sodium concentration in the study population was 139 mEq/L. All continuous covariates were significantly associated with both morbidity and mortality (P<.001). Preoperative serum sodium concentrations less than 139 mEq/L and greater than 144 mEq/L were independently associated with increased morbidity probabilities. Serum sodium concentrations less than 138 mEq/L and greater than 142 mEq/L were associated with increased mortality probabilities. Hypernatremia was associated with higher odds of both morbidity and mortality than corresponding degrees of hyponatremia.

Among patients undergoing elective, noncardiac surgery, this retrospective analysis found that preoperative serum sodium levels less than 138 mEq/L and greater than 142 mEq/L are associated with increased morbidity and mortality, even within currently accepted "normal" ranges. The retrospective nature of this investigation limits the ability to make causal determinations. Given the U-shaped distribution of risk, past investigations that assumed a linear relationship between serum sodium concentration and surgical outcomes may need to be revisited. Likewise, these results question the current definition of perioperative eunatremia, which may require future prospective investigation.
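To make the modeling idea concrete, here is a minimal Python sketch of a spline-expanded logistic GLM, a close relative of the generalized additive models described above, fit on synthetic data with a built-in U-shaped risk. This is not the authors' model; the data, spline settings, and omitted covariates and weights are all illustrative simplifications.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in cohort (not the study's data): U-shaped mortality
# risk centered near a sodium of 140 mEq/L
rng = np.random.default_rng(0)
sodium = rng.normal(139, 3, 5000).clip(130, 150)
logit = -5 + 0.05 * (sodium - 140) ** 2  # true risk is U-shaped
died = (rng.random(5000) < 1 / (1 + np.exp(-logit))).astype(int)
df = pd.DataFrame({"sodium": sodium, "died": died})

# Spline-expanded logistic GLM; bs() is patsy's B-spline basis. The study's
# weighted models could additionally pass case weights via GLM's
# freq_weights/var_weights arguments.
fit = smf.glm("died ~ bs(sodium, df=5)", data=df,
              family=sm.families.Binomial()).fit()

# Predicting over a sodium grid recovers the U-shaped risk curve,
# interpretable relative to the 140 mEq/L reference
grid = pd.DataFrame({"sodium": np.linspace(132, 148, 9)})
print(fit.predict(grid).round(4).tolist())
```

The spline basis is what lets the fitted curve bend upward on both sides of the reference value, which a linear sodium term would miss entirely.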
Background: The administration of epidural anesthesia during labor is a common technique used to reduce the pain of childbirth. We sought to compare the standard strategy of continuous epidural infusion (CEI) with programmed intermittent epidural bolus (PIEB) in terms of length of spread, measured in vertebral body levels. Based on previous clinical data in humans, PIEB is associated with improved pain control and a decreased total dose of local anesthetic. We hypothesized that PIEB would be associated with increased spread compared with CEI.

Methods: Thirty female Yorkshire-cross swine cadavers were used to compare three infusion strategies: continuous infusion (CEI) of 10 mL/hour, multiple boluses (MB) of 2 mL every 12 min to a total of 10 mL, and a single bolus (SB) of 10 mL. Radiographs were used to identify the spread of the radiopaque contrast dye, and the number of vertebral bodies covered was measured to assess spread.

Results: Overall, CEI had an average spread of 5.6 levels, MB 7.9, and SB 10.4. The differences between SB and MB (p=0.011), SB and CEI (p<0.001), and MB and CEI (p=0.028) were all significant.

Conclusions: We demonstrated increased spread of epidural contrast with programmed intermittent bolus strategies. This supports previous evidence of improved patient outcomes with PIEB compared with CEI and encourages the use of PIEB in the appropriate patient population.
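The abstract reports pairwise p-values without naming the test used. Purely as an illustration of one reasonable choice for such between-group contrasts, the sketch below applies pairwise Mann-Whitney U tests to hypothetical per-animal level counts for the three arms; all numbers are invented.

```python
from itertools import combinations
from scipy import stats

# Hypothetical vertebral levels covered per animal (10 cadavers per arm)
spread = {
    "CEI": [5, 6, 5, 6, 5, 6, 5, 6, 6, 6],
    "MB":  [8, 7, 8, 8, 7, 9, 8, 8, 7, 9],
    "SB":  [10, 11, 10, 10, 11, 10, 11, 10, 10, 11],
}

# Pairwise two-sided comparisons between the three infusion strategies
for a, b in combinations(spread, 2):
    u, p = stats.mannwhitneyu(spread[a], spread[b])
    print(f"{a} vs {b}: U = {u}, p = {p:.3f}")
```

Because the three arms use different cadavers, an unpaired test is the natural fit here, in contrast to the paired within-animal design of the TAP block study above.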
Objective: Radiofrequency ablation (RFA) of the medial branch nerve is a commonly performed procedure for patients with facet syndrome. RFA has previously been demonstrated to provide long-term functional improvement in approximately 50% of patients, including those who had significant pain relief after diagnostic medial branch block. We sought to identify factors associated with the success of RFA for facet pain.

Design: Active-duty military patients who underwent lumbar RFA (L3, L4, and L5 levels) over a 3-year period were analyzed. Defense and Veterans Pain Rating Scale (DVPRS) and Oswestry Disability Index (ODI) scores were assessed on the day of the procedure and at the 2-month and 6-month follow-up. These data were analyzed to identify associations between patient demographics, pain, and functional status and patients' improvement after RFA, with a primary outcome of ODI improvement and a secondary outcome of pain reduction.

Results: Higher starting functional impairment (starting ODI scores of 42.9 vs. 37.5; P = 0.0304) was associated with a greater likelihood of improvement in functional status 6 months after RFA, and higher starting pain scores (DVPRS pain scores of 6.1 vs. 5.1; P < 0.0001) were associated with a higher likelihood that pain scores would improve 6 months after RFA. A multivariate logistic regression was then used to develop a scoring system to predict improvement after RFA. The scoring system generated a C-statistic of 0.764, with starting ODI, starting pain score, gender, and smoking history as independent variables.

Conclusions: This algorithm compares favorably with diagnostic medial branch block in terms of prediction accuracy (C-statistic of 0.764 vs. 0.57), suggesting that its use may improve patient selection for RFA in facet syndrome.
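As a hedged sketch of how such a predictive score can be built and its C-statistic obtained, the Python example below fits a logistic regression on the four reported predictors and evaluates AUC (equivalent to the C-statistic) on held-out data. The synthetic cohort, column names, and effect sizes are invented for illustration and are not the authors' data or model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in cohort: the four reported predictors and a binary
# 6-month ODI-improvement outcome
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "start_odi": rng.normal(40, 8, n),
    "start_dvprs": rng.normal(5.5, 1.2, n).clip(0, 10),
    "male": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
})
# Higher starting impairment/pain -> more room to improve, as in the study
logit = (0.08 * (df["start_odi"] - 40)
         + 0.5 * (df["start_dvprs"] - 5.5)
         - 0.3 * df["smoker"])
df["odi_improved"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    df.drop(columns="odi_improved"), df["odi_improved"],
    test_size=0.3, random_state=0, stratify=df["odi_improved"])
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# C-statistic = AUC of the predicted probabilities on held-out patients
print(f"C-statistic: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")
```

In practice, the fitted coefficients would then be rounded into integer point values to produce a bedside scoring system, with the C-statistic reported on data not used for fitting.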