Hypertension is a leading risk factor for cardiovascular morbidity and mortality. Despite the widespread availability of both pharmacological and lifestyle therapeutic options, blood pressure control rates across the globe are worsening. In fact, only 23% of individuals with high blood pressure in the United States achieve treatment goals. In 2023, the US Food and Drug Administration approved renal denervation, a catheter-based procedure that ablates the renal sympathetic nerves, as an adjunctive treatment for patients in whom lifestyle modifications and antihypertensive medications do not adequately control blood pressure. This approval followed the publication of multiple randomized clinical trials using rigorous designs, all incorporating a renal angiogram as the sham control. Most, but not all, of this new generation of trials met their primary end points, demonstrating modest efficacy of renal denervation in lowering blood pressure across a spectrum of hypertension, from mild to truly resistant. Individual patient responses vary, and further research is needed to identify those who may benefit most. The initial safety profile appears favorable, and multiple ongoing studies are assessing longer-term efficacy and safety. Multidisciplinary teams that include hypertension specialists and adequately trained proceduralists are crucial to ensure that referrals are made appropriately, with full consideration of the potential risks and benefits. Incorporating patient preferences and engaging in shared decision-making conversations will help patients make the best decisions given their individual circumstances. Although further research is clearly needed, renal denervation offers a novel treatment strategy for patients with uncontrolled blood pressure.
Background: Next-generation implantable and wearable kidney replacement therapies (KRTs) may revolutionize the lives of patients undergoing dialysis by providing more frequent and/or prolonged therapy along with greater mobility compared with in-center hemodialysis. Medical device innovators would benefit from patient input to inform product design and development. Our objective was to determine key risk/benefit considerations for patients with kidney failure and to test how these trade-offs could drive patient treatment choices.
Methods: We developed a choice-based conjoint discrete choice instrument and surveyed 498 patients with kidney failure. The instrument consisted of nine attributes of risk and benefit pertinent across KRT modalities. Attributes were derived from literature reviews, patient/clinician interviews, and pilot testing. The risk attributes were serious infection, death within 5 years, permanent device failure, surgical requirements, and follow-up requirements. The benefit attributes were fewer diet restrictions, improved mobility, lower pill burden, and less fatigue. We created a random, full-profile, balanced-overlap design with 14 choice pairs plus five fixed tasks to test validity. We used a mixed-effects regression model with attribute levels as independent predictor variables and choice decisions as dependent variables.
Results: All attributes were significantly important to patient choice preferences except follow-up requirements. Comparing the β coefficients, for each 1% higher risk of death within 5 years, preference utility was lower by 2.22 (β=−2.22; 95% confidence interval [CI], −2.52 to −1.91), while for each 1% higher risk of serious infection, utility was lower by 1.38 (β=−1.38; 95% CI, −1.77 to −1.00). Patients were willing to trade a 1% higher risk of infection and a 0.5% higher risk of death to gain complete mobility and freedom from in-center hemodialysis (β=1.46; 95% CI, 1.27 to 1.64).
Conclusions: Despite an aversion to even a 1% higher risk of death within 5 years, serious infection, or permanent device failure, patients with kidney failure indicated that they would trade these risks for the benefit of complete mobility.
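To illustrate how a choice model of this kind yields trade-off estimates, the sketch below fits a conditional-logit-style model to simulated paired-choice data and converts the fitted β coefficients into a willingness-to-trade ratio. The data, attribute names, and coefficient values are hypothetical; this is not the study's instrument or analysis code.

```python
# Minimal sketch of estimating paired-choice preference weights and
# converting them into risk/benefit trade-offs. All data, attribute names,
# and coefficient values are simulated/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Assumed "true" utility weights: per percentage point of risk, or per
# step of mobility benefit (hypothetical values for illustration).
true_beta = np.array([-2.2, -1.4, 1.5])
names = ["death_5yr_pct", "infection_pct", "mobility_step"]

# Attribute *differences* between option A and option B in each choice pair.
n_pairs = 5000
X = np.column_stack([
    rng.integers(-3, 4, n_pairs),  # death-risk difference (pct points)
    rng.integers(-3, 4, n_pairs),  # infection-risk difference (pct points)
    rng.integers(-1, 2, n_pairs),  # mobility difference (ordinal steps)
]).astype(float)

# For a binary pair, a conditional logit reduces to logistic regression on
# attribute differences: P(choose A) = logistic(X @ beta).
p_choose_a = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = rng.random(n_pairs) < p_choose_a

model = LogisticRegression(fit_intercept=False, C=1e6, max_iter=1000).fit(X, y)
beta = dict(zip(names, model.coef_[0]))
print({k: round(v, 2) for k, v in beta.items()})

# Willingness to trade: the extra death risk a respondent would accept per
# step of mobility benefit (marginal rate of substitution between betas).
mrs = beta["mobility_step"] / -beta["death_5yr_pct"]
print(f"Acceptable death-risk increase per mobility step: {mrs:.2f} pct points")
```

The ratio of two β coefficients is the standard way to express one attribute in units of another, which is how statements like "patients would trade a 0.5% risk of death for complete mobility" are derived from a fitted choice model.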
The OPTN defines high-risk donors (HRDs), colloquially known as 'CDC high-risk donors', as those thought to carry an increased risk of HIV window period (WP) infection, that is, infection not yet serologically detectable. However, the true risk of such infection remains unknown. To quantify the risk of WP infection in each HRD behavior category, we performed a systematic review and meta-analysis of studies of HIV prevalence and incidence. Of 3476 abstracts reviewed, 27 eligible studies of HIV infection in HRD populations were identified. Pooled HIV incidence estimates were calculated for each category of HRD behavior and used to calculate the risk of WP HIV infection. Risks ranged from 0.09 to 12.1 per 10 000 donors based on the ELISA WP and from 0.04 to 4.9 based on nucleic acid testing (NAT), with NAT reducing WP risk by over 50% in each category. Injection drug users had the greatest risk of WP infection (4.9 per 10 000 donors by NAT WP), followed by men who have sex with men (4.2:10 000), commercial sex workers (2.7:10 000), incarcerated donors (0.9:10 000), donors exposed to HIV through blood (0.6:10 000), donors engaging in high-risk sex (0.3:10 000), and hemophiliacs (0.035:10 000). These estimates can help inform patient and provider decision making regarding HRDs.
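The core calculation here, converting a pooled incidence estimate into a WP risk, amounts to scaling annual incidence by the fraction of a year an infection remains undetectable by the assay. The sketch below illustrates this arithmetic; the window durations (roughly 22 days for ELISA and 9 days for NAT are commonly cited figures) and the example incidence are assumptions, not the pooled estimates from this meta-analysis.

```python
# Illustrative window-period (WP) risk arithmetic: the per-donor risk of an
# undetected infection is the group's annual HIV incidence scaled by the
# fraction of the year the infection remains undetectable by the assay.
# Window durations and the example incidence are assumptions.

ELISA_WINDOW_DAYS = 22.0  # commonly cited ELISA window period (assumed)
NAT_WINDOW_DAYS = 9.0     # commonly cited NAT window period (assumed)

def wp_risk_per_10k(incidence_per_100py: float, window_days: float) -> float:
    """Risk of undetected WP infection per 10 000 donors."""
    incidence_per_person_year = incidence_per_100py / 100.0
    return incidence_per_person_year * (window_days / 365.0) * 10_000

# Hypothetical pooled incidence of 2.0 per 100 person-years in a category:
for assay, window in [("ELISA", ELISA_WINDOW_DAYS), ("NAT", NAT_WINDOW_DAYS)]:
    print(f"{assay}: {wp_risk_per_10k(2.0, window):.2f} per 10 000 donors")
```

Because the window scales the risk linearly, shortening it from about 22 days (ELISA) to about 9 days (NAT) cuts WP risk by roughly 60%, consistent with the over-50% reduction reported above.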
Anemia induced by chronic kidney disease (CKD) has multiple underlying mechanistic causes and generally worsens as CKD progresses. Erythropoietin (EPO) is a key endogenous protein that increases the number of erythrocyte progenitors that mature into red blood cells carrying hemoglobin (Hb). Recombinant human erythropoietin (rHuEPO), in its native and re-engineered forms, is used therapeutically to alleviate CKD-induced anemia by stimulating erythropoiesis. However, because of safety risks associated with erythropoiesis-stimulating agents (ESAs), a new class of drugs, prolyl hydroxylase inhibitors (PHIs), has been developed. Instead of administering exogenous EPO, PHIs facilitate the accumulation of hypoxia-inducible factor α (HIF-α), which results in increased production of endogenous EPO. Clinical trials for ESAs and PHIs generally involve balancing safety against efficacy by carefully evaluating patient selection criteria and adaptive trial designs. To enable such decisions, we developed a quantitative systems pharmacology (QSP) model of erythropoiesis that captures key aspects of physiology and its disruption in CKD. Furthermore, CKD virtual populations of varying severities were developed, calibrated, and validated against public data. Such a model can be used to simulate alternative protocols when designing phase 3 clinical trials and can serve as an asset for reverse translation in understanding emerging clinical data.
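To make the EPO feedback loop concrete, here is a deliberately minimal ODE sketch in which a hemoglobin deficit drives EPO synthesis, EPO expands the progenitor pool, and progenitors mature into hemoglobin-carrying red blood cells; a PHI-like intervention is represented as a fold-increase in endogenous EPO synthesis. The structure, parameters, units, and initial conditions are illustrative assumptions and are far simpler than the QSP model described in the abstract.

```python
# Toy erythropoiesis feedback loop (illustrative only; not the paper's QSP
# model). Falling hemoglobin raises EPO synthesis (the HIF-mediated sensor),
# EPO expands the erythroid progenitor pool, and progenitors mature into
# RBCs that carry hemoglobin. A PHI-like effect is represented as a
# fold-increase in endogenous EPO synthesis. All values are assumed.
from scipy.integrate import solve_ivp

HB_PER_RBC = 3.0    # g/dL hemoglobin per unit of RBC mass (assumed)
HB_SETPOINT = 14.0  # g/dL reference level driving EPO synthesis (assumed)

def erythropoiesis(t, y, phi_fold):
    epo, prog, rbc = y
    hb = HB_PER_RBC * rbc
    # EPO synthesis rises as Hb falls below the setpoint; a PHI multiplies
    # synthesis by phi_fold (mimicking HIF-alpha accumulation).
    epo_prod = phi_fold * 0.75 * (HB_SETPOINT / max(hb, 1.0))
    d_epo = epo_prod - 1.0 * epo     # synthesis - clearance
    d_prog = 0.8 * epo - 0.5 * prog  # EPO-driven expansion - maturation/loss
    d_rbc = 0.2 * prog - 0.1 * rbc   # maturation inflow - senescence
    return [d_epo, d_prog, d_rbc]

y0 = [1.0, 1.7, 3.4]  # anemic CKD-like steady state, Hb ~10 g/dL (assumed)
for fold, label in [(1.0, "untreated CKD"), (2.0, "PHI-like, 2x EPO synthesis")]:
    sol = solve_ivp(erythropoiesis, (0.0, 120.0), y0, args=(fold,), max_step=1.0)
    print(f"{label}: Hb at day 120 = {HB_PER_RBC * sol.y[2, -1]:.1f} g/dL")
```

In this toy setup the negative feedback makes the response sublinear: doubling EPO synthesis raises steady-state Hb by only a factor of √2 (from about 10 to about 14 g/dL). A full QSP model like the one described would add many more mechanisms (e.g., progenitor stage structure and drug pharmacokinetics) and calibrate them against clinical data across CKD severities.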