BACKGROUND Silent carriers played a relevant role in spreading infection during the coronavirus disease (COVID-19) pandemic. Estimating the prevalence of undocumented COVID-19 cases has been a significant public health issue since the beginning of the pandemic. OBJECTIVE In this work, we propose to modify a commonly used indirect estimation method by treating undetected COVID-19 cases as a hidden population. The proposed method is based on Bernard’s Network Scale-Up Method (NSUM) and its subsequent modification in a Bayesian framework. The primary aim of this study was to estimate the proportion of undocumented COVID-19 cases in three Italian regions, Veneto, Piemonte, and Lombardia, using a modified NSUM. The secondary endpoints included estimating the numbers of COVID-19 cases, people in quarantine, and people who moved between regions after the Italian lockdown decree during the earliest pandemic wave. METHODS For this purpose, a cross-sectional survey with social network sampling was conducted between 15 April 2020 and 6 May 2020 in the three Italian regions (Lombardia, Piemonte, and Veneto). The prevalences of documented and undocumented COVID-19 cases, people in quarantine, and people who moved between regions were estimated. The three models proposed by Maltiel and based on the Network Scale-Up Method were applied: the random degree model (RDM), the barrier effects model (BEM), and the transmission bias model (TBM). The analysis considered several scenarios for the average network degree size on the log-normal scale. RESULTS There were 1484 respondents: 895 (60%) were women, with a median age of 39 years. For all the regions considered, the RDM estimates of COVID-19 cases were closer to the official data than those obtained with the other two models. According to the RDM, estimated undocumented cases were higher in Lombardia than in Piemonte and Veneto (2.78%, 0.44%, and 0.24%, respectively). CONCLUSIONS Although there are gold standard methods for detecting the size of the undocumented caseload, such as mass testing, an indirect method can still help define the prevalence of a hard-to-reach phenomenon. This modification of the NSUM is useful especially in the early phase of a pandemic, when mass testing is not yet widespread. INTERNATIONAL REGISTERED REPORT RR2-10.3390/ijerph18115713
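As background for the models above, the classic scale-up estimator that underlies the NSUM can be sketched as follows. This is only a minimal illustrative sketch of the basic (non-Bayesian) estimator, not the RDM/BEM/TBM implementations used in the study; the population size, degrees, and counts below are made-up placeholders.

```python
# Minimal sketch of the classic network scale-up estimator underlying NSUM.
# All numbers are illustrative placeholders, not survey data.
import numpy as np

N_population = 4_900_000                              # hypothetical regional population size
known_hidden = np.array([0, 1, 0, 2, 1])              # y_i: hidden-population members known by respondent i
network_degree = np.array([150, 200, 90, 310, 180])   # d_i: respondent i's estimated personal network size

# Scale-up estimate: N_hidden ≈ N * (sum_i y_i) / (sum_i d_i)
N_hidden_hat = N_population * known_hidden.sum() / network_degree.sum()
prevalence_hat = N_hidden_hat / N_population
print(f"Estimated hidden-population size: {N_hidden_hat:,.0f} ({prevalence_hat:.2%})")
```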
(1) Background: Emerging data on patients recovered from COVID-19 are reported in the literature, but cardiac sequelae have not yet been clarified. To quickly detect any cardiac involvement at follow-up, the aims of the research were to identify: elements at admission predisposing to subclinical myocardial injury at follow-up; the relationship between subclinical myocardial injury and the multiparametric evaluation at follow-up; and the longitudinal evolution of subclinical myocardial injury. (2) Methods and Results: A total of 229 consecutive patients hospitalised for moderate to severe COVID-19 pneumonia were initially enrolled, of whom 225 were available for follow-up. All patients underwent a first follow-up visit, which included a clinical evaluation, laboratory testing, echocardiography, a six-minute walking test (6MWT), and a pulmonary function test. Of the 225 patients, 43 (19%) underwent a second follow-up visit. The median time to the first follow-up after discharge was 5 months, and the median time to the second follow-up after discharge was 12 months. Left ventricular global longitudinal strain (LVGLS) and right ventricular free wall strain (RVFWS) were reduced in 36% (n = 81) and 7.2% (n = 16) of the patients, respectively, at the first follow-up visit. LVGLS impairment was associated with male sex (p = 0.008, OR 2.32 (95% CI 1.24–4.42)), the presence of at least one cardiovascular risk factor (p < 0.001, OR 6.44 (95% CI 3.07–14.9)), and final oxygen saturation on the 6MWT (p = 0.002, OR 0.99 (95% CI 0.98–1)). Subclinical myocardial dysfunction had not significantly improved at the 12-month follow-up. (3) Conclusions: In patients recovered from COVID-19 pneumonia, left ventricular subclinical myocardial injury was related to cardiovascular risk factors and appeared stable during follow-up.
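The odds ratios and confidence intervals reported above are the kind of output produced by a multivariable logistic regression. A minimal sketch of how such estimates are typically obtained is shown below; the variable names and the synthetic data are hypothetical placeholders, not the study dataset, and the true modelling choices may have differed.

```python
# Hedged sketch: odds ratios with 95% CIs from a logistic regression
# of LVGLS impairment on male sex, cardiovascular risk factors, and 6MWT SpO2.
# Data are synthetic; the assumed effect sizes exist only to generate an outcome.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 225
male = rng.integers(0, 2, n)
cv_risk = rng.integers(0, 2, n)
spo2_6mwt = rng.normal(95, 2, n)
lin = -1.0 + 0.8 * male + 1.8 * cv_risk - 0.10 * (spo2_6mwt - 95)   # assumed effects
lvgls_impaired = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(pd.DataFrame({"male": male, "cv_risk_factor": cv_risk, "spo2_6mwt": spo2_6mwt}))
fit = sm.Logit(lvgls_impaired, X).fit(disp=False)

# Exponentiated coefficients give odds ratios with 95% confidence intervals
odds_ratios = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
odds_ratios.columns = ["OR", "2.5%", "97.5%"]
print(odds_ratios)
```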
The importance of body composition, in particular skeletal muscle mass, as a risk factor affecting the survival of cancer patients has recently gained increasing attention. The relationship between sarcopenia and oncological outcomes has become a topic of research, particularly in patients with gastrointestinal cancer. However, there are few studies addressing this issue in colorectal cancer, and even fewer specifically focused on rectal cancer, in particular in Western countries. The aim of this study was to evaluate the prognostic relevance of the preoperative skeletal muscle index (SMI) on long-term outcomes in patients undergoing laparoscopic curative resection for rectal cancer. SMI data and clinicopathological characteristics of rectal cancer patients treated over a 15-year period (June 2005-December 2020) were evaluated; patients with metastatic disease at surgery were excluded; overall survival, disease-free survival, and recurrence were evaluated. One hundred sixty-five patients were included in the study. Sarcopenia was identified in 30 (18%) patients. Multivariate analysis identified sarcopenia (HR = 3.28, CI = 1.33-8.11, P = 0.015), along with age (HR = 1.06, CI = 1.02-1.10, P = 0.002) and stage III disease (HR = 2.63, CI = 1.13-6.08, P < 0.03), as independent risk factors for overall survival. Long-term results of rectal cancer patients undergoing curative resection are affected by their preoperative skeletal muscle status. Larger studies including comprehensive data on muscle strength along with SMI are awaited to confirm these results in both Eastern and Western rectal cancer patient populations before strategies to reverse muscle depletion can be extensively applied.
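The hazard ratios above come from a multivariable survival model; a minimal sketch of a Cox proportional hazards fit of this kind is shown below. The data frame is synthetic and the column names are illustrative assumptions, not the study dataset.

```python
# Hedged sketch of a multivariable Cox model for overall survival with
# sarcopenia, age, and stage III as covariates (synthetic data only).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 165
df = pd.DataFrame({
    "time_months": rng.exponential(60, n).round(1),   # follow-up time
    "death":       rng.integers(0, 2, n),             # event indicator
    "sarcopenia":  rng.integers(0, 2, n),
    "age":         rng.normal(68, 10, n).round(),
    "stage_iii":   rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="death")
cph.print_summary()   # the exp(coef) column corresponds to the hazard ratios
```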
Several European countries suspended or changed their recommendations for the use of Vaxzevria (AstraZeneca) because of suspected adverse effects involving atypical blood clotting. This research aims to provide a reference point for the number of thrombotic events expected in the Italian population over 50 years of age who received Vaxzevria from 22 January to 12 April 2021. The venous thromboembolism (VT) and immune thrombocytopenia (ITP) event rates were estimated from a population-based cohort. The overall VT rate was 1.15 (95% CI 0.93-1.42) per 1000 person-years, and the ITP rate was 2.7 (95% CI 0.7-11) per 100,000 person-years. These figures translate into 83 expected VT events and 2 expected ITP events in the 15 days following the first administration of Vaxzevria. The number of thrombotic events reported by the Italian Medicines Agency does not appear to have increased beyond that expected in individuals over 50 years of age.
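The step from person-year rates to expected event counts over the 15-day risk window can be sketched as below. The number of Vaxzevria recipients over 50 years of age is not reported in this abstract, so the value used here is a hypothetical placeholder chosen only to illustrate the arithmetic.

```python
# Sketch of the rate-to-expected-count conversion over the 15-day window.
# n_vaccinated is a hypothetical placeholder: the abstract does not report it.
vt_rate_per_py  = 1.15 / 1_000     # venous thromboembolism, per person-year
itp_rate_per_py = 2.7 / 100_000    # immune thrombocytopenia, per person-year
window_years = 15 / 365.25         # 15-day risk window after the first dose

n_vaccinated = 1_750_000           # hypothetical number of recipients over 50 years of age

person_years = n_vaccinated * window_years
expected_vt  = vt_rate_per_py * person_years
expected_itp = itp_rate_per_py * person_years
print(f"Expected VT events:  {expected_vt:.0f}")
print(f"Expected ITP events: {expected_itp:.1f}")
```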
Background Unintentional injury is the leading cause of death in young children. Emergency department (ED) diagnoses are a useful source of information for injury epidemiological surveillance purposes. However, ED data collection systems often use free-text fields to report patient diagnoses. Machine learning techniques (MLTs) are powerful tools for automatic text classification, and an MLT system can improve injury surveillance by speeding up the manual coding of free-text ED diagnoses. Objective This research aims to develop a tool for the automatic classification of free-text ED diagnoses to identify injury cases. The automatic classification system also serves epidemiological purposes by quantifying the burden of pediatric injuries in Padua, a large province in the Veneto region in Northeast Italy. Methods The study includes 283,468 pediatric admissions between 2007 and 2018 to the Padova University Hospital ED, a large referral center in Northern Italy. Each record reports a diagnosis as free text; these records are the standard tool for reporting patient diagnoses. An expert pediatrician manually classified a randomly extracted sample of approximately 40,000 diagnoses, and this sample served as the gold standard to train the MLT classifiers. After preprocessing, a document-term matrix was created. The machine learning classifiers, including decision tree, random forest, gradient boosting method (GBM), and support vector machine (SVM), were tuned by 4-fold cross-validation. The injury diagnoses were classified into 3 hierarchical classification tasks, as follows: injury versus noninjury (task A), intentional versus unintentional injury (task B), and type of unintentional injury (task C), according to the World Health Organization classification of injuries. Results The SVM classifier achieved the highest accuracy (94.14%) in classifying injury versus noninjury cases (task A). The GBM produced the best results (92% accuracy) for the intentional versus unintentional injury classification task (task B). The highest accuracy for the unintentional injury subclassification (task C) was also achieved by the SVM classifier. The SVM, random forest, and GBM algorithms performed similarly against the gold standard across the different tasks. Conclusions This study shows that MLTs are promising techniques for improving epidemiological surveillance, allowing the automatic classification of pediatric ED free-text diagnoses. The MLTs showed suitable classification performance, especially for general injury and intentional injury classification. This automatic classification could facilitate the epidemiological surveillance of pediatric injuries while also reducing the effort required of health professionals to manually classify diagnoses for research purposes.
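A minimal sketch of this kind of free-text classification workflow (illustrated for task A, injury versus noninjury) is shown below, assuming scikit-learn. The example diagnosis texts, labels, and hyperparameter grid are invented for illustration and are not the study's actual data or code.

```python
# Hedged sketch: document-term matrix + SVM classifier tuned with 4-fold CV,
# mirroring the general setup described above (task A). Data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = [
    "head trauma after fall from bicycle", "fracture of the left forearm",
    "burn on right hand from hot liquid", "foreign body in the nose",
    "dog bite on the leg", "sprained ankle while playing football",
    "laceration of the scalp", "ingestion of detergent",
    "fever and cough for three days", "acute gastroenteritis with vomiting",
    "otitis media of the right ear", "asthma exacerbation",
    "abdominal pain and constipation", "urinary tract infection",
    "allergic reaction with skin rash", "sore throat and headache",
]
labels = [1] * 8 + [0] * 8   # 1 = injury, 0 = noninjury

pipeline = Pipeline([
    ("dtm", TfidfVectorizer(ngram_range=(1, 2))),  # document-term matrix
    ("svm", LinearSVC()),
])

# 4-fold cross-validation for hyperparameter tuning, as in the study design
grid = GridSearchCV(pipeline, {"svm__C": [0.1, 1, 10]}, cv=4, scoring="accuracy")
grid.fit(texts, labels)
print(grid.best_params_, grid.best_score_)
```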
Background: Lung transplantation is a specialized procedure used to treat chronic end-stage respiratory diseases. Due to the scarcity of lung donors, constructing fair and equitable lung transplant allocation methods is an issue that has been addressed with different strategies worldwide. This work aims to describe, through an online app, how Italy’s “national protocol for the management of surplus organs in all transplant programs” functions in allocating lungs for transplant. We developed two probability models to describe the allocation process among the various transplant centers, and an online app was then created. The first model uses conditional probabilities based on the protocol flowchart to compute the probability for each area and transplant center of receiving the n-th organ in the period considered. The second probability model generalizes the binomial distribution to correlated binary variables via Bahadur’s representation and computes the cumulative probability for each transplant center of receiving at least n organs. Our results show that the impact of the allocation of a surplus organ depends mostly on the region where the organ was donated. The differences shown by our model may be explained by the imbalance between the northern and southern regions in the number of organs donated.
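A minimal sketch of the core idea behind the second probability model, a truncated (second-order) Bahadur representation for correlated binary allocation indicators, is given below. The marginal probabilities and the common pairwise correlation are hypothetical, and the truncated expansion is only an approximation (for some parameter combinations it can yield slightly negative joint probabilities).

```python
# Hedged sketch: second-order Bahadur representation for correlated binary
# indicators, used to approximate P(center receives at least k organs).
# Parameters are hypothetical; this is not the study's implementation.
from itertools import combinations, product
import numpy as np

def bahadur_joint_prob(x, p, rho):
    """Second-order Bahadur probability of binary vector x with marginals p
    and a common pairwise correlation rho."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    base = np.prod(p**x * (1 - p)**(1 - x))                 # independence term
    z = (x - p) / np.sqrt(p * (1 - p))                      # standardized residuals
    correction = 1 + rho * sum(z[i] * z[j] for i, j in combinations(range(len(x)), 2))
    return base * correction

def prob_at_least(k, p, rho):
    """Cumulative probability of receiving at least k of the n offered organs."""
    n = len(p)
    return sum(bahadur_joint_prob(x, p, rho)
               for x in product((0, 1), repeat=n) if sum(x) >= k)

p = np.full(6, 0.2)   # hypothetical per-offer allocation probabilities for one center
print(prob_at_least(2, p, rho=0.05))
```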
Introduction An intact auditory system is essential for the development and maintenance of voice quality and speech prosody. Conversely, hearing loss affects the adjustment and appropriate use of the organs involved in speech and voice production. Spectro-acoustic voice parameters have been evaluated in cochlear implant (CI) users, and the authors of previous systematic reviews on the topic concluded that fundamental frequency (F0) preliminarily seemed to be the most reliable parameter for evaluating voice alterations in adult CI users. The main aim of this systematic review and meta-analysis was to clarify the vocal parameters and prosodic alterations of speech in pediatric CI users. Materials and methods The protocol of the systematic review was registered in the PROSPERO database (International Prospective Register of Systematic Reviews). We searched the English-language literature published between January 1, 2005 and April 1, 2022 in the PubMed and Scopus databases. A meta-analysis was conducted to compare the values of voice acoustic parameters in CI users and non-hearing-impaired controls. The analysis used the standardized mean difference (SMD) as the outcome measure, and a random-effects model was fitted to the data. Results A total of 1334 articles were initially evaluated by title and abstract screening. After applying the inclusion/exclusion criteria, 20 articles were considered suitable for this review. The age of the cases ranged between 25 and 132 months at examination. The most studied parameters were F0, jitter, shimmer, and the harmonics-to-noise ratio (HNR); other parameters were seldom reported. A total of 11 studies were included in the meta-analysis of F0, with the majority of estimates being positive (75%); the estimated average SMD based on the random-effects model was 0.3033 (95% CI: 0.0605 to 0.5462; P = 0.0144). For jitter (SMD 0.3062; 95% CI: -0.1862 to 0.7986; P = 0.2229) and shimmer (SMD 0.2540; 95% CI: -0.1404 to 0.6485; P = 0.2068) there was a trend toward positive values without reaching statistical significance. Discussion and conclusions This meta-analysis confirmed that higher F0 values have been observed in the pediatric population of CI users compared with age-matched normal-hearing volunteers, whereas the voice noise parameters did not differ significantly between cases and controls. The prosodic aspects of language need further investigation. In longitudinal studies, prolonged auditory experience with the CI brought voice parameters closer to the norm. Based on the available evidence, we stress the utility of including vocal acoustic analysis in the clinical evaluation and follow-up of CI patients to optimize the rehabilitation process of pediatric patients with hearing loss.
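As an illustration of pooling standardized mean differences under a random-effects model, a minimal DerSimonian-Laird sketch is shown below. The per-study SMDs and variances are invented, and the review may have used a different between-study variance estimator (e.g., REML), so this is only a sketch of the general approach.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of standardized
# mean differences. The study-level inputs below are invented examples.
import numpy as np

smd = np.array([0.45, 0.10, 0.62, -0.05, 0.38])   # per-study standardized mean differences
var = np.array([0.04, 0.06, 0.09, 0.05, 0.07])    # per-study sampling variances

w_fixed = 1 / var
pooled_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)
Q = np.sum(w_fixed * (smd - pooled_fixed) ** 2)   # Cochran's Q heterogeneity statistic
df = len(smd) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)                     # between-study variance estimate

w_re = 1 / (var + tau2)                           # random-effects weights
pooled_re = np.sum(w_re * smd) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = (pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re)
print(f"Random-effects SMD = {pooled_re:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```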
Lung cancer remains the leading cause of cancer death. In recent decades, significant innovations have been introduced in non-small cell lung cancer (NSCLC) treatment and management, improving patient outcomes. The discovery of immune checkpoint inhibitors and the detection of an increasing list of actionable genetic alterations are enabling a tailored approach. Herein, in a pragmatic retrospective study, we assessed the rate of biomarker testing within a large pulmonary pathology-based unit (PPU) network of the Veneto region (Northern Italy).
Background The efficacy of immunosuppressive therapy (IT) in biopsy-proven (BP) myocarditis was demonstrated by small trials in lymphocytic myocarditis with a heart failure (HF) presentation. Aim To evaluate the efficacy of IT in a single-center cohort of BP myocarditis patients, irrespective of histological type and clinical presentation. Methods Consecutive BP myocarditis patients were included and, in the absence of contraindications, were treated with IT according to current guidelines on top of standard care. IT patients were compared with a propensity score-weighted control group of non-IT patients, considering age, gender, ejection fraction (EF), NYHA class, and histological type (Fig. 1A). The primary outcome was a composite of death or heart transplant (HTx), and the secondary outcome was a composite of the variation in biventricular function, NYHA class, and myocarditis relapse. Results Eighty-seven IT patients were compared with 265 patients without IT. IT patients were older (44 vs 39 years, p=0.035) and more frequently had a systemic immune-mediated disease (34% vs 10%, p<0.001), HF (59% vs 46%, p=0.048), or a fulminant presentation (9% vs 2%, p=0.002). IT patients had a lower baseline left ventricular (LV) EF both on echocardiography (35±15% vs 43±15%, p=0.001) and on cardiac magnetic resonance (26±16% vs 45.5±15%, p=0.0013), a lower right ventricular (RV) fractional area change (FAC: 34±11% vs 41±11%, p<0.001), and a higher frequency of lymphocytic, eosinophilic, and giant cell myocarditis (71% vs 58%, 11% vs 1%, and 7% vs 2%, respectively; p<0.001) compared with non-IT patients. IT was indicated because of unremitting HF (62%) and recurrent chest pain (14%), and mainly involved prednisone (PDN) and azathioprine as first-line therapy (61%) and PDN and mycophenolate mofetil as second-line therapy (62%). IT lasted on average 24 months. No difference was observed in the primary outcome between the two groups (5-year survival 88% in IT vs 91% in non-IT patients, p=0.8, Fig. 1B), while relapse incidence was higher in IT patients (p<0.001, Fig. 2B). At long-term follow-up (54 months, IQR 21.4-96.45), LVEF and RV FAC were within the normal range in both groups, and the propensity score weighting analysis showed that IT patients presented LVEF values only 3% lower than non-IT patients (p=0.013, Fig. 2). In addition, IT patients had a similar probability of being in the same NYHA class as non-IT patients at the last follow-up (p=0.46). Conclusions Our study shows for the first time the efficacy of prolonged, tailored IT in BP lymphocytic and non-lymphocytic autoimmune myocarditis, with or without HF and with or without RV dysfunction. IT patients, despite having lower biventricular function and a higher risk profile at baseline as well as a higher frequency of relapses, showed at long-term follow-up normalized biventricular function and survival similar to that of their propensity score-weighted controls. Thus, autoimmune myocarditis responds to IT, even when the disease relapses.
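A minimal sketch of the propensity score weighting approach described above (inverse-probability-of-treatment weights from a logistic model of IT on age, sex, EF, NYHA class, and histological type) is shown below. The data frame, column names, and the weighted comparison at the end are synthetic placeholders, not the study cohort or its exact weighting scheme.

```python
# Hedged sketch: inverse-probability-of-treatment weighting (IPTW) to balance
# IT and non-IT groups on baseline covariates (synthetic data only).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 352
df = pd.DataFrame({
    "it":          rng.integers(0, 2, n),        # 1 = received immunosuppressive therapy
    "age":         rng.normal(42, 14, n),
    "male":        rng.integers(0, 2, n),
    "lvef":        rng.normal(40, 15, n),
    "nyha":        rng.integers(1, 5, n),
    "lymphocytic": rng.integers(0, 2, n),
})

covariates = ["age", "male", "lvef", "nyha", "lymphocytic"]
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["it"])
ps = model.predict_proba(df[covariates])[:, 1]   # estimated propensity scores

# Inverse-probability-of-treatment weights (ATE form)
df["iptw"] = np.where(df["it"] == 1, 1 / ps, 1 / (1 - ps))

# Example: weighted mean LVEF by treatment group after weighting
for group, g in df.groupby("it"):
    print(f"IT={group}: weighted mean LVEF = {np.average(g['lvef'], weights=g['iptw']):.1f}")
```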