Objective The international healthcare response to COVID-19 has been driven by epidemiological data on case numbers and case fatality rates; second-order effects have been less well studied. This study aimed to characterise the changes in emergency activity of a high-volume cardiac catheterisation centre and to cautiously model any excess indirect morbidity and mortality. Methods Retrospective cohort study of patients admitted with acute coronary syndrome fulfilling criteria for the heart attack centre (HAC) pathway at St Bartholomew's Hospital, UK. Electronic data were collected for the study period March 16th – May 16th 2020 inclusive and stored on a dedicated research server. Standard governance procedures were observed in line with the British Cardiovascular Intervention Society audit. Results There was a 28% fall in the number of primary percutaneous coronary interventions (PCIs) for ST elevation myocardial infarction (STEMI) during the study period (111 vs. 154) and 36% fewer activations of the HAC pathway (312 vs. 485), compared with the same period averaged across the three preceding years. In the context of 'missing STEMIs', the excess harm attributable to COVID-19 could result in an absolute increase of 1.3% in mortality, 1.9% in non-fatal MI and 4.5% in recurrent ischaemia. Conclusions The emergency activity of a high-volume PCI centre was significantly reduced for STEMI during the peak of the first wave of COVID-19. Our data can be used as an exemplar to help future modelling within cardiovascular workstreams, refining aggregate estimates of the impact of COVID-19 and informing targeted policy action.
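The headline reductions above are simple arithmetic on the reported counts. A minimal sketch (figures taken directly from the abstract; function and variable names are illustrative) that reproduces them:

```python
# Reproduces the headline percentage falls reported in the abstract.
# Counts are from the abstract; names are illustrative only.

def percent_fall(current: int, baseline: int) -> float:
    """Percentage reduction relative to the historical baseline."""
    return 100 * (baseline - current) / baseline

ppci_fall = percent_fall(111, 154)   # primary PCIs for STEMI
hac_fall = percent_fall(312, 485)    # HAC pathway activations
missing_stemis = 154 - 111           # STEMI presentations apparently 'missing'

print(f"Primary PCI fall: {ppci_fall:.0f}%")    # 28%
print(f"HAC activation fall: {hac_fall:.0f}%")  # 36%
print(f"Missing STEMIs: {missing_stemis}")      # 43
```

The reported absolute increases in mortality, non-fatal MI and recurrent ischaemia are then modelled against this shortfall of untreated presentations.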
Abstract Objective To systematically examine the design, reporting standards, risk of bias, and claims of studies comparing the performance of diagnostic deep learning algorithms for medical imaging with that of expert clinicians. Design Systematic review. Data sources Medline, Embase, Cochrane Central Register of Controlled Trials, and the World Health Organization trial registry from 2010 to June 2019. Eligibility criteria for selecting studies Randomised trial registrations and non-randomised studies comparing the performance of a deep learning algorithm in medical imaging with a contemporary group of one or more expert clinicians. Medical imaging has seen growing interest in deep learning research. The main distinguishing feature of convolutional neural networks (CNNs) in deep learning is that, when fed with raw data, they develop their own representations needed for pattern recognition: the algorithm learns for itself the features of an image that are important for classification, rather than being told by humans which features to use. The selected studies aimed to use medical imaging for predicting absolute risk of existing disease or classification into diagnostic groups (eg, disease or non-disease). For example, raw chest radiographs are tagged with a label such as pneumothorax or no pneumothorax, and the CNN learns which pixel patterns suggest pneumothorax. Review methods Adherence to reporting standards was assessed by using CONSORT (consolidated standards of reporting trials) for randomised studies and TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) for non-randomised studies. Risk of bias was assessed by using the Cochrane risk of bias tool for randomised studies and PROBAST (prediction model risk of bias assessment tool) for non-randomised studies.
Results Only 10 records were found for deep learning randomised clinical trials, two of which have been published (with low risk of bias, except for lack of blinding, and high adherence to reporting standards) and eight of which are ongoing. Of 81 non-randomised clinical trials identified, only nine were prospective and just six were tested in a real world clinical setting. The median number of experts in the comparator group was only four (interquartile range 2-9). Full access to all datasets and code was severely limited (unavailable in 95% and 93% of studies, respectively). The overall risk of bias was high in 58 of 81 studies and adherence to reporting standards was suboptimal (<50% adherence for 12 of 29 TRIPOD items). Of the 81 studies, 61 (75%) stated in their abstract that performance of artificial intelligence was at least comparable to (or better than) that of clinicians, yet only 31 (38%) stated that further prospective studies or trials were required. Conclusions Few prospective deep learning studies and randomised trials exist in medical imaging. Most non-randomised trials are not prospective, are at high risk of bias, and deviate from existing reporting standards. Data and code availability are lacking in most studies, and human comparator groups are often small. Future studies should diminish risk of bias, enhance real world clinical relevance, improve reporting and transparency, and appropriately temper conclusions. Study registration PROSPERO CRD42019123605.
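The abstract's central idea, that the model learns which pixel patterns matter from labelled examples rather than being told, can be illustrated at toy scale. The sketch below is pure Python with invented 3x3 "images", and a single logistic unit stands in for a full CNN; it is a demonstration of the learning principle, not of any study's method.

```python
# Toy illustration: a model discovers the discriminative pixel from
# labels alone. Data, labels, and hyperparameters are invented.
import math
import random

random.seed(0)

# Label-1 images have a bright centre pixel (index 4); label-0 do not.
def make_image(label):
    img = [random.random() * 0.3 for _ in range(9)]
    if label:
        img[4] = 0.9 + random.random() * 0.1  # bright centre
    return img

data = [(make_image(y), y) for y in [0, 1] * 50]

# Logistic "network": one weight per pixel, trained by stochastic
# gradient descent -- no human tells it that pixel 4 is the feature.
w = [0.0] * 9
b = 0.0
lr = 0.5
for _ in range(200):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))          # predicted probability
        g = p - y                           # gradient of log loss
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# The largest learned weight lands on the centre pixel: the model has
# found the informative "pattern" in the raw pixels by itself.
print(max(range(9), key=lambda i: w[i]))  # 4
```

A CNN does the same thing at scale, except its learned weights are convolutional filters shared across the image rather than one weight per pixel.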
Abstract Aims To conduct a contemporary cost-effectiveness analysis examining the use of implantable cardioverter defibrillators (ICDs) for primary prevention in patients with hypertrophic cardiomyopathy (HCM). Methods A discrete-time Markov model was used to determine the cost-effectiveness of different ICD decision-making rules for implantation. Several scenarios were investigated, including the reference scenario of implantation rates according to observed real-world practice. A 12-year time horizon with an annual cycle length was used. Transition probabilities used in the model were obtained using Bayesian analysis. The study has been reported according to the Consolidated Health Economic Evaluation Reporting Standards checklist. Results Using a 5-year sudden cardiac death (SCD) risk threshold of 6% was cheaper than current practice and yielded marginally more total quality-adjusted life years (QALYs). This was the most cost-effective of the options considered, with an incremental cost-effectiveness ratio of £834 per QALY. Sensitivity analyses highlighted that this result is largely driven by the health-related quality of life (HRQL) attributed to ICD patients and by the time horizon. Conclusion We present a timely new perspective on HCM-ICD cost-effectiveness, using methods reflecting real-world practice. While we have shown that a 6% 5-year SCD risk cut-off provides the best cohort stratification to aid ICD decision-making, this will also be influenced by the particular values of costs and HRQL for subgroups or at a local level. Explicitly demonstrating the main factors that drive the conclusions of such an analysis will help to inform shared decision-making in this complex area for all stakeholders concerned.
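The method described above pairs a discrete-time Markov cohort model (annual cycles, 12-year horizon) with an incremental cost-effectiveness ratio (ICER) comparison between implantation strategies. A minimal sketch of that structure follows; every probability, cost, utility and the discount rate below is an invented placeholder, not one of the study's inputs, and the two-state model is far simpler than the published one.

```python
# Minimal discrete-time Markov cohort sketch of an ICER calculation.
# All numerical inputs are illustrative placeholders, NOT study values.

HORIZON = 12       # years, as in the abstract
DISCOUNT = 0.035   # assumed annual discount rate (placeholder)

def run_cohort(p_scd, annual_cost, utility, upfront_cost=0.0):
    """Track a cohort through alive/SCD-death states, one annual cycle
    at a time, accumulating discounted costs and QALYs."""
    alive = 1.0
    cost, qalys = upfront_cost, 0.0
    for year in range(HORIZON):
        disc = 1 / (1 + DISCOUNT) ** year
        cost += alive * annual_cost * disc
        qalys += alive * utility * disc
        alive *= (1 - p_scd)        # transition: alive -> SCD death
    return cost, qalys

# Strategy A: no ICD (higher annual SCD risk).
# Strategy B: implant at a 6% / 5-year risk threshold (lower annual
# risk, device cost up front, slightly lower utility with a device).
c_a, q_a = run_cohort(p_scd=0.012, annual_cost=500, utility=0.80)
c_b, q_b = run_cohort(p_scd=0.004, annual_cost=900, utility=0.78,
                      upfront_cost=12000)

icer = (c_b - c_a) / (q_b - q_a)   # incremental cost per QALY gained
print(f"ICER: £{icer:,.0f} per QALY")
```

In the study itself the transition probabilities came from Bayesian analysis rather than fixed constants, and the sensitivity of the conclusion to the ICD utility value and the time horizon follows directly from this structure: both enter every cycle of the accumulation.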
Background We sought to establish to what extent decision certainty has been measured in real time and whether high or low levels of certainty correlate with clinical outcomes. Methods Our pre-specified study protocol is published on PROSPERO, CRD42019128112. We identified prospective studies from Medline, Embase and PsycINFO up to February 2019 that measured real time self-rating of the certainty of a medical decision by a clinician. Findings Nine studies were included, all at high risk of bias. Only one study assessed long-term clinical outcomes: patients rated with high diagnostic uncertainty for heart failure had a longer length of stay, higher mortality and higher readmission rates at 1 year than those rated with diagnostic certainty. One other study demonstrated the danger of extreme diagnostic confidence: 7% (24/341) of diagnoses labelled as having either 0% or 100% likelihood of heart failure were made in error. Conclusions The literature on real time self-rated certainty of clinician decisions is sparse and relates only to diagnostic decisions. Further prospective research, with a view to generating hypotheses for testable interventions that better calibrate clinician certainty with the accuracy of decision-making, could be valuable in reducing diagnostic error and improving outcomes.
Postgraduate medical education will need to adapt in light of the healthcare and educational reset that the COVID-19 response has necessitated. The ongoing uncertainty of the pandemic, and the proliferation of data from many sources used by many actors with different frames, has made unbiased decision-making central to pulling together a unified response.
As two aspiring academic clinicians in the UK with protected time to develop and explore ideas alongside our clinical training,1 we became curious about clinical decision-making. We initially examined decision-making through the lens of our research experience of evaluating the rise of artificial intelligence (AI) algorithms in healthcare.2 Our thesis was that their increasing use would profoundly affect how clinicians make decisions. As we began to unpack the existing literature on clinical decision-making, we focused on the current educational provision for clinicians in understanding what makes for good decisions, and the biases that may warp them.
We were surprised to uncover such a paucity of assessment and formal training in these areas—for instance, the terms ‘clinical decision-making’ and ‘bias’ appear only twice each in the UK’s general internal medicine curriculum.3 As a result, we designed an educational intervention in the form of a series of Grand Rounds with a TED-style presentation.4 Our aim was to increase the awareness of biases that can affect decision-making among our peers, consultant colleagues and other allied health professionals.
Using our experiences of delivering the presentation ‘Biases in clinical reasoning: I’ll think to that!’, we reflect on the wider implications for clinicians, not only in terms of the need for future educational interventions but also in terms of the format that they will need …
Background Patient and public involvement (PPI) has a growing impact on the design of clinical care and research studies. Formal PPI events remain underreported, including views related to using digital tools. This study aimed to assess the feasibility of hosting a hybrid PPI event to gather views on the use of digital tools in clinical care and research. Methods A PPI focus day was held following local procedures and published recommendations related to advertisement, communication and delivery. Two exemplar projects were used as the basis for discussions, and qualitative and quantitative data were collected. Results 32 individuals expressed interest in the PPI day and nine were selected to attend: three participated in person and six via an online video-calling platform. Selected written and verbal feedback was collected on two digitally themed projects and on the event itself. Those attending in person rated both the overall quality and the interactivity of the event 4/5; those attending remotely rated them 4.5/5 and 4.8/5 respectively. Conclusions A hybrid PPI event is feasible and offers a flexible format to capture the views of patients. Overall enthusiasm for digital tools among patients in routine care and clinical research is high, though further work and standardised, systematic reporting of PPI events are required.