Objectives
Approximately 80% of people with epilepsy live in low- and middle-income countries (LMICs), where limited resources and stigma hinder accurate diagnosis and treatment. Clinical machine learning models have demonstrated substantial promise in supporting the diagnostic process in LMICs by aiding preliminary screening and detection of possible epilepsy cases without relying on specialised or trained personnel. How well these models generalise to naïve regions is, however, underexplored. Here, we use a novel approach to assess the suitability and applicability of such clinical tools for screening and diagnosing active convulsive epilepsy in settings beyond their original training contexts.

Methods
We sourced data from the Study of Epidemiology of Epilepsy in Demographic Sites dataset, which includes demographic information and clinical variables related to diagnosing epilepsy across five sub-Saharan African sites. For each site, we developed a region-specific (single-site) predictive model for epilepsy and assessed its performance at the other sites. We then iteratively added sites to a multi-site model and evaluated its performance on the omitted regions. Model performances and parameters were compared across every permutation of sites. We used a leave-one-site-out cross-validation analysis to assess the impact of incorporating individual site data in the model.

Results
Single-site clinical models performed well within their own regions but generally worse when evaluated in other regions (p<0.05). Model weights and optimal thresholds varied markedly across sites. When models were trained on data from an increasing number of sites, mean internal performance decreased while external performance improved.

Conclusions
Clinical models for epilepsy diagnosis in LMICs demonstrate characteristic traits of machine learning models, such as limited generalisability and a trade-off between internal and external performance. The relationship between predictors and model outcomes also varies across sites, suggesting that specific aspects of a model need updating with local data before broader implementation. These variations are likely specific to the cultural context of diagnosis. We recommend developing models adapted to the cultures and contexts of their intended deployment and caution against deploying region- and culture-naïve models without thorough prior evaluation.
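To make the single-site analysis described above concrete, the sketch below trains one model per site and contrasts its internal (held-out, same-site) performance with its external performance at every other site. It is a minimal illustration under stated assumptions: the file name, the `site` and `epilepsy` columns, the use of logistic regression and AUC as the metric are not details taken from the study.

```python
# Minimal sketch of a single-site analysis (assumed file, columns, model and metric).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical tabular export: one row per participant, a binary 'epilepsy' label,
# a 'site' column naming the sub-Saharan African site, and clinical predictors.
df = pd.read_csv("seeds_clinical.csv")  # assumed file name
features = [c for c in df.columns if c not in ("site", "epilepsy")]

def single_site_models(df):
    """Train a model per site and report internal vs external AUC."""
    results = {}
    for site in df["site"].unique():
        local = df[df["site"] == site]
        train, test = train_test_split(
            local, test_size=0.3, stratify=local["epilepsy"], random_state=0
        )
        model = LogisticRegression(max_iter=1000).fit(train[features], train["epilepsy"])

        # Internal performance: held-out participants from the same site.
        internal_auc = roc_auc_score(
            test["epilepsy"], model.predict_proba(test[features])[:, 1]
        )
        # External performance: every other site, unseen during training.
        external_auc = {
            other: roc_auc_score(
                df.loc[df["site"] == other, "epilepsy"],
                model.predict_proba(df.loc[df["site"] == other, features])[:, 1],
            )
            for other in df["site"].unique()
            if other != site
        }
        results[site] = {"internal": internal_auc, "external": external_auc}
    return results

print(single_site_models(df))
```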
Abstract

Introduction
Artificial Intelligence (AI) is redefining healthcare, with Large Language Models (LLMs) like ChatGPT offering novel and powerful capabilities in processing and generating human-like information. These advancements offer potential improvements in Women’s Health, particularly Obstetrics and Gynaecology (O&G), where diagnostic and treatment gaps have long existed. Despite its generalist nature, ChatGPT is increasingly being tested in healthcare, necessitating a critical analysis of its utility, limitations and safety. This study examines ChatGPT’s performance in interpreting and responding to international gold-standard benchmark assessments in O&G: the Royal College of Obstetricians and Gynaecologists’ (RCOG) MRCOG Part One and Two examinations. We evaluate ChatGPT’s domain- and knowledge area-specific accuracy, the influence of linguistic complexity on performance, and its self-assessed confidence and uncertainty, which are essential for safe clinical decision-making.

Methods
A dataset of MRCOG examination questions from sources beyond the reach of LLMs was developed to mitigate the risk of ChatGPT’s prior exposure. A dual-review process validated the technical and clinical accuracy of the questions, omitting those dependent on previous content, duplicates, or those requiring image interpretation. Single Best Answer (SBA) questions and Extended Matching Questions (EMQs) were converted to JSON format to facilitate ChatGPT’s interpretation, incorporating question types and background information. Interaction with ChatGPT was conducted via OpenAI’s API and structured to ensure consistent, contextually informed responses. Each response from ChatGPT was recorded and compared against the known correct answer. Linguistic complexity was evaluated using unique token counts and Type-Token Ratios (TTRs; measures of vocabulary breadth and diversity) to explore their influence on performance. ChatGPT was instructed to assign confidence scores to its answers (0–100%), reflecting its self-perceived accuracy. Responses were categorised by correctness and statistically analysed through entropy calculation, assessing ChatGPT’s capacity to self-evaluate its certainty and knowledge boundaries.

Findings
Of 1,824 MRCOG Part One and Two questions, ChatGPT’s accuracy on Part One was 72.2% (95% CI 69.2–75.3). For Part Two, it achieved 50.4% accuracy (95% CI 47.2–53.5), with 534 of 989 questions correct, performing better on SBAs (54.0%, 95% CI 50.0–58.0) than on EMQs (45.0%, 95% CI 40.1–49.9). In domain-specific performance, the highest accuracy was in Biochemistry (79.8%, 95% CI 71.4–88.1) and the lowest in Biophysics (51.4%, 95% CI 35.2–67.5). The best-performing subject in Part Two was Urogynaecology (63.0%, 95% CI 50.1–75.8) and the worst was Management of Labour (35.6%, 95% CI 21.6–49.5). Linguistic complexity analysis showed a marginal increase in unique token count for correct answers in Part One (median 122, IQR 114–134) compared with incorrect answers (median 120, IQR 112–131, p=0.05). TTR analysis revealed higher medians for correct answers with negligible effect sizes (Part One: 0.66, IQR 0.63–0.68; Part Two: 0.62, IQR 0.57–0.67) and p-values < 0.001. Regarding self-assessed confidence, the median confidence for correct answers was 70.0% (IQR 60–90), the same as for incorrect answers the model judged to be correct (p < 0.001). For correct answers the model deemed incorrect, the median confidence was 10.0% (IQR 0–10), and for incorrect answers it accurately identified as incorrect, it was 5.0% (IQR 0–10, p < 0.001). Entropy values were identical for correct and incorrect responses (median 1.46, IQR 0.44–1.77), indicating no discernible distinction in ChatGPT’s prediction certainty.

Conclusions
ChatGPT demonstrated commendable accuracy on the basic medical queries of MRCOG Part One, yet its performance was markedly reduced on the clinically demanding Part Two examination. The model’s high self-confidence across both correct and incorrect responses necessitates scrutiny of its application in clinical decision-making. These findings suggest that while ChatGPT has potential, its current form requires significant refinement before it can enhance diagnostic efficacy and clinical workflow in women’s health.
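The abstract does not give the exact prompts or schema used; the sketch below shows one plausible way to pose a JSON-encoded SBA question to the OpenAI chat API and request an answer letter plus a 0–100% confidence score. The model name, system prompt wording and JSON field names are illustrative assumptions, not the study’s implementation.

```python
# Illustrative sketch only: prompt wording, schema and model name are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical JSON-encoded SBA question with background information included.
question = {
    "question_type": "SBA",
    "background": "A 32-year-old woman presents at 28 weeks' gestation with ...",
    "stem": "What is the most appropriate next step in management?",
    "options": {"A": "...", "B": "...", "C": "...", "D": "...", "E": "..."},
}

system_prompt = (
    "You are sitting the MRCOG examination. Answer the JSON-encoded question. "
    "Reply with JSON containing 'answer' (the option letter) and 'confidence' "
    "(0-100, your self-assessed probability of being correct)."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model choice
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": json.dumps(question)},
    ],
    temperature=0,  # favour consistent, repeatable responses
)

# A production pipeline would validate this more defensively; the model may
# occasionally wrap the JSON in extra prose.
reply = json.loads(response.choices[0].message.content)
print(reply["answer"], reply["confidence"])
```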
Abstract
Artificial Intelligence (AI) is transforming healthcare, with Large Language Models (LLMs) like ChatGPT offering novel capabilities. This study evaluates ChatGPT’s performance in interpreting and responding to the UK Royal College of Obstetricians and Gynaecologists’ MRCOG Part One and Two examinations – international benchmarks for assessing knowledge and clinical reasoning in Obstetrics and Gynaecology. We analysed ChatGPT’s domain-specific accuracy, the impact of linguistic complexity, and its self-assessed confidence. A dataset of 1,824 MRCOG questions was curated to ensure minimal prior exposure to ChatGPT. ChatGPT’s responses were compared to known correct answers, and linguistic complexity was assessed using token counts and Type-Token Ratios. Confidence scores were assigned by ChatGPT and analysed for self-assessment accuracy. ChatGPT achieved 72.2% accuracy on Part One and 50.4% on Part Two, performing better on Single Best Answer (SBA) questions than on Extended Matching Questions (EMQs). The findings highlight both the potential and the significant limitations of ChatGPT in clinical decision-making in women’s health.
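For readers unfamiliar with the metrics mentioned above, the short sketch below shows how a unique token count, a Type-Token Ratio and a Shannon entropy over answer-option probabilities could be computed. The whitespace tokenisation and the source of the option probabilities are assumptions rather than the study’s exact method.

```python
# Sketch of the linguistic-complexity and uncertainty metrics (assumed tokenisation).
import math

def unique_tokens_and_ttr(text: str) -> tuple[int, float]:
    """Whitespace tokenisation; TTR = unique tokens / total tokens."""
    tokens = text.lower().split()
    unique = len(set(tokens))
    return unique, unique / len(tokens)

def shannon_entropy(option_probs: dict[str, float]) -> float:
    """Entropy (in bits) over probabilities assigned to the answer options."""
    return -sum(p * math.log2(p) for p in option_probs.values() if p > 0)

question_text = "A 32-year-old woman presents at 28 weeks' gestation with ..."
print(unique_tokens_and_ttr(question_text))
print(shannon_entropy({"A": 0.6, "B": 0.2, "C": 0.1, "D": 0.05, "E": 0.05}))
```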
The rapid advancement of Artificial Intelligence (AI) in healthcare presents a unique opportunity for improvements in obstetric care, particularly through the analysis of cardiotocography (CTG) for fetal monitoring. However, the effectiveness of such technologies depends on the availability of large, high-quality datasets suitable for machine learning. This paper introduces the Oxford Maternity (OxMat) dataset, the world's largest curated dataset of CTGs, featuring raw time-series CTG data and extensive clinical data for both mothers and babies, making it well suited to machine learning. The OxMat dataset addresses the critical gap in women's health data by providing over 177,211 unique CTG recordings from 51,036 pregnancies, carefully curated and reviewed since 1991. The dataset also comprises over 200 antepartum, intrapartum and postpartum clinical variables, ensuring near-complete data for crucial outcomes such as stillbirth and acidaemia. While the dataset also covers the intrapartum stage, around 94% of the constituent CTGs are antepartum. This allows a unique focus on the underserved antepartum period, in which early detection of at-risk fetuses can significantly improve health outcomes. Our comprehensive review of existing datasets reveals their principal limitations: insufficient volume and a lack of detailed clinical and antepartum data. The OxMat dataset lays a foundation for future AI-driven prenatal care, offering a robust resource for developing and testing algorithms aimed at improving maternal and fetal health outcomes.
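The OxMat release format is not described in this abstract. Purely as a toy illustration of how a dataset with this structure might be queried for the antepartum cohort it emphasises, the sketch below assumes a hypothetical per-recording metadata table with invented column names.

```python
# Toy illustration only: file name and column names are invented, not OxMat's actual schema.
import pandas as pd

meta = pd.read_csv("oxmat_metadata.csv")  # hypothetical per-recording metadata export

# Focus on antepartum recordings (roughly 94% of the dataset, per the abstract).
antepartum = meta[meta["stage"] == "antepartum"]

# Example cohort: antepartum CTGs with a recorded acidaemia-related outcome
# and a plausible gestational-age window.
cohort = antepartum.dropna(subset=["cord_ph"])
cohort = cohort[cohort["gestational_age_weeks"].between(24, 42)]

print(f"{len(cohort)} recordings from {cohort['pregnancy_id'].nunique()} pregnancies")
```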
Abstract

Objectives
Approximately 80% of people with epilepsy live in low- and middle-income countries (LMICs), where limited resources and stigma hinder accurate diagnosis and treatment. Clinical machine learning models have demonstrated substantial promise in supporting the diagnostic process in LMICs without relying on specialised or trained personnel. How well these models generalise to naïve regions is, however, underexplored. Here, we use a novel approach to assess the suitability and applicability of such clinical tools for diagnosing active convulsive epilepsy in settings beyond their original training contexts.

Methods
We sourced data from the Study of Epidemiology of Epilepsy in Demographic Sites dataset, which includes demographic information and clinical variables related to diagnosing epilepsy across five sub-Saharan African sites. For each site, we developed a region-specific (single-site) predictive model for epilepsy and evaluated its performance on the other sites. We then iteratively added sites to a multi-site model and evaluated its performance on the omitted regions. Model performances and parameters were compared across every permutation of sites. We used a leave-one-site-out cross-validation analysis to assess the impact of incorporating individual site data in the model.

Results
Single-site clinical models performed well within their own regions but generally worse when evaluated on other regions (p<0.05). Model weights and optimal thresholds varied markedly across sites. When models were trained on data from an increasing number of sites, mean internal performance decreased while external performance improved.

Conclusions
Clinical models for epilepsy diagnosis in LMICs demonstrate characteristic traits of machine learning models, such as limited generalisability and a trade-off between internal and external performance. The relationship between predictors and model outcomes also varies across sites, suggesting that specific aspects of the model need updating with local data before broader implementation. These variations are likely to be specific to the cultural context of diagnosis. We recommend developing models adapted to the cultures and contexts of their intended deployment and caution against deploying region- and culture-naïve models without thorough prior evaluation.

Key points
Machine learning-driven clinical tools are becoming more prevalent in low-resource settings; however, their general performance across regions is not fully established. Given their potential impact, it is crucial that models are robust, safe and appropriately deployed.
Models perform poorly when making predictions for regions that were not included in their training data, as opposed to sites that were.
Models trained on different regions can have different optimal parameters and thresholds for performance in practice.
There is a trade-off between internal and external performance: a model with better external performance usually has worse internal performance but is generally more robust overall.

SEEDS collaborators
Agincourt HDSS, South Africa: Ryan Wagner, Rhian Twine, Myles Connor, F. Xavier Gómez-Olivé, Mark Collinson (and INDEPTH Network, Accra, Ghana), Kathleen Kahn (and INDEPTH Network, Accra, Ghana), Stephen Tollman (and INDEPTH Network, Accra, Ghana)
Ifakara HDSS, Tanzania: Honratio Masanja (and INDEPTH Network, Accra, Ghana), Alexander Mathew
Iganga/Mayuge HDSS, Uganda: Angelina Kakooza, George Pariyo, Stefan Peterson (and Uppsala University, Dept of Women’s and Children’s Health, IMCH; Karolinska Institutet, Div. of Global Health, IHCAR; Makerere University School of Public Health), Donald Ndyomughenyi
Kilifi HDSS, Kenya: Anthony K Ngugi, Rachael Odhiambo, Eddie Chengo, Martin Chabi, Evasius Bauni, Gathoni Kamuyu, Victor Mung’ala Odera, James O Mageto, Isaac Egesa, Clarah Khalayi, Charles R Newton
Kintampo HDSS, Ghana: Ken Ae-Ngibise, Bright Akpalu, Albert Akpalu, Francic Agbokey, Patrick Adjei, Seth Owusu-Agyei, Victor Duko (and INDEPTH Network, Accra, Ghana)
London School of Hygiene and Tropical Medicine: Christian Bottomley, Immo Kleinschmidt
Institute of Psychiatry, King’s College London: Victor CK Doku
UCL Queen Square Institute of Neurology, London: Josemir W Sander
Swiss Tropical Institute: Peter Odermatt
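Returning to the multi-site analysis described in the abstract above, the sketch below grows the training pool one set of sites at a time and contrasts internal performance (cross-validated within the pooled training sites) with external performance (on the omitted sites). As with the earlier single-site sketch, the file name, column names, logistic regression, a binary 0/1 label and AUC as the metric are assumptions for illustration, not details from the study.

```python
# Sketch of the iterative multi-site analysis (assumed file, columns, model and metric).
from itertools import combinations
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

df = pd.read_csv("seeds_clinical.csv")  # assumed file name; binary 'epilepsy' label
features = [c for c in df.columns if c not in ("site", "epilepsy")]
sites = sorted(df["site"].unique())

records = []
for k in range(1, len(sites)):
    # Every choice of k training sites; the fitted model does not depend on their order.
    for train_sites in combinations(sites, k):
        train = df[df["site"].isin(train_sites)]
        test = df[~df["site"].isin(train_sites)]

        model = LogisticRegression(max_iter=1000)

        # Internal performance: cross-validated predictions within the pooled training sites.
        internal_prob = cross_val_predict(
            model, train[features], train["epilepsy"], cv=5, method="predict_proba"
        )[:, 1]
        internal_auc = roc_auc_score(train["epilepsy"], internal_prob)

        # External performance: fit on the pooled sites, evaluate on the omitted ones.
        model.fit(train[features], train["epilepsy"])
        external_auc = roc_auc_score(
            test["epilepsy"], model.predict_proba(test[features])[:, 1]
        )

        records.append(
            {"n_train_sites": k, "internal_auc": internal_auc, "external_auc": external_auc}
        )

# Mean internal vs external AUC as the number of training sites grows.
summary = pd.DataFrame(records).groupby("n_train_sites").mean()
print(summary)
```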