This validation study assessed the performance of ChatGPT-4o against human-executed steps for title/abstract screening, full-text review, and data pooling tasks. The underlying data came from a published meta-analysis on emotional functioning after spinal cord stimulation, though specific sample size and setting details were not reported. The analysis focused on accuracy, sensitivity, specificity, positive predictive value, and negative predictive value for the screening and full-text review tasks.
ChatGPT-4o demonstrated modest to moderate accuracy in title and abstract screening (70.4%), with a sensitivity of 54.9% and a specificity of 80.1%. For full-text screening, accuracy was 68.4%, sensitivity 75.6%, and specificity 66.8%. ChatGPT-4o pooled data for five forest plots with 100% accuracy in pooled mean differences, 95% CIs, and heterogeneity estimates for most outcomes, and forest plot generation showed no significant discrepancies.
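The screening metrics above derive from a standard 2x2 confusion matrix comparing ChatGPT's include/exclude decisions against the human gold standard. A minimal sketch of the calculations (the counts below are hypothetical for illustration, since the study's underlying sample sizes were not reported here):

```python
def screening_metrics(tp, fp, tn, fn):
    """Compute standard diagnostic metrics from confusion-matrix counts.

    tp: studies correctly included   fp: studies wrongly included
    tn: studies correctly excluded   fn: studies wrongly excluded
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # proportion of truly relevant studies kept
    specificity = tn / (tn + fp)   # proportion of irrelevant studies dropped
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return accuracy, sensitivity, specificity, ppv, npv

# Hypothetical counts, for illustration only
acc, sens, spec, ppv, npv = screening_metrics(tp=45, fp=20, tn=80, fn=37)
```

Note that accuracy alone can mask a low sensitivity: a screener that excludes too aggressively still scores well on accuracy when most records are true negatives, which is why sensitivity matters most for screening tasks.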
Safety and tolerability data were not reported in this analysis. Key limitations include ChatGPT's only modest to moderate accuracy in screening and study selection, along with minor discrepancies in tau-squared values (range 0.01 to 0.05). The study phase was not reported, and funding or conflicts of interest were not disclosed.
For clinical practice, these findings underscore the potential of AI to augment systematic review methodologies while emphasizing the need for human oversight to ensure accuracy and integrity in research workflows. Clinicians should interpret these results as indicative of current AI capabilities rather than as a replacement for human expertise in systematic reviews.
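For context on the pooling task ChatGPT reproduced, a random-effects pooled mean difference can be sketched with the DerSimonian-Laird estimator, which yields the tau-squared (between-study variance) values where the minor discrepancies arose. The study effects and variances below are illustrative, not drawn from the source meta-analysis:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling of mean differences (DerSimonian-Laird tau^2)."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)     # 95% confidence interval
    return pooled, tau2, ci

# Illustrative mean differences and variances from three hypothetical studies
pooled, tau2, (lo, hi) = dersimonian_laird([0.1, 0.6, 0.3], [0.02, 0.05, 0.03])
```

Because tau-squared is truncated at zero and sensitive to rounding in the per-study variances, small differences in intermediate precision can plausibly produce the 0.01 to 0.05 discrepancies the study reports without altering the pooled estimates.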
ORIGINAL ABSTRACT
INTRODUCTION: Artificial intelligence (AI), particularly large-language models like Chat Generative Pre-Trained Transformer (ChatGPT), has demonstrated potential in streamlining research methodologies. Systematic reviews and meta-analyses, often considered the pinnacle of evidence-based medicine, are inherently time-intensive and demand meticulous planning, rigorous data extraction, thorough analysis, and careful synthesis. Despite promising applications of AI, its utility in conducting systematic reviews with meta-analysis remains unclear. This study evaluated ChatGPT's accuracy in conducting key tasks of a systematic review with meta-analysis.
METHODS: This validation study used data from a published meta-analysis on emotional functioning after spinal cord stimulation. ChatGPT-4o performed title/abstract screening, full-text study selection, and data pooling for this systematic review with meta-analysis. Comparisons were made against human-executed steps, which were considered the gold standard. Outcomes of interest included accuracy, sensitivity, specificity, positive predictive value, and negative predictive value for screening and full-text review tasks. We also assessed for discrepancies in pooled effect estimates and forest plot generation.
RESULTS: For title and abstract screening, ChatGPT achieved an accuracy of 70.4%, sensitivity of 54.9%, and specificity of 80.1%. In the full-text screening phase, accuracy was 68.4%, sensitivity 75.6%, and specificity 66.8%. ChatGPT successfully pooled data for five forest plots, achieving 100% accuracy in calculating pooled mean differences, 95% CIs, and heterogeneity estimates (I² score and tau-squared values) for most outcomes, with minor discrepancies in tau-squared values (range 0.01-0.05). Forest plots showed no significant discrepancies.
CONCLUSION: ChatGPT demonstrates modest to moderate accuracy in screening and study selection tasks, but performs well in data pooling and meta-analytic calculations. These findings underscore the potential of AI to augment systematic review methodologies, while also emphasizing the need for human oversight to ensure accuracy and integrity in research workflows.