
Scoping review identifies barriers and enabling strategies for medical AI deployment in LMIC settings

Key Takeaway
Consider the qualitative barriers to, and enabling strategies for, medical AI deployment in LMICs; the evidence is heterogeneous and descriptive.

A scoping review analyzed 44 studies examining medical artificial intelligence deployment, implementation barriers, and enabling strategies in low- and middle-income country healthcare settings. The review did not report specific population characteristics, sample sizes, comparators, primary outcomes, or follow-up durations. The evidence represents a qualitative synthesis of implementation experiences rather than quantitative effectiveness data.

Main findings identified common implementation barriers including unreliable infrastructure, data quality and availability issues, limited AI familiarity among healthcare workers, and lack of governance frameworks. Enabling strategies focused on strengthening infrastructure, establishing data standards, building local capacity, conducting fairness audits, and integrating governance structures. No quantitative effect sizes, absolute numbers, or statistical measures were reported for these barriers or strategies.

Safety and tolerability data were not reported in the review. Key limitations include inclusion of English-language studies only and substantial heterogeneity among included studies that precluded quantitative synthesis. The review provides a descriptive overview of implementation challenges and potential solutions but does not establish causal relationships or quantify outcomes. Practice relevance was not specifically reported, and clinicians should interpret these findings as preliminary qualitative insights rather than evidence-based implementation guidelines.

Study Details

Study type: Scoping review
Evidence: Level 1
Published: Apr 2026
Artificial intelligence (AI) is increasingly used to enhance diagnostic accuracy, clinical decision-making, and health system efficiency. However, its sustainable and equitable deployment in low-resource settings (LRS) remains limited. In many low- and middle-income countries (LMICs), digital health efforts are still held back by weak infrastructure, fragmented health data, limited local skills, and gaps in governance. Bringing together lessons from existing evidence and practical, real-world solutions is essential for supporting digital health approaches that are fair, workable, and sustainable over time.

Following the PRISMA-ScR framework, a scoping review was conducted of peer-reviewed literature published between January 2015 and January 2026. Searches were performed across PubMed, Scopus, Web of Science, IEEE Xplore, and Google Scholar. Eligible studies examined medical AI deployment, implementation barriers, or enabling strategies within LMIC healthcare settings. Data were extracted and analyzed thematically across four domains (digital infrastructure and connectivity, data quality and local capacity, ethics and governance, and policy and sustainability), guided by a human-centered implementation perspective and JBI methodological guidance.

A total of 44 studies met the inclusion criteria. The analysis showed that making AI work in low-resource settings is less about advanced technology and more about having the right systems in place. Common problems included unreliable electricity and internet access, messy or incomplete data, limited familiarity with AI among healthcare workers, and a lack of clear rules to guide its use. Reported enabling strategies focused on investments in resilient digital infrastructure, adoption of interoperable data standards (e.g., HL7/FHIR), continuous capacity-building programs, fairness and bias auditing mechanisms, and integration of AI governance within national digital health and e-health policies supported by sustainable financing models.

Sustainable and equitable deployment of medical AI in LMICs requires embedding human-centered values (transparency, accountability, privacy, and equity) throughout the AI lifecycle. Aligned with the WHO (2021) and UNESCO (2021) AI ethics frameworks, this review underscores that meaningful innovation in digital health depends on augmenting, rather than replacing, human judgment through context-aware and trustworthy AI systems. However, this scoping review is limited by the inclusion of English-language studies and by the heterogeneity of studies, which precluded quantitative synthesis.
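To make the "interoperable data standards (e.g., HL7/FHIR)" strategy concrete, here is a minimal sketch of what an HL7 FHIR-style Patient resource looks like as structured data. This is an illustrative subset only, not a validated FHIR implementation; the identifier and names are hypothetical, and a real deployment would use a FHIR server and full resource validation.

```python
import json

def make_patient_resource(patient_id: str, family: str, given: str) -> dict:
    """Build a minimal, illustrative FHIR-style (R4) Patient resource.

    Only a small subset of Patient fields is shown; real resources
    carry identifiers, metadata, and terminology bindings.
    """
    return {
        "resourceType": "Patient",  # FHIR resource type discriminator
        "id": patient_id,           # logical id within a server (hypothetical)
        "name": [{"family": family, "given": [given]}],
        "gender": "unknown",        # FHIR AdministrativeGender code
    }

# Hypothetical example record
resource = make_patient_resource("example-001", "Doe", "Jan")
print(json.dumps(resource, indent=2))
```

Because every conforming system reads and writes the same resource shapes over a standard REST API, records like this can move between facilities and AI tools without site-specific data mappings, which is the interoperability benefit the review highlights.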