This scoping review examines how AI and algorithmic literacy are conceptualized and measured among health workers, with a focus on medical students, nursing students, and health professions education contexts. The review synthesizes 12 studies; evidence was concentrated in health professions education (10 of the 12 included studies), and no study examined public health practice.
The authors report that explicit, theory-grounded definitions of AI literacy were uncommon across the included literature. Measurement frequently relied on self-report instruments, with the Artificial Intelligence Literacy Scale (AILS) used in 3 studies, the Meta Artificial Intelligence Literacy Scale (MAILS) in 2 studies, and the Scale for the Assessment of Non-Experts’ AI Literacy (SNAIL) alongside self-developed tools. Only one of the 12 studies explicitly defined and measured algorithmic literacy as a distinct construct.
Links to digital health literacy (DHL) were implied rather than explicitly defined. Competencies assessed aligned mainly with functional and critical dimensions, particularly awareness, use, evaluation, and ethics, while communicative literacies were infrequently assessed. The review highlights that AI and algorithmic literacy among health workers remains underdeveloped, weakly integrated with digital health literacy, and inconsistently measured using non-health-specific self-report tools.
These findings point to the need for clearer conceptual alignment, health-specific measurement, and systems-based approaches to workforce readiness as AI-enabled tools expand across healthcare and public health. The authors caution that current research largely overlooks communicative competencies essential to clinical and public health practice, indicating a gap in the current evidence base.
Introduction
Artificial intelligence (AI) and algorithmic systems influence how health workers access, interpret, and act on clinical and public health information, positioning them as intermediaries between algorithmically mediated outputs and patients, communities, and decision makers. This study examines how AI and algorithmic literacy are conceptualized and measured among health workers through a digital health literacy (DHL) lens.

Methods
Using Arksey and O’Malley’s scoping review framework, we searched Ovid MEDLINE, Ovid Embase, Scopus, IEEE Xplore, ACM Digital Library, Europe PMC, and arXiv for English-language sources published between January 2020 and May 2025. Two reviewers screened records and extracted data using a theory-informed charting framework grounded in Nutbeam’s model (functional: basic understanding and use; critical: evaluation and ethics; communicative: interacting with AI systems and explaining AI-mediated information). We synthesized findings using descriptive statistics and narrative synthesis.

Results
Twelve studies published between 2021 and 2025 met inclusion criteria. Evidence was concentrated in health professions education (10/12), primarily among medical (6/12) and nursing students (2/12), with no studies exploring public health practice. Explicit, theory-grounded definitions of AI literacy were uncommon, and links to DHL were only implied. AI literacy was frequently operationalized through self-report instruments, commonly the Artificial Intelligence Literacy Scale (AILS; 3 studies), the Meta Artificial Intelligence Literacy Scale (MAILS; 2 studies), and the Scale for the Assessment of Non-Experts’ AI Literacy (SNAIL), alongside self-developed tools. Only one study explicitly defined and measured algorithmic literacy as a distinct construct; in other studies, algorithmic considerations appeared indirectly, through recognizing the presence of AI in systems or evaluating AI-generated content.
Across studies, competencies aligned mainly with functional and critical dimensions of DHL, particularly awareness, use, evaluation, and ethics, while communicative literacies were infrequently assessed.

Discussion
AI and algorithmic literacy among health workers is underdeveloped, weakly integrated with digital health literacy, and inconsistently measured. Research prioritizes AI literacy using non–health-specific self-report tools and largely overlooks communicative competencies essential to clinical and public health practice. These findings point to the need for clearer conceptual alignment, health-specific measurement, and systems-based approaches to workforce readiness as AI-enabled tools expand across healthcare and public health.