ORIGINAL RESEARCH ARTICLE
Natia Afriana Suriᵃ*, Festiyedᵇ, Minda Azharᵇ, Yerimadesiᵇ, Yuni Ahdaᵇ and Heffi Alberidaᵇ
ᵃDoctoral Program of Science Education, Universitas Negeri Padang, Padang, Indonesia; ᵇDepartment of Science Education Doctoral Program, Universitas Negeri Padang, Padang, Indonesia
Received: 26 December 2024; Revised: 9 June 2025; Accepted: 1 July 2025; Published: 4 August 2025
Digital literacy is a critical competency in education across all levels, from primary to higher education. It includes skills such as technical proficiency, information evaluation, online collaboration, creativity and ethical technology use. This study conducts a Systematic Literature Review (SLR), following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, to examine types of instruments used to assess students’ digital literacy, the competencies targeted and the methodological challenges in their development. A total of 23 peer-reviewed articles published between 2014 and 2024 were selected from Scopus, PubMed, Crossref and ERIC. This review shows that assessment instruments include Likert scale-based questionnaires, framework-aligned tools (DigComp and DQ Framework) and digital performance-based methods. These instruments are applied across diverse educational settings: primary, secondary, tertiary and adult education with varying emphases based on age and learning context. Whilst core competencies are addressed, several limitations persist, such as reliance on self-reporting, limited cross-cultural validation and lack of authentic performance assessment. This study highlights the need for more comprehensive, validated and context-sensitive instruments that integrate digital safety, ethics and practical digital skills. The findings offer insights for researchers, educators and policymakers to improve digital literacy measurement across education sectors.
Keywords: digital literacy; measurement instrument; digital competency
*Corresponding author. Email: natiaafrianasuri.s3unp@gmail.com
Research in Learning Technology 2025. © 2025 N. A. Suri et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.
Citation: Research in Learning Technology 2025, 33: 3413 - http://dx.doi.org/10.25304/rlt.v33.3413
Digital literacy is increasingly recognised as a vital skill for students in the modern era, as it supports access to diverse literacy practices, critical thinking and informed technology use (Momdjian et al., 2024; Newland & Handley, 2016; Yuan et al., 2019). It encompasses abilities such as information search, digital communication, content creation and ethical awareness (Herro, 2014; Shin, 2015), all of which are essential in today’s technology-integrated classrooms (Churchill, 2020). In response, researchers have developed various assessment tools including questionnaires, skill-based tests and self-assessments to measure students’ digital literacy (Choi et al., 2023; Mieg et al., 2023; Son & Ha, 2024). However, these instruments differ widely in focus, ranging from technical proficiency to critical thinking and online safety, and there is still no universally accepted standard, making it difficult to compare digital literacy outcomes across contexts (Afandi et al., 2024; Oh et al., 2021).
Recent international frameworks including selfie for teachers based on DigCompEdu (Economou et al., 2023), the DQ Framework that integrates technical, cognitive and socio-emotional competencies (DQ Institute, 2023; Park & Gentile, 2019) and UNESCO’s ICT Competency Framework for Teachers (UNESCO-UNEVOC, 2023) emphasise that digital literacy is inherently a multidimensional construct. These models underline the theoretical importance of developing assessment instruments that are holistic, theoretically grounded and context-sensitive, capable of capturing not only technical proficiency but also ethical reasoning, pedagogical integration and socio-emotional adaptability across diverse educational and cultural settings.
Although digital literacy has been widely studied, limited research has critically examined the instruments used to assess it in a comprehensive and multidimensional manner. Most existing studies focus on isolated components, such as internet safety or basic software usage, whilst essential dimensions like information evaluation, online collaboration and digital ethics are frequently overlooked (Mainz et al., 2024; Nguyen & Habók, 2023; Siddiq et al., 2016). Furthermore, few studies evaluate the applicability of these instruments across diverse educational contexts, including underserved schools and culturally diverse classrooms (Herzog-Punzenberger et al., 2020; Lawson et al., 2024).
This gap highlights the need for assessment tools that reflect the full scope of students’ digital competencies. Unequal access to digital tools (Snyder et al., 2002) and overreliance on technology without proper support (Ruffini, 2022) may limit learning, particularly for students with lower executive functioning. The lack of consensus on definitions and frameworks further complicates efforts to create standardised instruments. Although holistic frameworks like DigCompEdu and the Information and Communication Technology Competency Framework for Teachers (ICT-CFT) consider cognitive, social and emotional aspects (Cabero-Almenara et al., 2023; Magnago et al., 2024; Villoria-Mendieta, 2023), their practical implementation is constrained by the limited availability of validated, context-sensitive tools (Guo, 2024). This study therefore undertakes a systematic review of digital literacy assessment instruments across educational settings, identifying instrument types, the core competencies measured (technical, cognitive, socio-emotional and ethical dimensions) and methodological limitations. Special attention is given to trends in authentic tasks, cross-cultural validation and alignment with global frameworks such as DigComp and DQ, to inform the development of inclusive and effective instruments for primary through higher education. Based on these objectives, the research questions guiding this review are:
RQ1. What types of assessment instruments have been developed to measure students’ digital literacy across different educational levels?
RQ2. What specific competency dimensions are targeted by these instruments, and how do they reflect evolving definitions of digital literacy?
RQ3. What methodological limitations and contextual challenges are present in current instruments, and what directions can be proposed for future development to improve accuracy, inclusiveness and relevance?
This study employed a Systematic Literature Review (SLR) guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines (Page et al., 2021). The use of PRISMA ensured transparency, replicability and rigour in the review process. The review procedure consisted of four key stages: identification, screening, eligibility and inclusion, followed by thematic synthesis and analysis of the included studies.
To complement the systematic review, a bibliometric analysis was conducted using VOSviewer version 1.6.19, focusing on co-occurrence analysis of author keywords. This software was selected for its recognised capacity to generate accurate and interpretable network visualisations, making it a widely adopted tool in bibliometric and educational research. The dataset comprised the 23 peer-reviewed articles included in the final synthesis. Through keyword co-occurrence mapping, VOSviewer facilitated the identification of key conceptual clusters, recurrent themes and emerging research trends such as ‘digital competence’, ‘assessment’ and ‘educational contexts’. This analysis provided additional interpretive depth to the review and supported the development of targeted recommendations for future research.
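The pairwise keyword counting that underlies such a co-occurrence map can be illustrated with a minimal sketch (the article keyword lists below are hypothetical; VOSviewer performs this counting internally before clustering and layout):

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(articles):
    """Count how often each pair of author keywords appears in the same article."""
    pairs = Counter()
    for keywords in articles:
        # Normalise case and deduplicate within one article, then count each pair once.
        kws = sorted({k.strip().lower() for k in keywords})
        pairs.update(combinations(kws, 2))
    return pairs

# Hypothetical author-keyword lists from two included articles.
articles = [
    ["Digital Literacy", "Assessment", "Educational Contexts"],
    ["digital literacy", "assessment", "DigComp"],
]
counts = keyword_cooccurrence(articles)
# ('assessment', 'digital literacy') co-occurs in both articles.
```

In a co-occurrence network, each keyword becomes a node and each counted pair an edge weighted by its frequency, which is what the clustering in Figure 3 is computed from.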
In the identification phase, relevant peer-reviewed articles were systematically collected from major academic databases (Scopus, PubMed, Crossref and ERIC) to ensure comprehensive coverage. The Publish or Perish (PoP) software was used to enhance source quality and recency, enabling controlled filtering of metadata from Google Scholar and Crossref. Keywords such as ‘Digital Literacy Assessment’, ‘Measurement Instruments’, ‘Digital Competence’ and ‘Educational Contexts’ were selected based on recent systematic reviews and policy frameworks and combined using Boolean operators to maximise retrieval relevance. The primary search query was (‘Digital Literacy’ OR ‘Digital Competence’) AND (‘Assessment’ OR ‘Measurement’) AND (‘Education’ OR ‘Educational Contexts’). PoP allowed filtering by year (2014–2024), citation count and indexing source to identify high-impact studies. Articles were exported to Mendeley for duplicate removal and preliminary screening. Only articles that met the inclusion criteria (relevant to digital literacy assessment, focused on education, peer-reviewed, written in English and fully accessible) were retained; studies unrelated to assessment instruments, inaccessible or outside the defined timeframe were excluded.
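The Boolean query and year filter can be expressed programmatically; the sketch below applies the same logic to exported metadata records (the record structure and field names are assumptions for illustration — real PoP or Crossref exports differ):

```python
LITERACY_TERMS = ("digital literacy", "digital competence")
ASSESS_TERMS = ("assessment", "measurement")
CONTEXT_TERMS = ("education", "educational context")

def matches_query(record):
    """Mirror the Boolean search string:
    (literacy OR competence) AND (assessment OR measurement)
    AND (education OR educational contexts), restricted to 2014-2024."""
    text = (record["title"] + " " + record.get("abstract", "")).lower()
    return (
        any(t in text for t in LITERACY_TERMS)
        and any(t in text for t in ASSESS_TERMS)
        and any(t in text for t in CONTEXT_TERMS)
        and 2014 <= record["year"] <= 2024
    )

# Hypothetical exported records.
records = [
    {"title": "Digital literacy assessment in higher education", "year": 2022},
    {"title": "Digital competence of nurses", "year": 2021},            # no assessment/education term
    {"title": "Measurement of digital literacy in education", "year": 2012},  # outside 2014-2024
]
included = [r for r in records if matches_query(r)]
```

Scripting the filter in this way makes the retrieval step reproducible, which supports the PRISMA requirement of a replicable search strategy.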
The identification stage of this study was based on the Population, Intervention, Comparison, Outcome (PICO) approach, a systematic method for determining the relevance of studies. This approach is recommended by Nishikawa-Pacher (2022) to ensure that each piece of literature analysed contributes to the research objectives. PICO was applied in this study as follows:
During the screening phase, a structured approach was employed to guarantee that the chosen articles aligned with the research objectives. This stage focused on filtering out articles that failed to satisfy the predefined inclusion and exclusion criteria whilst maintaining the quality required for SLR analysis. Duplicate entries were identified and removed using the reference management software (Mendeley Desktop). Below are the criteria for article inclusion and exclusion in this review.
During the screening stage, articles were retrieved with search tools such as PoP and filtered automatically by publication year, language and accessibility. Initially, 323 records were identified, and 20 duplicates were removed. Of the remaining 303 articles, 133 met the inclusion criteria after full-text evaluation.
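Duplicate removal of the kind Mendeley performs can be approximated by keying each record on its DOI when present and on a normalised title otherwise; the sketch below is illustrative only, with hypothetical records:

```python
import re

def normalise_title(title):
    """Lowercase and collapse punctuation/whitespace so near-identical titles collide."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def remove_duplicates(records):
    """Keep the first occurrence of each record; DOIs are matched case-insensitively."""
    seen, unique = set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        key = doi or normalise_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical records: the first two share a DOI (different case), the third has none.
records = [
    {"title": "Assessing Digital Literacy: A Review", "doi": "10.1000/xyz"},
    {"title": "Assessing digital literacy - a review", "doi": "10.1000/XYZ"},
    {"title": "Measuring digital competence in schools"},
]
deduped = remove_duplicates(records)
```

Keying on DOI first avoids false merges between distinct papers with similar titles, while the title fallback still catches duplicates that lack DOI metadata.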
At the eligibility stage, articles that passed screening were evaluated for methodology, relevance and contribution to the research topic. This process ensured that the articles used in the SLR met the standards of reputable international journals. Table 3 lists the evaluation criteria, covering research focus, methodological design, context, data and analysis, and journal quality.
At this stage, 56 articles from screening were assessed; 33 were eliminated as irrelevant, methodologically inadequate or not published in peer-reviewed journals. The remaining 23 articles met all criteria and proceeded to the synthesis stage for further analysis.
This multi-stage selection process is visually summarised in the PRISMA flow diagram (Figure 1), which illustrates the number of records identified, screened, excluded and finally included in the review. The figure provides a clear overview of how the final 23 articles were systematically selected based on rigorous inclusion and exclusion criteria.
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram of the study selection process.
Twenty-three articles published between 2014 and 2024 were selected using the predetermined criteria and the SLR procedure described above. The articles were analysed with respect to the instruments identified, the specific competencies each instrument measures, the weaknesses and challenges encountered in using them, and solutions proposed to overcome those weaknesses. Figure 2 illustrates the distribution of publication years of the included studies from 2014 to 2024.
Figure 2. Distribution of Article Types in 2014–2024.
According to Figure 2, the largest number of papers on instruments for testing or measuring students’ digital literacy skills appeared in 2024. Most of these articles were published in journals, with a few appearing in conference proceedings. The graph also shows that no included study from 2020 addressed measuring the digital literacy of school students. This information was obtained from a review of the titles and abstracts of the included articles. Table 4 presents the findings from the examination of the 23 publications that passed screening, organised by topic and research question.
| No | Author’s name/year of publication | Educational level | Type of instrument | Competencies measured in the instrument | Challenges and weaknesses |
|----|-----------------------------------|-------------------|--------------------|-----------------------------------------|---------------------------|
| 1 | (Yasa & Rahayu, 2023) | Elementary School | Survey-based questionnaire (4 aspects) | Hardware/software use, digital data processing, communication, collaboration and content creation | Low student ability in device use and content creation |
| 2 | (Ventivani et al., 2024) | Secondary School | Likert-scale questionnaire | Tech-supported learning, communication skills and curriculum-integrated digital literacy | Limited testing at lower education levels |
| 3 | (Tajuddin et al., 2024) | Secondary School/General | DQ-based assessment instrument | Media evaluation, identity protection, privacy awareness and digital risk awareness | High youth vulnerability to data misuse and privacy breaches |
| 4 | (Ussarn et al., 2022) | Higher Education (Community College) | Digital literacy questionnaire with statistical analysis | Use of spreadsheet, presentation and word processing tools | Curriculum does not yet support students’ digital literacy |
| 5 | (Sari et al., 2020) | Education Students | Closed-ended non-test questionnaire | Basic digital skills, evaluation and technology use | Unequal access in online surveys using mobile devices |
| 6 | (Rubach & Lazarides, 2021) | Elementary/Junior High School | Surveys, tests, interviews, group discussions and open-ended questions | Information/data literacy, communication, content creation, problem-solving and reflection | Complex design often misses participants’ actual context |
| 7 | (Ristiyana Puspita Sari et al., 2021) | High School/Vocational School | Digital Literacy Assessment Scale (DLAS) | Knowledge access, academic engagement and socio-emotional factors | Too many items may cause fatigue and reduce data quality |
| 8 | (Restrepo-Palacio & De María Segovia Cifuentes, 2020) | Higher Education | Closed questionnaire (25 items) based on digital competence framework | Digital citizenship and technological dimensions | Overweighting of certain items; needs diversification |
| 9 | (Ramirez-Rodriguez et al., 2022) | Early University Students | Likert scale to measure digital information management | Information selection, searching and processing | Limits depth of perception and experience exploration |
| 10 | (Nguyen & Habók, 2024) | Higher Education Teachers | Self-evaluation combining subjective and objective assessments | Instructional tech, professional development and learning outcomes | Lack of objective evaluation limits understanding of competencies |
| 11 | (Mattar et al., 2022) | Higher Education | Questionnaires, self-assessment and closed-ended questions | Digital tech use, pedagogy and professional growth | Self-assessment prone to bias |
| 12 | (Lukitasari et al., 2022) | University Students | Questionnaire with numerical items | Communication, exploration and creation of digital content | Lacks depth in capturing students’ digital literacy experiences |
| 13 | (Jashari et al., 2024) | High School & Higher Education | Self-reports and authentic performance (multidimensional) | Technical, cognitive, ethical skills and academic representations | Difficult to assess real-world digital skills without skill-based tasks |
| 14 | (Irhandayaningsih, 2022) | Higher Education | Self-directed learning and assessment model | LMS usage, self-directed abilities and digital literacy relationships | Limited validation across diverse populations |
| 15 | (Hwang et al., 2023) | University Students | Questionnaire with 23 items (quantitative survey) | Critical understanding, AI awareness and ethical behaviour | Limited to university sample, not generalisable |
| 16 | (Hernández-Marín et al., 2024) | Higher Education Teachers | Likert-scale instrument using digital competency frameworks | Critical reflection, teacher competencies and media literacy | Focused on university students, underrepresents other education levels |
| 17 | (Harutyunyan et al., 2024) | General Public (Adult Learning) | Personal Information Index (PII) and econometric analysis | ICT access and usage quality | Regional and socio-economic variability affects generalisability |
| 18 | (Harlanu et al., 2023) | High School/Vocational School | Questionnaire with 20 items | Creativity, critical thinking, social understanding and digital safety | Questionnaire-only design limits practical skill measurement |
| 19 | (Forzani et al., 2021) | Junior High/High School | Questionnaire (MORQ) | Online reading motivation, self-efficacy and values | Does not assess technical digital skills |
| 20 | (Febliza & Okatariani, 2020) | Elementary, Junior High, High School and Teachers | Likert-scale (3 categories) | School readiness, teacher/student digital competence | Limited validity due to local context |
| 21 | (Cabrera & Sosa, 2024) | High School | Online questionnaire (39 items) | Digital skills and application in academic and personal settings | Regional limitation affects generalisability |
| 22 | (Baharuddin et al., 2021) | School Teachers | Content-validated questionnaire | Technical, cognitive and socio-emotional competencies | Limited expert review, no population testing |
| 23 | (Agormedah et al., 2022) | High School Students | DHLI-based questionnaire | Health information literacy and digital navigation | Not tested across broader educational levels |
The topics of the included articles span a wide range of instrument types: closed, open and online-based instruments, Likert scale-based questionnaires, self-report instruments, self-directed learning instruments and self-evaluation instruments. The distribution of article topics is shown in Figure 3, visualised with the VOSviewer application.
Figure 3. Data distribution related to the topic of the digital literacy measurement instrument in VOSviewer.
Although many studies have examined instruments for measuring students’ digital literacy skills, none has produced an instrument that is universally applicable across research subjects, because different subjects require different digital literacy competencies. This motivates the structure of the discussion that follows: the types of digital literacy instruments in use, the competencies that serve as their indicators, and the obstacles and limitations encountered in using them.
RQ1. What types of assessment instruments have been developed to measure students’ digital literacy across different educational levels?
The reviewed studies indicate that digital literacy assessment instruments generally fall into three categories: Likert scale-based questionnaires, competency framework-based tools and technology-driven assessments. Questionnaires, such as the DHLI (Agormedah et al., 2022) and tools by Mattar et al. (2022), are widely used but often face perception bias. Framework-aligned instruments like those based on the DQ Framework (Tajuddin et al., 2024) and DigComp (Hwang et al., 2023) assess broader competencies, including ethics and AI awareness. Technology-based methods, including online surveys and automated assessments (Cabrera & Sosa, 2024; Restrepo-Palacio & De María Segovia Cifuentes, 2020), enhance data collection but vary in depth. Some studies use expert validation (Baharuddin et al., 2021; Lukitasari et al., 2022), whilst emerging performance-based tools combine self-report with simulation to improve accuracy (Jashari et al., 2024; Nguyen & Habók, 2024).
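For the Likert scale questionnaires that dominate the first category, internal-consistency reliability is typically reported as Cronbach’s alpha. A minimal computation is sketched below (the response matrix is illustrative, not data from any reviewed study):

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for rows = respondents, columns = Likert items:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(responses[0])                                   # number of items
    item_vars = sum(pvariance(col) for col in zip(*responses))
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three respondents answering two 5-point Likert items.
responses = [
    [1, 2],
    [3, 3],
    [5, 4],
]
alpha = cronbach_alpha(responses)  # high alpha: the two items rise and fall together
```

An alpha near 1 indicates the items measure a common construct; values below about 0.7 are conventionally taken as a signal to revise or drop items, which is one concrete way the validation concerns raised above can be checked.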
In reinforcing the international perspective, it is important to highlight the Jisc Digital Capabilities Framework, which has been widely adopted by higher and further education institutions in the UK. This framework defines six key elements: (1) ICT proficiency and productivity; (2) information, data and media literacies; (3) digital creation, problem solving and innovation; (4) digital communication, collaboration and participation; (5) digital learning and development; and (6) digital identity and well-being. Compared with the instruments reviewed in this study, which focus mostly on technical skills and digital access, the Jisc framework offers a more holistic approach that evolves over time. In addition to functional skills, it emphasises critical engagement, professional digital identity and digital well-being, dimensions that are often insufficiently covered in existing instruments (Jisc, 2023). This comparison reveals important gaps in current digital literacy measurement practices, particularly in capturing students’ ability to use technology responsibly, ethically and reflectively. The Jisc framework is therefore a worthwhile reference for designing digital literacy assessment instruments that are more integrated, inclusive and responsive to future developments.
These findings confirm that digital literacy assessment instruments are not one-size-fits-all; they must be tailored to the cognitive and developmental stages of learners, as digital competencies and the contexts in which they are applied differ significantly across educational levels. In addition to formal education-based assessment approaches, some studies have developed digital literacy instruments at the individual community level, such as the Personal Information Index (PII) used in the Armenian context. This instrument rests on quantitative and econometric approaches and is designed to measure affordability, ICT skills and overall quality of technology use. Whilst not a direct educational instrument, this approach broadens perspectives on the forms and methods of digital literacy assessment that can be adapted to educational contexts by considering socio-economic and geographic determinants. Few instruments measure digital skills through a performance-based approach, and most have not been tested across cultural contexts and education levels. This suggests the need for a new approach to digital literacy instrument development that is more holistic, practical and cross-cultural.
RQ2. What specific competency dimensions are targeted by these instruments, and how do they reflect evolving definitions of digital literacy?
The reviewed instruments assess a wide range of digital literacy competencies, with a strong emphasis on technical skills such as the use of hardware and software (Mattar et al., 2022; Yasa & Rahayu, 2023). Other key dimensions include information processing and evaluation (Ramirez-Rodriguez et al., 2022; Restrepo-Palacio & De María Segovia Cifuentes, 2020), online communication and collaboration (Rubach & Lazarides, 2021), socio-emotional aspects like empathy and digital ethics (Baharuddin et al., 2021) and digital creativity (Lukitasari et al., 2022). Some studies also address more specific competencies, such as digital health literacy (Agormedah et al., 2022) and awareness of AI’s social impact (Hwang et al., 2023). These diverse dimensions reflect the evolving and holistic nature of digital literacy in increasingly complex learning environments.
Several studies confirm that the dimensions of digital literacy competencies are dynamic and not always linear with education level. For example, research shows that undergraduates have higher digital skills than graduates in certain aspects, and gender differences in digital literacy are narrowing. This indicates that digital literacy dimensions should reflect the diversity of learner contexts and continue to evolve, measuring not only technical skills but also meaningful use of technology in academic and social life (Nalaila & Elia, 2024).
The variety of dimensions measured indicates an attempt to approach digital literacy more holistically. However, most instruments still focus mainly on technical aspects and self-perception, whilst the more complex social, ethical and contextual dimensions are not yet deeply integrated into assessment design. Increasingly complex technological developments demand a holistic and adaptive approach to digital literacy, so that assessments genuinely describe students’ readiness for the challenges of the current and future digital era.
RQ3. What methodological limitations and contextual challenges are present in current instruments, and what directions can be proposed for future development to improve accuracy, inclusiveness and relevance?
Digital literacy instruments face notable challenges in terms of validity, dimensional coverage and contextual relevance. Heavy reliance on self-reporting can lead to perceptual bias (Mattar et al., 2022), whilst limited regional scope hinders generalisability (Cabrera & Sosa, 2024). Cross-cultural validation remains insufficient, as seen in the DHLI tested only in Ghana (Agormedah et al., 2022), and many instruments lack components that assess practical skills through real-world simulations (Jashari et al., 2024). Additionally, critical dimensions like digital safety, ethical use and online collaboration are often overlooked (Hwang et al., 2023; Sari et al., 2021). To address these gaps, future instruments should incorporate performance-based assessments, expand cross-context validation and include emerging competencies. Integrating technologies such as AI-driven simulations may also enhance assessment accuracy and relevance (Nguyen & Habók, 2024), making digital literacy tools more comprehensive and responsive to current educational needs.
A study by Nalaila and Elia (2024) revealed that first-year students outperformed final-year students in digital literacy, with no significant differences found by programme or gender except for a narrowing gender gap, highlighting the need for assessments that consider contextual factors rather than assuming linear skill progression. In response to such methodological limitations, this article introduces the Model of Integrative Digital Literacy Assessment (MALDI), a conceptual innovation that integrates multi-method approaches to address weaknesses in existing instruments, particularly their reliance on self-reporting and limited cross-context validation. MALDI consists of four components: reflective questionnaires, real-world performance tasks, interactive AI-based simulations and digital portfolios, each designed to capture a broader and more authentic range of competencies. These include technical skills, information processing, online collaboration, ethical literacy, AI awareness and digital social responsibility. The model also emphasises cross-cultural and cross-level validation to ensure global relevance and contextual adaptability. Its comprehensive yet efficient design supports practical implementation across educational settings, offering a meaningful contribution towards more adaptive, inclusive and forward-looking digital literacy assessment practices. Recent studies have emphasised the importance of contextual and culturally responsive approaches to digital literacy in vocational education (Yasa et al., 2024).
This study contributes significantly to the advancement of scientific knowledge by mapping and analysing existing digital literacy assessment instruments and identifying both methodological and contextual gaps. It highlights that most current tools rely heavily on self-perception and often fail to accurately capture learners’ actual digital skills in authentic educational settings. By addressing the evolving dimensions of digital literacy (technical, informational, ethical, social-emotional and AI-related competencies), this study provides a foundation for developing more holistic and adaptive assessment instruments. The findings hold practical implications across all educational levels: in primary education, by emphasising age-appropriate tools that support safe and guided technology use; in secondary education, by encouraging performance-based assessments aligned with curricular goals; and in tertiary education, by supporting discipline-specific assessments that reflect real-world academic and professional demands, such as digital research competencies, data management and the ethical use of AI. The proposed recommendations offer valuable guidance for curriculum developers, educators and policymakers in designing inclusive, context-sensitive and globally relevant digital literacy assessment strategies.
This study concludes that digital literacy assessment instruments in education employ diverse approaches, such as Likert scale-based questionnaires, competency framework-aligned tools (e.g. DigComp and DQ Framework) and technology-based performance assessments. When categorised by educational sector, distinct emphases emerge. In primary and secondary education, instruments primarily focus on foundational digital skills, including the use of digital devices, information searching and basic online communication. In higher education, the assessments shift towards advanced competencies such as digital content creation, critical evaluation of information and responsible technology use. In adult education, instruments often target practical digital problem-solving and the application of technology for lifelong learning.
Despite these developments, several limitations persist across all sectors. Many instruments rely heavily on self-report measures, lack authentic performance-based tasks and are rarely validated across cultural or contextual boundaries. These limitations indicate that existing tools, although useful, do not yet capture the full spectrum of digital literacy required in an increasingly complex digital environment. Future research should prioritise the development of sector-specific instruments that incorporate simulations or real-world tasks to enhance practical assessment. Moreover, expanding validation efforts across diverse educational and cultural settings, as well as integrating emerging dimensions such as digital safety, well-being and technological ethics, is essential to ensure a more comprehensive and globally relevant understanding of digital literacy.
Afandi, A. N. H., Kusumaningrum, S. R., Dewi, R. S. I., & Pristiani, R. (2024). Digital literacy questionnaire instrument: Based on the integration of elementary school students' characteristics. International Journal of Elementary Education, 8(2), 344–353. https://doi.org/10.23887/ijee.v8i2.76773
Agormedah, E. K., Quansah, F., Ankomah, F., Hagan, J. E., Srem-Sai, M., Abieraba, R. S. K., Frimpong, J. B., & Schack, T. (2022). Assessing the validity of digital health literacy instrument for secondary school students in Ghana: The polychoric factor analytic approach. Frontiers in Digital Health, 4, 968806. https://doi.org/10.3389/fdgth.2022.968806
Baharuddin, M. F., Masrek, M. N., Shuhidan, S. M., Razali, M. H. H., & Rahman, M. S. (2021). Evaluating the content validity of digital literacy instrument for school teachers in Malaysia through expert judgement. International Journal of Emerging Technology and Advanced Engineering, 11(7), 71–78. https://doi.org/10.46338/ijetae0721_09
Cabero-Almenara, J., Gutiérrez-Castillo, J.-J., Barroso-Osuna, J., & Palacios-Rodríguez, A. (2023). Digital teaching competence according to the DigCompEdu Framework. Comparative study in different Latin American universities. Journal of New Approaches in Educational Research, 12, 276–291. https://doi.org/10.7821/naer.2023.7.1452
Cabrera, W. R. R., & Sosa, J. E. P. (2024). Validation of an instrument for assessing the digital literacy of Mexican students. Edutec, 88, 200–219. https://doi.org/10.21556/edutec.2024.88.3131
Choi, J., Choi, S., Song, K., Baek, J., Kim, H., Choi, M., Kim, Y., Chu, S. H., & Shin, J. (2023). Everyday Digital Literacy Questionnaire (EDLQ): Development and validation for older adults in South Korea. Journal of Medical Internet Research, 25, e51616. https://doi.org/10.2196/51616
Churchill, N. (2020). Development of students' digital literacy skills through digital storytelling with mobile devices. Educational Media International, 57, 271–284. https://doi.org/10.1080/09523987.2020.1833680
DQ Institute. (2023). The 2023 DQ Global Standards Report: Defining digital literacy, digital skills, and digital readiness. Retrieved from https://www.dqinstitute.org
Economou, A., Vuorikari, R., & Punie, Y. (2023). DigCompEdu: European framework for the digital competence of educators – SELFIEforTEACHERS adaptation and implementation. Publications Office of the European Union. Retrieved from https://op.europa.eu/en/publication-detail/-/publication/3a69fc0c-2e00-11ee-a9ce-01aa75ed71a1
Febliza, A., & Okatariani, O. (2020). Pengembangan instrumen literasi digital sekolah, siswa dan guru [Development of digital literacy instruments for schools, students and teachers]. Jurnal Pendidikan Kimia Universitas Riau, 5(1), 1. https://doi.org/10.33578/jpk-unri.v5i1.7776
Forzani, E., Leu, D. J., Yujia Li, E., Rhoads, C., Guthrie, J. T., & McCoach, B. (2021). Characteristics and validity of an instrument for assessing motivations for online reading to learn. Reading Research Quarterly, 56(4), 761–780. https://doi.org/10.1002/rrq.337
Guo, J. (2024). Approaches to improve digital literacy of higher vocational teachers. Occupation and Professional Education, 1(3), 25–30. https://doi.org/10.62381/O242304
Harlanu, M., Suryanto, A., Ramadhan, S., & Wuryandini, E. (2023). Construct validity of the instrument of digital skill literacy. Cakrawala Pendidikan, 42(3), 781–790. https://doi.org/10.21831/cp.v42i3.59703
Harutyunyan, G., Manucharyan, M., Muradyan, M., & Asatryan, H. (2024). Digital literacy of the Armenian society: Assessment and determinants. Cogent Social Sciences, 10(1), 2398652. https://doi.org/10.1080/23311886.2024.2398652
Hernández-Marín, J. L., Castro-Montoya, M. D., & Figueroa-Rodríguez, S. (2024). Media, information and digital literacy: Assessment instrument analysis. Investigacion Bibliotecologica, 38(99), 55–73. https://doi.org/10.22201/iibi.24488321xe.2024.99.58865
Herro, D. (2014). Techno Savvy: A Web 2.0 curriculum encouraging critical thinking. Educational Media International, 51(4), 259–277. https://doi.org/10.1080/09523987.2014.977069
Herzog-Punzenberger, B., Altrichter, H., Brown, M., Burns, D., Nortvedt, G. A., Skedsmo, G., Wiese, E., Nayir, F., Fellner, M., McNamara, G., & O'Hara, J. (2020). Teachers responding to cultural diversity: Case studies on assessment practices, challenges and experiences in secondary schools in Austria, Ireland, Norway and Turkey. Educational Assessment, Evaluation and Accountability, 32, 395–424. https://doi.org/10.1007/s11092-020-09330-y
Hwang, H. S., Zhu, L. C., & Cui, Q. (2023). Development and validation of a digital literacy scale in the artificial intelligence era for college students. KSII Transactions on Internet and Information Systems, 17(8), 2241–2258. https://doi.org/10.3837/tiis.2023.08.016
Irhandayaningsih, A. (2022). Digital literacy assessment model in learning management system: A self-directed learning perspective. E3S Web of Conferences, 359, 03017. https://doi.org/10.1051/e3sconf/202235903017
Jashari, X., Fetaji, B., & Guetl, C. (2024). Assessing digital literacy skills in secondary and higher education: A comprehensive literature survey of validated instruments. 2024 47th ICT and Electronics Convention, MIPRO 2024 – Proceedings (pp. 688–692). https://doi.org/10.1109/MIPRO60963.2024.10569399
Jisc. (2023). Digital capabilities definitions: Individual digital capabilities framework. Jisc. Retrieved from https://digitalcapability.jisc.ac.uk/what-is-digital-capability/individual-digital-capabilities/
Lawson, T. K., Knox, J., Romero, E., Palacios, A. M., & Fallon, L. M. (2024). Check yourself! Exploring current culturally responsive teaching assessment measures. Psychology in the Schools, 61(6), 2649–2667. https://doi.org/10.1002/pits.23189
Lukitasari, M., Murtafiah, W., Ramdiah, S., Hasan, R., & Sukri, A. (2022). Constructing digital literacy instrument and its effect on college students' learning outcomes. International Journal of Instruction, 15(2), 171–188. https://doi.org/10.29333/iji.2022.15210a
Magnago, W., Baiôcco, L. V., Lima, R. C. M., Junior, A. B., Soprani, L. C. P., Azevedo, J. D. F., Pinheiro, R. B., & Nunes, P. D. C. (2024). Digital education and the role of technological literacy in cognitive and socioemotional development. Lumen et Virtus, 15(42), 6548–6553. https://doi.org/10.56238/levv15n42-002
Mainz, A., Nitsche, J., Weirauch, V., & Meister, S. (2024). Measuring the digital competence of health professionals: Scoping review. JMIR Medical Education, 10, e55737. https://doi.org/10.2196/55737
Mattar, J., Ramos, D. K., & Lucas, M. R. (2022). DigComp-based digital competence assessment tools: Literature review and instrument analysis. Education and Information Technologies, 27(8), 10843–10867. https://doi.org/10.1007/s10639-022-11034-3
Mieg, H. A., Klieme, K. E., Barker, E., Bryan, J., Gibson, C., Haberstroh, S., Odebiyi, F., Rismondo, F. P., Römmer-Nossek, B., Thiem, J., & Unterpertinger, E. (2023). Short digital-competence test based on DigComp2.1: Does digital competence support research competence in undergraduate students? Education and Information Technologies, 29, 139–160. https://doi.org/10.1007/s10639-023-12251-0
Momdjian, L., Manegre, M., & Gutiérrez-Colón, M. (2024). Digital competences of teachers in Lebanon: A comparison of teachers' competences to educational standards. Research in Learning Technology, 32, 1–18. https://doi.org/10.25304/rlt.v32.3203
Nalaila, S., & Elia, E. F. (2024). Students' digital literacy skills for learning in selected Tanzania's public universities. Cogent Education, 11(1), 2355350. https://doi.org/10.1080/2331186X.2024.2355350
Newland, B., & Handley, F. (2016). Developing the digital literacies of academic staff: An institutional approach. Research in Learning Technology, 24, 1–12. https://doi.org/10.3402/rlt.v24.31501
Nguyen, L. A. T., & Habók, A. (2024). Tools for assessing teacher digital literacy: A review. Journal of Computers in Education, 11(1), 305–346. https://doi.org/10.1007/s40692-022-00257-5
Nishikawa-Pacher, A. (2022). Research questions with PICO: A universal mnemonic. Publications, 10(3), 21. https://doi.org/10.3390/publications10030021
Oh, S. S., Kim, K. A., Kim, M., Oh, J., Chu, S. H., & Choi, J. (2021). Measurement of digital literacy among older adults: Systematic review. Journal of Medical Internet Research, 23(2), e26145. https://doi.org/10.2196/26145
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness, L. A., …Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
Park, Y., & Gentile, D. (2019). Establishing digital intelligence (DQ) as a universal standard for digital literacy, skills and readiness. OECD Education Working Papers, No. 193. OECD. https://doi.org/10.1787/19939019
Ramirez-Rodriguez, L. T., Sanchez-Pimentel, J. I., Osorio-Galvan, R. C., & Perez-Ortiz, J. O. (2022). Design and validation of an instrument to measure digital skills in university students of the first cycles of health careers. In Proceedings of the 2022 IEEE 2nd International Conference on Advanced Learning Technologies on Education and Research (ICALTER 2022), 16–19 November 2022, Lima, Peru. IEEE.
Restrepo-Palacio, S., & De María Segovia Cifuentes, Y. (2020). Design and validation of an instrument for the evaluation of digital competence in higher education. Ensaio, 28(109), 932–961. https://doi.org/10.1590/S0104-40362020002801877
Ristiyana Puspita Sari, A., Sidauruk, S., Meiliawati, R., & Anggraeni, M. E. (2021). Development of Digital Literacy Assessment Scale to measure student's digital literacy. Jurnal Ilmiah Kanderang Tingang, 12(2), 137–143. https://doi.org/10.37304/jikt.v12i02.128
Rubach, C., & Lazarides, R. (2021). Addressing 21st-century digital skills in schools – Development and validation of an instrument to measure teachers' basic ICT competence beliefs. Computers in Human Behavior, 118, 106636. https://doi.org/10.1016/j.chb.2020.106636
Ruffini, C. (2022). La comprensione del testo digitale e cartaceo in età scolare: il ruolo delle Funzioni Esecutive [The comprehension of digital and printed text at school age: The role of executive functions]. In S. Ulivieri (Ed.), Studies on Adult Learning and Education (pp. 305–317). Firenze University Press. https://doi.org/10.36253/979-12-215-0081-3.20
Sari, C. M. W., Huda, I., Pada, A. U. T., Rahmatan, H., & Samingan. (2020). Construct validity of digital media literacy instrument for student teachers. Journal of Physics: Conference Series, 1460(1), 012053. https://doi.org/10.1088/1742-6596/1460/1/012053
Shin, S. K. (2015). Teaching critical, ethical and safe use of ICT in pre-service teacher education. Language Learning & Technology, 19(1), 181–197. https://doi.org/10.10125/44408
Siddiq, F., Hatlevik, O. E., Olsen, R. V., Throndsen, I., & Scherer, R. (2016). Taking a future perspective by learning from the past – A systematic review of assessment instruments that aim to measure primary and secondary school students' ICT literacy. Educational Research Review, 19, 58–84. https://doi.org/10.1016/j.edurev.2016.05.002
Snyder, I., Angus, L., & Sutherland-Smith, E. (2002). Building equitable literate futures: Home and school computer-mediated literacy practices and disadvantage. Cambridge Journal of Education, 32(3), 367–383.
Son, M., & Ha, M. (2024). Development of a digital literacy measurement tool for middle and high school students in the context of scientific practice. Education and Information Technologies, 30, 4583–4606. https://doi.org/10.1007/s10639-024-12999-z
Tajuddin, S. N. A. A., Bahari, K. A., Al-Majdhoub, F. M., Maliki, N. K., & Baboo, S. B. (2024). Developing and measuring an assessment instrument for media literacy among digital natives using Digital Intelligence (DQ) framework. Journal of Media Literacy Education, 16(2), 29–45. https://doi.org/10.23860/JMLE-2024-16-2-3
UNESCO-UNEVOC. (2023). ICT competency framework for teachers (ICT-CFT) Version 3. UNESCO International Centre for Technical and Vocational Education and Training. Retrieved from https://unevoc.unesco.org/home/ICT-CFT
Ussarn, A., Pimdee, P., & Kantathanawat, T. (2022). Needs assessment to promote the digital literacy among students in Thai community colleges. International Journal of Evaluation and Research in Education, 11(3), 1278–1284. https://doi.org/10.11591/ijere.v11i3.23218
Ventivani, A., Ul Muyassaroh, L., Sakti, K. F. L., Masrur, M. F., & Kharis, M. (2024). A measuring tool for assessing digital literacy competence among Mandarin language students. International Journal of Engineering Trends and Technology, 72(10), 336–343. https://doi.org/10.14445/22315381/IJETT-V72I10P132
Villoria-Mendieta, M. (2023). Teacher digital competences: Formal approaches to their development. In Digital education outlook.
Yasa, A. D., & Rahayu, S. (2023). A survey of elementary school students' digital literacy skills in science learning. AIP Conference Proceedings, 2569, 060015. https://doi.org/10.1063/5.0113483
Yasa, A. D., Rahayu, S. D., Handayanto, S. K., & Ekawati, R. (2024). Investigating the effects of digital literacy on primary student attitudes in Indonesia. International Journal of Elementary Education, 8(1), 11–19. https://doi.org/10.23887/ijee.v8i1.70413
Yuan, C., Wang, L., & Eagle, J. (2019). Empowering English language learners through digital literacies: Research, complexities, and implications. Media and Communication, 7(2), 128. https://doi.org/10.17645/mac.v7i2.1912