ORIGINAL RESEARCH ARTICLE

Development and validation of a survey instrument to measure teacher educators’ educational technology integration in developing countries

Misganaw Tadesse Woldemariama,b*, Amanuel Ayde Ergadoa and Worku Jimmaa

aDepartment of Information Science, Jimma University, Jimma, Ethiopia; bDepartment of Information Technology, Bonga College of Education, Bonga, Ethiopia

Received: 4 April 2025; Final version received: 29 July 2025; Published: 27 October 2025

This study developed and validated an instrument for measuring teacher educators’ (TEs’) educational technology (EdTech) integration in Ethiopian colleges of teacher education (CTEs), filling a gap in context-specific tools. The instrument was developed using an established theoretical framework, following a six-step process that included instrument design, expert review and psychometric evaluation with 126 TEs. Exploratory factor analysis (EFA) identified a 13-factor structure, which converged into a 12-factor, 58-item structure explaining 80% of cumulative variance. Confirmatory factor analysis revealed strong internal consistency (α/CR > 0.7), convergent validity (Average Variance Extracted [AVE] > 0.5; factor loadings > 0.6, p < 0.001) and discriminant validity (Heterotrait-Monotrait ratio [HTMT] < 0.85). The tool demonstrated an acceptable fit (comparative fit index [CFI] = 0.94, Tucker-Lewis index [TLI] = 0.93, chi-square/degrees of freedom = 3.1), although the root mean square error of approximation (RMSEA = 0.13) and standardised root mean square residual (SRMR = 0.13) exceeded recommended thresholds. Despite these fit limitations, robust reliability, validity and contextual grounding confirm its utility for assessing EdTech integration in resource-constrained settings. This study underscores the instrument’s potential to inform evidence-based pedagogical practices, institutional policy reforms and cross-cultural research in teacher education. By bridging theoretical and practical gaps, this work contributes a validated tool tailored to the socio-technical realities of developing nations, offering stakeholders a scalable framework to assess EdTech integration in teacher training.

Keywords: educational technology; instrument development; college of teacher education; factor analysis; Ethiopia

* Corresponding author. Email: misganawt2012@gmail.com

Research in Learning Technology 2025. © 2025 M.T. Woldemariam et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2025, 33: 3487 - http://dx.doi.org/10.25304/rlt.v33.3487

Introduction

Educational technology (EdTech) integration is a foundation of modern pedagogical reform, promising enhanced teaching quality, equitable access to resources and the development of 21st-century skills (UNESCO, 2023). In developing countries, systemic challenges such as infrastructural deficits (Roy et al., 2021), limited digital literacy (Laudari & Prior, 2020) and weak institutional readiness (IR) (Ifinedo & Kankaanranta, 2021) constrain EdTech integration. Despite these challenges, EdTech integration in colleges of teacher education (CTEs) is critical, as it underpins the preparation of future educators for technology-mediated classrooms in primary and secondary schools.

Ethiopia, a representative case of a developing nation, has prioritised EdTech integration in its national education development roadmap to address gaps in the quality and accessibility of education (Teferra et al., 2018). However, CTEs in Ethiopia face multifaceted challenges, including inconsistent institutional support, inadequate training and sociocultural resistance to technological change (Woldemariam et al., 2025). Although these factors have been identified, empirical tools to systematically measure their interplay and influence on EdTech integration remain scarce.

Existing research on EdTech integration has predominantly focused on high-income countries, yielding validated scales that emphasise users’ perceptions (e.g. Davis, 1989; Tondeur et al., 2017). However, these models often overlook contextual realities in developing countries, such as infrastructure, IR and digital literacy (Ezumah, 2020). A prior constructivist grounded theory (CGT) study identified 11 key factors influencing EdTech integration in Ethiopian CTEs (Woldemariam et al., 2025). Despite these insights, no validated measurement scale exists to quantify these factors or assess their interrelationships, limiting evidence-based policymaking and targeted interventions.

The lack of a context-specific, validated survey tool hampers efforts to diagnose systemic barriers, assess EdTech impact or design scalable solutions in developing contexts. Unaligned interventions risk perpetuating underutilisation and inefficient resource allocation. Building on prior research (Woldemariam et al., 2025), this study develops and validates a survey instrument to assess the factors influencing teacher educators’ (TEs’) EdTech integration in CTEs in Ethiopia. The objectives are to: (1) design a context-grounded instrument; (2) establish content validity through expert review; and (3) test scale reliability, validity and model fit via a pilot study.

The scale could address a critical methodological gap by offering a validated instrument tailored to the context of developing countries. It empowers stakeholders to systematically diagnose context-specific factors affecting EdTech integration, prioritise areas for intervention, benchmark progress towards national and international EdTech goals (e.g. Sustainable Development Goal 4 [United Nations, 2015]) and facilitate cross-context comparisons that distinguish shared challenges from institution-specific ones. Furthermore, its validation process supports applicability across comparable low-resource settings.

Theoretical framework

The instrument’s theoretical framework, derived from a CGT study (Woldemariam et al., 2025), identified the contextual factors influencing EdTech integration in CTEs. CGT’s rigor ensured that the constructs aligned both with existing theories and with factors specific to the study context. Key constructs included curriculum alignment (CA), IR, professional development (PD), digital competence (DC), resource and support (RS), perceived ease of use (PEU), perceived usefulness (PU), readiness (R), attitude (A), colleague influence (CI), student digital competence (SDC) and EdTech integration (INT), which guided instrument development and validation.

In this framework, existing theories, including the technology acceptance model (TAM) (Davis, 1989) and unified theory of acceptance and use of technology (UTAUT) (Venkatesh et al., 2003), were used to situate some of the constructs. TAM provides a cognitive foundation, positing that PU and PEU shape users’ attitudes and subsequent use of technology (Davis, 1989; Scherer et al., 2019). UTAUT extends this by incorporating social and organisational factors: CI reflects social norms, whilst RS aligns with facilitating conditions (Venkatesh et al., 2003). However, these theories inadequately address institution-specific factors (Scherer et al., 2019), necessitating a contextual expansion.

The CGT study revealed five novel factors (DC, PD, IR, CA and SDC) critical to educational contexts in resource-constrained settings (Woldemariam et al., 2025). DC and PD address individual gaps in training and self-efficacy, resonating with social cognitive theory’s emphasis on mastery experiences (Bandura, 1977). At the institutional level, IR (e.g. institutional policy and leadership) and CA (e.g. pedagogical fit) reflect organisational readiness for innovation (Weiner, 2009). The inclusion of SDC uniquely positions the framework to account for bidirectional influences, where TEs’ adoption decisions may depend on learners’ preparedness.

The integrated theoretical framework (see Figure 1) proposes that EdTech integration is directly predicted by the TAM and UTAUT constructs (e.g. PU and CI) and by contextual factors, such as IR. For survey validation, items from established constructs (e.g. PU scales) were adapted, whilst CGT-derived constructs (e.g. CA) were inductively coded and translated into Likert-scale items. This dual approach balances theoretical rigor with context-specific nuance (Venkatesh et al., 2016).

Figure 1. Theoretical framework (Woldemariam et al., 2025).

Note: Readiness represents the individual teacher’s technology integration readiness.

Methodology

Research design

This study followed a cross-sectional survey design to develop and validate an instrument to measure the factors contributing to TEs’ effective EdTech integration. It was conducted between September 2024 and January 2025.

Context and participants

This study was designed to develop a theory-driven instrument to assess contextual factors influencing EdTech integration in CTEs in Ethiopia. Through a comprehensive (census) sampling approach, all TEs at a government college were targeted, yielding 126 valid responses (an 84% response rate) and meeting factor analysis requirements (Hair et al., 2019). This strategy ensured representation of the teaching population in a resource-constrained setting, balancing methodological rigor with contextual fidelity for instrument validation.

Ethical considerations

This study received ethical clearance from Jimma University Institute of Technology Ethical Review Board with reference number RPD/JIT/152/16 on January 26, 2024. A consent form was included in the questionnaire to inform participants and confirm their willingness to engage in this study. Participants’ data were coded and aggregated to ensure confidentiality and anonymity.

Instrument development process

The instrument was developed using a widely recognised approach comprising six basic steps (Boateng et al., 2018; DeVellis, 2017; Younas & Porr, 2018). The steps are inherently iterative: (1) defining the construct, (2) generating an item pool, (3) developing the response set, (4) conducting expert review, (5) psychometric testing and (6) finalising the scale. The following subsections discuss each step.

Step 1: Defining the construct

The constructs were informed by prior grounded theory research, which enabled the identification of context-specific factors (Woldemariam et al., 2025). As the theoretical framework outlines, the constructs distinctly represent TEs’ perceptions of the factors influencing EdTech integration. Table 1 summarises constructs with their conceptual definition.

Table 1. Conceptualisation of constructs.
Construct Conceptual definition References
RS Refers to the availability of ICT infrastructure, facilities, resources and technical support. Woldemariam et al. (2025)
IR Refers to the TEs’ perception of the leadership and institutional readiness to facilitate EdTech integration initiatives. It constitutes the leadership commitment, perception, attitude and institutional ICT vision and plan. Woldemariam et al. (2025)
DC Represents TEs’ perception of their digital competence in terms of their ICT knowledge, skills and EdTech integration experience. Aydin et al. (2024), Woldemariam et al. (2025), Gümüş and Kukul (2023)
R Refers to the TEs’ willingness, readiness and commitment to integrating EdTech in teaching and learning practices. Woldemariam et al. (2025), Venkatesh et al. (2003)
SDC Refers to the TEs’ perception of the students’ digital competence in terms of their ICT knowledge and skills. Woldemariam et al. (2025), Tzafilkou et al. (2022)
A Refers to the TEs’ predisposition to integrate EdTech in their teaching and learning activities. Hernández-Ramos et al. (2014)
PD Refers to the provision of short- or long-term training and professional development opportunities to enhance TEs’ digital competence to effectively use EdTechs. Woldemariam et al. (2025)
CA Refers to the alignment of the curriculum, the course and the teaching materials with the latest educational technologies. Woldemariam et al. (2025)
PEU Refers to the extent to which TEs perceive EdTech as easy to use in their teaching and learning activities. Davis (1989)
PU Refers to the TEs’ belief that EdTech can improve their effectiveness, efficiency or satisfaction in teaching, learning or administrative tasks. Davis (1989)
CI Refers to the TEs’ perception of the influence of colleagues on their effectiveness in integrating EdTech. Venkatesh et al. (2003)
INT Refers to the TEs’ reflection of the extent to which they use EdTechs such as computers, educational apps, simulations, games and the internet in their teaching and learning activities. Inan and Lowther (2010)

Step 2: Generating an item pool

An extensive item pool was generated based on validated instruments from existing works, along with newly developed items reflecting the grounded theory results. As a result, 86 items were generated both inductively and deductively. Each construct was represented using multiple items designed to capture its conceptual meaning accurately. Table 2 summarises the number of items designed for each construct with their sources.

Table 2. Distribution of items with their sources.
Construct Number of initial items Sources
RS 10 Woldemariam et al. (2025), Ferede et al. (2022)
IR 10 Woldemariam et al. (2025), Ferede et al. (2022)
DC 10 Tondeur et al. (2017), Türel et al. (2017), Ferede et al. (2022)
R 10 Woldemariam et al. (2025), Davis (1989), Venkatesh et al. (2003), Yildiz and Arpaci (2024)
SDC 5 Tzafilkou et al. (2022)
A 5 Hernández-Ramos et al. (2014), Teo (2009), Venkatesh et al. (2003)
PD 6 Woldemariam et al. (2025), Ferede et al. (2022)
CA 5 Woldemariam et al. (2025)
PEU 5 Davis (1989), Teo (2009), Baddar and Khan (2023)
PU 5 Davis (1989), Hart and Laher (2015), Teo (2009), Baddar and Khan (2023)
CI 5 Ferede et al. (2022), Venkatesh et al. (2003)
INT 10 AlAjmi (2022), Ferede et al. (2022), Venkatesh et al. (2003), Mishra and Koehler (2006)

Step 3: Developing the response set

Two response sets were developed to collect valid and meaningful insights from TEs. The first, a five-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree), measured TEs’ level of agreement on the factors influencing EdTech integration. The second, a five-point frequency scale (1 = Never to 5 = Very Often), rated how often TEs integrate EdTech in their daily teaching-learning practices.

Step 4: Conducting expert review

The instrument’s face and content validity were assessed by eight domain experts (two per department: information technology; curriculum and instruction; measurement and evaluation; and English language), comprising one PhD and one MA/MSc holder per discipline. Experts evaluated item relevance through qualitative feedback (suggesting revisions/removals) and quantitative ratings (1–4 relevance scale) (Polit et al., 2007). The content validity index (CVI) was computed at the item level (I-CVI ≥ 0.78) and scale level (S-CVI ≥ 0.90) to ensure the validity of individual items and of the entire tool, respectively (Almanasreh et al., 2019; Polit et al., 2019). Ambiguous, redundant or double-barrelled items were revised.
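For illustration, the CVI computation reduces to a few lines in R (the environment used for this study’s analyses). A minimal sketch, assuming an items × experts matrix of 1–4 relevance ratings in which ratings of 3 or 4 count as ‘relevant’ (Polit et al., 2007); the simulated ratings are purely illustrative:

```r
set.seed(1)
ratings <- matrix(sample(2:4, 86 * 8, replace = TRUE), nrow = 86)  # 86 items x 8 experts

i_cvi <- rowMeans(ratings >= 3)   # I-CVI: proportion of experts rating an item 3 or 4
s_cvi_ave <- mean(i_cvi)          # S-CVI/Ave: mean of the item-level CVIs

which(i_cvi < 0.78)               # items flagged for revision or removal
```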

Step 5: Psychometric testing

The pilot instrument was administered to a comprehensive sample of TEs via printed questionnaires (January 2025). Data were analysed using IBM SPSS Statistics (Version 27) for exploratory data analysis and R (R Core Team, 2024) for exploratory and confirmatory factor analysis, ensuring methodological rigor.

EFA evaluated the hypothesised factor structure, informed by the prior grounded theory. Data suitability was confirmed via Bartlett’s test (p < 0.05) and the Kaiser–Meyer–Olkin test (KMO > 0.6). Parallel analysis and scree plots guided factor extraction. Owing to Likert-scale non-normality (Shapiro-Wilk, p < 0.05), polychoric correlations and minimum residual factor analysis (MINRES) with oblimin rotation were applied, aligning with ordinal data standards (Watkins, 2018). Principal axis factoring with oblimin rotation was also run as a robustness check (Costello & Osborne, 2005). Items with suboptimal properties (loadings < 0.4, cross-loadings > 0.2, communality < 0.5) were iteratively pruned over 26 iterations, balancing statistical thresholds (Costello & Osborne, 2005; Schreiber, 2021) with theoretical coherence. Final exclusions prioritised alignment with the conceptual model.
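A minimal sketch of this workflow, assuming the psych package (the study names R but not specific packages) and letting df denote the 126 × 84 data frame of Likert responses:

```r
library(psych)

KMO(df)                # sampling adequacy (threshold: KMO > 0.6)
cortest.bartlett(df)   # Bartlett's test of sphericity (significant at p < 0.05)

# Parallel analysis on polychoric correlations to suggest the number of factors
fa.parallel(df, fm = "minres", cor = "poly")

# MINRES extraction with oblimin rotation, as described above
efa <- fa(df, nfactors = 13, fm = "minres", rotate = "oblimin", cor = "poly")
print(efa$loadings, cutoff = 0.40)   # loadings below 0.40 are candidates for pruning
efa$communality                      # communalities (threshold: >= 0.50)
```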

Confirmatory factor analysis (CFA) assessed the refined structure using weighted least square mean and variance adjusted (WLSMV) estimation (Schreiber, 2021), suitable for small samples (N = 126) and ordinal data. Convergent validity was established via AVE > 0.5 and outer loadings ≥ 0.7 (items with loadings ≥ 0.4 were retained if AVE and CR remained robust) (Kline, 2015). Discriminant validity employed the Fornell-Larcker criterion (FLC) (√AVE > inter-construct correlations) (Fornell & Larcker, 1981) and the HTMT ratio (< 0.85) (Henseler et al., 2015). Internal consistency was assessed against Cronbach’s alpha and composite reliability thresholds (α/CR > 0.7) (Fornell & Larcker, 1981).
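For reference, both convergent validity statistics follow directly from the standardised loadings $\lambda_i$ (with implied error variances $1 - \lambda_i^2$) of a factor’s $k$ items (Fornell & Larcker, 1981):

$$\mathrm{AVE} = \frac{\sum_{i=1}^{k} \lambda_i^{2}}{k}, \qquad \mathrm{CR} = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^{2}}{\left(\sum_{i=1}^{k} \lambda_i\right)^{2} + \sum_{i=1}^{k} \left(1 - \lambda_i^{2}\right)}$$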

Model fit was assessed using CFI, TLI, RMSEA, SRMR and chi-square minimum discrepancy divided by degrees of freedom (CMIN/DF) (Hu & Bentler, 1999). The thresholds CFI/TLI ≥0.90 (acceptable)/≥0.95 (excellent); RMSEA/SRMR ≤0.06 (excellent)/<0.08 (acceptable); CMIN/DF <3 (excellent)/<5 (acceptable) were considered for evaluation (Schreiber et al., 2006). Analyses adhered to parsimony and theoretical alignment.
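A sketch of this CFA stage, assuming the lavaan and semTools packages (the study specifies R and WLSMV estimation but not packages); the model syntax is abbreviated, and the item names merely follow the construct-number convention used elsewhere in this article:

```r
library(lavaan)
library(semTools)

model <- '
  PU  =~ PU1 + PU2 + PU3 + PU4 + PU5
  PEU =~ PEU1 + PEU2 + PEU4
  # ... the remaining 10 factors of the 58-item model ...
'

# WLSMV estimation with items declared ordinal, suited to Likert data
fit <- cfa(model, data = df, estimator = "WLSMV", ordered = TRUE)

fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr", "chisq", "df"))
standardizedSolution(fit)    # standardised loadings for AVE/CR computation
htmt(model, data = df)       # HTMT matrix (discriminant validity: < 0.85)
```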

Step 6: Finalisation

This phase evaluated EFA/CFA results against psychometric thresholds (α, CR, AVE, HTMT and FLC) to guide item retention/removal. Items failing to meet criteria (e.g. α/CR > 0.7, AVE > 0.5 and HTMT < 0.85) were excluded, enhancing precision and consistency. The final instrument thus achieved theoretical and empirical robustness, ensuring readiness for application.

Results

This study employed a cross-sectional survey design. Paper-based, self-administered structured questionnaires were distributed to 150 TEs, of which 126 were completed and returned, yielding an 84% response rate. Participants had a mean age of 41.7 years (SD = 6.7) and a mean professional experience in teacher education of 13.4 years (range: 5–34 years). The majority held an MSc as their highest qualification (83.3%), followed by PhD holders (12.7%) and BA/BSc holders (4%).

Content validity results

Initially, 86 items were derived from the literature and the contextual subcategories identified in a CGT study. Eight domain experts assessed face and content validity using a relevance scale (Polit et al., 2007). Two items (one each from RS and IR) were removed for CVI < 0.78; 19 were revised to address ambiguities or double-barrelled phrasing per expert recommendations. The process ensured alignment between empirical rigor and theoretical relevance, adhering to best practices in instrument development. The instrument demonstrated robust validity (S-CVI = 0.96), qualifying 84 items for the pilot study. Construct- and scale-level validity indices are detailed in Table 3.

Table 3. Summary of CVI for constructs (A-CVI) and the scale (S-CVI).
Construct A-CVI Number of removed items Number of modified items Number of items after validation
RS 0.913 1 5 9
IR 0.925 1 4 9
DC 0.938 0 4 10
R 0.963 0 4 10
SDC 0.975 0 0 5
A 1.000 0 0 5
PD 0.979 0 1 6
CA 1.000 0 0 5
PEU 0.925 0 0 5
PU 0.950 0 0 5
CI 0.975 0 0 5
INT 0.975 0 1 10
Scale-CVI 0.960 2 19 84

EFA results

The instrument, grounded in a theoretically rigorous framework, employed EFA to test hypothesised constructs empirically. Data suitability was confirmed by Bartlett’s test (χ2 = Infinity, p < 0.001) and KMO = 0.87 (>0.60 threshold). The parallel analysis identified 13 latent factors – exceeding the original 12-factor framework – suggesting empirical refinement or contextual specificity. The scree plot (Figure 2) validated this solution, with eigenvalues exceeding simulated random data thresholds. This divergence underscores the interplay between theoretical constructs and data-driven adjustments, reinforcing the instrument’s adaptability to contextual nuances whilst maintaining methodological fidelity.

Figure 2. Number of factors identified for factor analysis.

EFA employing oblimin rotation (13 factors via parallel analysis) iteratively refined items over 26 cycles. Items with loadings < 0.40 or cross-loadings were sequentially removed, beginning with DC2 (loading = 0.001) followed by 25 others (e.g. DC3 and PD5). This reduced items from 84 to 58, improving cumulative explained variance from 77% to 80%. The final factor structure (Figure 3) aligns with the theoretical model (12-factor), confirming validity. Each retained item demonstrated robust loading onto its hypothesised factor, balancing empirical rigor with theoretical fidelity.
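The iterative refinement can be sketched as a simple loop (illustrative only; the actual removal decisions also weighed cross-loadings, communalities and theoretical coherence, as described in the Methodology):

```r
# Refit, drop the weakest-loading item, repeat until all primary loadings >= 0.40.
items <- df   # the 84-item response data frame
repeat {
  efa <- fa(items, nfactors = 13, fm = "minres", rotate = "oblimin", cor = "poly")
  primary <- apply(abs(unclass(efa$loadings)), 1, max)   # each item's strongest loading
  if (min(primary) >= 0.40) break
  items <- items[, setdiff(colnames(items), names(which.min(primary)))]
}
```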

Figure 3. Factor structure after the twenty-sixth iteration.

Note: MR1 = INT, MR2 = R, MR3 = SDC, MR4 = A, MR5 = RS, MR6 = CI, MR7 = IR, MR8 = PU, MR9 = PD, MR10 = DC, MR11 = PEU and MR12 = CA. MR13 was removed. The result confirmed the robustness of the developed factor structure both theoretically and empirically.

A few items with suboptimal loadings (<0.40) or cross-loadings were nonetheless retained to uphold contextual relevance and theoretical integrity. CI5 and PEU4 were preserved to protect constructs with few indicators. Contextually critical items (e.g. RS2 and RS3) and theoretically aligned indicators (DC4, R6, INT2 and INT7) were maintained despite statistical nuances. All items exhibited communalities ≥ 0.50, and most exceeded 0.70 (exceptions included INT2 = 0.69 and PD2 = 0.58). This dual emphasis on empirical thresholds and substantive significance reinforced the robustness of the factor structure, aligning statistical rigor with contextual-theoretical coherence.

Confirmatory factor analysis results

The EFA-derived factor structure demonstrated robust psychometric properties (Table 4). Cronbach’s α (0.776–0.907) and CR (0.795–0.927) confirmed acceptable-to-strong internal consistency. Convergent validity was established through AVE (0.520–0.842), with all constructs exceeding the 50% variance threshold. Key constructs (PD, CA, PEU, PU, A, SDC and R) exhibited robust convergent validity (AVE ≥ 0.70). These metrics collectively affirm the instrument’s reliability and construct validity, meeting rigorous standards for latent variable modelling in EdTech research.

Table 4. Psychometric properties of convergent validity and reliability.
Latent factor α CR AVE
IR 0.847 0.849 0.620
PD 0.852 0.896 0.822
CA 0.888 0.910 0.842
DC 0.776 0.795 0.520
RS 0.848 0.879 0.616
PEU 0.846 0.863 0.757
PU 0.855 0.869 0.728
A 0.861 0.839 0.748
CI 0.816 0.817 0.661
SDC 0.890 0.920 0.786
R 0.907 0.927 0.728
INT 0.902 0.917 0.637

Most indicators demonstrated strong factor loadings (>0.7), with exceptions (e.g. INT2 = 0.578, RS6 = 0.67, PEU4 = 0.68, CI3 = 0.695, IR3 = 0.596, DC8 = 0.593 and DC9 = 0.642). Loadings ≥0.7 signify robust indicator-construct alignment, consistent with literature thresholds for convergent validity. Whilst minor deviations occurred, retained items maintained theoretical relevance and communality standards (≥ 0.50). These results affirm that nearly all items reliably captured their hypothesised constructs, underscoring the instrument’s psychometric rigor.

Discriminant validity assessed using FLC confirmed that each construct’s square root of AVE exceeded its correlations with other constructs (Table 5), ensuring distinct measurement of intended concepts. HTMT ratios (Table 6) further validated discriminant validity (<0.85). These results affirm that constructs uniquely captured their target phenomena, meeting rigorous psychometric standards for latent variable distinctiveness.

Table 5. Evaluation of discriminant validity using FLC.
INT R SDC RS PEU CI IR PU PD DC A CA
INT 0.798
R 0.146 0.853
SDC 0.146 0.247 0.887
RS 0.452 -0.012 0.052 0.785
PEU 0.490 0.219 0.404 0.385 0.870
CI 0.546 0.197 0.333 0.355 0.473 0.813
IR 0.610 -0.002 0.169 0.739 0.411 0.521 0.788
PU 0.066 0.644 0.002 -0.117 0.067 0.057 -0.067 0.853
PD 0.329 0.322 0.312 0.213 0.521 0.247 0.277 0.079 0.907
DC 0.536 0.215 0.591 0.163 0.505 0.477 0.355 -0.067 0.612 0.721
A 0.033 0.373 -0.031 -0.270 0.160 -0.161 -0.091 0.618 0.195 0.063 0.865
CA 0.533 0.243 0.528 0.338 0.296 0.672 0.594 0.109 0.377 0.635 -0.004 0.918


Table 6. Evaluation of discriminant validity based on HTMT.
INT R SDC RS PEU CI IR PU PD DC A CA
INT
R 0.134
SDC 0.157 0.238
RS 0.449 0.017 0.056
PEU 0.483 0.116 0.458 0.349
CI 0.535 0.219 0.319 0.392 0.472
IR 0.565 0.073 0.171 0.716 0.403 0.485
PU 0.081 0.601 0.035 -0.093 -0.002 0.041 -0.016
PD 0.378 0.284 0.363 0.256 0.542 0.274 0.327 0.013
DC 0.517 0.145 0.601 0.257 0.517 0.440 0.385 -0.089 0.630
A 0.039 0.317 0.006 -0.214 0.130 -0.125 -0.020 0.578 0.151 0.027
CA 0.568 0.279 0.517 0.321 0.283 0.672 0.587 0.122 0.408 0.639 0.042

Discriminant validity was confirmed despite negative HTMT/FLC scores (Tables 5 and 6), attributable to methodological artifacts. A small sample (N = 126), inversely related constructs (e.g. resource scarcity vs. positive attitudes) and model complexity (12 factors) generated expected negative correlations in correlation-based metrics. Retained cross-loading items (for contextual/theoretical relevance) and estimation challenges in high-dimensional models further contributed. These scores reflect inherent statistical dynamics of opposing constructs rather than validity shortcomings, affirming the model’s robustness.

Model fit was evaluated via absolute and comparative indices using the WLSMV estimator. RMSEA and SRMR (both 0.13) exceeded recommended thresholds, whilst CFI (0.94) and TLI (0.93) met criteria and CMIN/DF (3.1) demonstrated acceptable parsimony. The fit deviations likely arose from the limited sample size (N = 126) and model complexity (12 factors), which are typical constraints in multidimensional CFA. Despite this, reliability and validity metrics affirmed theoretical alignment.

Final instrument structure and scoring guidelines

The finalised 58-item instrument (Table 7), refined via EFA and CFA, demonstrated robust construct validity (AVE > 0.5) and reliability (α/CR > 0.7). Model fit indices confirmed its applicability for assessing EdTech integration in CTEs across resource-constrained settings. Factor items are measured on a 5-point Likert scale from 1 (Strongly Disagree) to 5 (Strongly Agree); EdTech integration frequency is rated from 1 (Never) to 5 (Very Often).

Table 7. Summary of the developed scale.
Latent factor Sample items Number of items
IR My college has a clear vision to cultivate digitally literate teachers. 5
PD I try to update myself on everything that deals with EdTech. 3
CA The current curriculum supports the integration of EdTech in teaching-learning. 3
DC I can effectively use digital platforms (e.g., Google Classroom, YouTube, etc.) in my teaching. 5
RS The classrooms are suitable to use EdTechs for instruction. 6
PEU Learning to integrate new EdTechs would be easy for me. 3
PU Teaching using EdTech offers real advantages over traditional methods of teaching. 5
A In my opinion, using EdTech in my teaching improves students’ learning. 4
CI I feel motivated to use EdTech due to my colleagues’ influence. 3
SDC My students are skilled in using digital tools for learning. 5
R I have the readiness to incorporate EdTechs into my teaching practices. 8
INT How often do you prepare multimedia resources (e.g., audio, video and animations) to support classroom teaching? 8
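For users applying the scale, subscale scores can be derived once responses are collected. A hypothetical sketch (the article prescribes the response formats but not a scoring formula; the mean-score approach and the construct-number column naming below are assumptions):

```r
# responses: data frame of the 58 final items, columns named IR1...IR5, PD1..., etc.
constructs <- c("IR", "PD", "CA", "DC", "RS", "PEU",
                "PU", "A", "CI", "SDC", "R", "INT")
scores <- sapply(constructs, function(f) {
  cols <- grep(paste0("^", f, "[0-9]+$"), names(responses), value = TRUE)
  rowMeans(responses[, cols, drop = FALSE])   # mean of a construct's items
})
```

Note that mean scores weight items equally; McNeish and Wolf (2020) caution that latent variable scores may be preferable when loadings differ markedly.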

Discussion

This study developed and validated a multidimensional instrument to assess the factors influencing EdTech integration in CTEs in developing countries. The development and validation followed a rigorous six-step approach, broadly classified into three phases: scale development, content validation and construct validation (Boateng et al., 2018). It addressed a critical gap in the literature on measurement scales, particularly in resource-constrained settings.

The theoretical framework from a prior CGT study (Woldemariam et al., 2025) guided the scale development. Items for five constructs (readiness, CA, PD, RS and IR) were developed inductively, in line with the subcategories of the CGT study and the literature (Boateng et al., 2018). The remaining constructs (i.e. EdTech integration, attitude, CI, SDC, DC, PU and PEU) were adapted from well-established theories (DeVellis, 2017). Consequently, 86 items were identified for content validation.

Content validity was established through a panel of eight experts, yielding an excellent I-CVI ≥ 0.875, A-CVI ≥ 0.9 and S-CVI = 0.96. These values exceeded the recommended threshold for I-CVI ≥ 0.78 and A-CVI ≥ 0.9 (Almanasreh et al., 2019; Polit et al., 2007). The findings suggest that the items can effectively measure the constructs.

The suitability of the data for EFA was confirmed by the KMO statistic (0.87), classified as ‘meritorious’ for factor analysis (Tabachnick & Fidell, 2019). The significance of Bartlett’s test of sphericity (χ2 = Infinity, p < 0.001) rejected the null hypothesis of an identity correlation matrix (Field, 2024). Whilst the infinite χ2 value may reflect computational artifacts (Hair et al., 2019), the significant Bartlett’s test and high KMO collectively affirm the data’s appropriateness for EFA (Watkins, 2018).

The EFA was conducted iteratively after identifying the optimal number of latent factors (13) using parallel analysis and a scree plot. In each iteration, items with loadings < 0.4 were removed. Eight items with loadings below 0.5 were retained owing to the small number of items in their constructs (fewer than three would otherwise remain) (Kline, 2015), their contextual relevance (Boateng et al., 2018) and their theoretical alignment (Clark & Watson, 2019). RS2 and RS3 were retained for their theoretical importance; removing them would have compromised the conceptual coverage of the RS construct. PEU4 and CI5 were retained for their strong theoretical alignment with PEU (Davis, 1989) and with peer-driven motivation in EdTech integration (Inan & Lowther, 2010; Venkatesh et al., 2003); since each of these constructs contains only three items, removal would have risked construct representation (Costello & Osborne, 2005). The remaining four items (DC4, R6, INT2 and INT7) were retained for their contextual importance in capturing core dimensions of DC, teacher readiness and EdTech integration, respectively. Consequently, after the final iteration, all items loaded onto the 12-factor structure, supporting the theoretical validity of the measurement model (Howard, 2016).

Internal consistency was robust for all subscales (α = 0.776–0.907; CR = 0.795–0.927), satisfying thresholds for both exploratory and confirmatory research (Fornell & Larcker, 1981; Hair Jr et al., 2021). Construct validity was empirically supported, with all retained items demonstrating statistically significant factor loadings (standardised λ ≥ 0.6, p < 0.001) and convergent validity (AVE > 0.5 for all factors) (Fornell & Larcker, 1981). Discriminant validity was confirmed through HTMT ratio analysis (all values < 0.85) (Henseler et al., 2015) and FLC (Fornell & Larcker, 1981). The findings confirmed the instrument’s consistency and accuracy in measuring the theoretical constructs.

The CFA results demonstrated acceptable incremental fit indices (CFI = 0.94, TLI = 0.93), suggesting a reasonable model fit (McNeish & Wolf, 2020). The CMIN/DF (3.1) falls within the acceptable range and suggests an adequate fit for practical application (Kline, 2015). Although the RMSEA (0.13) and SRMR (0.13) values exceed commonly used thresholds, it is imperative to interpret them in light of the model complexity (12 constructs, 58 indicators), the estimation method and the small sample size (N = 126). As noted by Kenny et al. (2015), RMSEA can be biased upwards in models with a limited sample size. Similarly, Shi et al. (2021) recommend caution in rigidly applying cutoff values, suggesting that RMSEA and SRMR may not always reflect true misfit under these conditions. Several scholars (e.g. Cao & Liang, 2022; Hu & Bentler, 1999; Xia & Yang, 2019) have documented the inconsistency of RMSEA and SRMR under such constraints. Importantly, both CFI and TLI exceed the 0.90 threshold and CMIN/DF ≈ 3; we therefore consider the overall model fit adequate. Furthermore, despite these limitations, strong reliability (α/CR > 0.7), convergent validity (AVE > 0.5) (McNeish et al., 2018) and a strong theoretical foundation ensured the suitability of the instrument (DeVellis, 2017).

The validated instrument reflects key contextual drivers of EdTech integration in Ethiopian CTEs, such as IR, RS and the critical role of PD. These findings align with recent national efforts, such as the digital education strategy (Ministry of Education, 2023) and the Digital Ethiopia 2025 strategy (Federal Democratic Republic of Ethiopia, 2020), which emphasise advancing TEs’ DC and digital transformation, respectively. Thus, the instrument offers a timely and practical tool for policymakers, institutional leaders and researchers to assess current EdTech integration efforts and inform targeted interventions.

Although validated within the Ethiopian context, the instrument was developed from constructs and indicators informed by both the global literature (e.g. Davis, 1989; Venkatesh et al., 2003) and a context-specific framework (Woldemariam et al., 2025). Many of the identified factors (e.g. infrastructure limitations, DC gaps and institutional leadership) are common across developing countries (Ezumah, 2020). Whilst the instrument offers a foundational framework for studying EdTech integration in teacher education in developing nations, its applicability in other contexts should be approached with caution; with appropriate cultural and linguistic adaptation, however, it holds promise for use in comparable settings.

Conclusion

This study addresses the lack of contextually grounded EdTech integration scales in developing countries by developing a validated, context-specific instrument. By incorporating factors such as IR, PD, DC, CA and SDC, it extends existing adoption models to assess TEs’ EdTech integration in underserved settings (e.g. CTEs). The tool enables policymakers to evaluate integration, identify barriers and facilitators and guide interventions. Despite robust psychometrics, the limited sample constrains generalisability. Future studies should adapt and validate the instrument across diverse socio-cultural and educational systems to establish cross-national relevance and measurement invariance; researchers could also employ larger samples and focus on predictive validity testing and longitudinal assessment. The instrument supports evidence-based strategies to enhance DC and IR, advancing EdTech research and practice. By integrating theoretical and empirical insights, it offers a foundation for academic inquiry and institutional evaluation amid global digital transformation efforts.

Acknowledgements

We sincerely thank the study participants for their invaluable contributions. We extend gratitude to the domain area experts, Dr. Angel Ford, Dr. Dubale Sahile, Dr. Getachew Robo, Dr. Belay Bekele, Mr. Tegbaru Mengesha, Mr. Mulugeta Buche, Mr. Abera Moges and Mr. Getinet Kassahun, for their rigorous content validation. Our appreciation also goes to Jimma University for facilitating data collection and access to resources, which were pivotal to this study.

Disclosure statement

The authors report there are no competing interests to declare.

Data availability statement

Data will be made available upon a reasonable request from the corresponding author.

References

AlAjmi, M. K. (2022). The impact of digital leadership on teachers’ technology integration during the COVID-19 pandemic in Kuwait. International Journal of Educational Research, 112, Article 101928. https://doi.org/10.1016/j.ijer.2022.101928

Almanasreh, E., Moles, R., & Chen, T. F. (2019). Evaluation of methods used for estimating content validity. Research in Social and Administrative Pharmacy, 15(2), 214–221. https://doi.org/10.1016/j.sapharm.2018.03.066

Aydin, M.K., Yildirim, T., & Kus, M. (2024). Teachers’ digital competences: A scale construction and validation study. Frontiers in Psychology, 15, Article 1356573. https://doi.org/10.3389/fpsyg.2024.1356573

Baddar, A., & Khan, M. A. (2023). Teachers’ intention to use digital resources in classroom teaching: The role of teacher competence, peer influence, and perceived image. Higher Learning Research Communications, 13(2), 26–41. https://doi.org/10.18870/hlrc.v13i2.1397

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. https://doi.org/10.1037/0033-295X.84.2.191

Boateng, G. O. et al. (2018). Best practices for developing and validating scales for health, social, and behavioral research: A primer. Frontiers in Public Health, 6, Article 00149. https://doi.org/10.3389/fpubh.2018.00149

Cao, C., & Liang, X. (2022). Sensitivity of fit measures to lack of measurement invariance in exploratory structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 29(2), 248–258. https://doi.org/10.1080/10705511.2021.1975287

Clark, L. A., & Watson, D. (2019). Constructing validity: New developments in creating objective measuring instruments. Psychological Assessment, 31(12), 1412–1427. https://doi.org/10.1037/pas0000626

Costello, A. B., & Osborne, J. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, and Evaluation, 10(1), Article 7. https://doi.org/10.7275/jyj1-4868

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13, 319–340. https://doi.org/10.2307/249008

DeVellis, R.F. (2017). Scale development: Theory and applications (4th ed.). Sage.

Ezumah, B.A. (2020). Critical perspectives of educational technology in Africa: Design, implementation, and evaluation. Palgrave Macmillan.

Federal Democratic Republic of Ethiopia. (2020). Digital Ethiopia 2025: A digital strategy for Ethiopia inclusive prosperity. https://www.pmo.gov.et

Ferede, B. et al. (2022). A structural equation model for determinants of instructors’ educational ICT use in higher education in developing countries: Evidence from Ethiopia. Computers & Education, 188, Article 104566. https://doi.org/10.1016/j.compedu.2022.104566

Field, A. (2024). Discovering statistics using IBM SPSS statistics (6th ed.). Sage.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.1177/002224378101800104

Gümüş, M. M., & Kukul, V. (2023). Developing a digital competence scale for teachers: Validity and reliability study. Education and Information Technologies, 28(3), 2747–2765. https://doi.org/10.1007/s10639-022-11213-2

Hair, J. F. et al. (2019). Multivariate data analysis (8th ed.). Cengage Learning EMEA.

Hair, Jr, J. F. et al. (2021). Partial least squares structural equation modeling (PLS-SEM) using R: A workbook. Springer Nature.

Hart, S. A., & Laher, S. (2015). Perceived usefulness and culture as predictors of teachers attitudes towards educational technology in South Africa. South African Journal of Education, 35, 1–13. https://doi.org/10.15700/SAJE.V35N4A1180

Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8

Hernández-Ramos, J. P. et al. (2014). Teachers’ attitude regarding the use of ICT. A factor reliability and validity study. Computers in Human Behavior, 31, 509–516. https://doi.org/10.1016/j.chb.2013.04.039

Howard, M. C. (2016). A review of exploratory factor analysis decisions and overview of current practices: What we are doing and how can we improve? International Journal of Human–Computer Interaction, 32(1), 51–62. https://doi.org/10.1080/10447318.2015.1087664

Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. https://doi.org/10.1080/10705519909540118

Ifinedo, E., & Kankaanranta, M. (2021). Understanding the influence of context in technology integration from teacher educators’ perspective. Technology, Pedagogy and Education, 30(2), 201–215. https://doi.org/10.1080/1475939X.2020.1867231

Inan, F. A., & Lowther, D. L. (2010). Factors affecting technology integration in K-12 classrooms: A path model. Educational Technology Research and Development, 58(2), 137–154. https://doi.org/10.1007/s11423-009-9132-y

Kenny, D. A., Kaniskan, B., & McCoach, D. B. (2015). The performance of RMSEA in models with small degrees of freedom. Sociological Methods & Research, 44(3), 486–507. https://doi.org/10.1177/0049124114543236

Kline, R. B. (2015). Principles and practice of structural equation modeling (4th ed.). Guilford Publications.

Laudari, S., & Prior, J. (2020). Examining the technological, pedagogical and content knowledge of Nepalese teacher educators. Journal of NELTA, 25(1–2), 43–61. https://doi.org/10.3126/nelta.v25i1-2.49730

McNeish, D., An, J., & Hancock, G. R. (2018). The thorny relation between measurement quality and fit index cutoffs in latent variable models. Journal of Personality Assessment, 100(1), 43–52. https://doi.org/10.1080/00223891.2017.1281286

McNeish, D., & Wolf, M. G. (2020). Thinking twice about sum scores. Behavior Research Methods, 52(6), 2287–2305. https://doi.org/10.3758/s13428-020-01398-0

Ministry of Education. (2023). Digital education strategy and implementation plan for Ethiopia (2023–2028). https://www.moe.gov.et/resources/policies-and-strategies/1

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054. https://doi.org/10.1111/j.1467-9620.2006.00684.x

Polit, D. F., Beck, C. T., & Owen, S. V. (2007). Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Research in Nursing & Health, 30(4), 459–467. https://doi.org/10.1002/nur.20199

R Core Team. (2024). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/

Roy, G. et al. (2021). Response, readiness and challenges of online teaching amid COVID-19 pandemic: The case of higher education in Bangladesh. Educational and Developmental Psychologist, 40(1), 40–50. https://doi.org/10.1080/20590776.2021.1997066

Scherer, R., Siddiq, F., & Tondeur, J. (2019). The technology acceptance model (TAM): A meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education. Computers & Education, 128, 13–35. https://doi.org/10.1016/j.compedu.2018.09.009

Schreiber, J. B. (2021). Issues and recommendations for exploratory factor analysis and principal component analysis. Research in Social and Administrative Pharmacy, 17(5), 1004–1011. https://doi.org/10.1016/j.sapharm.2020.07.027

Schreiber, J. B. et al. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323–338. https://doi.org/10.3200/JOER.99.6.323-338

Shi, D. et al. (2021). Evaluating SEM model fit with small degrees of freedom. Multivariate Behavioral Research, 57(2–3), 179–207. https://doi.org/10.1080/00273171.2020.1868965

Tabachnick, B., & Fidell, L. (2019). Using multivariate statistics (7th ed.). Pearson.

Teferra, T. et al. (2018). Ethiopian education development roadmap (2018–30): An integrated executive summary. Ministry of Education.

Teo, T. (2009). Modelling technology acceptance in education: A study of pre-service teachers. Computers & Education, 52(2), 302–312. https://doi.org/10.1016/j.compedu.2008.08.006

Tondeur, J. et al. (2017). Developing a validated instrument to measure preservice teachers’ ICT competencies: Meeting the demands of the 21st century. British Journal of Educational Technology, 48(2), 462–472. https://doi.org/10.1111/bjet.12380

Türel, Y. K., Özdemir, T. Y., & Varol, F. (2017). Teachers’ ICT skills scale (TICTS): Reliability and validity. Çukurova Üniversitesi Eğitim Fakültesi Dergisi, 46(2), 503–516. https://doi.org/10.14812/cuefd.299864

Tzafilkou, K., Perifanou, M., & Economides, A. A. (2022). Development and validation of students’ digital competence scale (SDiCoS). International Journal of Educational Technology in Higher Education, 19(1), Article 30. https://doi.org/10.1186/s41239-022-00330-0

UNESCO. (2023). Global education monitoring report, 2023: Technology in education: A tool on whose terms? https://doi.org/10.54676/UZQV8501

United Nations. (2015). Transforming our world: The 2030 agenda for sustainable development. https://sdgs.un.org/2030agenda

Venkatesh, V. et al. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540

Venkatesh, V., Thong, J. Y., & Xu, X. (2016). Unified theory of acceptance and use of technology: A synthesis and the road ahead. Journal of the Association for Information Systems, 17(5), 328–376. https://doi.org/10.17705/1jais.00428

Watkins, M. W. (2018). Exploratory factor analysis: A guide to best practice. Journal of Black Psychology, 44(3), 219–246. https://doi.org/10.1177/0095798418771807

Weiner, B. J. (2009). A theory of organizational readiness for change. Implementation Science, 4, Article 67. https://doi.org/10.1186/1748-5908-4-67

Woldemariam, M. T., Ergado, A. A., & Jimma, W. (2025). Factors influencing effective integration of educational technology in the colleges of teacher education in Ethiopia: A constructivist grounded theory. Australasian Journal of Educational Technology, 41(2), 50–70. https://doi.org/10.14742/ajet.9993

Xia, Y., & Yang, Y. (2019). RMSEA, CFI, and TLI in structural equation modeling with ordered categorical data: The story they tell depends on the estimation methods. Behavior Research Methods, 51(1), 409–428. https://doi.org/10.3758/s13428-018-1055-2

Yildiz, E., & Arpaci, I. (2024). Understanding pre-service mathematics teachers’ intentions to use GeoGebra: The role of technological pedagogical content knowledge. Education and Information Technologies, 29, 18817–18838. https://doi.org/10.1007/s10639-024-12614-1

Younas, A., & Porr, C. (2018). A step-by-step approach to developing scales for survey research. Nurse Researcher, 26(3), 14–19. https://doi.org/10.7748/nr.2018.e1585