ORIGINAL RESEARCH ARTICLE
Sony Yunior Erlangga, Sarwanto and Harlita
Doctoral Program in Science Education, Faculty of Teacher Training and Education, Universitas Sebelas Maret, Surakarta, Indonesia
Received: 5 August 2025; Revised: 6 October 2025; Accepted: 28 October 2025; Published: 12 March 2026
This study examines how Generative Artificial Intelligence Tools (GAIT) influence student learning performance (LP) through cognitive, affective and ethical pathways using Partial Least Squares Structural Equation Modeling (PLS-SEM). Data were collected from 292 Indonesian university students through a structured questionnaire. The results show that GAIT has a direct positive effect on LP (β = 0.920, p < 0.001). Mediation analysis identifies AI Knowledge (AIK) as the most dominant mediator (β = 0.715, p < 0.001), followed by AI Perception (AIP), Creativity (CRE), Fairness & Ethics (FE) and Cognitive Offloading (CO). Furthermore, AIK significantly moderates the GAIT–LP relationship (β = 0.006, p = 0.048). The model demonstrates high predictive power (R2 = 0.604) and good model fit (Standardized Root Mean Square Residual (SRMR) = 0.068). These findings highlight the central role of AI literacy and ethical awareness in maximising the benefits of GAIT for learning. This study contributes theoretically by integrating cognitive, affective and normative dimensions into a unified model of GAIT adoption and offers practical implications for designing AI literacy and ethics-oriented curricula in higher education.
Keywords: AI-based learning; digital literacy; cognitive mediation; technology ethics; higher education
*Corresponding author. Email: sarwanto@fkip.uns.ac.id
Research in Learning Technology 2026. © 2026 S. Y. Erlangga et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.
Citation: Research in Learning Technology 2026, 34: 3563 - http://dx.doi.org/10.25304/rlt.v34.3563
Advances in artificial intelligence (AI) technology have brought fundamental disruption to the higher education landscape. One of the most transformative innovations is the emergence of Generative Artificial Intelligence Tools (GAIT), such as ChatGPT, GitHub Copilot and DALL·E, which enable real-time content production, problem solving and natural language-based interactions (Atenas et al., 2024). This technology has gone beyond the role of conventional learning support instruments and has begun to reshape students’ cognitive practices in formulating ideas, understanding material and completing academic tasks (Geri, 2025). However, despite its increasingly widespread implementation, in-depth understanding of the epistemic and pedagogical impacts of GAIT use on student learning performance (LP) remains relatively limited and fragmented (Islam, 2025).
Previous literature generally focuses on the functional efficiency of AI in the context of education but has not comprehensively integrated the psychosocial and cognitive dimensions that accompany student interactions with the technology. Despite these advancements, little is known about how GAIT simultaneously affects the cognitive, affective and ethical dimensions of student learning. Furthermore, studies on the role of mediating variables such as AI literacy (AI Knowledge (AIK)), perceptions of AI, creativity and ethical and fair values are still partial (Acosta-Enriquez, 2025). Meanwhile, the potential of moderating variables such as self-efficacy (SE) and conceptual understanding of AI to strengthen or weaken the effects of GAIT use on learning outcomes has not been systematically tested (Bergdahl, 2025). This gap indicates the need for a conceptual model that can explain how GAIT operates not only as an aid but also as a complex cognitive partner that interacts dynamically with the internal characteristics of students (Blancaflor et al., 2023).
Although numerous studies have explored the use of AI in education, most have concentrated narrowly on adoption and efficiency, without synthesising how cognitive, affective and ethical dimensions collectively shape student learning. Systematic reviews (Passmore et al., 2025; Shahzad, 2025) indicate that research on generative AI in higher education remains fragmented, often emphasising technical acceptance based on the Technology Acceptance Model (TAM) or cognitive offloading (CO) effects, while overlooking motivational and ethical mechanisms. This fragmentation hampers theoretical integration and creates uncertainty regarding how students internalise AI as both a cognitive and moral partner in learning. To address this gap, the present study develops an integrated, empirically tested model that unites key perspectives on technological understanding, cognitive regulation and self-belief to explain the impact of GAIT on learning outcomes.
Grounded in three complementary frameworks, the TAM (Davis, 1989), Cognitive Load Theory (Sweller et al., 2019) and SE Theory (Bandura, 1982), this study conceptualises GAIT as both a cognitive and affective agent influencing student LP. The TAM explains how perceived usefulness and ease of use shape technology adoption; Cognitive Load Theory highlights AI’s potential to optimise mental effort; and SE Theory clarifies the motivational processes underlying learners’ confidence in using technology. Through this theoretical synthesis, generative AI is positioned as both a cognitive enhancer and an ethical learning partner, providing a comprehensive lens for understanding its multidimensional influence in higher education.
The urgency of this research is reinforced by the empirical reality that students are using GAIT in their learning process with increasing intensity, often without adequate literacy regarding its limitations, ethical responsibilities or long-term impacts. In this context, the use of technology without a deep conceptual understanding can reduce originality, weaken critical thinking skills or even violate academic integrity (Acosta-Enriquez, 2025). Therefore, this study seeks to answer the key questions of to what extent and through what mechanisms GAIT contributes to students’ LP, as well as what psychosocial factors play mediating and moderating roles in this relationship. By testing a model using Partial Least Squares Structural Equation Modeling (PLS-SEM), this study not only provides important theoretical contributions to developing an AI-based learning framework but also produces practical findings that can be used to improve curriculum design, digital literacy strategies and higher education policies better prepared for the era of AI. It is hoped that this study can broaden academic insight into how generative technologies can be utilised ethically and effectively and bring positive changes to learning in the 21st century.
The rapid development of Generative Artificial Intelligence Tools (GAIT) has brought significant changes to the learning process in higher education. The use of GAIT allows students to access more interactive and adaptive learning resources, thereby directly improving LP. Research shows that active use of GAIT is positively correlated with improved student academic outcomes (Anggoro & Khasanah, 2024). Moreover, Senent and Bueso (2022) assert that personalised AI-based learning environments are able to increase student engagement and academic achievement. In addition to direct impacts, the role of cognitive and affective factors as mediators is also increasingly being considered. Knowledge of AI is an important aspect that allows students to use generative technologies more effectively. Previous studies found that high AI literacy improves students’ ability to critically understand, evaluate and effectively utilise AI-generated content, which in turn contributes to improved LP. Attitudes and perceptions towards AI also influence motivation and acceptance of this technology in the learning context. Passmore et al. (2025) show that positive perceptions of AI encourage deeper engagement in the learning process, resulting in improved academic performance.
Creativity is a key mediator influenced by the use of GAIT. Punyani and Chhikara (2023) found that AI enhances creative tasks by stimulating innovative thinking and improving problem-solving, leading to better learning outcomes. Awareness of fairness and ethics in AI use also shapes responsible usage. Passmore et al. (2025) emphasised that ethical understanding fosters more cautious and effective use, supporting academic achievement. CO, the delegation of cognitive tasks to AI, reduces mental load, allowing students to focus on complex learning. Richmond (2025) reported that students who engaged in CO with AI performed better on higher-level tasks. AI literacy further strengthens the positive relationship between AI use and learning outcomes. Additionally, SE enhances the impact of AI use on academic performance. Drawing on Bandura’s (1982) theory, confidence in one’s ability to handle tasks boosts motivation and outcomes. Bergdahl (2025) found that SE mediates the link between AI understanding and performance, suggesting that higher competence increases confidence, improving learning.
Overall, literature shows that GAIT affect learning not only directly but also through cognitive, affective and ethical pathways, with AI literacy and SE as key enablers. Building on these foundations, the proposed framework (Figure 1) integrates cognitive (AIK and CO), affective (AI Perception (AIP) and SE) and normative (Fairness & Ethics (FE)) constructs to examine how GAIT influences LP. This integrated approach reflects the multidimensional nature of AI-based learning and provides the basis for formulating hypotheses H1–H8.
Figure 1. Conceptual framework.
Based on this, the following hypotheses are proposed:
H1: The use of GAIT has a direct positive and significant effect on students’ LP.
H2: GAIT positively affects LP indirectly through the mediation of AIK.
H3: GAIT positively affects LP indirectly through the mediation of AIP.
H4: GAIT positively affects LP indirectly through the mediation of Creativity (CRE).
H5: GAIT positively affects LP indirectly through the mediation of FE.
H6: GAIT positively affects LP indirectly through the mediation of CO.
H7: AIK moderates the relationship between GAIT and LP, so that the effect of GAIT on LP becomes stronger at higher levels of AI literacy.
H8: SE serially mediates the relationship between GAIT and LP through AIK.
This study uses a scale construction approach because it involves latent variables that cannot be observed directly (Sarstedt et al., 2022), as can be seen in Figure 1. This research instrument was developed based on indicators adapted from related literature and measured using a 5-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree) (Zhang et al., 2024). The GAIT construct is measured through five items (GAIT1–GAIT5) that assess the frequency and effectiveness of using AI such as ChatGPT in learning. AIK uses four items (AIK1–AIK4) to measure understanding of basic concepts and applications of AI. SE is measured with four items (SE1–SE4) about students’ beliefs in using AI for academic tasks. Four items of AIP (AIP1–AIP4) evaluate perceived benefits and ethical concerns related to AI.
Three CRE items (CRE1–CRE3) assess the impact of AI on innovation and creative thinking, while three FE items (FE1–FE3) measure perceptions of fairness and transparency of AI use. CO (CO1–CO3) evaluates the role of AI in lightening cognitive load through three items. Five LP items (LP1–LP5) measure perceived learning outcomes, including improved achievement and understanding of the material. Prior to the main data collection, the instruments were piloted on 31 respondents to ensure clarity and construct validity (Zhang et al., 2024). LP was measured using a validated self-perception scale adapted from Wang et al. (2023). Respondents rated perceived improvement in understanding, task efficiency and academic achievement after using GAIT. Objective grade data were not collected due to institutional privacy regulations; however, perceived learning outcomes have been widely accepted in PLS-SEM studies as reliable proxies for academic performance (Hair et al., 2020; Shahzad et al., 2024b). Reliability analysis confirmed α = 0.869, composite reliability [CR] = 0.905 and average variance extracted [AVE] = 0.658, indicating strong measurement consistency.
To further ensure the robustness of the measurement instrument, validation procedures were undertaken prior to the main data collection. The instruments were adapted from established scales validated in previous studies (Gansser, 2021; Wang et al., 2023), and content validity was ensured through expert review involving three scholars in educational psychology and learning technology. A pilot test (n = 31) was conducted to assess item clarity, translation accuracy, reliability and construct consistency; all constructs demonstrated strong internal consistency (Cronbach’s α > 0.80). Empirical validity and reliability were further tested through Confirmatory Factor Analysis (CFA) using SmartPLS 4. Convergent validity was confirmed, with all factor loadings > 0.70, CR > 0.70 and AVE > 0.50. Discriminant validity was verified using the Fornell–Larcker and Heterotrait–Monotrait Ratio (HTMT) criteria, both within recommended thresholds (HTMT < 0.85). These results demonstrate that each latent construct possesses adequate validity and reliability for inclusion in the structural model. Table 3 presents the correlation matrix and discriminant validity results among the study constructs.
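The reliability thresholds applied above (Cronbach's α and CR above 0.70, AVE above 0.50) can be computed directly from item scores and standardised loadings. The following is a minimal sketch, not the authors' code, using hypothetical loadings for illustration:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per indicator, all of equal length."""
    k = len(items)
    totals = [sum(row) for row in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

def composite_reliability(loadings):
    """Joreskog's rho: (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = sum(loadings) ** 2
    e = sum(1 - l ** 2 for l in loadings)  # indicator error variances
    return s / (s + e)

def average_variance_extracted(loadings):
    """Mean of the squared standardised loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardised loadings for a three-item construct.
demo = [0.80, 0.85, 0.90]
print(round(composite_reliability(demo), 3))        # 0.887, above the 0.70 cut-off
print(round(average_variance_extracted(demo), 3))   # 0.724, above the 0.50 cut-off
```

In practice SmartPLS reports these statistics automatically; the sketch only makes the formulas behind the thresholds explicit.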
The purposive sampling approach targeted university students who had at least one semester of experience using GAIT (e.g. ChatGPT, Copilot and DALL·E). To minimise bias, inclusion criteria were verified via self-report screening questions. Responses that did not meet the criteria were automatically excluded. The sample of 292 valid responses represents four universities across different regions in Indonesia, ensuring variability in institutional context while maintaining homogeneity of AI exposure. As shown in Table 1, the sample characteristics indicate that the respondents represent diverse backgrounds in terms of gender, age, educational level and field of study.
Data were analysed using PLS-SEM with SmartPLS 4. This method was chosen because it is theoretically appropriate for testing complex models with multiple mediating and moderating variables and latent constructs (Hair et al., 2020). The inner model structure reflects theoretical linkages derived from the TAM, Cognitive Load Theory and Self-Efficacy Theory, which jointly explain how cognitive, affective and normative dimensions influence LP. Model fit was assessed using R2, SRMR and Q2 values to confirm explanatory and predictive power.
Several universities in Indonesia have begun integrating Generative Artificial Intelligence Tools (GAIT) to generate learning content, predict academic challenges and provide personalised learning recommendations (Wang et al., 2023). Participants were purposively selected based on two inclusion criteria: (1) enrolment at a higher-education institution that had formally integrated GAIT into its management, teaching or learning processes; and (2) a minimum of one semester of direct experience using AI technologies such as ChatGPT or Copilot. Ethical approval for the study was obtained from the affiliated institutional review board, and informed consent was secured from all respondents prior to participation.
This study focused on students enrolled in Science, Technology, Engineering, and Mathematics (STEM)-related disciplines, including Natural Sciences, Engineering and Mathematics, where generative AI technologies such as ChatGPT, GitHub Copilot and DALL·E are actively integrated into coursework for content generation, coding and analytical reasoning. These disciplines were selected because they represent contexts with intensive AI adoption in academic activities, including problem-solving, simulation and creative experimentation. The focus on these domains aligns with Cognitive Tool Theory (Jordan, 2023), which posits that AI technologies enhance cognitive processing by enabling learners to offload complex informational tasks and concentrate on higher-order thinking. Data were collected online from 292 respondents at four universities via Questionnaire Star, with survey distribution via WhatsApp groups.
The demographic profile shows that the majority of respondents are female (56.16%), with a predominance of younger ages (18–24 years: 61.64%; 25–30 years: 29.11%). Most have a bachelor’s degree (71.92%), followed by a master’s degree (23.97%) and a doctorate (4.11%). Fields of study are concentrated in Natural Sciences (37.67%), Engineering (36.64%) and Mathematics (25.68%), reflecting a strong focus on science and technology disciplines. No respondents were over the age of 36, indicating a relatively young and highly educated population. The data collection techniques used ensure the validity of the results while facilitating access to relevant participants (Shahzad et al., 2024a).
This study uses the PLS-SEM approach to analyse the data, considering that the conceptual model is complex and contains predictive components (Sarstedt et al., 2022). SmartPLS 4 was used for the analysis due to its ability to handle models that include reflective and formative constructs, as well as providing robust statistical solutions for data that do not meet the normality assumption. This analysis allows for a better understanding of the relationship between the use of generative AI technologies and the various variables that influence student LP in higher education.
This section presents the findings of the PLS-SEM analysis in three main stages: (1) assessment of the measurement model, (2) evaluation of the structural model and (3) hypothesis testing. The results are systematically organised according to the proposed hypotheses (H1–H8) to improve clarity and logical flow. Table 2 presents the measurement model results, including reliability, convergent and discriminant validity indicators, whilst Table 4 summarises the structural relationships among constructs. Figure 2 illustrates the validated conceptual model, showing the direction and magnitude of the tested paths. The following subsections describe each stage of the analysis, highlighting significant path coefficients (β), t-values and p-values to provide an integrated understanding of how cognitive, affective and ethical factors influence LP in AI-enhanced education.
Figure 2. The validated conceptual model. The model was analysed using bootstrapping with 5000 subsamples at a significance level of 0.05 in SmartPLS (v.4.1.0.2) (Wang et al., 2021). AI: artificial intelligence.
| Constructs and items | | FL | α | CR | AVE | Source |
| Generative artificial intelligence-based technology (GAIT) | | | 0.943 | 0.958 | 0.821 | (Shahzad et al., 2024) |
| GAIT1 | I use generative AI tools (e.g. ChatGPT and Copilot) to help me understand course material. | 0.723 | | | | |
| GAIT2 | I use generative AI to complete assignments or academic tasks. | 0.951 | ||||
| GAIT3 | Generative AI helps me generate new ideas and inspiration in learning. | 0.975 | ||||
| GAIT4 | I regularly use generative AI as part of my learning routine. | 0.963 | ||||
| GAIT5 | I integrate generative AI to improve the quality of my academic work. | 0.950 | ||||
| Cognitive Offloading (CO) | | | 0.925 | 0.953 | 0.872 | (Shum, 2024) |
| CO1 | I use AI to remember information that is difficult to memorise. | 0.868 | ||||
| CO2 | I often rely on AI to help organise complex information. | 0.865 | ||||
| CO3 | AI helps me understand complicated topics more easily. | 0.856 | ||||
| Creativity (CRE) | | | 0.935 | 0.959 | 0.886 | (Wu, 2021) |
| CRE1 | I can develop unique ideas after using generative AI. | 0.860 | ||||
| CRE2 | Generative AI encourages me to think more creatively in my studies. | 0.787 | ||||
| CRE3 | I can generate original solutions for academic tasks. | 0.739 | ||||
| Fairness & Ethics (FE) | | | 0.716 | 0.875 | 0.777 | (Bernabei et al., 2023) |
| FE1 | I consider academic honesty when using AI. | 0.905 | ||||
| FE2 | I understand the ethical boundaries of using AI in learning. | 0.750 | ||||
| FE3 | I do not rely on AI to copy or plagiarise others’ work. | 0.709 | ||||
| Learning Performance (LP) | | | 0.869 | 0.905 | 0.658 | (Wang et al., 2023) |
| LP1 | I feel my academic performance has improved since using AI. | 0.829 | ||||
| LP2 | I understand course material better when using AI as a support tool. | 0.848 | ||||
| LP3 | I complete tasks more effectively after using AI. | 0.838 | ||||
| LP4 | I complete assignments more quickly with the help of AI. | 0.793 | ||||
| LP5 | My academic grades have improved through the use of AI in learning. | 0.837 | ||||
| AI Knowledge (AIK) | | | 0.972 | 0.980 | 0.924 | (Li, 2024) |
| AIK1 | I understand how generative AI tools like ChatGPT work. | 0.996 | ||||
| AIK2 | I know how generative AI generates responses or texts. | 0.927 | ||||
| AIK3 | I am aware of the strengths and limitations of generative AI. | 0.927 | ||||
| AIK4 | I can explain basic concepts of generative AI systems. | 0.992 | ||||
| Self-Efficacy (SE) | | | 0.940 | 0.957 | 0.849 | (Tan et al., 2021) |
| SE1 | I am confident in completing academic tasks with AI support. | 0.890 | ||||
| SE2 | I believe I can learn new topics, even difficult ones, with the help of AI. | 0.945 | ||||
| SE3 | I have the ability to learn independently using AI. | 0.932 | ||||
| SE4 | I am capable of using AI to support my learning process. | 0.917 | ||||
| AI Perception (AIP) | | | 0.779 | 0.821 | 0.705 | (Westphal et al., 2023) |
| AIP1 | I find generative AI beneficial for supporting my learning. | 0.844 | ||||
| AIP2 | Generative AI makes learning more efficient. | 0.867 | ||||
| AIP3 | I believe AI use can improve academic outcomes. | 0.747 | ||||
| AIP4 | I have a positive perception of using AI in education. | 0.743 | ||||
| FL: Factor loading; α: Cronbach’s alpha; CR: Composite reliability; AVE: Average variance extracted; AI: artificial intelligence. | ||||||
| Inter-construct correlations | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| 1. AIK | 0.961 | |||||||
| 2. AIP | 0.157 | 0.802 | ||||||
| 3. CO | 0.009 | 0.328 | 0.934 | |||||
| 4. CRE | 0.187 | 0.638 | 0.212 | 0.941 | ||||
| 5. FE | 0.309 | 0.218 | 0.161 | 0.072 | 0.792 | |||
| 6. GAIT | 0.173 | 0.267 | 0.487 | 0.316 | 0.021 | 0.906 | ||
| 7. LP | 0.008 | 0.602 | 0.222 | 0.295 | 0.165 | 0.194 | 0.811 | |
| 8. SE | 0.210 | 0.114 | 0.072 | 0.198 | 0.122 | 0.210 | 0.008 | 0.921 |
| The values on the diagonal are the square root of the AVE, while off-diagonal values reflect the correlations between constructs, in accordance with the Fornell–Larcker criterion for testing discriminant validity. All calculations were performed with the PLS algorithm in SmartPLS (Version 4.1.0.2) (Hu et al., 2025). CO: Cognitive Offloading; GAIT: Generative artificial intelligence-based technology; CRE: Creativity; FE: Fairness & Ethics; LP: Learning Performance; AIK: AI Knowledge; SE: Self-Efficacy; AIP: AI Perception; AVE: Average Variance Extracted. |
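The Fornell–Larcker check described in the table note can be sketched as follows: a construct passes when the square root of its AVE exceeds its correlation with every other construct. The AVE and correlation values below are taken from Tables 2 and 3 of this study (a three-construct excerpt: AIK, AIP and LP); the function name is illustrative.

```python
import math

def fornell_larcker_ok(ave, corr):
    """ave: construct -> AVE; corr: nested dict of inter-construct correlations.
    Returns True if sqrt(AVE) exceeds every off-diagonal correlation."""
    for i in ave:
        diag = math.sqrt(ave[i])
        for j in ave:
            if i != j and abs(corr[i][j]) >= diag:
                return False
    return True

# AVE values from Table 2 and correlations from Table 3.
ave = {"AIK": 0.924, "AIP": 0.705, "LP": 0.658}
corr = {
    "AIK": {"AIP": 0.157, "LP": 0.008},
    "AIP": {"AIK": 0.157, "LP": 0.602},
    "LP":  {"AIK": 0.008, "AIP": 0.602},
}
print(fornell_larcker_ok(ave, corr))  # True: e.g. sqrt(0.658) = 0.811 > 0.602
```

For these published values the criterion holds, matching the paper's conclusion that discriminant validity is established.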
To minimise the potential for Common Method Bias (CMB), which can affect the validity of the results, this study implemented procedural and statistical strategies. Procedurally, the survey instrument was designed with careful item construction, pre-testing and an explanation of the research objectives to respondents to ensure the accuracy of responses. Statistically, Harman’s one-factor test through Principal Component Analysis (PCA) in the Statistical Package for the Social Sciences (SPSS) showed that the largest variance explained by a single factor was below the critical threshold of 50%, indicating that no single factor dominated. This result is supported by the application of the one-way test approach (Lamberti, 2023), which further confirms that CMB is not a significant threat in this study.
This study ensured the validity and reliability of the instrument through three comprehensive approaches. Content validity was ensured through pre-survey testing and the adaptation of published scales (Sarstedt et al., 2022). Statistical analysis showed that all indicators met the validity criteria with factor loadings >0.70 (Hair et al., 2020). Construct reliability was established through Cronbach’s alpha and CR values >0.70, while AVE values >0.50 simultaneously confirmed convergent validity. Discriminant validity was confirmed through two methods: (1) the square root of AVE, which is greater than the correlations between constructs, and (2) HTMT values <0.90 (Shahzad et al., 2024b), ensuring that each construct is unique and precisely measured. Together, these findings guarantee the quality of the measures used in the study.
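The HTMT ratio referred to above is the mean correlation between the items of two different constructs, divided by the geometric mean of the mean within-construct item correlations. The following is a hedged sketch of that formula (not SmartPLS's implementation), with small hypothetical item-score lists:

```python
import math
from itertools import combinations

def _corr(x, y):
    """Pearson correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def htmt(items_a, items_b):
    """Mean between-construct item correlation over the geometric mean of
    the mean within-construct item correlations (absolute values)."""
    hetero = [abs(_corr(a, b)) for a in items_a for b in items_b]
    mono_a = [abs(_corr(x, y)) for x, y in combinations(items_a, 2)]
    mono_b = [abs(_corr(x, y)) for x, y in combinations(items_b, 2)]
    mean = lambda v: sum(v) / len(v)
    return mean(hetero) / math.sqrt(mean(mono_a) * mean(mono_b))

# Toy item scores for two constructs (three respondents each); perfectly
# collinear items give the degenerate upper bound of 1.0.
print(round(htmt([[1, 2, 3], [2, 4, 6]], [[1, 2, 3], [3, 6, 9]]), 3))  # 1.0
```

On real data, values below the 0.85-0.90 range indicate that the two constructs are empirically distinct.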
The results of the structural model analysis indicate that the use of GAIT has a positive and significant effect on students’ LP (β = 0.920; p < 0.001). In addition to this direct effect, five partial mediation paths through AIK, AIP, CRE, FE and CO were also found to be significant, with p < 0.001 for AIK, AIP and CO; p = 0.008 for CRE; and p = 0.001 for FE. The serial mediation analysis further revealed that AIK significantly enhances LP through SE (β = 0.006; p = 0.048), while the pathway through AIP was not significant (p = 0.111). Overall, the model demonstrates high predictive power, with an R2 value of 0.604 and an SRMR value of 0.068, indicating a good model fit.
The first hypothesis (H1) proposed that the use of GAIT would have a direct positive and significant effect on students’ LP. The results strongly supported this assumption, showing that GAIT had a substantial direct influence on LP (β = 0.920; p < 0.001). This indicates that students who actively utilise GAIT, such as ChatGPT, Copilot or DALL·E, tend to achieve better academic outcomes due to improved comprehension, faster task completion and enhanced engagement. Therefore, H1 is supported.
Hypotheses H2–H6 examined whether the relationships between GAIT and LP were mediated by several psychosocial and cognitive variables, including AIK, AIP, CRE, FE and CO. The mediation analysis revealed that all five constructs acted as significant mediators. Specifically, AIK emerged as the strongest mediator (β = 0.715; p < 0.001), followed by AIP (β = 0.246; p < 0.001), CRE (β = 0.226; p = 0.008), FE (β = 0.143; p = 0.001) and CO (β = 0.196; p < 0.001). These findings confirm that GAIT enhances LP not only through direct engagement but also through its effects on students’ understanding, perception, creativity, ethical awareness and cognitive management. Hence, H2–H6 are supported.
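The mediation tests above rest on bootstrapped indirect effects: the indirect path (e.g. GAIT → AIK → LP) is the product of the two path coefficients, judged significant when the bootstrap confidence interval excludes zero. A deliberately simplified illustration follows, using synthetic data, plain OLS slopes instead of PLS path weighting, and a b-path that does not control for the direct effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 292  # same sample size as the study
gait = rng.normal(size=n)
aik = 0.7 * gait + rng.normal(scale=0.5, size=n)  # synthetic a-path
lp = 0.6 * aik + rng.normal(scale=0.5, size=n)    # synthetic b-path

def slope(x, y):
    """OLS slope of y regressed on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

boot = []
for _ in range(5000):  # 5000 resamples, mirroring the study's bootstrap
    idx = rng.integers(0, n, size=n)
    boot.append(slope(gait[idx], aik[idx]) * slope(aik[idx], lp[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% percentile CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```

Because the synthetic a- and b-paths are both positive, the interval excludes zero, which is the same decision rule SmartPLS applies to the H2-H6 pathways.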
Hypothesis H7 proposed that AIK moderates the relationship between GAIT and LP. To test this, the study examined whether AIK, AIP and SE moderated the GAIT–LP relationship. The results showed that only AIK significantly strengthened it (β = 0.006; p = 0.048), indicating that students with higher levels of AI literacy gain more benefit from GAIT use; therefore, H7 is supported. Meanwhile, AIP and SE did not show significant moderating effects (p > 0.05), reinforcing the notion that knowledge-based readiness, rather than affective perception alone, determines how effectively students can leverage GAIT for learning.
Hypothesis H8 proposed that GAIT indirectly affects LP through a serial mediation pathway involving AIK and SE. The serial mediation test supported this hypothesis (β = 0.006; p = 0.048), suggesting that students with higher AI literacy (AIK) tend to develop greater confidence (SE) in applying AI tools to academic tasks, which subsequently improves their LP. Therefore, H8 is supported. By contrast, an alternative serial pathway through AIP and SE was statistically insignificant (p = 0.111), indicating that positive perceptions of AI alone are insufficient to build self-efficacy or translate into improved academic outcomes unless supported by solid AIK and skills. Overall, the results provide robust empirical support for the proposed conceptual model (Figure 1), confirming that GAIT influences student LP both directly and indirectly through cognitive, affective and ethical mechanisms.
The structural model shows substantial explanatory power. The R2 value of 0.604 for the LP construct indicates that the variables in the model explain a large proportion of the variation in learning outcomes. The R2 values for the other constructs also indicate meaningful explanatory power: CRE (0.131), FE (0.114), SE (0.040) and CO (0.187). Furthermore, the SRMR value of 0.068 is within the recommended range (<0.08), indicating that the model has adequate structural fit.
The predictive power of the model was evaluated through cross-validated redundancy analysis (Q2) using the blindfolding technique. Following the approach of Sarstedt et al. (2022), a model has predictive ability if its Q2 value is greater than 0. The results of the analysis show that all dependent constructs in the model have positive Q2 values, indicating that the model has good predictive ability for the student learning achievement variables.
The findings of this study indicate that the use of GAIT contributes significantly to improving student LP, both directly and indirectly. The findings not only reaffirm previous results but also extend the theoretical understanding of how GAIT influences learning. Specifically, the strong direct and indirect effects found in this study align with the TAM (Davis, 1989) by emphasising that AIK increases perceived usefulness and engagement. However, the results go beyond the original TAM framework by showing that the integration of cognitive and ethical constructs modifies the adoption mechanism of AI-based learning (Gansser, 2021). This demonstrates that the value of AI in education lies not only in technological acceptance but also in how students internalise it as a cognitive and ethical learning partner. Such insights expand TAM’s applicability to the context of generative AI, bridging cognitive and moral dimensions often absent in traditional acceptance studies.
The highly significant direct effect of GAIT on LP (β = 0.920; p < 0.001) confirms that the adoption of AI technology in the context of higher education has passed the experimental stage and has shown a real impact on learning outcomes. This is in line with recent literature that shows a paradigm shift in digital learning, where the presence of AI is not only assistive but also transformative (Matli, 2024). For instance, engineering students reported using ChatGPT to debug code and clarify algorithmic concepts, while mathematics students relied on GAIT tools to visualise formulas and proofs. Such practices demonstrate how generative AI functions as a cognitive partner that reduces mental load (Kennedy, 2024) and enhances self-efficacy (Bandura, 1982), resulting in accelerated comprehension and more efficient problem-solving. These practical examples support the statistical evidence and illustrate how GAIT not only improves task completion speed but also strengthens conceptual understanding in complex learning domains.
Furthermore, the mediation pathways provide an in-depth understanding of the psychosocial mechanisms linking AI use to academic achievement. AIK emerged as the most dominant mediator (β = 0.715; p < 0.001), confirming the importance of AI literacy in optimising the use of this technology and corroborating the TAM proposition that technological understanding enhances perceived usefulness (Gansser, 2021). In this context, students with a deeper understanding of AI are better able to integrate it strategically into their learning activities, rather than treating it as a passive tool (Lundberg & Mozelius, 2024). Similarly, the significant path through CO validates Cognitive Load Theory by demonstrating how GAIT reduces learners’ mental burden when dealing with complex information (Kennedy, 2024). In addition, the positive contribution of self-efficacy (SE) confirms Bandura’s (1982) principle that confidence in one’s capability increases motivation and persistence, which consequently improves LP. Together, these results empirically reinforce the integrated theoretical model proposed in this study.
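The product-of-coefficients logic behind such a mediation test can be sketched in a few lines. The snippet below is a minimal illustration using simulated data (the effect sizes, noise levels and bootstrap settings are assumptions chosen for demonstration, not the study’s survey data or its PLS-SEM software output): the indirect effect is the product of the path from GAIT to the mediator (a) and the path from the mediator to LP controlling for GAIT (b), with a percentile bootstrap confidence interval of the kind routinely reported in PLS-SEM.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 292  # sample size matching the study

# Simulated standardised scores (illustrative only, not the study's data)
gait = rng.normal(size=n)                                    # GAIT use
aik = 0.7 * gait + rng.normal(scale=0.5, size=n)             # mediator: AI Knowledge
lp = 0.3 * gait + 0.6 * aik + rng.normal(scale=0.5, size=n)  # Learning Performance

def slope(x, y):
    """Coefficient of x in the regression y ~ 1 + x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def partial(x, m, y):
    """Coefficient of m in the regression y ~ 1 + x + m."""
    X = np.column_stack([np.ones_like(x), x, m])
    return np.linalg.lstsq(X, y, rcond=None)[0][2]

a = slope(gait, aik)        # path a: GAIT -> AIK
b = partial(gait, aik, lp)  # path b: AIK -> LP, controlling for GAIT
indirect = a * b            # product-of-coefficients indirect effect

# Percentile bootstrap confidence interval for the indirect effect
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot[i] = slope(gait[idx], aik[idx]) * partial(gait[idx], aik[idx], lp[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero, as in the simulated run above, is the standard criterion for declaring the mediated path significant.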
Another interesting finding is the positive contribution of AIP, CRE, FE and CO. The role of AIP as a significant mediator (β = 0.246; p < 0.001) indicates that positive perceptions of AI are correlated with higher academic engagement. It should be noted, however, that this perception acts only as a mediator, not a moderator; its influence is limited unless accompanied by adequate understanding and competence (Srinivasan, 2023). The contributions of CRE and CO, in turn, show that AI can act both as a catalyst for ideas and as a cognitive tool that helps manage students’ mental load (Rajput & Arora, 2024). This adds a new dimension to Cognitive Load Theory, with AI serving as an external memory system that allows students to focus on high-level processing (Kennedy, 2024).
The results for FE highlight the normative dimension of AI adoption. The significant mediation effect of FE (β = 0.143; p = 0.001) shows that the integration of ethical values in the use of AI is positively correlated with learning outcomes. This reinforces the view that digital ethics education is an integral element in shaping a generation of AI users who are not only technically proficient but also socially responsible (Shahzad, 2025). The significant serial mediation through AIK and SE (β = 0.006; p = 0.048) adds an important conceptual layer to understanding the formation of academic self-confidence: high AIK contributes to increased SE, which in turn has a positive impact on LP (Kim, 2025). However, the mediation path through AIP and SE was not significant (p = 0.111), indicating that positive perceptions of AI do not automatically increase self-confidence unless supported by substantial mastery of the technology. This finding underscores the importance of integrating skills-based AI training rather than simply promoting positive perceptions. In the moderation analysis, only AIK showed a significant effect (β = 0.006; p = 0.048), confirming that the relationship between the use of GAIT and LP is highly dependent on students’ AI literacy levels. This result not only strengthens the assumption that technology utilisation is knowledge-dependent but also challenges the popular narrative that access and exposure to AI are sufficient to generate learning benefits. On the contrary, the effect of GAIT on LP is optimal only when users have adequate conceptual and procedural understanding of AI. This finding is consistent with cognitive tool theory (Jordan, 2023), which holds that technology expands cognitive capabilities only when users actively construct and manage the information obtained from it (Fabia, 2024).
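A moderation effect of this kind corresponds to an interaction term between GAIT use and AIK. As a minimal sketch with simulated data (all coefficients and the simple-slopes probe are illustrative assumptions, not the study’s estimates), mean-centred predictors are multiplied to form the product term, and the conditional effect of GAIT on LP is then probed at low and high levels of AIK:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 292  # sample size matching the study

# Hypothetical standardised data: AIK moderates the GAIT -> LP effect
gait = rng.normal(size=n)
aik = rng.normal(size=n)
# Simulated true model with a small positive interaction term
lp = 0.5 * gait + 0.4 * aik + 0.15 * gait * aik + rng.normal(scale=0.5, size=n)

# Mean-centre predictors before forming the product term, the usual
# practice when probing moderation in regression or PLS-SEM
g_c = gait - gait.mean()
a_c = aik - aik.mean()
X = np.column_stack([np.ones(n), g_c, a_c, g_c * a_c])
beta = np.linalg.lstsq(X, lp, rcond=None)[0]
print(f"interaction coefficient = {beta[3]:.3f}")

# Simple-slopes probe: conditional effect of GAIT at AIK = -1 SD and +1 SD
for label, level in [("-1 SD", -1.0), ("+1 SD", +1.0)]:
    slope = beta[1] + beta[3] * level
    print(f"GAIT slope at AIK {label}: {slope:.3f}")
```

A positive interaction coefficient means the GAIT slope grows with AIK, which is the pattern the study interprets as knowledge-dependent technology utilisation.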
The insignificance of AIP and SE as moderators also carries important conceptual implications. Positive perceptions of AI (AIP) and confidence in learning (SE) appear insufficient, on their own, to bridge the use of AI with improved learning outcomes. What is needed is not merely affective readiness but epistemic readiness. In this context, AI literacy is not just a technical skill (e.g. knowing how to use ChatGPT) but a critical understanding of the functions, limitations and ethical implications of AI in the learning process (Kulangareth et al., 2024). AI literacy should therefore be positioned as an essential competency in higher education, not a complement (Lang, 2024). An important message that emerges from these findings is that higher education institutions need to shift from simply promoting the use of technology to establishing a learning ecosystem oriented towards deep AI literacy (Chen, 2025). Practical skills-based training needs to be integrated with a curriculum that encourages reflective and analytical understanding of technology. In this way, students become not merely passive users of AI but learning agents able to use AI strategically and ethically (Yim, 2024).
Furthermore, the high explanatory power of the model (R2 = 0.604 for LP) indicates that the conceptual framework developed in this study captures the complexity of the relationships among technological, psychosocial and cognitive factors in AI-based learning. The SRMR value of 0.068, below the commonly recommended threshold of 0.08, confirms that the model is not only statistically adequate but also conceptually robust. The integration of cognitive (AIK, CO), affective (AIP, SE) and normative (FE) dimensions in the model reflects a holistic approach that is highly relevant in the increasingly digitalised higher education landscape. Overall, these findings present a new narrative: generative AI should be seen not only as an instrumental technology but also as a cognitive-ethical partner in the learning process. When used by AI-literate learners, this technology not only accelerates academic achievement but also encourages the development of higher-order thinking, reflectivity and ethical responsibility. The practical implications are broad, spanning curriculum design, lecturer training and national digital literacy policies. As illustrated in Figure 1 and supported by Table 4, the hypothesised model was empirically validated, demonstrating that cognitive (AIK, CO), affective (AIP, SE) and ethical (FE) dimensions jointly shape LP in AI-enhanced education.
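The SRMR fit index cited here is simply the root mean square of the differences between the observed and model-implied correlations. The sketch below illustrates the computation with small hypothetical matrices (the values are invented for demonstration, not the study’s actual correlation structure):

```python
import numpy as np

# Illustrative observed and model-implied correlation matrices
obs = np.array([[1.00, 0.62, 0.55],
                [0.62, 1.00, 0.48],
                [0.55, 0.48, 1.00]])
imp = np.array([[1.00, 0.58, 0.57],
                [0.58, 1.00, 0.44],
                [0.57, 0.44, 1.00]])

# SRMR: root mean square of the standardised residuals over the
# non-redundant (lower-triangular, incl. diagonal) elements
idx = np.tril_indices_from(obs)
resid = obs[idx] - imp[idx]
srmr = np.sqrt(np.mean(resid ** 2))
print(f"SRMR = {srmr:.3f}")  # values below 0.08 are conventionally read as good fit
```

Applied to the study’s full correlation matrices, the same computation yields the reported SRMR of 0.068, inside the conventional 0.08 cut-off.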
Theoretically, this research advances the TAM and Cognitive Tool Theory by demonstrating how cognitive, affective and normative mediators operate concurrently within a unified PLS-SEM framework. The findings reveal that AIK not only enhances perceived usefulness (a core tenet of TAM) but also strengthens learners’ self-efficacy, validating the motivational dimension of technology acceptance. Simultaneously, the integration of ethical and affective constructs extends Cognitive Tool Theory by positioning generative AI as a socio-cognitive partner, one that supports CO, creative reasoning and ethical decision-making in complex learning contexts. Through this synthesis, this study deepens theoretical understanding of how learners cognitively and morally co-adapt to intelligent technologies in the digital learning ecosystem.
Practically, this study provides a multi-layered roadmap for embedding generative AI into higher education in ways that are pedagogically meaningful, ethically grounded and institutionally sustainable. Beyond promoting technical proficiency, universities must cultivate critical AI literacy: the ability to question, interpret and ethically deploy AI outputs. This entails curriculum reform that integrates reflective and project-based engagement with AI tools; faculty development programmes that equip educators to facilitate human–AI collaboration; and governance frameworks that uphold academic integrity in AI-mediated learning environments. These insights position the study as a foundational reference for institutions seeking to align technological innovation with ethical stewardship and human-centred learning.
This study confirms that the use of GAIT significantly enhances student LP, both directly and indirectly through mediating channels involving AIK, AIP, CRE, FE and CO. Among these mediators, AIK emerged as the most dominant, indicating that a deep understanding of AI serves as a critical foundation for optimising technology in educational contexts. Further moderation analysis revealed that only AIK significantly strengthened the relationship between GAIT and LP, emphasising the importance of AI literacy that is not merely technical but also conceptual and reflective. With high explanatory power and good model fit, these results provide a robust empirical foundation for advancing AI-based learning frameworks in higher education. Beyond empirical findings, the results align closely with the theoretical frameworks underpinning this research. The observed relationships illustrate that perceived usefulness, cognitive support and motivational confidence interact to shape learning outcomes. This integration advances understanding of AI in education by positioning GAIT as both a cognitive enhancer and a socio-ethical learning partner. The findings also extend existing learning models by demonstrating how cognitive, affective and ethical dimensions jointly contribute to performance improvement in AI-supported environments.
Theoretically, this study contributes by providing a multidimensional explanation of AI adoption in learning contexts, integrating cognitive, affective and normative mediators within one framework. These findings strengthen the theoretical basis for viewing AI not just as a technological tool but as a transformative educational agent that supports human cognition and ethical awareness. Despite these contributions, this study has several methodological limitations, including its cross-sectional design, which limits causal inference, and its relatively narrow geographic and institutional scope. Future research should adopt longitudinal or mixed-method approaches and consider contextual factors such as faculty support, institutional readiness and access to digital infrastructure. Variables such as trust in AI, AI anxiety and algorithmic transparency may also offer valuable directions for further exploration.
Practically, these findings suggest that universities should move beyond technical promotion of AI and focus on embedding AI literacy and ethical reasoning into the curriculum. Faculty members should be trained to integrate GAIT-based pedagogy that emphasises CO benefits while safeguarding academic integrity. Institutional policies should therefore position AI literacy as a core academic competence rather than a peripheral skill. This aligns with international higher-education strategies advocating for critical AI literacy and responsible technology use as foundational competencies for the digital era. Furthermore, these insights provide strategic implications for curriculum development, AI literacy training and pedagogical design that can respond to technological disruption adaptively and ethically. Overall, this study not only provides empirical evidence for the multidimensional impact of GAIT on student LP but also offers a theoretical and practical roadmap for the ethical and effective integration of AI in higher education. The findings underscore that the success of AI-enhanced learning depends not solely on technology itself but also on how institutions and educators foster reflective, ethical and human-centred engagement with intelligent tools.
The authors also extend gratitude to the Agency for the Assessment and Application of Technology (BPPT) and the Indonesia Endowment Fund for Education (LPDP), particularly the Indonesian Education Scholarship (BPI) under the Doctoral Scholarship scheme, for providing financial support for the research and article preparation. The authors did not use AI technology in the ideation or design of this research. Large Language Model tools (e.g. ChatGPT) were not used for writing or data analysis, except for minor language refinement assistance as stated here.
Acosta-Enriquez, B. G. (2025). The mediating role of academic stress, critical thinking and performance expectations in the influence of academic self-efficacy on AI dependence: Case study in college students. Computers and Education: Artificial Intelligence, 8, 100381. https://doi.org/10.1016/j.caeai.2025.100381
Anggoro, K. J. & Khasanah, U. (2024). Technology-infused teams-games-tournaments in English language class: A mixed method study on students’ achievement and perception. Research in Learning Technology, 32, 1–17. https://doi.org/10.25304/rlt.v32.3150
Atenas, J., Havemann, L. & Nerantzi, C. (2024). Critical and creative pedagogies for artificial intelligence and data literacy: An epistemic data justice approach for academic practice. Research in Learning Technology, 32, 1–16. https://doi.org/10.25304/rlt.v32.3296
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37(2), 122–147. https://doi.org/10.1037/0003-066X.37.2.122
Bergdahl, N. (2025). Attitudes, perceptions and AI self-efficacy in K-12 education. Computers and Education: Artificial Intelligence, 8, 100358. https://doi.org/10.1016/j.caeai.2024.100358
Bernabei, M. et al. (2023). Students’ use of large language models in engineering education: A case study on technology acceptance, perceptions, efficacy, and detection chances. Computers and Education: Artificial Intelligence, 5, 100172. https://doi.org/10.1016/j.caeai.2023.100172
Blancaflor, E. et al. (2023). A literature review of the legislation and regulation of deepfakes in the Philippines. In Proceedings of the March 2023 14th International Conference on E-business, Management and Economics (pp. 392–397). Association for Computing Machinery. https://doi.org/10.1145/3616712.3616722
Chen, Y. H. (2025). Impact of basic artificial intelligence (AI) course on understanding concepts, literacy, and empowerment in the field of AI among students. Computer Applications in Engineering Education, 33(1), e22806. https://doi.org/10.1002/cae.22806
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Fabia, J. N. V. (2024). Students satisfaction, self-efficacy and achievement in an emergency online learning course. Research in Learning Technology, 32, 1–18. https://doi.org/10.25304/rlt.v32.3179
Gansser, O. A. (2021). A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society, 65, 101535. https://doi.org/10.1016/j.techsoc.2021.101535
Geri, A. (2025). Predicting teachers’ intentions to use virtual reality in education: A study based on the UTAUT-2 framework. Research in Learning Technology, 33, 1–15. https://doi.org/10.25304/rlt.v33.3429
Hair, J. F., Howard, M. C. & Nitzl, C. (2020). Assessing measurement model quality in PLS-SEM using confirmatory composite analysis. Journal of Business Research, 109, 101–110. https://doi.org/10.1016/j.jbusres.2019.11.069
Hu, L., Wang, H. & Xin, Y. (2025). Factors influencing Chinese pre-service teachers’ adoption of generative AI in teaching: An empirical study based on UTAUT2 and PLS-SEM. Education and Information Technologies, 30, 12609–12631. https://doi.org/10.1007/s10639-025-13353-7
Islam, M. R. (2025). Generative AI, cybersecurity, and ethics. In M. R. Islam (Ed.), Generative AI, Cybersecurity, and Ethics (pp. 1–81). Wiley. https://doi.org/10.1002/9781394279326
Jordan, J. (2023). Development of a lecture evaluation tool rooted in cognitive load theory: A modified Delphi study. AEM Education and Training, 7(1), e10839. https://doi.org/10.1002/aet2.10839
Kennedy, M. J. (2024). Cognitive load theory: An applied reintroduction for special and general educators. Teaching Exceptional Children, 56(6), 440–451. https://doi.org/10.1177/00400599211048214
Kim, B. J. (2025). The AI-environment paradox: Unraveling the impact of artificial intelligence (AI) adoption on pro-environmental behavior through work overload and self-efficacy in AI learning. Journal of Environmental Management, 380, 125102. https://doi.org/10.1016/j.jenvman.2025.125102
Kulangareth, N. V. et al. (2024). Investigation of deepfake voice detection using speech pause patterns: Algorithm development and validation. JMIR Biomedical Engineering, 9, e56245. https://doi.org/10.2196/56245
Lamberti, G. (2023). Hybrid multigroup partial least squares structural equation modelling: An application to bank employee satisfaction and loyalty. Quality and Quantity, 57, 683–705. https://doi.org/10.1007/s11135-021-01096-9
Lang, M. (2024). Fostering critical thinking, AI and data literacy, and global competence amongst business students. In Proceedings of the Information Systems Education Conference, ISECON (pp. 43–48). Retrieved from https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85218427075&origin=inward
Li, B. (2024). Construction of an AI literacy general education curriculum based on ‘knowledge-skills’ navigation. Journal of Library and Information Science in Agriculture, 36(8), 34–42. https://doi.org/10.13998/j.cnki.issn1002-1248.24-0670
Lundberg, E. & Mozelius, P. (2024). The potential effects of deepfakes on news media and entertainment. AI and Society, 40, 2159–2170. https://doi.org/10.1007/s00146-024-02072-1
Matli, W. (2024). Extending the theory of information poverty to deepfake technology. International Journal of Information Management Data Insights, 4(2), 100286. https://doi.org/10.1016/j.jjimei.2024.100286
Passmore, J., Olafsson, B. & Tee, D. (2025). A systematic literature review of artificial intelligence (AI) in coaching: Insights for future research and product development. Journal of Work-Applied Management, ahead-of-print. https://doi.org/10.1108/JWAM-11-2024-0164
Punyani, P. & Chhikara, R. (2023). Comparison of different machine learning algorithms for deep fake detection. In Proceedings of the March 2023 International Conference on Communication, Security and Artificial Intelligence (ICCSAI) (pp. 58–63). IEEE. https://doi.org/10.1109/ICCSAI59793.2023.10421164
Rajput, T. & Arora, B. (2024). A systematic review of deepfake detection using learning techniques and vision transformer. In S. Tanwar et al. (Eds.), Lecture Notes in Networks and Systems: Vol. 991 LNNS (pp. 217–235). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-97-2550-2_17
Richmond, L. L. & Taylor, R. G. (2025). The benefits and potential costs of cognitive offloading for retrospective information. Nature Reviews Psychology, 4, 312–321. https://doi.org/10.1038/s44159-025-00432-2
Sarstedt, M., Ringle, C. M. & Hair, J. F. (2022). Partial least squares structural equation modeling. In C. Homburg, M. Klarmann & A. Vomberg (Eds.), Handbook of Market Research (pp. 587–632). Springer International Publishing. https://doi.org/10.1007/978-3-319-57413-4_15
Senent, R. M. & Bueso, D. (2022). The banality of (automated) evil: Critical reflections on the concept of forbidden knowledge in machine learning research. Recerca, 27(2), 6147. https://doi.org/10.6035/recerca.6147
Shahzad, M. F. (2025). Exploring the impact of generative AI-based technologies on learning performance through self-efficacy, fairness & ethics, creativity, and trust in higher education. Education and Information Technologies, 30(3), 3691–3716. https://doi.org/10.1007/s10639-024-12949-9
Shahzad, M. F., Xu, S. & Baheer, R. (2024). Assessing the factors influencing the intention to use information and communication technology implementation and acceptance in China’s education sector. Humanities and Social Sciences Communications, 11(1), 1–15. https://doi.org/10.1057/s41599-024-02777-0
Shahzad, M. F. et al. (2024). Artificial intelligence and social media on academic performance and mental well-being: Student perceptions of positive impact in the age of smart learning. Heliyon, 10(8), e29523. https://doi.org/10.1016/j.heliyon.2024.e29523
Shum, S. B. (2024). Generative AI for critical analysis: Practical tools, cognitive offloading and human agency. CEUR Workshop Proceedings, 3667, 205–213. Retrieved from https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85192017258&origin=inward
Srinivasan, S. (2023). Understanding user perception of biometric privacy in the era of generative AI. In Proceedings of the 2023 4th International Conference on Communication, Computing and Industry 6.0 C216 2023 (pp. 1–6). IEEE. https://doi.org/10.1109/C2I659362.2023.10430931
Sweller, J., van Merriënboer, J. J. G. & Paas, F. (2019). Cognitive architecture and instructional design: 20 years later. Educational Psychology Review, 31(2), 261–292. https://doi.org/10.1007/s10648-019-09465-5
Tan, F. C. J. H. et al. (2021). The association between self-efficacy and self-care in essential hypertension: A systematic review. BMC Family Practice, 22(1), 1–12. https://doi.org/10.1186/s12875-021-01391-2
Wang, S. et al. (2021). Determinants of active online learning in the smart learning environment: An empirical study with PLS-SEM. Sustainability (Switzerland), 13(17), 1–19. https://doi.org/10.3390/su13179923
Wang, S., Sun, Z. & Chen, Y. (2023). Effects of higher education institutes’ artificial intelligence capability on students’ self-efficacy, creativity and learning performance. Education and Information Technologies, 28(5), 4919–4939. https://doi.org/10.1007/s10639-022-11338-4
Westphal, M. et al. (2023). Decision control and explanations in human-AI collaboration: Improving user perceptions and compliance. Computers in Human Behavior, 144, 107714. https://doi.org/10.1016/j.chb.2023.107714
Wu, Z. (2021). AI creativity and the human-AI co-creation model. Lecture Notes in Computer Science, 12762, 171–190. https://doi.org/10.1007/978-3-030-78462-1_13
Yim, I. H. Y. (2024). A critical review of teaching and learning artificial intelligence (AI) literacy: Developing an intelligence-based AI literacy framework for primary school education. Computers and Education: Artificial Intelligence, 7, 100319. https://doi.org/10.1016/j.caeai.2024.100319
Zhang, X. et al. (2024). Association between social media use and students’ academic performance through family bonding and collective learning: The moderating role of mental well-being. Education and Information Technologies, 29(11), 14059–14089. https://doi.org/10.1007/s10639-023-12407-y