ORIGINAL RESEARCH ARTICLE
Omar Ali Al-Smadia, Radzuwan Ab Rashidb,c*, Raed Awad Al-Ramahid, Marwan Harb Alqaryoutie, Holmatov Shakhriyor Zokhidjon Uglif and Abdurakhmon Norinboev Vokhidovichg
aSchool of Distance Education, Universiti Sains Malaysia, 11800 USM, Penang, Malaysia; bFaculty of Languages and Communication, Universiti Sultan Zainal Abidin, Terengganu, Malaysia; cApplied Science Research Centre, Applied Science Private University, Amman, Jordan; dDepartment of English Language and Literature, Faculty of Languages, The University of Jordan, Aqaba, Jordan; eDepartment of English Language, Literature and Translation, Faculty of Arts, Zarqa University, Zarqa, Jordan; fEnglish Language and Literature Department, Fergana State University, Fergana, Uzbekistan; gDepartment of English Language, Tashkent State University of Economics, Tashkent, Uzbekistan
Received: 30 October 2024; Revised: 6 January 2025; Accepted: 10 February 2025; Published: 4 June 2025
In the dynamic landscape of higher education, the integration of artificial intelligence (AI) into learning has emerged as a transformative force, ushering in tailored, adaptive, and immersive educational experiences for undergraduate university students. This study employed a thematic analysis to scrutinize focus group discussions with 25 undergraduate participants majoring in English language at a university in Jordan to examine how these learners engage with AI-supported self-regulated learning. The findings revealed five prominent themes: accessibility and inclusivity, adaptive feedback mechanisms, impact on learning habits, technological proficiency and preparedness, and social dynamics in AI-infused learning. Within these themes, diverse student views were categorized according to Ab Rashid and Yunus’ (2016) framework of perception evaluation: the Avid Category (very positive perception), the Analytic Category (enthusiastic but critical), the Anxious Category (enthusiastic but with worries and fears), and the Agnostic Category (negative view). These varied views collectively reveal the profound implications of AI integration in reshaping the educational landscape. This study contributes to the discourse on AI in education by highlighting the importance of integrating AI tools with pedagogical approaches that foster independent learning and critical engagement. Recommendations include combining AI feedback with peer reviews and instructor guidance, enhancing digital literacy programs, and ensuring robust support measures. By addressing these areas, educational institutions can create more inclusive and effective AI-supported learning environments that cater to diverse student needs and promote a balanced approach to technology in education.
Keywords: artificial intelligence; educational technology; higher education; student-centric learning
*Corresponding author. Email: radzuwanrashid@unisza.edu.my
Research in Learning Technology 2025. © 2025 O.A. Al-Smadi et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.
Citation: Research in Learning Technology 2025, 33: 3377 - http://dx.doi.org/10.25304/rlt.v33.3377
The integration of artificial intelligence (AI) into online learning is often touted as a transformative development in higher education, with policymakers and researchers emphasizing its potential. While AI offers personalized, adaptive, and immersive learning experiences, its impact on many educational practices remains under debate. This study seeks to investigate both the potential and limitations of AI, focusing on its application to self-regulated learning (SRL).
SRL is crucial in language learning, requiring regular practice of grammar, vocabulary, pronunciation, and comprehension. AI tools can provide immediate feedback and personalized pathways, which support autonomous learning essential for mastering a new language (Jin et al., 2023). However, concerns such as biases in AI algorithms, ethical implications, and the risk of over-reliance highlight the need for a balanced exploration of its benefits and drawbacks (Farooqi et al., 2024). For example, biases embedded in AI tools may disadvantage certain student groups, and ethical issues such as ownership of AI-generated work require careful consideration.
This study examines how English language learners at a Jordanian university engage with AI-supported SRL, aiming to clarify the unique challenges faced by these learners such as disparities in technological access and digital literacy, cultural nuances in educational practices, and potential resistance to AI adoption. While the findings primarily apply to Jordanian learners, recommendations may also have broader relevance for language learners and higher education students globally.
To structure this exploration, the study addresses the following research questions:
How do Jordanian English language learners perceive and engage with AI-supported SRL?
What unique challenges and opportunities do these learners encounter when integrating AI tools into their educational practices?
By exploring students’ views, this research contributes to the discourse on AI in education, offering insights into both its potential and its limitations. Recognizing that evidence for AI’s effectiveness and desirability is still emerging, the study emphasizes the need for cautious and evidence-based adoption. Furthermore, the study highlights the intersection of technological proficiency and social dynamics in AI integration. By doing so, it provides actionable recommendations for educators, policymakers, and developers to optimize AI tools for varied educational contexts.
AI is increasingly being integrated into educational contexts, yet its implementation raises significant questions regarding accessibility, inclusivity, and efficacy. While some studies argue that AI-driven tools can personalize learning and reduce barriers (Jin et al., 2023), others highlight critical issues such as transparency and racial biases embedded within AI systems (Rodway & Schepman, 2023). For instance, AI-powered tools may inadvertently perpetuate inequalities rather than mitigate them.
AI applications in education are diverse, ranging from adaptive learning platforms and language translation tools to automated grading systems and virtual tutors. These technologies offer potential benefits, including real-time feedback and personalized learning experiences, which can support the development of SRL skills (Wei, 2023). Adaptive tools like speech-to-text assist students with disabilities, while real-time translations help bridge language gaps (Zhai et al., 2021). However, studies on AI’s impact on inclusivity for diverse cultural and linguistic backgrounds are limited. Transparency issues in AI algorithms, ethical concerns, and over-reliance on AI tools further complicate its application in education (Niemi, 2021).
AI also provides real-time, adaptive feedback, distinguishing it from traditional methods. It continuously monitors performance, offering tailored guidance for quick error correction (Wei, 2023; González-Calatayud et al., 2021). This is particularly beneficial in language learning, as immediate feedback enhances grammar and pronunciation acquisition (Jin et al., 2023). However, over-reliance on AI tools may hinder critical thinking and creativity. For example, students relying on AI-generated outputs may disengage from deep cognitive processes required for learning (Rodway & Schepman, 2023).
Potential drawbacks of AI include over-reliance on tools, which can hinder critical thinking skills (Rodway & Schepman, 2023). Additionally, while AI aids communication, it lacks the empathy of human interactions (Järvelä et al., 2023). Moreover, ethical concerns, such as ownership of AI-generated work, warrant critical examination. As students increasingly use AI to produce content, educators must consider how these tools impact learners’ sense of ownership and accountability in their academic work (Chiu, 2023).
The success of AI implementation relies on students’ technological proficiency. Research shows that students with higher digital literacy can navigate AI tools better (Sanusi et al., 2022). Conversely, those lacking technological skills may struggle, exacerbating educational inequalities (Zhang & Villanueva, 2023). This study analyzes the impacts of AI on English language learners’ habits in Jordan, exploring both opportunities and challenges. By examining these dynamics, the research aims to provide insights that help educators maximize the benefits of AI while addressing its significant drawbacks, including ethical, cultural, and practical considerations.
Generative AI refers to algorithms capable of producing content such as text, images, or audio based on patterns learned from large datasets (Bandi et al., 2023). Popular tools like ChatGPT, DALL·E, and Bard rely on models trained using vast amounts of information, enabling them to generate outputs that mimic human-like responses or creativity. At their core, generative AI models use probabilistic predictions to determine the next word or structure in a sequence, creating outputs that appear coherent and contextual (Lv, 2023).
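The next-word prediction described above can be illustrated with a deliberately simplified sketch. The two-word contexts and probability values below are invented purely for illustration and bear no relation to any real model, which would estimate such distributions over a vocabulary of tens of thousands of tokens:

```python
# Toy next-token distribution: each two-word context maps to candidate
# continuations with probabilities (hypothetical values for illustration only).
next_token_probs = {
    ("the", "student"): {"writes": 0.5, "reads": 0.3, "sleeps": 0.2},
    ("student", "writes"): {"an": 0.6, "the": 0.4},
}

def predict_next(context, probs):
    """Return the most probable continuation for a two-word context,
    or None if the context was never seen."""
    candidates = probs.get(context, {})
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(predict_next(("the", "student"), next_token_probs))  # -> writes
```

The sketch makes the article’s point concrete: the model selects outputs because they are statistically likely given the context, not because it understands what they mean.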
However, generative AI does not possess actual intelligence, intentionality, or understanding. These models operate based on patterns and probabilities rather than genuine comprehension or reasoning (Bandi et al., 2023). For instance, while AI-generated essays or language translations may seem accurate, they lack the capacity to verify facts, assess ethical implications, or adapt content to specific cultural contexts unless explicitly directed. Similarly, AI does not personalize or contextualize its outputs based on individual learner needs beyond its programmed capabilities. This contrasts sharply with the nuanced guidance provided by human instructors, who tailor feedback to students’ unique backgrounds, goals, and abilities (Seo et al., 2021).
Students learning with AI tools benefit from scalability and immediacy. Generative AI offers instant feedback, explanations, and recommendations, making it a valuable resource for SRL (Jin et al., 2023). For example, AI tools can provide error corrections for grammar, offer synonyms to expand vocabulary, or suggest essay structures to improve writing coherence. These features help students practice iteratively and independently, especially when access to instructors is limited.
However, traditional learning without AI relies heavily on human interactions, which foster critical thinking, problem-solving, and personalized engagement. In non-AI-supported contexts, students learn through peer collaboration, instructor feedback, and experiential methods that build cognitive and social skills. Human-led approaches emphasize intentionality, such as guiding learners to reflect on errors or adapt strategies for specific challenges (Chiu, 2023).
While generative AI accelerates certain aspects of learning, it lacks the empathetic, adaptive, and evaluative qualities inherent in human-led instruction. For instance, AI cannot discern when a student struggles due to emotional barriers or lacks foundational knowledge in a subject area. Furthermore, over-reliance on AI can undermine the development of critical thinking and creativity, as students may prioritize efficiency over deeper cognitive engagement (Rodway & Schepman, 2023).
The integration of AI and traditional learning approaches offers the most promise. Students can use generative AI for repetitive or time-intensive tasks while relying on instructors for guidance, mentorship, and nuanced skill development. By understanding how AI operates and its limitations, educators can create balanced environments that leverage AI’s strengths without compromising the holistic development of learners.
Self-Determination Theory (SDT), proposed by Ryan and Deci (2000), highlights intrinsic motivation through three psychological needs: autonomy, competence, and relatedness. In AI-supported SRL, SDT serves as a framework for understanding how AI can enhance these needs and improve outcomes.
Autonomy, the need for control over one’s actions, is crucial for intrinsic motivation. AI technologies can support autonomy by offering personalized learning paths and activity choices. For example, AI platforms provide tailored materials that allow students to select tasks based on their interests (Xia et al., 2024). However, over-reliance on AI may diminish this sense of control if students come to perceive the AI as the decision-maker.
Competence involves effective interaction with the environment. AI fosters competence through immediate, personalized feedback and scaffolding, helping learners identify strengths and weaknesses (Järvelä et al., 2023). For instance, AI systems analyze data to offer insights and suggest interventions. However, excessive feedback can create dependency, undermining problem-solving skills, so AI support should challenge learners appropriately.
Relatedness, the need for connection, is the third psychological need in SDT. AI can enhance relatedness by facilitating collaborative learning and social interaction. AI chatbots can simulate peer interactions and provide social support, helping learners feel connected in isolated environments (Xia et al., 2024). However, AI cannot replace genuine human interaction, which is essential for fulfilling social needs. Thus, integrating AI with human interactions is vital.
While SDT highlights AI’s motivational benefits in learning, it also raises key considerations. Balancing AI support with learner autonomy is crucial, as overly directive AI may undermine motivation. Additionally, AI should foster independent learning skills without creating dependency. Finally, AI must complement human interactions to meet social needs.
Understanding student perceptions of AI-supported learning is essential for evaluating the effectiveness and acceptance of these technologies. This study applies SDT to interpret how AI tools influence students’ motivational needs and aligns these interpretations with the perception evaluation framework provided by Ab Rashid and Yunus (2016). For example, students categorized as Avid demonstrate high levels of autonomy and competence, embracing AI tools to enhance their learning experiences. Analytic students also exhibit competence but balance enthusiasm with critical evaluation, often expressing concerns about over-reliance on AI, which aligns with the cautionary aspects of SDT. Anxious students, who worry about dependency, reflect challenges in achieving autonomy and competence, as they may perceive AI as restrictive or invasive. Agnostic students, skeptical about AI’s efficacy, highlight gaps in relatedness and trust, preferring traditional learning methods over AI-driven approaches.
These findings illustrate how AI tools can both support and challenge the fulfillment of psychological needs outlined in SDT. For instance, while AI fosters autonomy through personalized pathways, it may also inadvertently hinder relatedness by replacing meaningful peer or instructor interactions. By integrating SDT, this study underscores the importance of balancing AI’s capabilities with pedagogical strategies that prioritize human connection, critical thinking, and learner autonomy, contributing to a nuanced understanding of AI’s role in supporting SRL.
This study employs a qualitative research design to explore the dynamics of AI-supported learning and online education among undergraduate students. Qualitative methods provide an in-depth understanding of participants’ perspectives, experiences, and perceptions, allowing for nuanced exploration of this complex subject (Merriam, 1998).
Twenty-five third-year English major students from a Jordanian university participated, purposively selected for their active use of generative AI tools in SRL. Each group was assigned specific AI purposes by their lecturer (see Table 1). This assignment allowed the study to capture a range of AI use cases but did not constrain the findings, as themes were derived inductively from the data. The students engaged in SRL outside formal hours over 3 months, from January to March 2024.
The sample size was determined by data saturation, where no new themes emerged from focus group discussions conducted from March to May 2024. Care was taken to monitor the diversity of responses across groups. If saturation had been achieved earlier, underrepresented groups would have been revisited. Conversely, new themes arising late would have been explored through additional focus groups. Participants were organized into groups of five for discussions lasting approximately two hours, which were transcribed verbatim. Inter-coder reliability checks ensured consistency and validity in the coding process. The focus group discussions were conducted in English, as the participants were English language majors and proficient in the language.
Thematic analysis as proposed by Braun and Clarke (2006) was used to analyze the discussions. Systematic coding identified recurring patterns and themes, providing a holistic understanding of perspectives on AI-supported learning. Themes were then examined through Ab Rashid and Yunus’ (2016) perception evaluation framework, which categorizes perceptions into four groups: Avid (very positive perception), Analytic (enthusiastic but critical), Anxious (enthusiastic with concerns), and Agnostic (negative view). The Avid Category includes students who embrace AI-supported learning for enhancing educational outcomes. The Analytic Category consists of those who acknowledge AI’s benefits while critically evaluating its limitations. The Anxious Category encompasses students who worry about over-reliance on technology. Lastly, the Agnostic Category includes students skeptical of AI’s efficacy, preferring traditional methods. Although the framework served as a guiding lens, the analysis remained data-driven: key themes were first derived inductively from the data, and the perception evaluation framework was then applied to categorize participants’ responses into learner ‘types’ (Avid, Analytic, Anxious, and Agnostic). The two approaches thus complement each other, with learner types serving as the primary lens for interpreting the emergent themes.
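The second analytic step, mapping coded excerpts onto the four perception categories, can be sketched as a simple tally. The participant labels below echo examples quoted later in the article, but the pairings and counts here are hypothetical and do not reproduce the study’s actual coding data:

```python
# Hypothetical illustration: tallying coded focus-group excerpts by
# Ab Rashid and Yunus' (2016) perception category. Pairings are invented.
from collections import Counter

coded_excerpts = [
    ("Participant 2", "Avid"),
    ("Participant 10", "Analytic"),
    ("Participant 13", "Anxious"),
    ("Participant 19", "Agnostic"),
    ("Participant 7", "Avid"),
]

CATEGORIES = ("Avid", "Analytic", "Anxious", "Agnostic")

# Count how many excerpts fall into each category.
tally = Counter(category for _, category in coded_excerpts)
for category in CATEGORIES:
    print(f"{category}: {tally[category]}")
```

The point of the sketch is only that categorization follows, and depends on, the inductive coding: each excerpt is first assigned a code qualitatively, and the framework then organizes those assignments into learner types.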
Ethical approval for the study was obtained from the university’s ethics review board. Participation was entirely voluntary, and students were informed of their right to withdraw at any time without penalty. To avoid conflicts of interest or power dynamics, none of the researchers held teaching roles with the participants. Anonymity and confidentiality were assured, with all personal data securely stored and accessible only to the research team.
Table 1 presents the details of each participant group and its AI use.
This study reveals multifaceted student perceptions of AI-supported learning, organized across five themes: Accessibility and Inclusivity, Adaptive Feedback Mechanisms, Impact on Learning Habits, Technological Proficiency and Preparedness, and Social Dynamics in AI-Infused Learning. These themes were derived through a thematic analysis of the focus group discussions, guided by Braun and Clarke’s (2006) framework. Utilizing Ab Rashid and Yunus’ (2016) perception framework, participants were categorized into four groups: Avid (strongly positive), Analytic (enthusiastic but critical), Anxious (enthusiastic but apprehensive), and Agnostic (skeptical). This categorization, applied after identifying emergent themes, provides an additional lens to interpret the findings. Each group’s perspectives offer unique insights into AI’s role in SRL, enriching the discussion with contextual support from relevant literature.
Participants across all groups discussed the role of AI in facilitating accessibility and inclusivity in their learning experiences. While AI is not inherently designed with accessibility or inclusivity in mind, participants perceived its tools as offering features that support diverse learning habits. Accessibility in education refers to providing learning opportunities that all students can easily access, regardless of their individual circumstances or needs. Inclusivity ensures these opportunities are equitable and cater to a diverse range of learners. However, it is essential to recognize that AI does not inherently understand individual needs or adapt intentionally; it operates as a predictive model offering generalized outputs based on user input and pre-trained data.
Participants highlighted the advanced capabilities of AI tools in making learning more accommodating and flexible. For example, they appreciated the immediate feedback and 24/7 availability of these tools, which allowed them to access support outside traditional classroom hours. An avid participant from Group 1 (Participant 2) shared:
AI has been a game-changer for my writing. It’s like having a personal editor available 24/7. I no longer have to wait for the next class to get feedback on my work. AI provides instant corrections and suggestions, which helps me learn faster and more effectively.
This sentiment illustrates a strong positive perception of AI’s immediacy and availability. The emphasis on the immediacy of feedback underscores AI’s potential to impact academic performance and boost confidence by offering timely, personalized assistance. This aligns with studies that have found immediate feedback to be critical for enhancing learning outcomes (Jin et al., 2023). While the participant valued these features, it is important to contextualize such claims: AI’s responses are not personalized in the sense of understanding individual needs but are instead probabilistic outputs generated from extensive datasets (Sanusi et al., 2022).
In contrast, an analytic participant from Group 2 (Participant 10) critiqued the limitations of AI’s predictive nature:
AI tools are generally accessible, but sometimes the vocabulary suggestions are just way too advanced or don’t fit the context. The words suggested might be too difficult for what I’m trying to learn or they don’t really make sense in the conversation.
This remark highlights a common limitation of AI tools: while they can generate a wealth of suggestions, they lack contextual understanding or intentionality. This aligns with research suggesting that AI’s lack of semantic comprehension can result in outputs that fail to meet specific learner needs (Järvelä et al., 2023; Sanusi et al., 2022).
An anxious participant from Group 3 (Participant 13) expressed concerns about over-reliance on AI:
I appreciate the access AI provides, but I worry about becoming too dependent on it for structuring my essays. It’s really helpful when I need to organize my thoughts and come up with an outline, but sometimes I feel like I’m not really learning how to do it myself. I realized I was just following the AI’s prompts without critically thinking about how to structure the essay on my own.
This perspective underscores the potential risks of using AI tools as a substitute for critical thinking and independent skill development. It highlights the need for balance in using AI, ensuring that it serves as a supportive tool rather than a crutch that undermines self-reliance and critical thinking. This concern is consistent with literature suggesting that dependency on AI tools may impede cognitive growth, as students rely on AI outputs rather than engaging deeply with the learning process (Annamalai et al., 2023; Rodway & Schepman, 2023).
Meanwhile, an agnostic participant from Group 4 (Participant 19) shared their doubts about AI’s ability to replicate the nuanced interactions of human teachers:
AI might be designed to be inclusive and cater to different needs, but it really doesn’t get the cultural shades that a human teacher understands. For example, a teacher can pick up on cultural traditions and local customs. AI, despite being advanced, often misses these details.
The participant’s remark reflects the reality that AI does not ‘miss’ cultural details; rather, it lacks the capacity to understand them at all. AI operates based on its training data, which limits its ability to recognize or incorporate cultural nuances without explicit programming (Chiu, 2023).
The varied perspectives on accessibility and inclusivity illustrate the nuanced role AI plays in education. While it offers features that students perceive as supportive, such as instant feedback and availability, these are not inherently intentional but the byproducts of its design as a predictive model. Ensuring that AI systems are effective across diverse educational contexts requires continuous refinement, including training models on culturally diverse datasets and enhancing user customization options (Mena-Guacas et al., 2023). By integrating AI with human-driven pedagogical practices, educators can address its limitations while leveraging its capabilities to create more equitable and accessible learning environments (Mena-Guacas et al., 2023; Niemi, 2021).
The adaptive feedback mechanisms provided by AI have received mixed responses from participants, showcasing both significant advantages and critical areas for improvement. Adaptive feedback in education refers to the ability of educational technologies to deliver real-time responses tailored to learners’ inputs, thereby enhancing their learning experiences (Sadegh-Zadeh et al., 2023). Participants emphasized the immediacy of AI feedback as a key strength, aligning with research that highlights timely feedback as a crucial factor in improving learning outcomes and boosting student confidence (Sanusi et al., 2022; Seo et al., 2021).
AI’s potential to facilitate contextualized learning was evident in students’ experiences. For example, an avid participant from Group 2 (Participant 7) shared:
AI helps me learn new words in context, which has greatly improved my vocabulary. For instance, when I used the word ‘quintessential’ incorrectly in a sentence, the AI provided several examples of its correct usage, complete with different contexts and sentences.
This experience reflects the findings of Wei (2023), who suggests that contextualized feedback is essential for language acquisition. The participant’s positive experience underscores how AI facilitates deeper learning by offering instant feedback that allows learners to practice and refine their skills in real time. By contextualizing feedback, AI not only supports vocabulary acquisition but also empowers students to apply new words appropriately in various scenarios, contributing to a more meaningful learning journey (Zhai et al., 2021). However, it is important to recognize that AI-generated feedback is not truly personalized but instead derives from probabilistic modeling of language patterns. While this can aid vocabulary acquisition and contextual application, it lacks intentionality or a deeper understanding of individual learner needs.
Conversely, an analytic participant from Group 3 (Participant 15) expressed that:
AI feedback is helpful for structuring essays, but it sometimes lacks depth and personalization. For example, it suggested I use more ‘cohesive devices’, but didn’t explain why my transitions were weak or how I could improve them. It’s like it gives you the right direction but not the complete map to get there.
This critique aligns with Jin et al. (2023), who noted that while AI can guide students toward general improvements, it often fails to deliver detailed, explanatory feedback necessary for deeper understanding. This limitation underscores the need for more sophisticated AI systems capable of identifying and addressing specific areas of improvement beyond surface-level suggestions.
Another concern, articulated by an anxious participant from Group 1 (Participant 3), points to potential over-reliance on AI:
I rely on AI for grammar checks, but I’m afraid it might make me less attentive to my own mistakes. For instance, I noticed that I often don’t double-check my work anymore because I expect the AI to catch everything. There was a time when I submitted an essay without reviewing it myself.
This concern mirrors findings by Zhang and Villanueva (2023), who noted that over-reliance on AI tools can lead to complacency in students’ self-editing and critical thinking skills. While AI can provide valuable support in the editing process, it may inadvertently foster a dependency that undermines the development of essential learning skills and processes such as self-regulation and independent problem-solving.
An agnostic participant from Group 5 (Participant 24) expressed skepticism about AI’s capability in areas requiring nuanced feedback:
AI can provide pronunciation feedback, but it doesn’t replace the feedback from a real conversation partner. For instance, during a practice session, the AI didn’t catch the subtle tone difference needed for a polite inquiry versus a statement.
This comment emphasizes the limitations of AI in replicating the emotional and contextual insights provided by human interaction. Research by Seo et al. (2021) and Rodway and Schepman (2023) highlights the importance of social interaction in language learning, suggesting that AI cannot fully replicate the nuances of human feedback, particularly in areas like pronunciation and intonation, which are critical for effective communication.
The findings reveal that while AI’s adaptive feedback mechanisms offer significant benefits, such as immediacy and contextualized support, they also present notable challenges. AI feedback often lacks the depth and intentionality needed to address individual learning needs comprehensively. Moreover, the risk of fostering over-reliance and the inability to replicate human feedback in areas like intonation and cultural nuance further highlight these limitations.
To address these challenges, an integrated approach is essential. Combining AI tools with human-led feedback can balance the efficiency of AI with the depth and empathy of instructor or peer input (Mena-Guacas et al., 2023). For instance, AI can handle routine or time-intensive tasks like grammar checks, while human instructors focus on providing nuanced feedback and fostering critical thinking. Additionally, ongoing improvements to AI systems, such as incorporating more sophisticated algorithms and diverse datasets, can enhance their ability to deliver meaningful, context-aware feedback (Crawford et al., 2023).
By leveraging AI’s strengths while mitigating its limitations through human interaction and reflective practices, educational institutions can create a supportive and balanced learning environment. This integrated approach not only enhances the learning experience but also ensures that students develop both technical proficiency and critical autonomy in their educational journey.
The impact of AI on learning habits varied significantly among participants, revealing both positive transformations and potential concerns. Understanding these effects is crucial for informing sustainable integration of AI technologies into educational practices. Research suggests that AI tools can influence learning habits by providing structured, personalized support, enhancing efficiency, and fostering discipline (Zhai et al., 2021). However, these claims must be approached cautiously, as the study does not include longitudinal data to confirm sustained changes over time. Furthermore, there are risks that over-reliance on AI may hinder the development of independent learning and critical thinking skills (Sanusi et al., 2022; Wei, 2023).
An avid participant from Group 3 (Participant 11) shared:
AI has transformed my writing process entirely. I’m now more organized and efficient… For example, when writing an argumentative essay, AI tools helped me structure each point clearly, suggesting transitions that made my argument flow better.
This feedback illustrates how AI can support the development of organizational and methodical learning strategies. The participant’s improved efficiency aligns with research by Mena-Guacas et al. (2023), which highlights the role of structured feedback in enhancing academic performance. However, while AI offers immediate and structured assistance, it does not intentionally personalize feedback but instead generates outputs based on probabilistic modeling. This raises questions about the depth of its support in fostering autonomous learning habits (Annamalai et al., 2023).
In contrast, an analytic participant from Group 4 (Participant 20) noted:
While AI tools have certainly improved my reading comprehension, I’ve had to make sure that I’m still thinking critically about the texts. For example, AI helped me summarize a difficult text, but I knew I had to go back and question some of the key arguments myself.
This observation underscores the importance of maintaining critical engagement alongside AI assistance. Although AI can facilitate summarization and comprehension, students must critically analyze material to deepen their understanding (Seo et al., 2021). AI serves as a starting point but should not replace the active cognitive processes required for effective learning.
An anxious participant from Group 2 (Participant 8) voiced concerns about dependency:
I sometimes worry that I’m becoming too dependent on it… I feel like I’m just passively accepting its suggestions rather than actively engaging with the material.
This concern resonates with the findings of Zhang and Villanueva (2023), who note that over-reliance on technology can diminish self-motivation and independent learning efforts. Such dependency may lead students to disengage from deeper cognitive processes, undermining their capacity for self-directed learning (Wei, 2023).
Moreover, an agnostic participant from Group 1 (Participant 4) expressed skepticism:
AI might help with grammar corrections, but it lacks the ability to support critical thinking. For instance, when I’m writing an essay, AI can point out grammatical errors or suggest sentence improvements, but it doesn’t guide me in developing how or why or evaluating the strength of my ideas.
This critique reflects a widely held belief that AI cannot replace the depth of cognitive engagement required for critical thinking. Research by Niemi (2021) supports this perspective, noting that AI tools often focus on surface-level aspects of learning, such as syntax and grammar, without fostering deeper understanding or analysis.
These varied perspectives highlight both the benefits and limitations of AI in shaping long-term learning habits. While AI offers significant support in enhancing efficiency and organization, it does not inherently promote the critical thinking and autonomy required for independent learning. Claims about its long-term impact must be hedged, as the study lacks longitudinal data to verify sustained behavioral changes over time.
To mitigate the risks of dependency and superficial engagement, educational practices should integrate AI with human-driven strategies. For example, combining AI feedback with peer reviews and instructor-led discussions can create a more holistic learning environment (Annamalai et al., 2023). This approach leverages AI’s efficiency while ensuring that students continue to develop critical and reflective skills through collaborative and guided learning experiences.
Furthermore, future iterations of AI tools should focus on incorporating more nuanced feedback mechanisms that go beyond structural or grammatical corrections. Enhancing AI systems with contextual and conceptual understanding can make them more effective in supporting deeper cognitive engagement (Crawford et al., 2023). By adopting these strategies, educators can ensure that AI tools serve as a supplement to, rather than a replacement for, traditional learning methods, fostering well-rounded and sustainable learning habits.
Technological proficiency and preparedness are essential factors for effectively utilizing AI tools in education. This theme underscores the significance of digital literacy, which encompasses the skills needed to navigate and leverage technology effectively. As AI continues to shape educational practices, understanding how technological proficiency influences learning outcomes becomes paramount. Disparities in access to technology and digital skills can significantly impact students’ ability to fully engage with AI-supported learning environments, making it crucial for educational institutions to address these challenges.
An avid participant from Group 4 (Participant 16) remarked:
AI tools have significantly improved my reading comprehension skills, especially when dealing with complex academic texts. For example, when I struggled to understand a dense article, the AI tool broke down key points and summarized them in simpler terms. Being tech-savvy has allowed me to explore the full potential of AI in my studies.
This statement illustrates how students who are proficient with digital tools can effectively leverage AI to enhance their learning experiences. Technologically adept learners can unlock the full potential of AI to improve critical skills such as reading comprehension and information retention. This aligns with findings by Zhang and Villanueva (2023), which suggest that technological proficiency significantly enhances the effectiveness of AI-supported learning environments.
Conversely, an analytic participant from Group 5 (Participant 25) shared:
AI helps with pronunciation, and I’ve definitely noticed an improvement in my speaking skills. However, at first, I found it difficult to navigate the various features and settings of the AI tool. I often felt overwhelmed by the options available and wasn’t sure how to use the AI’s feedback effectively. That’s why I believe there should be more workshops on how to use AI tools in learning.
This perspective reflects the challenges associated with mastering new technologies. The learning curve for navigating AI tools can be a significant barrier for some students, underscoring the importance of comprehensive training and support. Providing workshops and user-friendly guides can mitigate these challenges, ensuring that students feel confident in using AI tools effectively (Rodway & Schepman, 2023).
An anxious participant from Group 1 (Participant 5) described their difficulties:
I’m not very tech-savvy, and sometimes I struggle with using AI tools effectively. For example, when we had an assignment that required us to use an AI writing assistant, I found myself spending a lot of time just figuring out how to input my ideas and get useful feedback. While my classmates seemed to navigate the tool with ease, I often felt lost.
This comment underscores the barriers faced by students with lower technological proficiency, which can hinder their ability to fully benefit from AI tools. These students may feel disadvantaged compared to peers who are more comfortable with technology, potentially affecting their academic confidence and performance. This is in line with the findings of Mena-Guacas et al. (2023), who suggest that technological proficiency largely determines the level of engagement with AI technologies.
An agnostic participant from Group 3 (Participant 14) pointed out:
Not everyone has equal access to technology, which can increase the gap between students. Some of my classmates struggle to engage with AI tools simply because they don’t have reliable internet access or the latest devices. Those with better technology can benefit from AI tools while others fall behind.
This reflection underscores the critical issue of the digital divide in education. Disparities in access to technology exacerbate existing educational inequalities, creating an uneven playing field for students. This aligns with findings by Álvarez-Álvarez and Falcon (2023), who argue that addressing technological disparities is essential for fostering equitable learning environments.
These findings illustrate the dual role of technological proficiency and access in shaping students’ ability to engage with AI tools effectively. While tech-savvy students can fully leverage AI to improve their learning outcomes, those with lower proficiency or limited access face significant barriers that hinder their ability to benefit from these advancements.
To address these challenges, educational institutions must take a proactive approach. Investments in digital literacy programs are essential to equip students with the skills needed to navigate AI tools confidently. Additionally, institutions should ensure equitable access to technology, such as providing reliable internet connections and up-to-date devices, to close the digital divide. Complementing these efforts with ongoing support, such as workshops and helpdesk services, can foster a more inclusive and supportive learning environment (Bandi et al., 2023).
Integrating these strategies will help create an educational ecosystem where all students, regardless of their technological background, can harness the potential of AI tools. This balanced approach not only enhances learning outcomes but also reduces inequalities, ensuring that advancements in AI technology contribute to equitable and effective educational practices (Crawford et al., 2023; Järvelä et al., 2023).
The integration of AI into educational environments has sparked diverse responses regarding its impact on social dynamics within learning contexts. Social dynamics are fundamental to education, influencing collaboration, communication, and the overall learning experience (Wei, 2023). The ability of AI to facilitate collaborative learning and interaction suggests that AI tools can significantly enhance student engagement and teamwork (Seo et al., 2021). However, concerns about the impersonal nature of AI interactions highlight the necessity of balancing AI integration with human-led activities to preserve the social essence of learning (Annamalai et al., 2023).
An avid participant from Group 5 (Participant 21) noted:
AI has been fantastic for practicing pronunciation. I’ve noticed huge improvements in my speaking skills, especially when I was preparing for a presentation. The AI provided me with immediate feedback on my pronunciation, helping me adjust my intonation and articulation in real-time. This level of support has made me feel more confident speaking now, not just in classroom settings but also in informal conversations.
This observation underscores how AI can enhance language practice by offering continuous and immediate feedback. Such features help students improve their pronunciation and speaking confidence, offering a self-paced, personalized learning experience (Mena-Guacas et al., 2023). However, this type of support is task-specific and does not extend to fostering broader social engagement or dynamic collaboration.
Conversely, an analytic participant from Group 2 (Participant 9) remarked:
AI encourages group discussions and collaborative projects, which is great because it helps us stay organized and engaged with the material. However, it can sometimes feel impersonal. While the AI can analyze our inputs and suggest ideas, it lacks the emotional depth and understanding that come from human interactions.
While AI tools can structure tasks, analyze group inputs, and suggest ideas, they do not actively ‘encourage’ discussions or collaboration. These functions are algorithmic and based on user prompts rather than intentional facilitation. This distinction highlights the importance of human involvement in providing the emotional and contextual nuance that AI cannot replicate (Niemi, 2021).
An anxious participant from Group 4 (Participant 18) shared:
I worry that practicing with AI might not fully prepare me for real-life scenarios. While AI is great for practicing specific skills and getting instant feedback on my comprehension, it often lacks the spontaneity and unpredictability of actual human interactions.
This concern highlights AI’s limitations in providing realistic conversational practice. Although AI can deliver structured practice and immediate feedback, it may not capture the unpredictability and emotional richness inherent in real-life interactions (Jin et al., 2023). This preference for human feedback emphasizes the importance of emotional and contextual understanding that AI currently lacks. Human instructors are crucial for providing tailored feedback, understanding the subtleties of language use, and offering encouragement, all of which are imperative for nuanced language learning (Järvelä et al., 2023).
An agnostic participant from Group 1 (Participant 1) argued:
AI can’t replace the social learning experience of interacting with peers and teachers. For instance, when I engage in group projects, the dynamics of human interaction, such as the exchange of ideas, laughter, and even debates, are irreplaceable.
This skepticism highlights the intrinsic value of human interaction in educational settings. Social learning, characterized by peer and teacher interactions, fosters critical thinking, collaboration, and deeper understanding (Seo et al., 2021). This perspective indicates that while AI can support and enhance learning, it should not supplant the fundamental human elements that contribute to a rich educational experience.
The findings highlight the dual role of AI in shaping social dynamics in learning environments. While AI supports specific tasks like pronunciation practice or organizing group activities, it does not actively promote social engagement or collaboration. Instead, these outcomes depend on how learners and instructors use AI tools. For example, AI can complement group discussions by providing resources or analyzing data, but the emotional and cultural dimensions of collaboration remain the domain of human interactions (Jin et al., 2023; Niemi, 2021).
Educational practices should focus on integrating AI with human-led activities to balance efficiency with social engagement. For instance, while AI can facilitate task management and provide feedback, peer discussions and instructor-led sessions are essential for fostering emotional connections, cultural awareness, and critical thinking (Álvarez-Álvarez & Falcon, 2023; Rodway & Schepman, 2023). Research suggests that such integration ensures that students benefit from the structured support of AI while preserving the interpersonal and collaborative aspects critical for a rich learning experience (Crawford et al., 2023).
Furthermore, refining AI tools to account for cultural and emotional nuances, as well as improving their capacity for contextual feedback, can enhance their role in supporting social dynamics. However, as AI lacks intentionality and true interpersonal understanding, its integration should remain supplementary to human interaction (Mena-Guacas et al., 2023; Wei, 2023). This approach ensures that the benefits of AI are maximized without compromising the social essence of learning environments.
This study explored how Jordanian undergraduate English language learners engage with AI-supported self-regulated learning (SRL). By examining their perspectives, the findings offer valuable insights into the opportunities and challenges associated with integrating AI tools into educational contexts. Specifically, the study identified key themes related to accessibility, adaptive feedback, impact on learning habits, technological proficiency, and social dynamics.
The findings suggest that while AI tools can enhance SRL by providing immediate feedback and personalized pathways, their efficacy depends significantly on students’ digital literacy and critical engagement. For instance, learners who were technologically proficient demonstrated higher adaptability and effectiveness in using AI tools, while those with limited skills faced challenges in maximizing the benefits.
Although this study focuses on Jordanian English language learners, the findings hold relevance for broader educational contexts, including higher education students worldwide. The universal nature of themes such as feedback, accessibility, and technological preparedness indicates that these findings can inform AI integration strategies in various educational settings.
Additionally, the findings highlight the need for balanced adoption of AI in education, integrating its capabilities with human-driven pedagogical practices. This study contributes to existing literature by emphasizing how AI can complement traditional learning approaches, particularly for fostering SRL skills. For example, students learning without AI often rely heavily on peer or instructor feedback, which is limited in scalability and immediacy. By comparison, AI tools offer a scalable solution for individualized support, albeit with certain limitations such as over-reliance and ethical concerns.
The study provides actionable recommendations for educators, policymakers, and developers to optimize AI tools for diverse learning needs. These include improving digital literacy programs, ensuring ethical safeguards, and designing AI systems that encourage critical engagement rather than passive reliance. Such measures are crucial for enabling students to harness the benefits of AI while retaining autonomy and ownership of their learning processes.
In summary, while this research demonstrates the potential of AI to transform language learning in higher education, it also underscores the importance of tailoring AI integration to students’ technological, cultural, and contextual needs. Future studies should further investigate how these tools interact with diverse learning environments to ensure equitable and effective educational outcomes.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors report no conflict of interest.
Data will be made available on request.
During the preparation of this work, the authors used Grammarly Software and ChatGPT to improve the readability of the paper. After using this tool/service, the authors reviewed and edited the content as needed and took full responsibility for the content of the publication.
Ab Rashid, R., & Yunus, K. (2016). Teachers’ engagement with emotional support on a social networking site. The Social Sciences, 11(14), 3450–3457. https://doi.org/10.3923/sscience.2016.3450.3457
Álvarez-Álvarez, C., & Falcon, S. (2023). Students’ preferences with university teaching practices: Analysis of testimonials with artificial intelligence. Educational Technology Research and Development, 71, 1709–1724. https://doi.org/10.1007/s11423-023-10239-8
Annamalai, N. et al. (2023). Using chatbots for English language learning in higher education. Computers and Education: Artificial Intelligence, 5, 100153. https://doi.org/10.1016/j.caeai.2023.100153
Bandi, A., Adapa, P. V. S. R., & Kuchi, Y. E. V. P. K. (2023). The power of generative AI: A review of requirements, models, input–output formats, evaluation metrics, and challenges. Future Internet, 15(8), 260. https://doi.org/10.3390/fi15080260
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Chiu, T. K. F. (2023). The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments, 32(10), 6187–6203. https://doi.org/10.1080/10494820.2023.2253861
Crawford, J., Cowling, M., & Allen, K. A. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). Journal of University Teaching & Learning Practice, 20(3), 02. https://doi.org/10.53761/1.20.3.02
Farooqi, M. T. K., Amanat, I., & Awan, S. M. (2024). Ethical considerations and challenges in the integration of artificial intelligence in education: A systematic review. Journal of Excellence in Management Sciences, 3(4), 35–50. https://doi.org/10.69565/jems.v3i4.314
González-Calatayud, V., Prendes-Espinosa, P., & Roig-Vila, R. (2021). Artificial intelligence for student assessment: A systematic review. Applied Sciences, 11(12), 5467. https://doi.org/10.3390/app11125467
Järvelä, S., Nguyen, A., & Hadwin, A. (2023). Human and artificial intelligence collaboration for socially shared regulation in learning. British Journal of Educational Technology, 54(5), 1057–1076. https://doi.org/10.1111/bjet.13325
Jin, S. H. et al. (2023). Supporting students’ self-regulated learning in online learning using artificial intelligence applications. International Journal of Educational Technology in Higher Education, 20, 37. https://doi.org/10.1186/s41239-023-00406-5
Lv, Z. (2023). Generative artificial intelligence in the metaverse era. Cognitive Robotics, 3, 208–217. https://doi.org/10.1016/j.cogr.2023.06.001
Mena-Guacas, A. F. et al. (2023). Collaborative learning and skill development for educational growth of artificial intelligence: A systematic review. Contemporary Educational Technology, 15(3), ep428. https://doi.org/10.30935/cedtech/13123
Merriam, S. B. (1998). Qualitative research and case study application in education. Jossey-Bass.
Niemi, H. (2021). AI in learning: Preparing grounds for future learning. Journal of Pacific Rim Psychology, 15, 1–12. https://doi.org/10.1177/18344909211038105
Rodway, P., & Schepman, A. (2023). The impact of adopting AI educational technologies on projected course satisfaction in university students. Computers and Education: Artificial Intelligence, 5, 100150. https://doi.org/10.1016/j.caeai.2023.100150
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68
Sadegh-Zadeh, S. A. et al. (2023). Exploring undergraduates’ perceptions of and engagement in an AI-enhanced online course. Frontiers in Education, 8, 1252543. https://doi.org/10.3389/feduc.2023.1252543
Sanusi, I. T. et al. (2022). The role of learners’ competencies in artificial intelligence education. Computers and Education: Artificial Intelligence, 3, 100098. https://doi.org/10.1016/j.caeai.2022.100098
Seo, K. et al. (2021). The impact of artificial intelligence on learner–instructor interaction in online learning. International Journal of Educational Technology in Higher Education, 18, 54. https://doi.org/10.1186/s41239-021-00292-9
Wei, L. (2023). Artificial intelligence in language instruction: Impact on English learning achievement, L2 motivation, and self-regulated learning. Frontiers in Psychology, 14, 1261955. https://doi.org/10.3389/fpsyg.2023.1261955
Xia, Q., Weng, X., Ouyang, F., Lin, T. J., & Chiu, T. K. (2024). A scoping review on how generative artificial intelligence transforms assessment in higher education. International Journal of Educational Technology in Higher Education, 21(1), 40. https://doi.org/10.1186/s41239-024-00468-z
Zhai, X. et al. (2021). A review of artificial intelligence (AI) in education from 2010 to 2020. Complexity, 2021, 8812542. https://doi.org/10.1155/2021/8812542
Zhang, C., & Villanueva, L. E. (2023). Generative artificial intelligence preparedness and technological competence: Towards a digital education teacher training program. International Journal of Education and Humanities, 11(2), 164–170. https://doi.org/10.54097/ijeh.v11i2.13753