ORIGINAL RESEARCH ARTICLE
Juhee Kim*
Department of Leadership & Counseling, University of Idaho, Moscow, ID, USA
Received: 12 February 2025; Revised: 10 April 2025; Accepted: 11 April 2025; Published: 24 June 2025
The integration of generative artificial intelligence (AI) tools, such as ChatGPT and DALL-E, presents transformative opportunities and challenges for K-12 education. This mixed-methods study investigates educators’ perceptions, familiarity, and preparedness for AI adoption, as well as institutional strategies and barriers. Quantitative findings indicate strong relationships between AI familiarity, perceived readiness, and institutional planning stages. Qualitative analysis highlights challenges such as insufficient professional development, ethical concerns, and infrastructural inequities, alongside opportunities for enhancing personalised learning and operational efficiency. The findings underscore the need for targeted training, equitable resource access, and clear institutional policies to ensure effective and ethical AI integration. This research offers actionable insights for educators, policymakers, and leaders seeking to navigate AI’s potential in K-12 education.
Keywords: generative artificial intelligence; K-12 education; educator preparedness; AI integration; professional development
*Corresponding author. Email: 0904heeya@gmail.com or juheekim@uidaho.edu
Research in Learning Technology 2025. © 2025 J. Kim. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.
Citation: Research in Learning Technology 2025, 33: 3448 - http://dx.doi.org/10.25304/rlt.v33.3448
The rapid emergence of generative artificial intelligence (AI) tools, such as ChatGPT and DALL-E, is transforming educational practices and reshaping the landscape of K-12 education. These tools present opportunities to revolutionise instructional strategies, enhance operational efficiency, and provide personalised learning experiences (Braaten & Farnsworth, 2024; Klopfer et al., 2024). However, the integration of AI in K-12 settings is still in its infancy, with educators and institutions grappling with varying levels of familiarity, preparedness, and ethical considerations (Akanzire et al., 2023; Gestson & Core Team, 2024). This duality of promise and challenge makes understanding AI adoption in K-12 education critical for its successful implementation.
Recent studies indicate growing interest among educators in adopting AI, with 60% of surveyed teachers reporting experimentation with AI technologies in their classrooms (Klopfer et al., 2024). This enthusiasm reflects the potential of AI to support differentiated instruction, automate administrative tasks, and engage students through innovative approaches. However, significant barriers remain, including concerns about ethical implementation, fairness in student assessment, and the potential impact on teacher–student relationships (Akanzire et al., 2023; Braaten & Farnsworth, 2024). For instance, the use of AI in decision-making raises critical questions about data privacy, algorithmic bias, and equitable access to resources, particularly in rural and underfunded schools (Gestson & Core Team, 2024; Wargo & Hoke, 2022).
In particular, research and policy guidance from Western regulatory and educational contexts has underscored the importance of developing responsible AI governance frameworks to guide educational use (Miao & Holmes, 2023; Oh & Sanfilippo, 2024). While interest in AI policy and governance is rapidly growing, the technical feasibility and classroom readiness of these tools remain inconsistent across educational settings. Many proposed solutions, although promising, currently lack robust empirical validation in real-world K-12 environments. This highlights the need for further critical evaluation, context-sensitive implementation strategies, and ongoing research to ensure that AI adoption in education is both effective and equitable.
As educational leaders navigate the complexities of AI integration, understanding educators’ perceptions and readiness, alongside institutional strategies, becomes imperative. These factors influence not only the adoption process but also its long-term sustainability and impact on teaching and learning (Cheah et al., 2025). The existing research has identified a gap in how educators are supported to integrate AI tools effectively, particularly in aligning them with pedagogical goals and addressing systemic inequities in access and training (Celik et al., 2022; Mishra et al., 2023).
This study seeks to address these gaps by exploring the dynamics of AI adoption in K-12 settings, focusing on the perspectives of educational leaders and teachers. Using a mixed-methods approach, this research aims to provide a comprehensive understanding of educators’ perceptions, their level of preparedness, and the institutional challenges and strategies shaping AI adoption. By analysing both quantitative and qualitative data, this study offers actionable insights to support the equitable and effective integration of AI technologies in diverse educational contexts (Akanzire et al., 2023; Gestson & Core Team, 2024). The research is guided by three objectives: (1) to examine educators’ perceptions of generative AI; (2) to assess educators’ familiarity with and preparedness for AI adoption; and (3) to identify institutional strategies, challenges, and plans for AI integration.
By addressing these objectives, the study contributes to the growing body of literature on AI in education and provides guidance for educators, policymakers, and institutional leaders to navigate the complexities of AI integration in K-12 environments.
The integration of generative artificial intelligence (GenAI) in K-12 education is an emerging area of research, carrying significant implications for teaching, learning, and institutional practices. Generative AI tools, such as ChatGPT and DALL-E, have attracted increasing attention for their potential to enhance operational efficiency, deliver personalised learning experiences, and support educators in both instructional and administrative tasks (Braaten & Farnsworth, 2024; Klopfer et al., 2024). However, alongside these opportunities, the adoption of AI in education presents critical ethical, infrastructural, and pedagogical challenges that require thorough examination.
The integration of GenAI tools in K-12 education holds significant promise for enhancing teaching and learning practices. GenAI tools have demonstrated the potential to automate repetitive tasks, provide personalised learning experiences, and enhance student engagement (Cheah et al., 2025). These tools can streamline administrative processes, support differentiated instruction, and foster greater engagement among students (Gestson & Core Team, 2024; Obeysekare, 2024). For example, Klopfer et al. (2024) highlighted that AI technologies could support differentiated instruction by adapting to individual learning needs, fostering greater equity in educational outcomes.
Teachers have leveraged GenAI to assist in lesson planning, resource curation, and administrative tasks, thereby freeing time for more meaningful instructional activities (Cheah & Kim, 2025). Moreover, studies emphasise that AI can cultivate higher-order thinking skills, such as critical reasoning and creativity, when integrated into project-based or inquiry-based learning frameworks (Holmes et al., 2019; Hsu et al., 2019; Shin & Shin, 2021). AI also facilitates tailored feedback and adaptive assessments, enabling educators to address students’ unique strengths and weaknesses more effectively (Ding et al., 2024; Sanusi et al., 2024). Despite its potential, evidence suggests that GenAI integration is underutilised in daily classroom teaching. Instead, it is predominantly used for administrative and preparatory tasks, limiting its transformative impact on pedagogy (Chakraborty, 2024; Noroozi et al., 2024).
While the benefits of GenAI are compelling, its adoption in K-12 education faces several barriers. These include a lack of funding, insufficient training for educators, and disparities in technological infrastructure, particularly in rural and underfunded schools (Akanzire et al., 2023). Concerns around data privacy, bias, and ethical use of AI further complicate its adoption (Braaten & Farnsworth, 2024). One significant challenge is the lack of professional development (PD) opportunities tailored to GenAI technologies. Studies reveal that many educators feel unprepared to use AI effectively due to insufficient training, limited access to AI tools, and inadequate institutional support (Cheah & Kim, 2025; Celik et al., 2022). Furthermore, rural and underfunded schools, in particular, report challenges related to infrastructure gaps and funding constraints, which exacerbate disparities in technology adoption (Kim & Wargo, 2025; Wargo & Hoke, 2022).
Another critical barrier is the scepticism surrounding AI’s impact on education. Educators have expressed concerns about ethical issues, such as data privacy, algorithmic bias, and the potential for overreliance on AI among students (Mishra et al., 2023; Pangrazio & Selwyn, 2021). For instance, Zafar and colleagues (2025) found that some teachers worried that students using GenAI tools such as ChatGPT might bypass critical thinking processes, relying on AI-generated outputs instead.
Additionally, teachers’ pedagogical beliefs often influence their willingness to adopt new technologies. Resistance from educators, often stemming from unfamiliarity or discomfort with AI technologies, has also been cited as a critical challenge (Obeysekare, 2024). Educators with traditional, teacher-centred instructional approaches may view GenAI as misaligned with their teaching philosophy, while those with constructivist orientations are more likely to explore its potential (Ertmer et al., 2012; Liu, 2011).
Educators’ perceptions and preparedness are central to the successful integration of AI in K-12 education. Research highlights a significant gap between educators’ interest in AI technologies and their readiness to implement these tools effectively. For instance, a study by Cheah et al. (2025) revealed that while many teachers acknowledged the transformative potential of AI for personalised learning and administrative efficiency, a substantial proportion expressed feelings of unpreparedness. Similarly, Akanzire et al. (2023) reported that although 60% of educators were interested in exploring AI, only a minority felt adequately equipped to integrate it into their classrooms, citing a lack of training and understanding of AI applications as key barriers.
The lack of preparedness among educators underscores the critical need for targeted PD programmes. Effective PD initiatives should go beyond technical skills training to address pedagogical strategies for leveraging AI tools in diverse classroom settings (Ding et al., 2024; Kim & Wargo, 2025). For example, PD programmes could include hands-on workshops, co-design opportunities for AI-enhanced lesson plans, and resources for addressing AI-related ethical considerations (Manrique & Palomares, 2024). This holistic approach ensures that educators are not only competent in AI technologies but also confident in integrating them into their teaching practices to improve learning outcomes.
Another challenge lies in bridging the gap between educators’ varying levels of familiarity with AI tools. Cheah et al. (2025) found that less than half of surveyed educators had used generative AI tools such as ChatGPT or DALL-E for classroom activities, and many reported using these tools primarily for administrative or lesson planning purposes rather than student-centred learning. These findings suggest that teacher training should be differentiated to address varying levels of AI familiarity and encourage more pedagogically meaningful uses of AI in the classroom.
Institutional strategies and leadership support play an instrumental role in fostering the successful integration of AI in K-12 education. Clear policy frameworks and guidelines are critical for addressing the ethical, practical, and pedagogical challenges associated with AI adoption. Gestson and Core Team (2024) emphasise that such policies should promote equitable access to AI resources, ensure robust data privacy protection, and provide actionable strategies for sustainable integration at the district and school levels.
Leadership buy-in is equally essential, as it aligns AI initiatives with broader educational goals and ensures the availability of necessary resources for implementation (Kim & Wargo, 2025). For instance, institutional leaders can support educators by prioritising funding for AI-related PD and investing in the necessary technological infrastructure. Schools with strong leadership engagement are more likely to foster a culture of innovation and collaboration, enabling educators to experiment with and adopt new technologies confidently. Rural schools, in particular, face unique challenges in integrating AI due to infrastructure limitations and funding disparities. Kim and Wargo (2025) highlight that many rural educators lack access to reliable internet connections and modern computing devices, making it difficult to implement AI-enhanced teaching strategies. Addressing these disparities requires systemic investments in digital infrastructure and targeted support for underserved schools.
Ethical considerations must also be central to institutional policies on AI integration. Teachers have expressed concerns about data privacy, algorithmic bias, and the ethical implications of using AI in educational settings (Mishra et al., 2023; Pangrazio & Selwyn, 2021). Policy frameworks should include clear guidelines for the responsible use of AI, such as ensuring transparency in AI decision-making processes and providing training on the ethical implications of AI tools in the classroom. Long-term planning is another vital component of effective AI policies. Gestson and Core Team (2024) suggest that schools and districts adopt a phased approach to AI integration, starting with small-scale pilot programs and gradually scaling successful initiatives. This approach allows institutions to evaluate the impact of AI tools, gather feedback from educators and students, and make data-informed adjustments to their implementation strategies.
While existing research highlights the potential benefits and challenges of AI in K-12 education, significant gaps persist in understanding its broader impacts on equity, pedagogy, and long-term educational outcomes. Much of the current literature emphasises AI’s technical capabilities, such as personalising instruction and automating administrative tasks, while neglecting its socio-cultural and pedagogical implications (Holmes et al., 2019; Perrotta & Selwyn, 2019; World Economic Forum, 2024).
AI adoption in K-12 education often exacerbates existing inequities, particularly in underserved and rural schools that face technological and funding limitations (Wargo & Hoke, 2022). While AI can potentially bridge educational gaps, issues such as algorithmic bias and data privacy remain underexplored (Mishra et al., 2023; Pangrazio & Selwyn, 2021). Addressing these concerns requires designing inclusive AI systems that actively mitigate systemic inequities and ensure equitable access to technological resources.
The use of AI in education is underutilised in promoting active, student-centred learning. Most educators employ AI for administrative and lesson planning tasks rather than fostering higher-order thinking skills such as creativity and problem-solving (Sanusi et al., 2024; Shin & Shin, 2021). Research should focus on the pedagogical integration of AI, exploring how it can support inquiry-based and collaborative learning while aligning with teachers’ instructional practices.
This study employed a mixed-methods approach to examine K-12 educators’ perceptions, preparedness, and institutional readiness for AI integration. A survey instrument combined quantitative closed-ended questions, analysed using descriptive and inferential statistics, with qualitative open-ended prompts, explored through thematic analysis (Braun & Clarke, 2006; Creswell & Plano Clark, 2018).
The survey was piloted for validity (Dillman et al., 2014) and distributed electronically using snowball sampling to maximise reach across diverse educational contexts (Celik et al., 2022; Goodman, 1961; Heckathorn, 2011). To ensure the validity of the survey instrument, a pilot test was conducted with a small group of K-12 educators (n = 10) who were not part of the main study. Their feedback was used to refine question wording, clarify response options, and adjust item sequencing. However, because this was an exploratory study and the closed-ended survey questions were not derived from or tied to any pre-existing instruments based on theoretical constructs, conventional reliability testing of the survey instrument was not applicable (Kimberlin & Winterstein, 2008).
This study was reviewed and approved by the University of Idaho Institutional Review Board (IRB Protocol #026288), ensuring compliance with ethical research standards. Participation was voluntary and no sensitive or personally identifiable information was collected from respondents. The survey was anonymous and participants had the option to withdraw at any time without penalty.
The study included 250 K-12 educators from rural, suburban, and urban schools across public, charter, and private institutions. Participants were recruited through professional networks, conferences, and institutional email lists, with additional referrals from principals and teacher attendees at a statewide educational technology conference (Wargo & Hoke, 2022). The sample included teachers, principals, and instructional coaches with varying AI familiarity levels, ensuring a comprehensive exploration of factors influencing AI adoption.
An online survey was used to gather both quantitative and qualitative insights into AI integration. Initial recruitment began in February 2024, followed by email invitations to K-12 principals statewide in March 2024, with reminder emails sent before the semester ended in May and October 2024. Data collection concluded in December 2024, with responses from educators who completed at least 70% of the survey.
The survey instrument comprised three key sections: demographics, closed-ended questions, and open-ended prompts. The demographic section captured participants’ roles, years of experience, subject areas, and institutional types, enabling subgroup analyses on AI integration across different educational contexts. Closed-ended questions used Likert scales, multiple-choice options, and ranking formats to assess AI familiarity, perceptions, and institutional readiness, covering topics such as educators’ engagement with AI tools, perceived benefits, and institutional planning stages. Open-ended prompts provided deeper insights by allowing participants to elaborate on AI’s perceived impact, professional use, and institutional challenges.
This mixed-methods approach facilitated a comprehensive exploration of AI adoption in K-12 education, combining statistical analysis with thematic insights. The structured survey design ensured both breadth and depth, capturing quantitative trends while uncovering nuanced educator perspectives on the opportunities and barriers associated with AI integration.
The data analysis process for this study was designed to rigorously and inclusively address the research objectives of examining educators’ perceptions of generative AI, assessing their familiarity and preparedness for AI adoption, and identifying institutional strategies, challenges, and plans for AI integration. A combination of quantitative and qualitative analytical techniques was employed, ensuring a holistic interpretation of the data.
Descriptive statistical analyses were conducted to summarise the demographic characteristics of the sample and provide an overview of key trends. Frequencies and percentages were calculated to illustrate the distribution of participants across gender, ethnicity, roles (e.g. educational leaders, full-time and part-time teachers) and geographic locations (rural, suburban, urban). Means and standard deviations were calculated to summarise responses to Likert-scale questions on AI familiarity, perceptions of AI’s impact, and institutional readiness. For example, average familiarity scores were analysed to determine how well educators understood generative AI tools like ChatGPT and DALL-E, providing a baseline for interpreting their preparedness and perceptions. This descriptive analysis served as the foundation for identifying general patterns in the data and contextualising subsequent analyses.
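To make the descriptive step concrete, the following sketch shows how such frequency and Likert-scale summaries might be computed; the data and column names here are synthetic illustrations, not the study’s actual dataset.

```python
import pandas as pd

# Illustrative (synthetic) responses; column names are hypothetical,
# not the study's actual instrument variables.
df = pd.DataFrame({
    "role": ["full-time teacher", "part-time teacher", "leader",
             "full-time teacher", "leader", "full-time teacher"],
    "locale": ["rural", "rural", "suburban", "urban", "rural", "rural"],
    "ai_familiarity": [3, 2, 5, 4, 1, 4],   # 1-6 Likert scale
    "ai_impact": [4, 3, 5, 4, 2, 5],        # 1-5 Likert scale
})

# Frequencies and percentages for categorical demographics
role_counts = df["role"].value_counts()
role_pcts = (role_counts / len(df) * 100).round(1)

# Means and standard deviations for the Likert-scale items
likert_summary = df[["ai_familiarity", "ai_impact"]].agg(["mean", "std"])

print(role_pcts)
print(likert_summary)
```

The same pattern extends to any categorical demographic (gender, ethnicity, institutional type) and any Likert item in the instrument.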
Correlation analysis was performed to examine relationships between familiarity, preparedness, and institutional planning for AI adoption. Pearson correlation coefficients were calculated to assess the strength and direction of these relationships. For instance, the analysis tested whether higher familiarity with AI tools was associated with greater perceived preparedness for integration. Significant correlations informed subsequent regression analyses, which were conducted to predict preparedness and institutional planning based on familiarity and other demographic variables. Linear regression models were employed to explore the extent to which familiarity, role, and institutional type predicted educators’ preparedness for AI adoption. These models provided insights into the factors that most strongly influenced readiness for AI integration, directly addressing the study’s second and third objectives.
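As a minimal sketch of the correlation step, the snippet below computes a Pearson coefficient on synthetic scores; the variables and their relationship are assumptions for illustration only, not the survey data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic Likert-style scores standing in for the survey variables
familiarity = rng.integers(1, 7, size=100)            # 1-6 scale
# Preparedness loosely tracks familiarity plus noise, so the two correlate
preparedness = familiarity + rng.normal(0, 1.5, size=100)

# Pearson r gives the strength and direction of the linear relationship
r, p_value = pearsonr(familiarity, preparedness)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```

A significant positive r here would mirror the study’s logic: educators more familiar with AI tools also tend to report feeling more prepared.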
To enhance clarity and accessibility for a broader readership, technical terms such as ‘thematic analysis’, ‘taxonomy’, and ‘institutional planning stages’ are defined. Thematic analysis refers to a qualitative research method used to identify patterns or themes within data (Braun & Clarke, 2006). The taxonomy in this study represents categorised themes emerging from educators’ responses, while institutional planning stages indicate the phases of AI adoption in educational settings.
The taxonomy and thematic categories presented in the findings of this study were derived directly from participants’ responses through a rigorous thematic analysis process, rather than from expert consultation or existing governance frameworks. Following Braun and Clarke’s (2006) six-phase framework, qualitative data from the open-ended survey responses were coded inductively. Emerging codes were then grouped into broader themes and subcategories, which formed the basis for the taxonomy of challenges, opportunities, and strategies related to AI integration in K-12 education.
Responses were first read to identify initial codes related to perceptions of AI’s impact, challenges, and institutional strategies. These codes were then grouped into broader themes, such as ‘enhancing operational efficiency’, ‘ethical concerns’, ‘lack of infrastructure’, and ‘importance of PD’. Themes were refined and validated by revisiting the original data, ensuring that they accurately captured participants’ perspectives. For example, responses about AI’s positive impacts often centred on themes of personalised learning and efficiency, while challenges highlighted concerns about data privacy and teacher resistance. This thematic analysis allowed the study to delve deeper into the nuanced experiences of educators, complementing the quantitative findings with rich qualitative insights.
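The mechanical part of this process, grouping inductive codes under broader themes and tallying their frequency, can be sketched as follows; the coded excerpts below are hypothetical illustrations, not the study’s actual responses.

```python
from collections import Counter

# Hypothetical (code, theme) pairs a researcher might assign during
# inductive coding; illustrative only, not the study's data.
coded_responses = [
    ("saves grading time", "enhancing operational efficiency"),
    ("adapts to student level", "personalised learning"),
    ("worried about student data", "ethical concerns"),
    ("no rural broadband", "lack of infrastructure"),
    ("need hands-on workshops", "importance of PD"),
    ("automates lesson prep", "enhancing operational efficiency"),
]

# Tally how often each broader theme appears across responses
theme_counts = Counter(theme for _, theme in coded_responses)

# Group the underlying codes beneath each theme for later review
codes_by_theme = {}
for code, theme in coded_responses:
    codes_by_theme.setdefault(theme, []).append(code)

print(theme_counts.most_common())
```

In practice, the analytically demanding work is the interpretive coding itself; tooling of this kind only helps organise and audit the resulting taxonomy.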
The descriptive statistics provided an overview of the participants’ demographics, familiarity with AI, their perceptions of AI’s impact on education, and their institutions’ stages of AI planning. These statistics summarised the key characteristics of the sample and highlighted trends that informed subsequent analyses.
Most participants (N = 207, 82.8%) identified as White/Caucasian, with small percentages identifying as American Indian or Alaska Native (2.8%) or other ethnicities (4.0%). Most participants (N = 183, 73.2%) were from rural areas, with smaller percentages from suburban (15.0%) and urban (3.8%) locations. Regarding gender, 30.1% of participants identified as male and 59.9% as female. Full-time teachers made up the largest group of respondents, representing 58.8% (N = 147) of the total sample, reflecting the substantial engagement of classroom educators in the survey. Part-time teachers constituted 19.2% (N = 48), followed by educational leaders such as principals, vice principals, and superintendents, who made up 17.6% (N = 44) of the participants.
This distribution highlights the focus of the survey on understanding the perspectives of educators who are directly involved in classroom teaching while also capturing the strategic viewpoints of educational leaders responsible for policy and planning. The significant representation of full-time teachers ensures that findings reflect the experiences of practitioners who are most likely to implement AI tools in daily educational settings. Conversely, the inclusion of leaders provides insights into institutional readiness and strategic planning for AI integration.
As shown in Table 1, the average familiarity score among participants was 3.2 (standard deviation [SD] = 1.1) on a scale where 1 indicated ‘Not familiar at all’ and 6 indicated ‘Used AI for both personal and professional purposes’. This indicates a moderate level of familiarity overall, with a substantial portion of respondents (86%) reporting at least some level of familiarity. Notably, 14% of participants indicated that they were entirely unfamiliar with AI, reflecting a gap in exposure that may influence preparedness for AI adoption. The relatively high percentage of participants familiar with AI suggests that many educators have engaged with these tools to varying extents, potentially as part of their PD or personal experimentation.
Participants’ perceptions of AI’s impact on K-12 education were overwhelmingly positive, with a mean score of 4.1 (SD = 0.8) on a five-point scale ranging from ‘Extremely Negative’ (1) to ‘Extremely Positive’ (5). Among the respondents, 40% viewed AI’s impact as ‘Extremely Positive’, while 35% rated it as ‘Somewhat Positive’. This distribution indicates that the majority of educators recognise the potential benefits of AI in education, such as enhancing student engagement, personalising learning, or streamlining administrative processes. However, a smaller subset of respondents expressed neutral or negative perceptions, highlighting the importance of addressing concerns about challenges such as data privacy, algorithmic bias, and the ethical use of AI.
When asked about their institutions’ planning and readiness for AI integration, 45% of respondents indicated that their institutions were either actively implementing AI or had completed initial phases of deployment (see Table 2). Specifically, 30% reported being in the planning stages, while only 10% stated that their institutions had no interest in adopting AI. This distribution reflects a growing recognition of AI’s importance in education and a shift towards strategic planning for its integration. The variation in institutional readiness underscores the importance of targeted support, particularly for schools in the early stages of adoption, to ensure successful implementation.
A significant positive correlation was found between AI familiarity and perceived readiness (r = 0.45, p < 0.05). This result indicates that participants who reported higher familiarity with AI tools were also more likely to feel prepared to integrate these technologies into their teaching practices. This relationship underscores the importance of exposure to and experience with AI tools in building educators’ confidence and readiness for adoption.
A multiple regression analysis was conducted to identify predictors of AI adoption (binary outcome: 0 = No adoption, 1 = Adoption). The predictors included AI familiarity and institutional planning stage. The regression model was significant, explaining 35% of the variance in AI adoption (R² = 0.35), indicating a robust relationship between these predictors and the outcome (Table 3). Both AI familiarity (B = 0.40, p < 0.01) and institutional planning stage (B = 0.30, p < 0.05) were significant predictors of AI adoption, as shown in Table 4.
Table 3. Regression model summary.

| Model | R | R² | Adjusted R² | Std. error of the estimate | F | Sig. (p) |
|---|---|---|---|---|---|---|
| 1 | 0.59 | 0.35 | 0.34 | 0.45 | 15.32 | <0.01 |
Table 4. Regression coefficients predicting AI adoption.

| Predictor variable | Unstandardised coefficient (B) | Standard error (SE) | Standardised coefficient (Beta) | t | p |
|---|---|---|---|---|---|
| (Constant) | 0.85 | 0.12 | — | 7.08** | <0.01 |
| AI familiarity | 0.40 | 0.08 | 0.38 | 5.00** | <0.01 |
| Institutional planning stage | 0.30 | 0.10 | 0.25 | 3.00* | <0.05 |

*p < 0.05, **p < 0.01.
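A regression of this form, OLS on a 0/1 adoption outcome (a linear probability model), can be sketched as below; the synthetic data-generating assumptions are illustrative only and are not intended to reproduce the study’s coefficients.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 250

# Synthetic predictors standing in for the survey variables
familiarity = rng.integers(1, 7, size=n).astype(float)   # 1-6 scale
planning = rng.integers(1, 5, size=n).astype(float)      # planning stage

# Synthetic binary adoption outcome whose odds rise with both predictors
logit = -4 + 0.6 * familiarity + 0.5 * planning
adopted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Linear probability model: OLS of the 0/1 outcome on the predictors
X = np.column_stack([np.ones(n), familiarity, planning])
beta, *_ = np.linalg.lstsq(X, adopted, rcond=None)

# R^2 from residual and total sums of squares
fitted = X @ beta
ss_res = np.sum((adopted - fitted) ** 2)
ss_tot = np.sum((adopted - adopted.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print("coefficients (constant, familiarity, planning):", np.round(beta, 3))
print("R^2:", round(r_squared, 3))
```

With a binary outcome, logistic regression is the more conventional choice; the linear form is shown here because it yields the B, SE, and R² quantities reported in Tables 3 and 4.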
The regression results further underscore the role of familiarity and institutional planning in predicting AI adoption. Educators who are more familiar with AI tools and belong to institutions actively planning or implementing AI strategies are significantly more likely to adopt these technologies. These findings emphasise the need for targeted PD programmes to increase familiarity with AI and institutional efforts to prioritise planning for AI integration.
The thematic analysis of open-ended survey responses revealed three overarching themes: positive perceptions of AI, challenges in AI integration, and future directions for implementation (Table 5). These themes provide a nuanced understanding of K-12 educators’ experiences and perspectives on generative AI.
Educators highlighted several benefits of AI integration, emphasising its potential to improve both teaching efficiency and student engagement. Respondents frequently mentioned that AI tools streamline administrative tasks, such as grading, lesson planning, and data analysis. For example, one participant stated, ‘AI tools reduce the time spent on repetitive tasks, allowing me to focus on teaching’. This operational efficiency was seen as particularly valuable for managing large classes or balancing instructional and administrative responsibilities.
AI was recognised for its ability to support differentiated instruction. Participants noticed how AI tools adapt to individual students’ learning needs, fostering equity in educational outcomes. As one respondent observed, ‘AI helps me tailor lessons to students who need extra support while allowing advanced learners to explore more challenging content’. Many educators found AI tools to be engaging for students, particularly through interactive and visual learning experiences. Tools such as ChatGPT and DALL-E were cited as effective for sparking creativity and critical thinking in project-based activities.
Despite its promise, educators identified significant barriers to the successful adoption of AI in K-12 settings. Insufficient financial resources were a recurring concern. Many educators reported that their schools lack the budget to invest in the necessary infrastructure, tools, and PD. One participant explained, ‘Without funding for AI tools and training, it’s impossible to implement AI effectively’.
A significant proportion of educators expressed feelings of unpreparedness to use AI in their classrooms, citing insufficient training and limited familiarity with AI technologies. For instance, one respondent noted, ‘We’ve been introduced to AI, but there’s no guidance on how to use it meaningfully in teaching’. Participants from rural schools emphasised the lack of reliable internet access and modern devices, which hinder their ability to integrate AI. This disparity underscores the need for targeted support for underserved communities.
Ethical concerns were also prominent: issues such as data privacy, algorithmic bias, and plagiarism were raised as potential challenges. As one participant warned, ‘There’s a fine line between leveraging AI and relying on it too much, especially when it comes to student work’.
Participants provided insights into strategies for overcoming these challenges and advancing AI adoption in K-12 education. Respondents consistently emphasised the need for tailored PD programmes to build AI competencies among educators. Suggestions included workshops, hands-on training, and collaborative learning opportunities. One participant stated, ‘We need practical training that shows us how to use AI tools in real classroom scenarios’.
Strong leadership was identified as a critical factor for successful AI integration. Respondents highlighted the importance of institutional commitment, with one educator noting, ‘Our leaders need to champion AI initiatives and provide the necessary resources for implementation’. Clear guidelines and policies were deemed essential for addressing ethical concerns and ensuring responsible AI use. Educators stressed the importance of frameworks that prioritise equity and transparency. For example, one respondent advocated for ‘district-wide policies that set clear boundaries and expectations for AI use in schools’.
The findings of this study reveal significant variability in AI preparedness and perceptions among K-12 educators. Educational leaders reported higher levels of readiness, reflecting their strategic roles and responsibilities in institutional planning. This aligns with previous studies indicating that leadership involvement is a critical factor in fostering innovation and technological integration in schools (Cheah et al., 2025; Gestson & Core Team, 2024). Leaders were more likely to identify their institutions as being in advanced stages of AI planning or implementation, underscoring their role in driving organisational change.
In contrast, teachers expressed greater uncertainty regarding their preparedness to adopt AI tools, with many citing a lack of training and resources. This aligns with research showing that educators often feel unprepared to integrate emerging technologies due to insufficient PD opportunities (Celik et al., 2022). The discrepancy between leaders’ readiness and teachers’ perceptions highlights a gap in support mechanisms and training initiatives, which could hinder the widespread adoption of AI in classrooms.
The study also revealed positive perceptions of AI’s potential to enhance operational efficiency, personalise learning, and improve student engagement. However, concerns about funding, infrastructure, and ethical considerations such as data privacy and algorithmic bias emerged as significant barriers, particularly among educators in rural and underfunded schools. These findings are consistent with previous literature emphasising the structural and systemic challenges that disproportionately affect underserved educational contexts (Mishra et al., 2023; Wargo & Hoke, 2022).
These findings directly address the research gaps identified in the literature review, particularly the need to better understand educators’ preparedness and institutional readiness for AI integration. By highlighting the discrepancies between educators’ perceived preparedness and the level of institutional support, this study reinforces prior concerns regarding insufficient PD and infrastructure barriers (Celik et al., 2022; Kim & Wargo, 2025). In addition, the findings expand on the challenges related to equitable AI integration, emphasising how resource disparities, especially in rural and underserved schools, continue to influence educators’ capacity to implement AI technologies effectively.
Furthermore, while this study primarily focused on K-12 educators in the United States, the findings have broader relevance for international educational settings. Educators across diverse global contexts face similar challenges in integrating AI, such as infrastructure limitations, ethical concerns, and disparities in PD (Bautista et al., 2024; Gârdan et al., 2025). However, it is important to note that institutional readiness, policy frameworks, and available resources vary considerably across countries (Miao & Holmes, 2023). Future research should further explore how localised factors influence educators’ perceptions and preparedness for AI integration in international contexts, including settings with different technological and educational landscapes.
Targeted PD programmes are essential for building educators’ AI competencies. These initiatives should go beyond technical training to address pedagogical strategies for integrating AI into diverse classroom settings. For instance, hands-on workshops and collaborative learning opportunities can help teachers explore the practical applications of AI tools, such as using generative AI for personalised learning or creative projects (Ding et al., 2024). Leadership training for administrators should also be prioritised, as their support is critical in fostering a culture of innovation and ensuring the availability of resources for AI adoption.
Addressing disparities in funding and infrastructure is vital to ensuring equitable access to AI tools and resources. Rural schools, in particular, require targeted investments in reliable internet connectivity and modern devices to enable effective AI integration. Policymakers should allocate funding to bridge these gaps and provide underserved schools with the necessary technological and human resources (Wargo & Hoke, 2022). Efforts to ensure equitable access must also include initiatives to reduce algorithmic bias and promote inclusivity in AI systems.
Lastly, the development of clear policies and institutional strategies is crucial for addressing ethical, practical, and pedagogical challenges associated with AI adoption. Policies should include guidelines for data privacy, responsible AI use, and transparency in algorithmic decision-making. Schools and districts should adopt a phased approach to AI implementation, beginning with small-scale pilot programmes to evaluate the impact and gather feedback from educators and students. Leadership engagement is also critical, as institutional leaders play a key role in aligning AI initiatives with broader educational goals and priorities. Collaborative partnerships between schools, AI developers, and community stakeholders can further enhance the effectiveness and sustainability of AI integration efforts.
This study is limited by its cross-sectional design, which captures only a snapshot of educators’ AI perceptions and preparedness. Longitudinal research is needed to assess the sustained impact of AI on student learning, critical thinking, and teacher–student relationships (Holmes et al., 2019; Sanusi et al., 2024). Furthermore, interdisciplinary research that integrates education, technology, and policy perspectives is necessary to develop equitable AI frameworks, particularly for underserved schools (Dai et al., 2022; Gestson & Core Team, 2024). Future studies should also explore the role of institutional policies and leadership support in shaping AI adoption over time.
Generative AI has the potential to revolutionise K-12 education, but its effective integration requires addressing disparities in preparedness, training, and infrastructure. While educators recognise AI’s ability to enhance efficiency and engagement, challenges such as limited PD, inequitable access, and ethical concerns remain significant barriers. The gap between leaders’ strategic readiness and teachers’ preparedness underscores the need for targeted training initiatives and institutional support to foster confident and competent AI adoption.
Policy and leadership play a crucial role in guiding AI implementation, requiring ethically informed policies that ensure equitable access and responsible AI use. As AI technologies evolve, collaborative efforts across educators, policymakers, and institutional leaders will be essential to maximising their transformative potential. Long-term research is necessary to evaluate AI’s ongoing impact on student outcomes, teacher roles, and systemic equity, ensuring its sustainable and inclusive adoption in K-12 education.
Akanzire, B. N., Nyaaba, M., & Nabang, M. (2025). Generative AI in teacher education: Teacher educators’ perception and preparedness. Journal of Digital Educational Technology, 5(1), ep2508. https://doi.org/10.30935/jdet/15887
Bautista, A., Estrada, C., Jaravata, A. M., Mangaser, L. M., Narag, F., Soquila, R., & Asuncion, R. J. (2024). Preservice teachers’ readiness towards integrating AI-based tools in education: A TPACK approach. Educational Process: International Journal, 13(3), 40–68. https://doi.org/10.22521/edupij.2024.133.3
Braaten, E. & Farnsworth, K. (2024). Educators’ Perspectives on Generative AI in K-12: Informing AI in Education Guidance. Friday Institute for Educational Innovation, North Carolina State University. Retrieved from https://fi.ncsu.edu/resource-library/perspectives-ai-in-k12/
Braun, V. & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Celik, I. et al. (2022). The promises and challenges of artificial intelligence for teachers: a systematic review of research. TechTrends, 66(4), 616–630. https://doi.org/10.1007/s11528-022-00715-y
Chakraborty, S. (2024). Generative AI in modern education society. arXiv preprint arXiv:2412.08666. https://doi.org/10.48550/arXiv.2412.08666
Cheah, Y. & Kim, J. (2025). STEM teachers’ perceptions, familiarity, and support needs for integrating generative artificial intelligence in K-12 education. School Science and Mathematics, 1–16. https://doi.org/10.1111/ssm.18334
Cheah, Y. H., Lu, J. & Kim, J. (2025). Integrating generative artificial intelligence in K-12 education: examining teachers’ preparedness, practices, and barriers. Computers and Education: Artificial Intelligence, 8, Article 100363. https://doi.org/10.1016/j.caeai.2025.100363
Creswell, J. W. & Plano Clark, V. L. (2018). Designing and Conducting Mixed Methods Research (3rd ed.). Sage Publications.
Dai, Y. et al. (2022). Collaborative construction of artificial intelligence curriculum in primary schools. Journal of Engineering Education, 112(1), 23–42. https://doi.org/10.1002/jee.20503
Dillman, D. A., Smyth, J. D. & Christian, L. M. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). Wiley.
Ding, A. C. E. et al. (2024). Enhancing teacher AI literacy and integration through different types of cases in teacher professional development. Computers and Education Open, 6, Article 100178. https://doi.org/10.1016/j.caeo.2024.100178
Ertmer, P. A., Ottenbreit-Leftwich, A. T. & Sadik, O. (2012). Teacher beliefs and technology integration practices: a critical relationship. Computers & Education, 59(2), 423–435. https://doi.org/10.1016/j.compedu.2012.02.001
Gârdan, I. P., Manu, M. B., Gârdan, D. A., Negoiță, L. D. L., Paștiu, C. A., Ghiță, E., & Zaharia, A. (2025). Adopting AI in education: optimizing human resource management considering teacher perceptions. Frontiers in Education, 10, Article 1488147. https://doi.org/10.3389/feduc.2025.1488147
Gestson, C. & Core Team. (2024). Generative Artificial Intelligence in K-12 Education: Guidance for Arizona Schools and School Systems. Northern Arizona University.
Goodman, L. A. (1961). Snowball sampling. Annals of Mathematical Statistics, 32(1), 148–170. https://doi.org/10.1214/aoms/1177705148
Heckathorn, D. D. (2011). Snowball versus respondent-driven sampling. Sociological Methodology, 41(1), 355–366. https://doi.org/10.1111/j.1467-9531.2011.01244.x
Holmes, W., Bialik, M. & Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.
Hsu, T. C., Chang, S. C. & Hung, Y. T. (2019). How to learn and how to teach computational thinking: suggestions based on a review of the literature. Computers & Education, 126, 296–310. https://doi.org/10.1016/j.compedu.2018.07.004
Kim, J. & Wargo, E. (2025). Empowering educational leaders for AI integration in rural STEM education: challenges and strategies. Frontiers in Education, 10, 1–13. https://doi.org/10.3389/feduc.2025.1567698
Kimberlin, C. L. & Winterstein, A. G. (2008). Validity and reliability of measurement instruments used in research. American Journal of Health-System Pharmacy, 65(23), 2276–2284. https://doi.org/10.2146/ajhp070364
Klopfer, E., Reich, J., Abelson, H., & Breazeal, C. (2024). Generative AI and K-12 Education: An MIT Perspective. An MIT Exploration of Generative AI. https://doi.org/10.21428/e4baedd9.81164b06
Liu, S.-H. (2011). Factors related to pedagogical beliefs of teachers and technology integration. Computers & Education, 56(4), 1012–1022. https://doi.org/10.1016/j.compedu.2010.12.001
Manrique, P. C. J. & Palomares, N. R. (2024). Embracing the future: exploring teachers’ perspective and readiness for integrating artificial intelligence (AI) in mathematics classrooms in selected public and private senior high schools. Ignatian International Journal for Multidisciplinary Research, 2(5), 2654–2675.
Miao, F. & Holmes, W. (2023). Guidance for Generative AI in Education and Research. UNESCO. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000386693
Mishra, P., Warr, M. & Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235–251. https://doi.org/10.1080/21532974.2023.2247480
Noroozi, O. et al. (2024). Generative AI in education: pedagogical, theoretical, and methodological perspectives. International Journal of Technology in Education (IJTE), 7(3), 373–385. https://doi.org/10.46328/ijte.845
Obeysekare, E. (2024). Responsible use of Generative AI in K-12 STEAM Education (RAISE). Creative Inquiry. Retrieved from https://creativeinquiry.lehigh.edu/impact-fellowships/global-social-impact-fellowship/responsible-use-generative-ai-k-12-steam
Oh, S. & Sanfilippo, M. (2024). University governance for responsible AI. In Proceedings of the ALISE Annual Conference. https://doi.org/10.21900/j.alise.2024.1706
Pangrazio, L. & Selwyn, N. (2021). Towards a school-based ‘critical data education’. Pedagogy, Culture & Society, 29(3), 431–448. https://doi.org/10.1080/14681366.2020.1747527
Perrotta, C. & Selwyn, N. (2019). Deep learning goes to school: toward a relational understanding of AI in education. Learning, Media and Technology, 45(3), 251–269. https://doi.org/10.1080/17439884.2020.1686017
Sanusi, I. T., Ayanwale, M. A. & Chiu, T. K. F. (2024). Investigating the moderating effects of social good and confidence on teachers’ intention to prepare school students for artificial intelligence education. Education and Information Technologies, 29, 273–295. https://doi.org/10.1007/s10639-023-12250-1
Shin, W. S. & Shin, D. H. (2021). A case study on the application of plant classification learning for 4th grade elementary school using machine learning in online learning. Journal of Korean Elementary Science Education, 40(1), 66–80.
Wargo, E. & Hoke, I. (2022). Revisiting rural education access. Educational Considerations, 48(2), Article 5. https://doi.org/10.4148/0146-9282.2333
World Economic Forum. (2024). Shaping the Future of Learning: The Role of AI in Education 4.0. Insight report. Retrieved from https://www3.weforum.org/docs/WEF_Shaping_the_Future_of_Learning_2024.pdf
Zafar, S. et al. (2025). The effect of ChatGPT on the critical thinking skills of secondary students: a survey-based study. Journal of Social Signs Review, 3(3), 243–259.