ORIGINAL RESEARCH ARTICLE

Augmented sociomateriality: implications of artificial intelligence for the field of learning technology

Aditya Johri*

Department of Information Sciences & Technology, George Mason University, Fairfax, VA, USA

(Received: 18 August 2021; Revised: 6 January 2022; Accepted: 23 February 2022; Published: 19 May 2022)

There has been a conscious effort in the past decade to produce a more theoretical account of the use of technology for learning. At the same time, advances in artificial intelligence (AI) are being rapidly incorporated into learning technologies, significantly changing their affordances for teaching and learning. In this article I address the question of whether the introduction of AI and associated features such as machine learning is a novel development from a theoretical perspective, and if so, how. I draw on the existing perspective of sociomateriality for learning and argue that the use of AI is indeed different because AI transforms sociomateriality by allowing materiality to take on characteristics previously associated primarily with a human agent, thereby shifting the nature of the sociomaterial assemblage. In this data- and algorithm-driven AI-based sociomateriality, affordances for representation and agency change, thereby modifying representational and relational practices that are essential for cognition. The dualities of data/algorithm, representational/agentic augmentation, and relational/participatory practices act in tandem within this new sociomaterial assemblage. If left unchecked, this new assemblage is prone to perpetuating the biases programmed into the technology itself. Therefore, it is important to take the ethical and moral implications of AI-driven learning technologies into account before their use.

Keywords: learning technology; artificial intelligence; augmented sociomateriality; theoretical perspective; conceptual framework

*Corresponding author. Email: johri@gmu.edu


Citation: Research in Learning Technology 2022, 30: 2642 - http://dx.doi.org/10.25304/rlt.v30.2642

Introduction

The past decade has seen a conscious effort to produce a more theoretical account of the use of technology for learning, including in the pages of this journal (Jones & Czerniewicz 2011). Concurrently, the use of technology for learning has been increasing at a rapid pace, and the coronavirus disease 2019 (COVID-19) pandemic has accelerated this shift (Dhawan 2020). From the perspective of information technology-based applications, services, and platforms, the immediate focus of this shift has been on providing better access to content, enhancing participants' ability to interact, and supporting different forms of assessment.

One class of applications that has recently found favour within the growing use of technology for learning integrates aspects of Artificial Intelligence (AI), broadly defined (see Table 1) (Williamson & Eynon 2020). Examples of these offerings include personalised learning and tutoring, automation of grading and assessment, and content curation and recommendation, among others. Given these recent advances and the introduction of AI, it is a critically important time to revisit our theoretical understanding of learning technologies (Bennett & Oliver 2011). What, if anything, does AI make different in our understanding of how technologies shape learning?

Table 1. Definitions of algorithm, machine learning, and artificial intelligence (AI).

Algorithm: A process or set of rules – that is, a finite sequence of well-defined instructions – to be followed in calculations or other problem-solving operations, primarily though not exclusively by a computing machine. An everyday, non-computing example of an algorithm is a recipe for baking a cake.

Machine learning: The study of computer algorithms that can improve automatically – that is, learn – through experience and by the use of data. The two main categories are supervised learning (where data are pre-coded by humans and then fed to the machine) and unsupervised learning (where the machine itself detects patterns and then learns from them).

Artificial intelligence (AI): A wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence (i.e. cognitive capabilities typically associated with humans). Machine learning is the core component of how computers achieve this goal.
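To make the distinction in Table 1 concrete, the following is a minimal sketch in Python using the widely available scikit-learn library; the toy feature vectors and labels are illustrative assumptions, not data from any real learning application.

```python
# Supervised vs. unsupervised learning in miniature (illustrative data only).
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[1.0, 0.2], [0.9, 0.1], [0.2, 0.9], [0.1, 1.0]]  # feature vectors

# Supervised: humans pre-code the data with labels; the machine learns a rule.
y = [0, 0, 1, 1]                                      # human-provided labels
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[0.8, 0.3]]))               # label for unseen input

# Unsupervised: the machine detects patterns in the data without any labels.
clusters = KMeans(n_clusters=2, n_init=10).fit(X)
print(clusters.labels_)                               # discovered groupings
```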

Most AI experts agree that although Artificial General Intelligence (AGI), the kind of AI that acts just like a human, is a long way off, Artificial Narrow Intelligence (ANI) is making rapid progress. Driverless cars, drone deliveries, and conversations with devices that detect and respond to human speech are all applications of ANI that just a few years ago were unfathomable. Although ANI applications have limitations, they have slowly started to augment human practices across a range of activities. By leveraging data across sources and by using algorithms that make sense of those data in novel ways, ANI built into work applications has started to augment design and creativity (e.g. PowerPoint) and, in education, the grading of assignments and the personalisation of problem sets. This shift towards AI-driven technologies is noteworthy because once we tease out the hyperbole of AI from its reality, advances in AI present a significant opportunity to create new forms of learning environments, including those that differ considerably from current practices.

To appreciate how AI might transform learning, especially from a theoretical perspective, it is important to look beyond the technological determinist viewpoint in which technology serves as a solution – as either a replacement or a substitute for an already existing function (Jones & Czerniewicz 2011). The move towards online teaching during the pandemic, for instance, is an exemplar of this approach, with in-person activities substituted by technology-mediated interactions. In reality, as outlined in a practice-based view, learning is an ambiguous process in which knowledge is produced continuously through situated action (Greeno 2006; Johri & Olds 2011; Johri, Olds & O'Connor 2014; Lave & Wenger 1991). When people learn, they draw on their physical, and increasingly virtual, presence in a social setting, on their cultural background and experience, on sentient and sensory information, and on the material that is available to them (Blackler 1995; Orlikowski 2002; Tyre & von Hippel 1997). To fully grasp the shift in learning due to AI, we need to adopt an interpretive stance that allows us to comprehend human cognition from a practice-based viewpoint. One avenue for building this understanding is the sociomaterial perspective (Johri 2011; Latour 2005; Orlikowski 2000, 2002; Orlikowski & Scott 2008; Sørensen 2008; Suchman 2007).

A sociomaterial perspective argues for equal foci on the social and the material contexts in which learning takes place. Thus, sociomateriality is about encapsulating the meaning of the material – how it matters – in learning practices (Fenwick & Landri 2012; Johri 2011). The presence of material itself is less important than how the material is configured in practice and enacted in the moment. The material changes as it gets its meaning from practice, and this meaning changes as practices change. This entanglement of the social and the material does not imply that there can be no distinction between the two, but rather that any such distinction is analytical and must recognise that these entities constitute an assemblage and necessarily entail each other in practice (Orlikowski & Scott 2008). To apply this perspective to build an understanding of AI-driven learning technologies, and to expand our understanding of how sociomaterial understanding itself is changing due to AI, it is important to reflect on how human cognition is augmented by new forms of technology – that is, what has changed because of AI?

AI and augmentation of sociomateriality for learning

Humans have augmented their lives for centuries using tools and technologies (Ong 2002). In particular, whether through the development of language and oral culture or the symbolic system of writing, humans have always found ways to augment their cognitive capabilities to become more 'intelligent'. By offloading their thinking and exchange of ideas to an external representational system (e.g. language), they have been able to make remarkable progress at a societal level (Hutchins 1999). In doing so, they have not only made use of such representational systems but have also created tools for support. A calculator of one form or another – whether an abacus or a mechanical device – is an example of augmentation (Norman 1991). From a situated learning perspective, especially the distributed cognition lens, distributing some of our thinking to other artefacts has allowed us to handle complexity and take on tasks that might not otherwise have been tackled (Pea 1985). In the past half-century, the introduction of computation to this process has taken the human ability to be intelligent to another level (Engelbart 1962). Computers have not only amplified but rearranged how humans do what they do – electronic calculators have not just offloaded day-to-day tasks but have allowed humans to undertake tasks requiring calculations that were previously untenable (Pea 1985).

In many ways AI is augmenting learning as many other tools and technologies, from books to whiteboards, have done over centuries, but the use of AI differs from that of previous technologies because AI is fundamentally shifting the role of humans in learning practices. AI-enabled devices and services exemplify a new sociomateriality wherein digital materiality can enact attributes that have largely been associated with a human, and with social norms more broadly (Leonardi 2011). Primarily, our conceptions of communication, of self and identity, and of agency and autonomy have developed in relation to other people; our very existence, at a social level, is in relation to others (Hancock, Naaman & Levy 2020). And even though language, objects and artefacts, or technology in any form, have always shaped how humans communicate and interact, AI gives technology the agency – the power – to initiate interaction, to be a communicator on par with other humans. Although materiality could previously contain agency through its participation in an assemblage, a response from it was largely the result of human input. Materiality has now taken on the role of the agent: it can act on its own, without any input from a human – either immediate or through programmed algorithms – based on data and logic it generates itself. Furthermore, through the incorporation of neural networks, and a class of algorithms called deep learning algorithms, it is possible to draw insights that are uniquely novel – a product of data and algorithms, not susceptible to human influence except for the initiation of the process.

Machines now have the ability to provide ideas without being asked or prompted, and increasingly we are becoming comfortable with actions initiated by them. When a favourite voice-activated device suggests that we listen to a certain song, we often take it up on the suggestion. While watching shows or videos on digital content platforms, we often click on what is recommended for us. Our actions in turn shape the outcomes for other users, and over time an entire ecosystem of music recommendations is established beyond any real initial input from a programmer. Similarly, a maps application learns from our behaviour to adapt to us, but it also learns from the actions of other drivers in our vicinity to make its guidance more useful. When we call customer service or, better still, log in to chat, an AI-based 'bot' asks us questions and uses information it already has, in conjunction with our query, to provide a response. Finally, the news we read is now often written by automated systems and personalised to us. Beyond personal systems, AI now drives decision-making in admissions, for loans and credit, and for identifying potential crimes and criminals. It is this ability of AI-driven technology to undertake predictive actions that is unique, and it is through the augmentation of these small but necessary tasks, rather than any one big device or piece of software, that AI will impact learning practices in the long term.

Already, we see many AI- or machine learning-based applications in education (Zhang & Aslan 2021). For instance, writing is one of the core skills we have to learn, and the past few years have seen significant uptake of applications that correct grammar or even suggest words or sentences to write (Johri 2020).
There are also systems in use that help predict student success based on prior performance, allowing faculty and advisors to make timely interventions to support student success (Sweeney, Rangwala, Lester & Johri 2016). Finally, many applications are now available that directly support a learner by providing a personalised intervention based on their current level of understanding of a topic. These intelligent tutors or agents provide 'customized, timely, and appropriate materials, guidance, and feedback to learners' (Zhang & Aslan 2021, p. 5).
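As an illustration of the prediction systems just mentioned, the sketch below fits a simple linear model to toy data. Sweeney et al. (2016) frame the actual problem as a recommender-systems task, so this regression, along with its features and intervention threshold, is a simplified assumption for exposition only.

```python
# Illustrative sketch: predict a student's grade in an upcoming course from
# prior performance, then flag low predictions for advisor outreach.
from sklearn.linear_model import LinearRegression

# Each row: [cumulative GPA, grade in prerequisite course] (assumed features).
X_train = [[3.2, 3.0], [2.1, 2.3], [3.8, 4.0], [2.9, 2.7]]
y_train = [3.1, 2.0, 3.9, 2.8]                 # grades earned in the course

model = LinearRegression().fit(X_train, y_train)
predicted = model.predict([[2.5, 2.0]])[0]

if predicted < 2.5:                            # assumed intervention threshold
    print(f"Predicted grade {predicted:.2f}: flag for timely intervention")
else:
    print(f"Predicted grade {predicted:.2f}: no flag")
```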

In the rest of the article, I present a conceptual framework, exemplified through a case study, that expands on the ideas presented so far and delineates specific dualities that work in tandem to allow an AI-driven technology to shape learning practices. Augmented sociomateriality is a core theoretical component of this framework. In the current context, I use 'framework' to refer to any structure that holds or brings things together, which can include theories, concepts, ideas, or viewpoints. A conceptual framework includes sets of concepts that can be derived from theories and also from personal experience or other empirical work that is not necessarily theory driven (Passey 2020).

Case study – Video-based monitoring of student assessment

In the context of AI-driven educational technology, Williamson and Eynon (2020) have recently argued that,

there is far less data from a critical perspective of what happens when these systems are used on a daily basis in varied educational contexts [and] [w]e know very little, for example, about how learners and teachers really use AI systems, and how AI is embedding (or not) into the everyday workings of schools, colleges and other sites of education and learning. (p. 231)

To explicate the framework I propose, I use an empirical case, drawing on a critical perspective from research into student assessment – in particular, the use of remote proctoring through video-based monitoring (VbM) for exams and assignments. The case is generated from personal experience, primary research, and secondary sources. Due to length limitations, I describe the case only briefly here. The VbM system works by monitoring students through a video camera as they take an exam. The camera captures each student's test environment, and the software application uses algorithms that detect student movement to ensure that students stay focused on the screen in front of them. Before starting their exam or test, students are usually required to show picture identification and a view of their surroundings, including their desk, so that it can be ascertained that they have no relevant material that could be used for cheating. A recording of each student's exam session is available for instructors, and videos deemed non-compliant are flagged for the instructor to review later. Videos can be flagged for a range of reasons, such as unnecessary movement by the test-taker, some other disturbance in the environment, or even slow bandwidth. In a nutshell, VbM is an AI-augmented solution for the kind of assessment that used to take place in person. The AI component includes a range of algorithms implemented within the system and the different data streams used as input. The AI aspect is also evident in the way the application is meant to mimic the human practices associated with monitoring exams in person. Although most VbM systems currently use off-the-shelf algorithms and machine learning components, there is a shift towards creating novel processes with advanced AI capabilities as data collection increases across institutions and as companies providing this infrastructure grow.

I selected this case because assessment is a core component of learning practices. It reflects students' recall of a topic and their ability to transfer their learning from one context or problem to another. Assessment also serves, over time, as a record of the overall teaching capabilities of an institution; this is especially true for institutions that are accredited. Assessment is also one area of learning technologies where many advances have been incorporated, including the use of ANI for a range of tasks. Exams can be designed to provide progressively more difficult questions based on responses, as in the sketch below. Another common example is natural language processing, used to evaluate student assignments for errors as well as for plagiarism. Finally, it is also evident from the literature that assessment is a complex practice, and it is hard to conduct assessments that are contextually valid and able to capture the nuances of learning.
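The sketch below illustrates the adaptive questioning just mentioned in its simplest form: a staircase rule that raises or lowers difficulty after each response. Operational systems typically rely on item response theory rather than a fixed step rule, so the levels and logic here are assumptions for illustration.

```python
# A minimal staircase for adaptive question difficulty (illustrative only).
def next_difficulty(current: int, answered_correctly: bool,
                    min_level: int = 1, max_level: int = 5) -> int:
    """Step difficulty up after a correct answer, down after an incorrect one."""
    if answered_correctly:
        return min(current + 1, max_level)
    return max(current - 1, min_level)

level = 3                                   # start at a middle difficulty
for correct in [True, True, False, True]:   # simulated response history
    level = next_difficulty(level, correct)
    print(f"serve a level-{level} question next")
```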

The duality of elements within AI, augmented sociomateriality, and learning practices

Before delving into the relationship between AI, sociomateriality, and learning practices, there is a central concept – duality – that needs to be defined and described. In essence, duality refers to two elements within a larger concept that sculpt each other. The notion comes from Wenger's work on communities of practice, which defines a duality as 'a single conceptual unit that is formed by two inseparable and mutually constitutive elements whose inherent tension and complementarity give the concept richness and dynamism' (Wenger 1999, p. 66). Wenger reiterates that a duality is not a dichotomy, such as 'tacit' versus 'explicit' knowledge or 'formal' versus 'informal' learning, but refers to two things that work in tandem. The construct of duality has been used by others to examine learning technologies. For instance, Barab, MaKinster and Scheckler (2003) use it to develop a system-level understanding of online learning communities and state that 'Although both sides of a duality are considered separate units, the effective functioning of one… necessitates and is dependent on the existence of the other' (p. 240).

I leverage the concept of duality to delineate core elements within each component of my conceptual framework – AI, augmented sociomateriality, and learning practices – such that it provides an analytical way to understand how AI shapes learning (see Figure 1). The use of dualities is critical here as it keeps the framework relatively simple by helping us focus on the core issues involved. Dualities also provide consistency across all three aspects of the overall conceptual framework. Next, I discuss each aspect and the duality associated with it in detail, exemplifying each with the VbM case study.

Figure 1. Dualities of AI that augment sociomateriality and shape learning practices.

AI and the duality of data and algorithms

The core building blocks of AI are data and algorithms. Analytically, AI shapes learning practices by augmenting sociomateriality through this duality of data and algorithms. Data constitute the raw material essential for AI; they are the input required for any action. The action itself is the result of processing data via some algorithm. Within machine learning-driven AI applications, these data also assist the algorithms in improving themselves.

Algorithms are instructions embedded in code that tell a system how to behave, especially when encountering some form of input or data. This behaviour can vary and take different shapes, but more often than not algorithms work within a limited range. For instance, if an algorithm is going to suggest a new resource or piece of content, it is classified as a recommendation algorithm and will work on models that have been designed for similar applications.
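As a concrete sketch of what such a recommendation model can look like, the snippet below ranks stored resources by cosine similarity to whatever the learner is currently viewing; the resource names and feature vectors are hypothetical.

```python
# Content-based recommendation via cosine similarity (hypothetical resources).
import numpy as np

resources = {
    "intro_video": np.array([1.0, 0.1, 0.0]),
    "practice_set": np.array([0.9, 0.2, 0.1]),
    "advanced_text": np.array([0.1, 0.9, 0.8]),
}

def recommend(current: np.ndarray) -> str:
    """Return the stored resource most similar in angle to the current one."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(resources, key=lambda name: cosine(resources[name], current))

print(recommend(np.array([1.0, 0.12, 0.02])))  # -> "intro_video"
```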

Creating new algorithms, or tweaking them for a novel system, requires tuning and the relevant data necessary to train them. The dependence of the algorithm on data, and vice versa, determines to a large extent the performance of the system and its ability to augment. Not every algorithmic outcome, however, is necessarily related to AI or ANI. There are many actions and outcomes that do not augment but simply provide information or take some other system action. The augmentation comes when the sociomaterial assemblage changes in specific ways, as discussed below.

Although algorithms tailored to specific domains are often created, off-the-shelf techniques are currently the norm, at least within educational and learning applications. In this context, data can range from any input to the system made by a student or teacher, to any kind of interaction a user has with a system, to stand-alone data repositories about users. The data can come from other sources such as video or audio recordings, conversations, attempts at problem-solving exercises, or reading or watching content. These data do not need to be processed or stored in advance; the use of dynamic data produced and analysed just-in-time is now possible due to advanced data storage and processing capabilities. Algorithms can act on these data immediately or, if needed, with a delay.

In the VbM case study, the data analysed by the system are largely video data, which are fed into proprietary algorithms whose output is provided to faculty. During the exam, students get feedback based on how the system perceives their behaviour. Any change in posture or position, for instance, creates new data, as does a change in lighting or any kind of movement in their space. Blinking, or not blinking – everything is data, and it all goes into the system. Typing or using the mouse creates additional data. In summary, a lot of data (quantity) and a lot of different kinds of data (quality) are being input into the system. The data enter the system quickly – fast enough, for instance, to capture any movement made by the learner – and the software can process them almost on the fly. The VbM system is able to process these data and identify aspects of student behaviour that are relayed to the instructor after the recording is complete, and often to the student while the test is still underway. This is achieved through a range of algorithms that run continuously. Overall, the duality of data and algorithms is essential to the functioning of VbM.
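To make the data/algorithm duality in VbM tangible, here is a minimal sketch of the kind of movement detection such a system might run over webcam frames, using frame differencing with the OpenCV library. Commercial proctoring algorithms are proprietary; the thresholds and logic here are assumptions, not any vendor's implementation.

```python
# Illustrative frame-differencing motion check over a webcam stream.
import cv2

MOTION_THRESHOLD = 40    # assumed per-pixel intensity-change threshold
FLAG_FRACTION = 0.02     # assumed: flag frame if >2% of pixels changed

def frame_flagged(prev_frame, curr_frame) -> bool:
    """Return True if the change between consecutive frames is 'too large'."""
    prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev, curr)                  # per-pixel change
    _, mask = cv2.threshold(diff, MOTION_THRESHOLD, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask) / mask.size > FLAG_FRACTION

cap = cv2.VideoCapture(0)                           # the webcam as data source
ok, prev = cap.read()
while ok:
    ok, curr = cap.read()
    if not ok:
        break
    if frame_flagged(prev, curr):
        print("movement detected: segment flagged for instructor review")
    prev = curr
cap.release()
```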

Augmented sociomateriality and duality of agency and representation

From a sociomaterial perspective, AI augments two salient aspects of the assemblage: representations and agency. Representational augmentation and agentic augmentation are a duality working in tandem. Representational augmentation refers to changes in the presentation of information related to people and their actions, or information about a piece of content. Agentic augmentation implies a shift in the locus of agency – who acts and makes decisions within the learning environment, who does what – including judgements about what is represented, when, and to whom. In a learning environment, these representations include the information we receive about others and about the content, as well as the outcomes of actions.

Augmentation of representations by AI changes their nature and thus the meaning a user makes with them. The data can be manipulated in different ways, even creating new forms of representation that were not previously available. Self-presentation and impression formation are key for sense-making, and both are transformed in a digital environment where AI is present. Although agentic decision-making can seem less transformed, AI changes this relationship because the machine or platform can act without any direct input from the learner or the teacher. This can happen not only in pre-programmed ways but also in an emergent manner. As a system increasingly acts autonomously through algorithms, any action changes the nature of the interaction that takes place within the assemblage. Based on the data and the algorithm, the output can change and even be initiated by the machine itself.

Within VbM, for instance, AI augments the sociomateriality of the assemblage whereby students are represented as virtual images or sequences of images. These are processed and run through a face recognition algorithm, and the resultant output is presented back to them and/or to the instructor. In terms of agency, the system modifies students' behaviour. If they move around too much, or blink, the system sends them a warning message alerting them that their behaviour will be reported. Simple acts such as drinking water from a cup or bottle become deviant behaviour according to the system. Students are in a sense 'controlled' by the technology – they have to follow what the system prescribes or they are branded as deviants. Their agency – even to ask permission – has been taken away and delegated to an algorithm.
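The identity-check step described above can be sketched with the open-source face_recognition library: the face on the stored ID photo is compared with the face in a webcam frame. The file names and tolerance are assumptions, and commercial VbM products use their own proprietary pipelines.

```python
# Illustrative identity verification against an ID photo (assumed file names).
import face_recognition

id_image = face_recognition.load_image_file("student_id.jpg")
frame = face_recognition.load_image_file("webcam_frame.jpg")

id_faces = face_recognition.face_encodings(id_image)
frame_faces = face_recognition.face_encodings(frame)

if not id_faces or not frame_faces:
    print("no face detected: session flagged")      # absence is also 'data'
else:
    match = face_recognition.compare_faces(
        [id_faces[0]], frame_faces[0], tolerance=0.6)[0]
    print("identity verified" if match else "face mismatch: session flagged")
```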

Learning and duality of relational and participatory practices

Representational and agentic augmentation change learning practices by reconfiguring the duality of relational and participatory practices within an assemblage. Relational practices are enactments of how participants, learners and teachers, relate to each other – their perceptions of others, who they feel has more expertise, who has more power, how much they trust each other, how meaning is negotiated – and other aspects of self and identity that are a precursor to learning. In conjunction, participatory practices are affordances that shape learners' self-regulation and autonomy, teachers' allotment of content and assessment of knowledge, collaborative expectations and activities, and other supports for how a learner acts within an assemblage. Regardless of which technology is used within learning practices, these two aspects change. For instance, digital representation – whether photos or comics depicting people – changes identification, whereas digital content changes access; the availability of content anytime, anyplace changes participation.

The relational and participatory duality is shaped by the representational and agentic duality. Representational augmentation shapes what people know of each other, an essential element of social context, and this over time changes how people participate. In particular, it shapes their range of interaction and, critically, whether their participation is full or peripheral. A long-term trajectory of full participation shapes their identity as learners and also provides a sense of belonging and engagement, which in turn is shaped by agentic augmentation: how much actual sense of control a learner can enact, and how much power the learner feels she has over the ability to imagine – to reflect and explore (Wenger 1999). AI changes learning by augmenting representations and agency within a sociomaterial assemblage, thereby changing relational and participatory learning practices.

In the VbM case study, relational and participatory practices evolve as learners' representations – video or image based – as well as their lack of agency shift the power they have over the situation, and even the power or agency the teacher has in this context. The teaching–learning environment is largely a context of power, where usually the more powerful expert provides information and/or guidance to the less knowledgeable learner. The shift in this relationship towards AI-driven decision-making changes learners' perception of the context and the manner in which they relate to the teacher as well as their institution. Rather than relying on an environment of trust formed in a face-to-face situation, an AI system is delegated the responsibility of ensuring a fair assessment.

Ethical and moral concerns related to AI-augmented sociomateriality of learning

Over half a century ago, Norbert Wiener, one of the founders of cybernetics, cautioned us to think carefully about the role of humans in the age of intelligent machines (Wiener 1954).

Whether we entrust our decisions to machines of metal, or to those machines of flesh and blood which are bureaus and vast laboratories and armies and corporations, we shall never receive the right answers to our questions unless we ask the right questions…the machine like the djinnee, which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us. The hour is very late, and the choice of good and evil knocks at our door. (Wiener 1954, pp. 185–186)

What are the right questions to ask about the role of AI in learning and what is the right approach for theoretical and empirical studies (Aiken & Epstein 2000)?

Before using VbM in my online classes, my teaching and learning practices were largely designed for face-to-face classes. Even when I first started to teach online, I continued to hold final exams in person. Students were required to come to campus at a designated day and time. I checked their IDs and monitored the exam to ensure there was no cheating. In-person administration also helped prevent the exam from ending up as online 'curriculum' on course-sharing websites, where material I developed for my courses, including weekly quizzes, often appeared without my permission. Although students sign an honour code at my institution, infringements are common, and the burden of proof is often too high to pursue anything but the most egregious violations. Therefore, designing for prevention is the best strategy.

Over time I realised that students needed flexibility in the exam schedule, and coming to campus was not a convenient or feasible option for many of them. The timing of the exam did not work for nontraditional students who had full-time jobs or childcare responsibilities. Some students were residing elsewhere, and travel was a cumbersome and expensive option. Therefore, I decided to change my assessment practice and, as a solution, my colleagues recommended a VbM system implemented within our learning management system that worked in tandem with a lockdown browser. The lockdown browser 'locks' the student's computer – literally takes over the machine – so that the only screen they can work on is the exam. Students are then required to turn on their video, show their institutional identity card and a view of their environment – the room where they are taking the exam – and then take the exam, all the while staying on camera. Faculty who had used the system told me that this technology reduced cheating drastically.

When I first used the VbM system, I was in awe of the technology and how well it worked. The integration with the existing LMS was seamless, it was easy to set up, and it provided the flexibility students needed. Even though I felt uncomfortable watching students in their private spaces, like a voyeur, I justified it as a strategy I had deployed for their convenience. But my attitude towards the system changed once I went over the recordings, especially the ones earmarked by the system as problematic. Almost every such instance was of a male student with darker skin colour. A student coughing, a poster in the background, or a lag due to lower bandwidth – simple deviations from some standard metric – also caused the algorithm to designate the exam as problematic. To me, these issues signalled a systematic problem with the algorithm and the data used to develop it. My search for information about the software proved fruitless – the company offered no transparency. Nor could I find information about what happens to the data, the limits on its use and reuse, and when, if ever, it is deleted.

As I investigated further, I found that students were equally concerned about how the software worked, their privacy, the potential long-term use of their data, and faculty's uncritical reliance on the software for making decisions about cheating. Students also reported feeling anxious and stressed because of the system (Flaherty 2020). There were vigorous discussions about the use of VbM, including specific commercial applications, on Reddit™ and other online forums. Some students had also posted potential ways of circumventing the system, pointing out that those who want to cheat, and have the intelligence to devise new solutions, will do so.

Many of the AI elements that generate these functions are 'invisible' to us; that is, they are hidden under layers of hardware and software and are not transparent to the user (teacher or learner). The core technologies driving them, however, are similar to features we use daily when we shop online, talk to voice-recognition applications such as Amazon's Alexa, watch movies on Netflix™, or navigate to a new place using Google Maps™. We might not know the nuances of how these devices work, but we know there is an algorithm that processes the data and that over time these things tend to work better for us; they are more personalised.

These concerns highlight one of the primary reasons why we need a deeper understanding of how AI is changing learning practices. We need to understand, at a fundamental level, how the characteristics of AI – data and algorithms – change the very nature of the things on which our understanding of learning is based. Without a transparent understanding of what is going on, it is hard to design for, or even respond to, what is taking place. In particular, the increasing use of AI is leading to surveillance not only through technologies such as VbM but also through other data and data sharing practices (Atteneder & Collini-Nocker 2020). The framework presented here also alerts us to unintended ways in which the data/algorithm duality can work to create stress and anxiety among students, by penalising them for even a small movement made during the exam. When agency, and thus power, shifts to a machine and decision-making is driven by algorithms – often ones that are opaque to us – the control we have over the actions we are required to take is reduced significantly. Thus, augmented sociomateriality disrupts relational practices, affecting learning practices.

Discussion and conclusion

In this article I have advanced augmented sociomateriality as a sociomaterial perspective for understanding how AI is shaping learning practices. The conceptual framework I offer rests on the idea of dualities and provides a mechanism to account for the unique aspects of AI that have the potential to shape learning. A focus on the augmented sociomateriality of learning practices provides a unique vantage point from which to advance socio-cognitive understanding: it incorporates both materiality and sociality without privileging either, and it accounts for the emergent characteristics of assemblages that are shaped by AI. I argue that AI specifically changes representations and agency within an assemblage, thereby, in the context of learning practices, changing relational and participatory practices. An augmented sociomaterial account is of course applicable to other, non-AI assemblages, but by making augmentation within learning practices salient, this perspective allows those interested in learning technologies to make the use of technology a useful focus of study without marginalising other aspects of the practice.

I used the case study of VbM from my own experience, but the framework is relevant to other AI-driven learning technologies. For instance, it alerts us to the limited learning gains that can come from more personalised tutoring when learners' lack of agency limits essential participatory practices such as autonomy and self-regulation. It also tells us that we have to examine any intervention in its larger context if we truly want to understand how technology shapes learning. One limitation of this case study is that VbM makes use of relatively traditional machine learning and AI capabilities and is not necessarily at the forefront of the technology; the use of neural networks or deep learning models in VbM systems is limited, if present at all. Yet it is a useful case because it shows that the novelty of a sociomaterial assemblage, and the shifts within it, can come about even from simple AI-based capabilities and do not require very sophisticated or novel techniques or algorithms.

In this case, by augmenting sociomateriality, humans' actions within the assemblage have changed – in many ways, humans have to act as AI directs them to – and this has shifted the overall practice. AI might have limited capabilities and might not have new or novel thoughts compared with humans, but the assemblage itself is shaped differently and acts uniquely. Students and instructors, for instance, relate to each other in a different manner. When the application produces a message informing a student that they are deviating from the exam norm, the machine is doing work that previously would have been within the purview of the instructor. This message is programmed, in the sense that the algorithm detects certain movements or shifts (maybe even changes in light patterns), but the detection is dynamic, in the sense that it works differently for different students (based on skin colour, movements, background, etc.) and changes over time with new data and modifications to the algorithm itself – that is, it learns. This shift of agency to the AI-driven application is the fundamental change in how augmented sociomaterial assemblages work. The application might be designed and programmed by a human, but it works in its own way.

Finally, and maybe most crucially, this framework alerts us to the shifts that a lack of transparency and the introduction of bias can bring to the relational and participatory aspects of learning practices. By changing the overall context of learning, for instance through surveillance, we are changing how people learn and how much they trust what they are learning.

Acknowledgements

This work was partially funded by U.S. NSF Awards #1937950 and #1939105, and USDA/NIFA Award #2021-67021-35329. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the funding agencies. All material used here is for educational purposes only.

References

Aiken, R. & Epstein, R. (2000) ‘Ethical guidelines for AI in education: starting a conversation’, International Journal of Artificial Intelligence in Education, vol. 11, pp. 163–176.

Atteneder, H. & Collini-Nocker, B. (2020) ‘Under control: audio/video conferencing systems feed “Surveillance Capitalism” with students’ data’, 13th CMI Conference on Cybersecurity and Privacy (CMI), Copenhagen, Denmark, pp. 1–7.

Barab, S., MaKinster, J. & Scheckler, R. (2003) ‘Designing system dualities: characterizing a web-supported professional development community’, The Information Society, vol. 19, no. 3, pp. 237–256. doi: 10.1080/01972240309466

Bennett, S. & Oliver, M. (2011) ‘Talking back to theory: the missed opportunities in learning technology research’, Research in Learning Technology, vol. 19, no. 3, pp. 179–189. doi: 10.3402/rlt.v19i3.17108

Blackler, F. (1995) ‘Knowledge, knowledge work, and organizations: an overview and interpretation’, Organization Studies, vol. 16, pp. 1021–1046. doi: 10.1177/017084069501600605

Dhawan, S. (2020) ‘Online learning: a panacea in the time of COVID-19 crisis’, Journal of Educational Technology Systems, vol. 49, no. 1, pp. 5–22. doi: 10.1177/0047239520934018

Engelbart, D. (1962) Augmenting Human Intellect: A Conceptual Framework. Stanford Research Institute (SRI), Menlo Park, CA.

Fenwick, T. & Landri, P. (2012) ‘Materialities, textures and pedagogies: socio-material assemblages in education’, Pedagogy, Culture & Society, vol. 20, no. 1, pp. 1–7. doi: 10.1080/14681366.2012.649421

Flaherty, C. (2020) ‘Big proctor: is the fight against cheating during remote instruction worth enlisting third-party student surveillance platforms?’, Inside Higher Ed, 11 May. https://www.insidehighered.com/news/2020/05/11/online-proctoring-surging-during-covid-19

Greeno, J. (2006) ‘Learning in activity’, in The Cambridge Handbook of the Learning Sciences, ed K. Sawyer, Cambridge University Press, New York, NY, pp. 79–96.

Hall, R. (2011) ‘Revealing the transformatory moment of learning technology: the place of critical social theory’, Research in Learning Technology, vol. 19, no. 3, pp. 273–284. doi: 10.3402/rlt.v19i3.17115

Hancock, J. T., Naaman, M. & Levy, K. (2020) ‘AI-mediated communication: definition, research agenda, and ethical considerations’, Journal of Computer-Mediated Communication, vol. 25, no. 1, pp. 89–100. doi: 10.1093/jcmc/zmz022

Hutchins, E. (1999) ‘Cognitive artifacts’, in The MIT Encyclopedia of the Cognitive Sciences, eds. Robert A. Wilson & Frank C. Keil, MIT Press, Cambridge, MA, pp. 126–128.

Johri, A. (2011) ‘The socio-materiality of learning practices and implications for the field of learning technology’, Research in Learning Technology, vol. 19, no. 3, pp. 207–217. doi: 10.3402/rlt.v19i3.17110

Johri, A. & Olds, B. (2011) 'Situated engineering learning: bridging engineering education research and the learning sciences', Journal of Engineering Education, vol. 100, no. 1, pp. 151–185.

Johri, A. (2012) 'Learning to demo: the sociomateriality of newcomer participation in engineering research practices', Engineering Studies, vol. 4, no. 3, pp. 249–269.

Johri, A., Olds, B. M. & O'Connor, K. (2014) 'Situative frameworks for engineering learning research', in The Cambridge Handbook of Engineering Education Research, eds A. Johri & B. M. Olds, Cambridge University Press, New York, NY, pp. 47–66.

Johri, A. (2020) 'Artificial intelligence and engineering education', Journal of Engineering Education, no. 3, pp. 358–361.

Jones, C. R. & Czerniewicz, L. (2011) ‘Theory in learning technology’, Research in Learning Technology, vol. 19, no. 3, pp. 173–177. doi: 10.3402/rlt.v19i3.17107

Latour, B. (2005) Reassembling the Social: An Introduction to Actor-Network-Theory, Oxford University Press, Oxford.

Leonardi, P. (2011) ‘When flexible routines meet flexible technologies: affordance, constraint, and the imbrication of human and material agencies’, MIS Quarterly, vol. 35, no. 1, pp. 147–167. doi: 10.2307/23043493

Norman, D. A. (1991) 'Cognitive artifacts', in Designing Interaction: Psychology at the Human–Computer Interface, ed J. M. Carroll, Cambridge University Press, New York, NY, pp. 17–38.

Ong, W. J. (2002) Orality and Literacy: The Technologizing of the Word, Routledge, New York.

Orlikowski, W. (2000) 'Using technology and constituting structures: a practice lens for studying technology in organizations', Organization Science, vol. 11, no. 4, pp. 404–428. doi: 10.1287/orsc.11.4.404.14600

Orlikowski, W. (2002) 'Knowing in practice: enacting a collective capability in distributed organizing', Organization Science, vol. 13, no. 3, pp. 249–273. doi: 10.1287/orsc.13.3.249.2776

Orlikowski, W. J. & Scott, S. V. (2008) 'Sociomateriality: challenging the separation of technology, work and organization', The Academy of Management Annals, vol. 2, no. 1, pp. 433–474. doi: 10.5465/19416520802211644

Passey, D. (2020) ‘Theories, theoretical and conceptual frameworks, models and constructs: limiting research outcomes through misconceptions and misunderstandings’, Studies in Technology Enhanced Learning, vol. 1, no. 1, pp. 95–114. doi: 10.21428/8c225f6e.56810a1a

Pea, R. D. (1985) ‘Beyond amplification: using the computer to reorganize mental functioning’, Educational Psychologist, vol. 20, no. 4, pp. 167–182. doi: 10.1207/s15326985ep2004_2

Suchman, L. A. (2007) Human–Machine Reconfigurations: Plans and Situated Actions, Cambridge University Press, Cambridge, UK.

Sweeney, M., Rangwala, H., Lester, J. & Johri, A. (2016) 'Next-term student performance prediction: a recommender systems approach', Journal of Educational Data Mining, vol. 8, no. 1, pp. 22–51.

Sørensen, E. (2008) The Materiality of Learning, Cambridge University Press, Cambridge, UK.

Tyre, M. & von Hippel, E. (1997) ‘The situated nature of adaptive learning in organizations’, Organization Science, vol. 8, pp. 71–84. doi: 10.1287/orsc.8.1.71

Wenger, E. (1999) Communities of Practice: Learning, Meaning, and Identity, Cambridge University Press, New York, NY.

Wiener, N. (1954) The Human Use of Human Beings: Cybernetics and Society, Houghton Mifflin, Boston, MA.

Williamson, B. & Eynon, R. (2020) 'Historical threads, missing links, and future directions in AI in education', Learning, Media and Technology, vol. 45, no. 3, pp. 223–235. doi: 10.1080/17439884.2020.1798995

Zhang, K. & Aslan, A. B. (2021) 'AI technologies for education: recent research & future directions', Computers and Education: Artificial Intelligence, vol. 2, 100025, pp. 1–11.