ORIGINAL RESEARCH ARTICLE

Migration and transformation: a sociomaterial analysis of practitioners’ experiences with online exams

Stuart Allan*

Director of Online Learning, Edinburgh Business School, Heriot-Watt University, Edinburgh, Scotland

(Received: 6 June 2019; Revised: 19 October 2019; Accepted: 30 November 2019; Published: 27 January 2020)

Abstract

Many institutions are making the move from pen and paper to online examinations, but the literature offers relatively few critical reflections on the ramifications of such a shift. This research presents evidence of the ways in which the social and human practices of online exams are deeply entangled with the material and technological, and cautions against the reinscribing of essentialist or instrumentalist assumptions about technology in assessment practices. Through semi-structured interviews with eight practitioners in Norway, the Netherlands, the UK and Ireland, it analyses the impact, dimensions and limitations of two main discourses: migration, whereby exam technologies are assumed to be neutral instruments used independently by humans to realise their preordained intentions; and transformation, whereby the essential and inalienable qualities of technologies can be released to ‘transform’ or ‘enhance’ assessment. Its findings indicate that: (1) exam technologies are neither inherently neutral nor essentially transformational; (2) implementation projects underpinned by the migration discourse can be much more complex and resource-intensive than anticipated; and (3) ‘transformative’ change may be value-laden and driven by assumptions. Given the complex and entangled nature of online exams, practitioners are encouraged to think creatively about how assessment strategies align with educational goals, to consider the limitations of current discourses and to analyse critically the relational and performative roles of digital technologies.

Keywords: assessment; digital exams; online assessment; online exams; sociomateriality

*Corresponding author. Email: s.c.allan@hw.ac.uk

Research in Learning Technology 2020. © 2020 S. Allan. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2020, 28: 2279 - http://dx.doi.org/10.25304/rlt.v28.2279

Introduction

The emergence and adoption of digital technologies presents educators with new opportunities to think creatively about assessment and to increase its alignment with long-term educational goals (Boud and Soler 2016; Hepplestone et al. 2011; JISC 2018; O’Shea and Fawns 2014). Given the range of skills and capabilities that could feasibly be demonstrated in digital contexts – including new modes of research, meaning-making and collaboration – the apparent popularity of online exams (Ferrell 2014) may be viewed with surprise by those who have long critiqued the shortcomings of the examination (e.g. Gibbs and Simpson 2004). While some online exams are computer-based but otherwise largely traditional (i.e. physically invigilated, closed-book), others have attempted to broaden the scope by, for example, pre-releasing a case study or research task, allowing candidates to sit the exam at the time or place of their choosing, or facilitating access to online resources (Khare and Lam 2008; Myyry and Joutsenvirta 2015; Williams and Wong 2009).

Although the literature offers many insights on the intricate relationship between assessment and learning (Boud and Soler 2016; Carless 2007; Rust 2007), in digital contexts assessment practice is sometimes critiqued for adopting a relatively uncritical stance. Ferrell (2014), for example, voiced concern that digital technologies are too often ‘bolted on’ (p. 15) to existing assessment processes, with little consideration of underpinning educational values or goals. Hillier and Fluck (2015), meanwhile, decried many online exam technologies as ‘armoured word processors’ or ‘glorified multiple-choice quiz tools’ (p. 463), conceived of mainly as a means of extending the scale of education while mitigating the risk of students cheating.

Based on interviews with higher-education staff, I will argue that prominent discourses around practice with online exams embed particular assumptions about technologies and the educational purposes of assessment. Further, I will argue that these assumptions, if left unchallenged, could lead to practice that reinscribes the limitations of existing assessment paradigms or fails to meet institutions’ ambitions. I will critically analyse the impact of the entanglement of social and material, human and non-human in practitioners’ experiences with online exams, and will conclude by proposing future directions for research and practice.

Migration and transformation

Ripley (2009) outlined two main motivations among institutions adopting online exams. In the first approach, ‘migration’, technology is regarded primarily as an instrument: a vehicle to ‘move traditional paper-based tests to screen versions’ in order to generate ‘administrative gains and service improvements’ (p. 92). In the second approach, ‘transformation’, technology is positioned as an essentially disruptive, almost revolutionary force: transformation ‘sets out to redefine assessment and testing approaches in order to lead educational change’ (p. 92). ‘Migration’ and ‘transformation’ are understood here not only as key terms but also as discourses: discursive practices that constrain and enable what can be said and that define ‘what counts as meaningful statements’ (Barad 2003, p. 819). Within the migration discourse, technologies are conceptualised instrumentally: they are regarded as ‘neutral means employed for ends determined independently by their users’ (Hamilton and Friesen 2013, p. 3), with their role being ‘to enhance pre-existing personal and societal educational objectives’ (Bayne 2015, p. 9). In contrast, in the transformation discourse, technologies are positioned in a largely essentialist way: ‘independent forces for the realisation of pedagogical aims that are intrinsic to them prior to any actual use’ (Hamilton and Friesen 2013, p. 3), meaning that ‘“learning” can be transformed by the immanent pedagogical value of certain technologies simply by allowing itself to be open to them’ (Bayne 2015, p. 9).

Crucially, both the essentialist and the instrumentalist positions assume the social (human) and material (non-human and technological) to be discrete and separate entities. I will challenge this assumption in relation to online exams, drawing on sociomaterial theory.

Theoretical context and research questions

Sociomaterial theory problematises the idea that the material/technological is separate from, or subordinate to, the social/human. Instead, humans and technologies are seen to be interwoven and co-constitutive in complex, immanent assemblages (Fenwick, Edwards, and Sawchuk 2011; Orlikowski 2010). Meaning emerges dynamically in and through these assemblages, in which all elements (including technologies) take on a relational role: ‘material things are performative and not inert; they are matter and they matter. They act together with other types of things and forces to exclude, invite and regulate particular forms of participation in enactments’ (Fenwick, Edwards, and Sawchuk 2011, p. 4).

This research seeks to answer the following questions:

Literature review

The literature on online exams in higher education reflects a preoccupation with how this form of assessment can be administered most efficiently and securely. There is an emphasis on operational or technological concerns such as how to prevent cheating and maintain academic integrity (D’Souza and Siegfeldt 2017; Hylton, Levy, and Dringus 2016; Milone et al. 2017; Ullah, Xiao, and Barker 2019) and how to operate exam software most effectively (Karim and Shukur 2016). There is also some concern with how online exams are perceived by students: for instance, Berggren, Fili and Nordberg (2015) reported students’ apparent enthusiasm for typing as opposed to writing exam responses, while others (Cramp et al. 2019; James 2016; Karaman 2011) have reported anxiety among students regarding the use of digital technologies in high-stakes exams.

While the literature offers insights on both students’ and teachers’ experiences of online assessment more generally (Boud and Soler 2016; Carless 2007; Crisp, Guàrdia, and Hillier 2016; O’Shea and Fawns 2014), there are relatively few studies that specifically analyse the implications of online exams for the university staff who design or deliver them. Among those that can be found, some traces of the migration discourse can be detected: Schmidt, Ralph and Buskirk (2009), for example, referred to digital technologies as tools that can be used to ‘convert’ existing examination practices (p. 1), while Escudier et al. (2011) focused on possible efficiency gains while ‘maintaining the reliability and robustness of traditional methods’ (p. 441).

Of the relatively few published studies that provide detailed analysis of the educational implications of online exams in higher education, many highlight the potential benefits of using innovative question types. Williams and Wong (2009), for example, claimed that setting up wide-ranging, largely unscaffolded questions for students ‘fosters understanding of learning processes in terms of real-life performance as opposed to a display of inert knowledge … [and provides] an effective bridge between a learner’s education and the social context of their professional practice’ (p. 229). Others have identified the potential for specifically digital (again largely unstructured, often collaborative) question types to increase constructive alignment and assessment authenticity (Khare and Lam 2008; Myyry and Joutsenvirta 2015; Newhouse 2011) and facilitate peer-to-peer feedback discussions (Karaman 2011). However, Cramp et al. (2019) warned that in the absence of thoughtful question design underpinned by detailed engagement with the affordances of digital technologies, online exams might have a negative impact on students’ performance by increasing cognitive load.

This article extends the current literature by: providing a new theoretical perspective on practice with online exams by analysing it through a sociomaterial lens; gathering data on practices across multiple contexts via in-depth semi-structured interviews; and focusing its analysis on educational rather than administrative dimensions of practice. While not claiming to produce generalisable findings due to its small scale, this article identifies significant themes that can help to guide future practice and research in this area.

Methods

I recruited interviewees using a mixture of purposeful and snowball sampling. At first, through contacts from my professional network, I purposefully recruited six academics and practitioners from the Netherlands, Norway, Ireland and the UK on the basis that they had recent and direct experience of designing, coordinating, supporting and/or grading summative online exams. My intention was to seek out participants whose experiences were well aligned with the research questions and could offer rich information about practices ‘on the ground’. In order to yield further insights, I then asked the initial interviewees to suggest other potential participants. This ‘snowball’ stage led to the recruitment of a further two participants, both from the Netherlands. While this meant that half of the final sample (four out of eight interviewees) was located in the Netherlands, the Dutch participants had experience across a range of educational contexts, and the initial purposeful sample provided insights across several different countries. After the final interview, I felt that the main themes raised by participants were starting to overlap and that any improvements yielded by further interviews would be marginal. Therefore, I concluded that a sufficiently rich set of data had been collected to address the research questions (Tracy 2013).

Participants were either academic or technical leads on online exam projects across six organisations, with most working for universities (from a small university college in Norway to a large public university in the UK). Table 1 provides an overview of the research participants and the practices they described.

Table 1. Research participants (identified by pseudonyms).

Participant name(s) | Institution | Main role in online exams | Practices described
Maria and John | Large vocational university, the Netherlands | Joint technical leads | Large-scale, physically invigilated, closed-book online exams
Anna | Large modern university, Ireland | Academic lead | Small-scale, closed-book online exams, mainly in the health sciences; both physical invigilation and online proctoring
Julia | National organisation for higher education, the Netherlands | Project lead | Overview of practice across several Dutch institutions
Lucas | Small state university college, Norway | Technical lead | Various, ranging from invigilated, closed-book online exams to collaborative home research papers
Martin and Sandra | Large public university, UK | Joint technical leads | Large-scale, physically invigilated, closed-book online exams
Kim | Large university teaching hospital, the Netherlands | Academic lead | Assessment of student radiologists via the interpretation of images from volumetric scans

I conducted semi-structured interviews via videoconferencing between February and April 2016, with each interview lasting 60–90 minutes. I prepared a short list of broad questions to gather participants’ reflections on, and future aspirations for, online exams; I then investigated emergent themes via follow-up questioning (Tracy 2013). Participants were sent full transcripts for feedback and approval before detailed analysis began. Participants’ names and any details that might disclose their identity (including the names of institutions and specific projects) were removed prior to coding and analysis. All participants were identified using pseudonyms throughout.

Using a grounded theory methodology, I performed initial and focused coding before checking and integrating theoretical categories, interrogating emergent themes via an iterative analytical process (Glaser and Strauss 1967).

This research project was approved in line with the Moray House School of Education (University of Edinburgh) ethics processes. All participants provided informed consent.

Results

As they reflected on their experiences, participants outlined some of the contours, dimensions and limitations of the migration and transformation discourses. Starting with migration, there was some evidence of instrumentality in participants’ accounts of their motivations for pursuing online exams:

From the academic staff perspective, marking time; so much could be saved … And then also the student experience side of things, there was a drive to provide a quick turnaround on marks and feedback. (Sandra)

I think an online exam works best when the technology is almost not thought of at all … the technology is almost in the background for them. It just works. (Martin)

Similarly, Lucas claimed that in his experience, Norwegian students, some of whom were lobbying universities to offer online exams, often viewed exam technologies as instruments by which to increase ease of use:

My impression of the student movement was they just wanted to get this a little bit digitalised, to plug in the old examination model, to make it a little bit easier [to use]. I didn’t see so much innovative thinking in the student movement, where they really wanted to look at [the] use of [the] internet and applying knowledge. (Lucas, emphasis added)

Julia added that the transition to online exams in her context was motivated both by potential efficiency gains and by a more essentialist belief that technologies could enhance learning:

If you take an exam for 500 students in a digital way it's much cheaper than when you do it with a paper exam – but on the other hand there is also a growing awareness about … the possibilities of making use of digital forms of assessment to enhance learning. (Julia)

Likewise, Maria described how the use of analytics in online exams had improved the quality of exam questions:

Afterwards, every test is analysed and the questions that are bad will be thrown away … so the quality has improved a lot […] But when the questions are done on paper, we don’t have data on it. … Those [paper-based] exams were a lot worse than they are now because I’ve seen them. (Maria)

Some participants advanced the argument that the migration of traditional exams to digital contexts was a logical and pragmatic first step towards more wide-ranging changes at some point in the future. For example, Maria described migration online as a forerunner for more authentic summative assessment in the future:

It’s a process of growth, and you can’t do everything at the same time … I think in about five years the exams with multiple-choice questions … will become formative and we will need to test in [summative] exams only the ability to do professional things. (Maria)

However, Lucas saw such an approach as inherently flawed. Because resources were being absorbed by solving the technical and administrative problems of migration projects, he argued, some institutions were left with little time, money or energy to use digital technologies more creatively:

Lucas: The problems that arise and the requirements in terms of technical resources, personnel etc. are so great that there’s an enormous amount of time and energy and money that will have to go into that problem, and that aspect of assessment is what’s using up 95% of our resources now.

Interviewer: Ok. And do you think that’s a necessary step to take?

Lucas: No, I don’t. I see it as what’s happening, but it’s not a necessary step.

Lucas anticipated that for as long as universities continued to pursue an approach driven by the migration discourse, creative practice with summative assessment would continue to be a secondary concern:

When it comes to innovative exams, that’s going much more slowly [than the move from handwritten to typed exams]. I can imagine that within 10 years we’re completely upscaled in terms of using computers to take exams, but still we’re at 50% old-style exams and maybe by the end of this century or maybe by the end of this millennium we’ll finally be approaching really creative use of exams. (Lucas)

While adopting instrumentalist and essentialist positions, respectively, the migration and the transformation discourses are underpinned by a common assumption: a division between the social (human) and the material (technological). For Bayne (2015), this separation has a reductive effect on the scholarship of digital education, arguing that it ‘robs the field of its complexity and richness, reducing our capacity to understand it as a domain of genuine social significance’ (pp. 9−10). In this research, participants made some observations that pointed to a more entangled relationship between humans and technologies. For example, Anna said that the extent to which an online exam was appropriate in her context depended as much on the students’ confidence with, and prior experience of, digital technologies as it did on the technology itself:

I think it’s really important to map up the individual students themselves and where they’re coming from to the actual assessment that you decide to work with. … we definitely have shades of grey in there. I mean it’s not a one-size-fits-all, for sure. (Anna)

Meanwhile, Kim described how digital technologies were deeply intertwined with practice, education and assessment within her discipline, radiology:

At the beginning of the 21st century, the whole of clinical practice gradually changed from analogue viewing of images to digital viewing. And that's why also the task just changed, and so I think then you have to change the education as well as the testing. (Kim)

Others identified the potentially constraining influence of wider socioeconomic factors. Describing his home country (Norway) as ‘a stinking rich country’, Lucas argued that its relative wealth was having a negative impact on creativity and was reinforcing practices consistent with the migration discourse:

People have so much that they really aren’t forced to be creative […] We’re giving lip service to creative digital assessment but largely [we’re] trying to, as quickly as possible, take this old style of exams over to a digital style. (Lucas)

Some participants were particularly interested in the idea of universities collaborating on large question banks in order to create individually randomised question papers. Julia, for example, saw this as potentially increasing the validity and academic rigour of online exams across multiple institutions in the Netherlands. However, she indicated that a combination of human, financial and technological factors was constraining this collaboration, including incompatibility between platforms, lack of funding and lack of adequately trained staff. Maria voiced similar concerns, while also highlighting the impact of technology vendors’ resistance towards cross-platform collaboration:

It’s a technical thing. It’s ‘how you get the questions you make in system one, how do you get them in system two?’ [There is] a standard way to do that, but it doesn’t always work and … it’s also a bit cultural because the supplier … [doesn’t] want you to get the questions out; they want you to stay in [their] system. (Maria)

Meanwhile, Lucas highlighted the dialogical relationship between technologies and assessment practices and argued that truly innovative practice with online exams would problematise the term itself:

For me the notion of ‘online exams’, I’m not sure that even describes what I would like to do in terms of assessment […] If our assessment structure could reflect the use of co-operative activity between the students and somehow assessing the product of their co-operation to a much greater extent … then that’s where I’d land. And this has a digital and an online aspect, but that's not the primary aspect. It’s not that it's digital, it's just that it allows for the use of digital resources when appropriate. It’s still the human resources that are vital. (Lucas)

Likewise, Sandra perceived a need to shift towards more authentic assessment, which appeared to trouble aspects of traditional examination processes:

In the real world, when students graduate they’re going to go out and have access to all of this information and we should be setting assessments that are curating that information rather than saying ‘you need to know this stuff without being able to go and access the information’. (Sandra)

Discussion

Although the migration and transformation discourses position the use of technology in different ways (instrumentalism and essentialism, respectively), both are underpinned by the same assumption: that the technological and the human are discrete and separate. There are examples of both perspectives here, such as Martin positioning technology instrumentally (‘in the background … it just works’) and Julia describing technologies’ essential ‘possibilities’ in terms of ‘enhanc[ing] learning’. The strict separation of human and non-human that is embedded in migration and transformation discourses is arguably over-represented in the educational literature and elides significant complexities (Bayne 2015; Hamilton and Friesen 2013; Orlikowski 2010).

In contrast to instrumentalist and essentialist perspectives, this research reveals some of the ways in which technologies and humans are ‘entangled in cultural, material, political and economic assemblages of great complexity’ (Bayne 2015, p. 18) as well as some of the implications of technological entailments for organisational culture (Orlikowski and Scott 2008). Maria and John, for example, identified multiple (technological, social and cultural) dimensions to changing assessment practices.

In the migration discourse, the process of transplanting traditional exams to digital environments is positioned as what Marshall (2010) calls a ‘sustaining’ change; that is, one that ‘improve[s] the function of the organisation in ways that are consistent with previous activities’ (p. 180). Decision-makers may be convinced that migration is a first step towards transforming assessment practices over the longer term. However, to borrow Anna’s phrase, ‘it’s not a one-size-fits-all’: in reality, a two-step process appears highly challenging. Migration may require the resolution of a range of expensive and labour-intensive technological, pedagogical and human dilemmas, meaning that anticipated short-term benefits fail to materialise. Over the longer term, the compromises and resources required to get through the migration process may be so great that more wide-reaching change is shelved indefinitely – as Lucas says, such issues can absorb ‘95% of our resources … [but] it’s not a necessary step’.

Meanwhile, the essentialism of the transformation discourse should also be approached with some caution. Hamilton and Friesen (2013) critique educational research that is ‘framed by assertions of the inevitable and pervasive changes’ (p. 3) resulting from technologies, while O’Keeffe (2016) argues that the analysis of data from online exams is often framed in administrative terms, is highly value-laden and ‘promote[s] a very specific and normative vision of how sociomaterial relations in the world should be configured’ (p. 101). Johnston, MacNeill and Smyth (2018) go further, critiquing ‘the myth of digital transformation’ (p. 63) and drawing attention to the ways in which a largely transactional transformation narrative builds on pre-existing practices (including pedagogies) without challenging them or proposing alternatives. We should consider what is left unchallenged by the transformation and migration discourses, as exemplified by Lucas’s description of paying ‘lip service’ to creativity while migrating exams across to ‘a digital style’.

The discursive influence of terminology also deserves critical consideration. A range of terms is used in the literature to describe large-scale assessment events such as these, including ‘digital exams’, ‘e-assessment’ and ‘computer-based testing’. ‘Online exams’ seems to be the most widely used, although the term itself is rarely defined. Terminology can reveal stakeholders’ interests and represent an attempt to define the criteria by which practices and technologies are judged (Gillespie 2010). Therefore, we might ask ourselves: does the way we describe examinations in digital contexts embed instrumentalist and essentialist ideas about technology while tacitly reinscribing pre-existing assumptions about the educational purposes of assessment? Kim urges educators ‘to change the education as well as the testing’, but does the bolting on of a digital-era prefix (whether ‘online’, ‘digital’, ‘computer-based’ or something else) to a pre-digital stem (‘exam’) act to legitimise discourses that have significant limitations under the veneer of practices being, to quote Lucas, ‘a little bit digitalised’? Moreover, if institutions wish to organise assessment events that are high-stakes and take place under secure conditions, yet align with complex educational aims, then is a new term required? If so, my suggestion would be ‘online assessment under exam conditions’. ‘Exam conditions’ would respect operational concern with scale and security, while ‘online assessment’ could loosen ties between assessment design and delivery mode, thereby creating a space for educators to push creatively at the boundaries of traditional exams (e.g. through the prior release of materials, communicating via modes other than text or setting collaborative tasks).

The participants in this research offer a more complex picture of practices with online exams than the migration or transformation discourses would suggest. They provide evidence that materiality can take a performative role in facilitating, altering or even proscribing assessment practices. At the same time, social dimensions are enacted with and through technologies, resulting in what Hannon (2013) terms ‘significant unintended consequences’ (p. 168). These findings provide evidence that the material realities of online exam technologies are intertwined with the human and the social; as such, they problematise the instrumentalist and essentialist assumptions that underpin the migration and transformation discourses.

Conclusions

While small in scale, this research provides a detailed analysis of practitioners’ experiences with online exams across multiple national and institutional contexts. These findings surface and problematise both the migration and the transformation discourses: they illustrate some of the ways in which technology is not neutral and show how essentialism elides potentially significant human and cultural dimensions.

This research indicates that the challenges being negotiated in the use of online exams run much deeper than concerns with exam administration and security; to this end, I have argued that the assumptions underpinning practice should be surfaced and analysed critically. Building online exams onto unquestioned ideological foundations (such as prior assumptions about the role of technology, the educational outcomes that can or should be assessed, and even the language used to describe these events) risks reinscribing long-standing assumptions that may in fact work contrary to many of the professed aims of contemporary higher education (e.g. the development of skills and dispositions whose benefits extend beyond graduation; Boud and Soler 2016; Johnston, MacNeill, and Smyth 2018). Meanwhile, there are examples of university staff struggling to reconcile educational goals with the materiality of particular systems, and of the social and human practices of online exams being enmeshed with, constituted by, and in dialogue with material and non-human dimensions.

At a pragmatic level, any expectation of a two-step change process – whereby profound educational change follows on from the (apparently) more prosaic matter of migrating traditional exams online – appears to unravel as universities attempt to solve sometimes complex and unexpected problems. In the absence of critical reflection, the implementation of online exams may incur significant short-term demands that potentially jeopardise anticipated long-term developments. Meanwhile, those who would take a transformative approach must subject essentialist claims about technologies’ inherent capabilities to critical scrutiny and consider the ways in which technologies are interwoven with practices. As such, this study endorses Hillier and Fluck’s (2015) suggestion that skilful handling of ‘embedded cultured attitudes’ (p. 465) is a key requirement of future developments in online exams.

Practitioners are encouraged to ask the following questions when considering current or future practice with online exams:

To enhance the existing literature, future research on online exams could seek to stimulate debate; challenge assumptions about the roles of digital technologies; and articulate multiple, bold visions for the future. For example, future studies could:

As an alternative to migration and transformation discourses, critical engagement with technology and the complexities of institutional contexts is required in order to drive a productive dialogue around practice (Orlikowski 2010). The analysis of online exams through the theoretical lens of sociomateriality can help practitioners to challenge instrumentalist and essentialist discourses and to ask new questions about what types of assessment could or should be offered securely and at scale. In resisting these discourses, educators can strive to cultivate ‘a potentially fruitful dialogue between pedagogical values, educational philosophy and technological design’ (Hamilton and Friesen 2013, p. 14) based on a shared understanding of exam technologies as being deeply entangled with, and co-constitutive of, educational practices.

Acknowledgements

This research was originally conducted for a master’s dissertation in Digital Education at the University of Edinburgh. I thank my supervisor there, Dr Jen Ross, for her feedback and guidance during the research and in the preparation of this article. My thanks also go to the anonymous reviewers for their constructive comments and to Gill Ferrell and Martha Gibson for their help in recruiting participants.

Competing interests

The author declares no conflict of interest with any organisation regarding the material discussed in this article.

References

Barad, K. (2003) ‘Posthumanist performativity: toward an understanding of how matter comes to matter’, Signs, vol. 28, pp. 801–831. doi: 10.1086/345321

Bayne, S. (2015) ‘What’s the matter with “technology-enhanced learning”?’, Learning, Media and Technology, vol. 40, pp. 5–20. doi: 10.1080/17439884.2014.915851

Berggren, B., Fili, A. & Nordberg, O. (2015) ‘Digital examination in higher education – experiences from three different perspectives’, International Journal of Education and Development Using Information and Communication Technology, vol. 11, pp. 100–108.

Boud, D. & Soler, R. (2016) ‘Sustainable assessment revisited’, Assessment and Evaluation in Higher Education, vol. 41, pp. 400–413. doi: 10.1080/02602938.2015.1018133

Carless, D. (2007) ‘Learning-oriented assessment: conceptual bases and practical implications’, Innovations in Education and Teaching International, vol. 44, pp. 57–66. doi: 10.1080/14703290601081332

Cramp, J., et al., (2019) ‘Lessons learned from implementing remotely invigilated online exams’, Journal of University Teaching and Learning Practice, vol. 16, no. 1., [online] Available at: https://ro.uow.edu.au/jutlp/vol16/iss1/10

Crisp, G., Guàrdia, L. & Hillier, M. (2016) ‘Using e-assessment to enhance student learning and evidence learning outcomes’, International Journal of Educational Technology in Higher Education, vol. 13. doi: 10.1186/s41239-016-0020-3

D’Souza, K. A. & Siegfeldt, D. V. (2017) ‘A conceptual framework for detecting cheating in online and take-home exams’, Decision Sciences: Journal of Innovative Education, vol. 15, pp. 370–391. doi: 10.1111/dsji.12140

Escudier, M. P., et al., (2011) ‘University students’ attainment and perceptions of computer-based and traditional tests in a high-stakes examination’, Journal of Computer Assisted Learning, vol. 27, pp. 440–447. doi: 10.1111/j.1365-2729.2011.00409.x

Fenwick, T., Edwards, R. & Sawchuk, P. (2011) Emerging Approaches to Educational Research: Tracing the Sociomaterial. Routledge, London.

Ferrell, G. (2014) Electronic Management of Assessment (EMA): A Landscape Review, JISC, Bristol, [online] Available at: http://www.eunis.org/wp-content/uploads/2015/05/EMA_REPORT.pdf

Gibbs, G. & Simpson, C. (2004) ‘Conditions under which assessment supports students’ learning’, Learning and Teaching in Higher Education, vol. 1, pp. 3–33, [online] Available at: http://eprints.glos.ac.uk/3609/

Gillespie, T. (2010) ‘The politics of “platforms”’, New Media and Society, vol. 12, pp. 347–364. doi: 10.1177/1461444809342738

Glaser, B. G. & Strauss, A. L. (1967) The Discovery of Grounded Theory: Strategies for Qualitative Research, Transaction, New Brunswick, NJ.

Hamilton, E. & Friesen, N. (2013) ‘Online education: a science and technology studies perspective/Éducation en ligne: perspective des études en science et technologie’, Canadian Journal of Learning and Technology/La Revue Canadienne de l’Apprentissage et de la Technologie, vol. 39, [online] Available at: https://www.learntechlib.org/p/54417/. doi: 10.21432/T2001C

Hannon, J. (2013) ‘Incommensurate practices: sociomaterial entanglements of learning technology implementation’, Journal of Computer Assisted Learning, vol. 29, pp. 168–178. doi: 10.1111/j.1365-2729.2012.00480.x

Hepplestone, S., et al., (2011) ‘Using technology to encourage student engagement with feedback: a literature review’, Research in Learning Technology, vol. 19, pp. 117–127. doi: 10.3402/rlt.v19i2.10347

Hillier, M. & Fluck, A. (2015) ‘A pedagogical end game for exams: a look 10 years into the future of high stakes assessment’, in Proceedings of the Australasian Society for Computers in Learning in Tertiary Education (Ascilite), eds T. Reiners et al., Perth, Australia, 29 November – 2 December 2015, pp. 463–470, [online] Available at: http://www.2015conference.ascilite.org/wp-content/uploads/2015/11/ascilite-2015-proceedings.pdf

Hylton, K., Levy, Y. & Dringus, L. P. (2016) ‘Utilizing webcam-based proctoring to deter misconduct in online exams’, Computers and Education, vol. 92–93, pp. 53–63. doi: 10.1016/j.compedu.2015.10.002

James, R. (2016) ‘Tertiary student attitudes to invigilated, online summative examinations’, International Journal of Educational Technology in Higher Education, vol. 13. doi: 10.1186/s41239-016-0015-0

Johnston, B., MacNeill, S. & Smyth, K. (2018) Conceptualising the Digital University: The Intersection of Policy, Pedagogy and Practice. Palgrave Macmillan, London.

Joint Information Systems Committee (JISC) (2018) Designing Learning and Assessment in a Digital Age. JISC, Bristol, [online] Available at: https://www.jisc.ac.uk/guides/designing-learning-and-assessment-in-a-digital-age

Karaman, S. (2011) ‘Examining the effects of flexible online exams on students’ engagement in e-learning’, Educational Research and Reviews, vol. 6, pp. 259–264.

Karim, N. A. & Shukur, Z. (2016) ‘Proposed features of an online examination interface design and its optimal values’, Computers in Human Behavior, vol. 64, pp. 414–422. doi: 10.1016/j.chb.2016.07.013

Khare, A. & Lam, H. (2008) ‘Assessing student achievement and progress with online examinations: some pedagogical and technical issues’, International Journal on E-learning, vol. 7, pp. 383–402.

Marshall, S. (2010) ‘Change, technology and higher education: are universities capable of organisational change?’, ALT-J, Research in Learning Technology, vol. 18, pp. 179–192. doi: 10.1080/09687769.2010.529107

Milone, A. S., et al., (2017) ‘The impact of proctored online exams on the educational experience’, Currents in Pharmacy Teaching and Learning, vol. 9, pp. 108–114. doi: 10.1016/j.cptl.2016.08.037

Myyry, L. & Joutsenvirta, T. (2015) ‘Open-book, open-web online examinations: developing examination practices to support university students’ learning and self-efficacy’, Active Learning in Higher Education, vol. 16, pp. 119–132. doi: 10.1177/1469787415574053

Newhouse, C. P. (2011) ‘Using IT to assess IT: towards greater authenticity in summative performance assessment’, Computers and Education, vol. 56, pp. 388–402. doi: 10.1016/j.compedu.2010.08.023

O’Keeffe, C. (2016) ‘Producing data through e-assessment: a trace ethnographic investigation into e-assessment events’, European Educational Research Journal, vol. 15, pp. 99–116. doi: 10.1177/1474904115612486

Orlikowski, W. (2010) ‘The sociomateriality of organisational life: considering technology in management research’, Cambridge Journal of Economics, vol. 34, pp. 125–141. doi: 10.1093/cje/bep058

Orlikowski, W. J. & Scott, S. V. (2008) ‘Sociomateriality: challenging the separation of technology, work and organization’, Academy of Management Annals, vol. 2, no. 1, pp. 433–474. doi: 10.1080/19416520802211644

O’Shea, C. & Fawns, T. (2014) ‘Disruptions and dialogues: supporting collaborative connoisseurship in digital environments’, in Advances and Innovations in University Assessment and Feedback, eds C. Kreber et al., Edinburgh University Press, Edinburgh, pp. 259–273.

Ripley, M. (2009) ‘Transformational computer-based testing’, in The Transition to Computer-based Assessment, eds F. Scheuermann & J. Bjornsson, European Commission Joint Research Centre, Ispra, Italy, pp. 92–98.

Rust, C. (2007) ‘Towards a scholarship of assessment’, Assessment and Evaluation in Higher Education, vol. 32, pp. 229–237. doi: 10.1080/02602930600805192

Schmidt, S. M. P., Ralph, D. L. & Buskirk, B. (2009) ‘Utilizing online exams: a case study’, Journal of College Teaching and Learning, vol. 6, pp. 1–8. doi: 10.19030/tlc.v6i8.1108

Tracy, S. J. (2013) Qualitative Research Methods: Collecting Evidence, Crafting Analysis, Communicating Impact, Blackwell Publishing, Oxford.

Ullah, A., Xiao, H. & Barker, T. (2019) ‘A study into the usability and security implications of text and image based challenge questions in the context of online examination’, Education and Information Technologies, vol. 24, pp. 13–39. doi: 10.1007/s10639-018-9758-7

Williams, J. B. & Wong, A. (2009) ‘The efficacy of final exams: a comparative study of closed-book, invigilated exams and open-book, open-web exams’, British Journal of Educational Technology, vol. 40, pp. 227–236. doi: 10.1111/j.1467-8535.2008.00929.x

Footnotes

1 ‘Online exams’ are understood here as high-stakes summative assessment events, mediated by digital technologies, often taking place at a defined place and time and under secure conditions (e.g. invigilation, restrictions on access to course materials, notes or communication).

2 Myyry and Joutsenvirta (2015) acknowledged that ‘the specific context and tradition of university examinations in Finland’ (p. 129) might have influenced their results.