ORIGINAL RESEARCH ARTICLE

Improving marking effectiveness and feedback provision in an OSCE assessment using Microsoft Forms: A pilot study in Sport and Exercise Therapy

Kassie A. Ciglianaa*, Tom Grayb and George Gowerb

aEducation Office, Solent University, Southampton, UK; bDepartment of Sport and Health, Solent University, Southampton, UK

(Received 26 May 2023; Revised 12 February 2024; Published 23 April 2024)

An objective structured clinical examination (OSCE) has been recognised as a reliable but workload-intensive assessment method across the health sciences. Though a variety of digital marking tools have been employed to improve marking and feedback provision for OSCEs, many of these require specialist software or maintenance. This pilot study examines the development and trialling of Microsoft Forms as a marking and feedback instrument for an OSCE within a Sport and Exercise Therapy module. It aims to assess whether a non-specialist digital tool, such as Microsoft Forms, might be able to overcome limitations in current assessment procedures and ultimately provide a more effective method of marking and feedback provision for an OSCE. Results from OSCE examiners (N = 8) and students (N = 30) who participated in the pilot indicate that Microsoft Forms does have the potential to provide a more effective experience for examiners and ultimately improve upon feedback provision for students when compared with a paper-based marking tool. However, concerns around the form’s ease-of-use may ultimately influence its adoption as a marking instrument over current paper-based methods.

Keywords: OSCE; assessment design; learning technologies; musculoskeletal therapy; authentic assessment

*Corresponding author. Email: Kassie.cigliana@solent.ac.uk

Research in Learning Technology 2024. © 2024 K.A. Cigliana et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2024, 32: 3097 - http://dx.doi.org/10.25304/rlt.v32.3097

Introduction

As with traditional studies in medicine or pharmacy, students of other allied health professions, such as musculoskeletal therapy or occupational therapy, are often assessed using practical, hands-on assessments to evaluate their mastery of clinical skills such as administering treatment or working with patients.

In the context of this pilot study, students on the BSc (Hons) Sport and Exercise Therapy course at a university in the UK are required to undertake an examination in which they are assessed on their ability to discuss the various components of subjective history taking, perform an objective assessment and administer safe and effective spinal mobilisations on a hypothetical patient presenting with a spinal complaint. This formal summative assessment contributes 70% of the weighting for the level-5 Spinal Assessment & Mobilisations module. Assessors guide students through four stages of the assessment, each with a thematic focus and predetermined assessment criteria against which a student’s performance is graded, as per an Objective Structured Clinical Examination (OSCE).

Historically, within the course, the marking of this practical real-time assessment has consisted of a paper marking form aligned to a rubric and the four stages of the assessment. During the examination, assessors hastily hand-write comments on the form, though these often lack contextualisation to specific criteria. These comments are then summarised into three categories: strengths, areas for improvement and feed-forward recommendations. The student’s final grade is uploaded and released via the institution’s virtual learning environment, Moodle, specifically using the Moodle Assignment module (Moodle, 2020); however, the feedback comments remain on the paper marking sheet. Students can request their original assessment form, though lecturers report that students often only review their overall mark online, thereby missing the opportunity to receive valuable feedback designed to facilitate personal and professional growth.

This study aims to explore whether a digitised version of the marking form could improve upon the effectiveness of recording real-time student performance whilst providing more accessible and comprehensive feedback.

Literature review

Objective structured clinical examinations have been used as an assessment tool in medical education globally since the 1970s, with broad agreement amongst educators that they are an authentic and reliable assessment tool (Harden et al., 1975; Rushforth, 2007). More recently, other allied health disciplines have also adopted the OSCE as a key assessment tool, specifically in the fields of nursing and pharmacy (Kristina & Wijoyo, 2019; Rushforth, 2007) and, most recently, in the training of physical therapy and musculoskeletal specialist professions, such as physiotherapists, occupational therapists and sports therapists (Snodgrass et al., 2014; Swift et al., 2016).

The OSCE typically involves the assessment of clinical skills using simulated patients (SPs) who have been informed as to their role in the assessment, including skills such as patient communication, diagnosis and administration of therapy or treatment. Traditionally, OSCE examiners record students’ ability to perform the skill using a paper-based form or rubric.

Several key challenges have been identified in the implementation of the traditional OSCE, despite its recognition as a reliable assessment strategy. Wardman et al. (2018) and Luimes and Labrecque (2018) point out that feedback from an OSCE is most effective for student learning when it is both personalised and delivered in a timely manner. Yet many authors note that translating paper-based forms into discernible, individualised feedback for students is a lengthy process; as a result, students often receive limited actionable feedback several weeks after the assessment has taken place, reducing their satisfaction (Ashby et al., 2016; Cham & Cochrane, 2020; Meskell et al., 2015; Snodgrass et al., 2014). Others cite the administrative workload on assessors when processing assessment documents across large cohorts of students (Cham & Cochrane, 2020) and risks such as transcription errors and data security issues when managing paper-based assessment forms (Judd et al., 2017; Meskell et al., 2015). Harrison et al. (2015) highlight that, because of the workload it demands, the delivery of comprehensive OSCE feedback is often limited to students who have failed the exam.

One suggested alternative to the traditional OSCE format was the introduction of an electronic system for marking. For instance, Snodgrass et al. (2014) piloted the use of an iPad and a licenced assessment-support software during an OSCE for students of physiotherapy and occupational therapy and found that the use of an electronic marking tool improved examiners’ perceptions of providing equitable student feedback and reduced administration time post-examination. Judd et al. (2017) reported similar findings by using bespoke marking software on an iPad. Several authors reported that OSCE examiners tended to prefer an electronic marking tool when compared to the traditional paper-based form (Judd et al., 2017; Meskell et al., 2015; Swift et al., 2016). Swift et al. (2016) also reported that the use of an electronic form resulted in less fatigue amongst examiners.

Moreover, Cham and Cochrane (2020) reported that student satisfaction improved significantly in their study of an iPad-based alternative to OSCE marking due to the quality of individualised feedback and the speed at which it was received. These findings are echoed by Daniels et al. (2019), whose participants commented positively on the potential impact of receiving immediate feedback via a tablet-based OSCE marking tool on their ability to develop as student-clinicians.

Based on the aforementioned literature, it may seem obvious that the introduction of an electronic marking system for OSCE assessment would be a logical improvement upon a paper-based marking form. However, Bennett et al. (2017, p. 679) found that academics are often reluctant to adopt a digital assessment solution because of limitations in the support available for its use or development, stating that logistical and developmental limitations of tools, along with a lack of support, often led to unwanted compromises when using digital technologies for assessment, or even the abandonment of initiatives entirely. These sentiments are reflected in frameworks such as the Technology Acceptance Model, or TAM (Davis, 1989, 1993), which posits ‘perceived ease-of-use’ and ‘perceived usefulness’ as factors that greatly influence the adoption of new technologies, and Van Der Vleuten’s (1996) Utility Formula, which proposes that characteristics such as practicality and reliability impact the development of new assessment methods in health sciences education.

In the cases of Snodgrass et al. (2014) and Judd et al. (2017), the development of such electronic solutions was supported by an external service provider and an internal applications developer, respectively. Meskell et al. (2015) reported that examiners required bespoke training to use the electronic OSCE system in their study, whilst Cham and Cochrane (2020) discuss how the development of an electronic OSCE tool for optometry studies took more than 1 year to implement. Such investments are inevitably costly for universities or courses wishing to develop a new electronic marking tool, requiring either a third-party marking application or a specialist in-house developer.

Research aim

This pilot study aimed to identify an alternative solution to address these constraints using existing tools within the suite of learning technologies already available at the participating university, specifically Moodle-based plugins or Microsoft applications, which would require less extensive development and would take advantage of the existing digital capabilities of examiners. The trial and development of this alternative assessment tool were commissioned by the university’s BSc (Hons) Sport and Exercise Therapy course team to address the aforementioned limitations of their marking procedures and feedback provision for an OSCE.

The analysis of feedback gathered during the pilot considers the characteristics a tool would need in order to provide: a more effective marking experience for examiners; more comprehensive and contextualised feedback to students; a sustainable solution to cut paper waste; and a digital means of storing marking forms to improve data security.

Procedure

Digital marking form development

First, the research team, consisting of experienced musculoskeletal clinicians and lecturers in Sport and Exercise Therapy, scrutinised the original paper-based marking form and exam structure, alongside the assessment criteria and rubric, to determine which aspects of the paper form needed to be translated into a digital format. The intention was to ensure that all elements of the original OSCE could be represented in a potential digital version.

For instance, the team identified those elements of the OSCE that were standardised for all students, such as the assessment of their breadth and depth of knowledge of the red flag conditions and symptoms relevant to spinal pathology, due to its clinical safety implications. The rubric consisted of sections aligned to assessment criteria, with 16 grade boundaries ranging from A1 to F3. For the second part of the exam, students were assigned a nature of injury (pertaining to the joint, muscle or nerve) and spinal area (cervical, upper thoracic, lower thoracic or lumbar) at random. Clinical scenarios were assigned in proportion to the relative frequency of each injury in patients. Each type of injury had its own set of rubric criteria that needed to be incorporated into the digital form. The original marking form also allowed for hand-written comments to be made beside each section.

The OSCE required examiners to physically follow students and SPs through each stage of the assessment. Therefore, it was determined that a mobile-optimised tool would be preferred to allow examiners to mark via a tablet. Finally, feedback collected would have to be saved securely and made available privately for students in accordance with institutional policy.

Once the requirements for the design and administration of the digital alternative had been identified, the research team explored a variety of digital tools with the aim of replicating the paper-based marking experience, including a simple cloud-based document and marking rubrics in Moodle. The team limited their exploration to tools already available at the participating institution to minimise additional costs or training requirements. The characteristics of the TAM were also considered during the selection of a tool, as ‘perceived ease-of-use’ and ‘perceived usefulness’ would need to be achieved in order for the new marking approach to be adopted later (Davis, 1989, 1993).

Ultimately, Microsoft Forms was selected due to its variety of question types (for example, Likert scale, free text and audio input), which could align with the rubric; its compatibility with tablet interfaces; and its ability to save data securely (Microsoft, 2021). Additional benefits included the ability to output individual responses as PDFs, which could be shared with students as feedback via Moodle; the data analytics dashboard, which could provide an overview of responses for assessors; and the Microsoft Excel output, which could allow marks to be calculated using formulae. With respect to the TAM, Microsoft Forms was already widely in use at the participating institution as a survey tool, thus fulfilling the requirement for ‘ease-of-use’ in principle (Davis, 1989, 1993).
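As a minimal illustration of how such an Excel export might feed into grade aggregation, the Python sketch below processes a hypothetical spreadsheet of examiner responses. The file name, column headings, grade-point values and equal weighting are illustrative assumptions rather than the module’s actual marking scheme, and the same calculation could equally be expressed as spreadsheet formulae.

# Hypothetical sketch only: column names, grade-point values and equal
# weighting are illustrative assumptions, not the module's marking scheme.
import pandas as pd

# Illustrative numeric values for a few of the 16 A1-F3 rubric boundaries;
# a full implementation would enumerate the institution's complete scale.
GRADE_POINTS = {"A1": 95, "B2": 72, "C3": 55, "F3": 15}

# Hypothetical criterion columns as they might appear in the Forms export.
CRITERIA = ["Subjective history", "Objective assessment", "Spinal mobilisations"]

def aggregate_marks(export_path: str) -> pd.DataFrame:
    """Convert each examiner response row into a single percentage mark."""
    responses = pd.read_excel(export_path)  # one row per assessed student
    for criterion in CRITERIA:
        responses[criterion] = responses[criterion].map(GRADE_POINTS)
    # Equal weighting across criteria is assumed purely for illustration.
    responses["Final mark"] = responses[CRITERIA].mean(axis=1).round(1)
    return responses[["Student ID", "Final mark"]]

if __name__ == "__main__":
    print(aggregate_marks("osce_forms_export.xlsx"))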

The development phase lasted approximately 4 months. Each rubric criterion was created as a Likert scale question in the Microsoft Form. Likert scale questions were then grouped according to the distinct procedures or processes measured in the OSCE. Microsoft Forms allowed separate sections to be created so that each phase of the exam could be recorded before moving on to the next. Conditional outcome questions allowed examiners to identify which of the random allocations had been designated to the student, which then opened the appropriate rubric for that allocation.
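The sketch below, again purely illustrative, models this sectioned and branched structure in Python; the section titles, criteria and allocation rules are hypothetical stand-ins rather than the form’s actual content.

# Hypothetical model of the form's structure; titles, criteria and branching
# rules are illustrative stand-ins, not the actual OSCE rubric.
from dataclasses import dataclass, field

@dataclass
class LikertQuestion:
    criterion: str                           # rubric criterion being graded
    scale: tuple = ("A1", "B2", "C3", "F3")  # abbreviated A1-F3 boundaries

@dataclass
class Section:
    title: str
    questions: list = field(default_factory=list)  # one question per criterion
    free_text: bool = True                         # short typed comment per section

# Standardised sections completed in order for every student.
SECTIONS = [
    Section("Subjective history", [LikertQuestion("Red flag screening"),
                                   LikertQuestion("History-taking structure")]),
    Section("Objective assessment", [LikertQuestion("Assessment technique")]),
]

# Conditional branching: the examiner first records the student's random
# allocation, which determines which rubric section opens next.
BRANCHES = {
    ("joint", "lumbar"): Section("Lumbar joint mobilisations",
                                 [LikertQuestion("Grade and technique")]),
    ("nerve", "cervical"): Section("Cervical neural assessment",
                                   [LikertQuestion("Safety and handling")]),
}

def next_section(nature_of_injury: str, spinal_area: str) -> Section:
    """Return the rubric section matching the student's allocation."""
    return BRANCHES[(nature_of_injury, spinal_area)]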

Free-text input boxes were added at the end of each section for examiners to type short comments following each phase of the exam. Audio feedback and file upload functions were also included as optional questions at the end of each section, in case an examiner wanted, for example, to draw a diagram and upload it as part of the feedback. A final free-text section was added at the end of the exam for examiners to provide more generalised feedback in the form of strengths, areas for improvement and feed-forward recommendations.

Following the creation of the digital form, an internal risk assessment was conducted to ensure the protection of student data and to minimise the risk of losing data. Initial testing of Microsoft Forms on an iPad Pro (2018 version) found that data entered into the form would be cached in the event of a loss of Wi-Fi, the closing of a browser window and even the iPad powering off and on again (simulating a battery failure). The risk assessment therefore determined that Microsoft Forms would be a secure and viable means of logging student assessment data and feedback.

Phase 1 testing

To test the administration of the digital form in practice, an initial trial was completed by four test examiners. Each tester was asked to complete all sections of the form in a timely manner, as if they were assessing a student. These initial trials of Microsoft Forms served as a proof of concept. A second round of testing subsequently took place in the laboratory where the exam would be hosted. The four testers reported increased difficulty in typing free-text comments on the iPad in a simulated therapy space. Portable keyboards and small movable tables were therefore introduced to assist testers in completing the form whilst retaining the mobility required for the examination.

Phase 2 testing

Following the initial testing, the Microsoft Form was evaluated by the original four testers and four additional course examiners to confirm its alignment with existing marking criteria. The digital form was then piloted by four of the aforementioned examiners who marked alongside the traditional paper-based assessors during the official module OSCE.

On the day of the pilot, five students volunteered to be marked using both traditional and digital assessment forms. To reduce disruption to proceedings, the examiners piloting the digital marking form were limited to an observation-only role.

The Microsoft Form and traditional paper-based examiner transcripts were internally moderated for consistency of grading and feedback, as per institutional procedures for this level of exam. If required, any arising comments could be discussed between the moderating examiners: in the case of the pilot, no discrepancies were identified.

All module students were given the opportunity to contact the teaching team to receive their written feedback via the traditional paper form, whilst those five students marked with the digital form were sent their feedback by email after the release of all students’ grades.

Participant feedback and analysis

To conclude the pilot, the eight examiners who evaluated Phase 2 and all students who undertook the assessment (N = 30) were invited to be surveyed regarding their opinions on digital feedback methods. Ethical clearance was sought and granted prior to gathering participant data. Invited participants were made aware of the purpose of this more formalised feedback procedure and that participation in the project was optional. Responses were collected via an online survey with a series of open-ended questions, and all responses were anonymised. Questions were written so as to allow participants to identify their own themes, with the intention that these could be compared with those previously identified within the literature and inform developments for OSCE marking and feedback provision.

For examiners, survey questions asked them to consider the potential benefits and drawbacks of using the developed Microsoft Form or similar digital marking tool to mark the OSCE assessment. A further question asked them to indicate what characteristics the form would need to be an effective marking tool. The four examiners who utilised Microsoft Forms in practice were additionally asked to describe their experiences using the form to mark the OSCE exam and to relate their feedback specifically to their marking practice.

Students who undertook the assessment (N = 30) were also surveyed via an online survey tool to gain their perspectives on their current method of receiving feedback and whether the process could be made more effective with the use of the digital form. Those students whose assessment was additionally marked using the Microsoft Form (N = 5) were asked to comment specifically about their experience of receiving digital feedback.

A thematic analysis of the survey responses was employed to identify common perspectives amongst the qualitative data, with the aim of determining which characteristics of a digital marking tool would be desirable for examiners, which aspects of marking using Microsoft Forms might influence its adoption over current marking and feedback procedures, and students’ perspectives on the receipt of feedback.

Braun and Clarke’s (2006) approach served as the primary basis on which themes were categorised, first through an initial familiarisation of all responses, followed by the coding and categorisation of related statements. Themes were identified primarily through a deductive approach, where factors such as the quality, comprehensiveness and timeliness of feedback, ease-of-use of the marking tool and developmental complexities had previously been indicated within the literature. Statements were then further sub-categorised into the benefits and drawbacks of employing the form amongst examiner-participants, and into current practice and perceived improvements of employing a digital feedback tool amongst student-participants to reflect the challenges identified within the literature regarding traditional OSCE assessments. Sustainability was also included as an emergent theme identified by two participants.

Corresponding themes amongst the examiner and student feedback were highlighted to explore how the digital form might improve the effectiveness of OSCE marking and feedback provision, as per the research aim.

Results

Examiner feedback

The thematic analysis revealed several common themes identified by examiners (N = 8) relating to both the potential benefits and drawbacks of using Microsoft Forms for OSCE marking. Potential benefits of the digital form include ease-of-use, improved efficiency, improved quality and clarity of feedback, and improved sustainability. Potential drawbacks include concerns around ease-of-use, limitations to feedback input, IT-related risks and developmental complexities (see Table 1).

Table 1. Examiner (N = 8) feedback on the adoption of a new digital assessment tool.
No. Themes – Benefits Examiner Feedback
1 Ease of use 1.1 Potential for it to add up marks as we go.*
1.2 Easy to use when selecting tick boxes and options.*
1.3 Central storage.*
1.4 Electronic cut and paste where different students need similar advice, gradually preparing a… bank of frequently needed comments to select from.
1.5 If the system is user-friendly, a quicker and logical way of recording feedback.
1.6 Follow along with the distinct sections of the exam and record comments in order.*
2 Clarity and consistency of feedback 2.1 It will allow more consistent format of feedback between different markers.
2.2 For students… they would be able to see how their performance aligned much more clearly with the criteria.*
2.3 Students can understand better the rationale and the reason for grades.
2.4 Allows staff to mark all of the work consistently.
2.5 Much more comprehensive in terms of addressing all of the marking criteria, whereas the original marking form would not easily allow the examiner to address all sections.*
3 Efficiency 3.1 Potential to speed up the process of feedback to students.
3.2 Reduced the amount of paperwork involved in practical exams.*
3.3 It would make the process quick and easy… when you have a small timeframe between exams.
3.4 [Results] would be instantly online and then easy to send to students for feedback.
3.5 Feedback could be audited by the module conveyor.
3.6 This should also make the marking and grade aggregation process much easier to manage.*
3.7 Using the grading of each section to gauge not only where the student did well in but also to see the areas… where teaching may need to [be] improved.*
4 Sustainability 4.1 An added benefit…eliminated printing and paper costs.
4.2 Enabling a more sustainable approach to marking student assessments.*
No. Themes – Drawbacks Examiner Feedback
5 IT-related issues 5.1 If there is a glitch of delay… this may slow down the process.
5.2 Logistics like battery life and accessibility of the iPads.*
6 Concerns over ease of use 6.1 It needs to be user friendly so everyone can pick up the tool and use it.
6.2 The familiarisation time needed to get used to the marking tool.
6.3 Forms might need to be continuously revised before it can be usable in an easy and efficient fashion.
6.4 A bit more time-consuming when using the free text options.*
6.5 Difficult to come back to sections.*
6.6 Difficult to keep up with the speed of the [exam].*
6.7 Time it can take to type in free text with the keyboard ([an] iPad pen might have been more useful).*
7 Generalisation of feedback 7.1 Feedback and marking is generic rather than individualised.
8 Development complexities 8.1 Development of efficient and effective… forms could be complex and time-consuming.
8.2 Using the correct language and defining the correct assessment criteria to define performance can be complex.
*A comment made by a pilot examiner in practice.

Similar themes regarding the potential benefits of Microsoft Forms were reflected in the responses relating to the ideal characteristics of a digital marking tool. Participants identified characteristics consistent with the developed form such as a ‘clickable rubric for marks awarded’, ‘easily able to move through sections’, ‘[aligned] with learning objectives’ and ‘voice recordable feedback’. Other preferred characteristics such as ‘the ability to write [with a stylus]’ were not yet compatible with the developed version of the form.

For those examiners who utilised Microsoft Forms during the assessment, specific recommendations were documented. Notably, all four of the examiners mentioned the production of free-text feedback as an area of challenge, such as this statement from Examiner 3:

I found the form easy to use when selecting tick boxes and options, but a bit more time-consuming when using the free text options. Using a stylus to write directly onto the screen might have enhanced this experience.

This sentiment is echoed by Examiner 1:

Limited in what you can and can’t do – I like to use doodles to aid descriptions particularly with handling modifications for mobilisations which is not an option here.

Examiner 2 mentioned the use of a portable keyboard as a potential solution:

Having a portable keyboard and small table to type comments was hugely beneficial in allowing me to produce free text comments quickly for each section.

Other recommendations included ‘new examiners simply [observing] the exam in the first instance… before attempting to complete the assessment form alongside the exam’ and ‘More developments… to align the structure of the form to the narrative that experienced examiners might follow’.

Finally, Examiner 2 commented specifically on the use of Microsoft Forms compared with marking tools available in Moodle:

The Microsoft Form is ideal for use on a portable device, whereas other marking tools that we have available are constrained within Moodle and optimised for desktop use.

When asked whether they would prefer to retain current methods of marking and feedback or progress to the digital form, six out of the eight examiners responded positively to the digital method. The two who did not recommend a switch to the digital method both advised that a better system for recording free-text comments was needed within the Microsoft Form.

Student feedback

The thematic analysis of student responses (N = 30) revealed a similar categorisation of themes, reflecting the potential benefits of digital feedback methods identified by the examiner participants. When discussing their current methods of receiving feedback, key themes included the quality, timeliness and communication of feedback. When asked specifically about potential improvements to the current method, response themes centred around addressing limitations identified in current feedback practices (see Table 2).

Table 2. Student (N = 30) feedback on current assessment methods and a proposed improvement.
No. Themes – Current Practice Student Feedback
1 Quality of Feedback (Positive) 1.1 The feedback given is always very helpful and constructive.
1.2 The feedback was in a good amount of detail and easy to understand.
1.3 I received a great load of feedback that showed me how to improve with my clinical language.
2 Quality of Feedback (Negative) 2.1 Feedback has been useful in some modules; however, some modules feedback haven’t been beneficial for learning and how to improve in that area.
2.2 Some feedback only states highlighted parts of the criteria.
2.3 Feedback has been vague sometimes, found it difficult to take the feedback forward into my other modules and assessments.
2.4 Feedback could be slightly more detailed online in outlining where could improve.
3 Timeliness 3.1 The feedback is usually good but tends to take a long time to get back to us.
3.2 Some grades have taken longer than a month to come back to us.
3.3 Timeline for feedback is too long after the exam has taken place.
4 Communication / Ease of access 4.1 Sometimes it wasn’t put on [Moodle] what we did well and what went wrong, we were just asked to contact someone.
4.2 I think it would be helpful if the examiner of assessments were to contact each of their examined students, just to ask if they would like further [feedback].
4.3 I have previously found it a little difficult to obtain further feedback and book tutorials with the examiners of my assessments.
No. Themes – Improvements Student Feedback
5 Quality of feedback 5.1 If we were told where we went wrong, or the reasons why the grade was issued, and what was needed to improve it.
5.2 Written in-depth feedback with strengths and weaknesses.
5.3 More specific feedback on what needs improvement from previous practical exams.
5.4 Accessing feedback could be more accessible. By this I mean seeing a full document of goods, bads etc.
6 Timeliness 6.1 Quicker feedback, and where we could improve without asking.
6.2 Feedback straight after the exam would be beneficial rather than waiting 3–6 weeks.
6.3 Providing feedback as soon as it’s available rather than having to ask for it.
6.4 I would prefer if we got our grade and feedback together rather than asking for the feedback separately.
7 Communication 7.1 To receive feedback along with a [Microsoft] [T]eams call to talk through it together.
7.2 Talking through the grade in person would be beneficial.

All students surveyed were also introduced to the Microsoft Form developed for their OSCE assessment and asked whether the form might address some of the concerns raised in their feedback. Twenty-two out of thirty participants responded that they would prefer examiners to adopt Microsoft Forms or a similar digital marking tool over the current method of requesting their paper-based feedback. Those who indicated a preference for the paper-based method (N = 8) tended to respond positively to current feedback practices and did not give a specific reason for their hesitancy towards a digital alternative.

The participants who were additionally marked using Microsoft Forms (N = 5) were asked to comment on their digital feedback following the initial survey of all students. Interestingly, the same themes arose from these students’ responses, addressing the limitations of the paper-based feedback method (see Table 3).

Table 3. Student (N = 5) feedback on receiving OSCE feedback using the developed Microsoft Forms.
No. Themes Student Feedback
1 Quality of feedback 1.1 I feel like the digital form is easier to read, it’s more clear and concise about what is being commented on.
1.2 Being able to see areas where [you’re] better and areas where [you’re] worse is beneficial as it allows a student and examiner to discuss areas of strength and improvement with better understanding.
1.3 It provides a good insight to what areas you are achieving certain grades that are very good and some areas for improvement where you may be not as good which can further benefit a student for future modules.
2 Timeliness 2.1 The possible benefits of the digital method are that it is easier and quicker to type up notes compared to writing them up.
2.2 The digital feedback in my opinion is better than the paper one because there is no need to digitalize the assessment sheet afterwards, and if the student asks for feedback, it can be sent through quickly.
3 Communication / Ease of access 3.1 I think the digital is easier to send to students than the paper feedback.
3.2 Clear writing, easy access and more organized.
3.3 Another benefit is that it is easier to send feedback digitally, and it is also clearer to understand than someone’s handwriting.

All of the students who received a version of their feedback using the Microsoft Form responded that they would prefer the digital form to the current paper-based version of their OSCE feedback. Only two identified a potential drawback of using a digital device, with one commenting that ‘sometimes technology does not work and could crash’, whilst a second commented that ‘[the] teacher may not be able to write everything out, since you need to type and not write’.

Discussion

Although only a small number of examiners and students took part in this pilot, the results of the survey revealed a generally positive response towards the adoption of digital feedback methods for the OSCE exam. Notably, those who engaged specifically with Microsoft Forms reported that this tool for assessment and feedback would be able to address the concerns raised by all participants regarding the current marking and feedback practices for the OSCE assessment.

Many of these themes were also those highlighted within the literature, as anticipated through the deductive approach taken within the thematic analysis. Several authors commented on the importance of timeliness of feedback (e.g. Ashby et al., 2016; Daniels et al., 2019; Snodgrass et al., 2014), a factor that is echoed by the student recommendations in Table 2 and addressed as a potential benefit of the OSCE Microsoft Form by examiners and students in Tables 1 and 3, respectively.

Others note the importance of both timeliness and individualisation of feedback within OSCE feedback (Luimes & Labrecque, 2018; Wardman et al., 2018). Whilst the individualisation of feedback may not be improved exclusively by the use of the OSCE Microsoft Form, it may help to address the need for feedback to be both individualised and comprehensive, as noted in recommendations amongst all participant groups. Marking via the digital form may help to reduce the administrative workload of compiling such comprehensive feedback, as noted by Cham and Cochrane (2020).

Specifically for examiner participants, one recurring theme was Microsoft Forms’ ease-of-use for marking as a key factor in its adoption and implementation as a primary assessment tool. Similar to the findings of Bennett et al. (2017), assessors expressed the need for a digital tool to be ‘easier’ and ‘quicker’ to use for marking than their current paper-based method, regardless of its potential benefits for students. Such perspectives align with the TAM (Davis, 1989, 1993), which proposes that the primary determinant of the adoption of a new technological solution is its perceived ease-of-use, and, to a lesser extent, with the Utility Formula (Van Der Vleuten, 1996), which indicates practicality as a key factor in developing health-sciences assessments. This may help to explain why some examiners would be hesitant to adopt Microsoft Forms over the current method. Overcoming concerns around ease-of-use, particularly as they relate to the production of free-text comments (Snodgrass et al., 2014), may therefore be crucial in realising change.

Other limitations to note are those proposed by Bennett et al. (2017) and Meskell et al. (2015), who commented, respectively, on development time and bespoke training requirements as factors that may impact the adoption of a digital OSCE solution. Both themes are echoed within the potential drawbacks identified by examiners in Table 1. Given that all the examiners were able to review the form, even if they did not test it in practice, it is evident that they felt further development might be required, which would offset some of the potential benefits of using a more common office application for marking and feedback purposes.

Conclusions and recommendations

This study detailed the development of a Microsoft Form, which was piloted as an improved marking and feedback tool for an OSCE on a BSc (Hons) Sport and Exercise Therapy level-5 module. The purpose of the pilot was to develop and trial the use of a non-specialist tool to overcome the limitations of using a paper-based marking instrument within the OSCE. Microsoft Forms was selected because it did not require a specialist developer to maintain and was already in use at the participating institution, thus taking advantage of the existing digital capabilities of markers. Its variety of question types aligned with the assessment’s existing marking rubric and format. The mobile-friendly interface could facilitate marking in an exam where the examiner may need to move along with the student and their simulated patient. Moreover, initial testing determined that the risks of losing participant data during the exam were minimal due to the form’s auto-save features and automatic data caching.

A final version of the Microsoft Form was trialled by examiners to secondarily mark students during a spinal mobilisations OSCE. Module examiners and students undertaking the module were also surveyed on their views of current assessment and feedback practices and whether a digital solution, such as Microsoft Forms, could improve the effectiveness of marking and feedback delivery.

Results revealed that perspectives towards a digital form as an alternative OSCE marking and feedback tool were generally positive amongst both student (N = 30) and examiner participants (N = 8), particularly as they related to potential improvements in the comprehensiveness, timeliness and individualisation of feedback. However, concerns indicated by examiner participants surrounding the form’s ease-of-use, such as the need for improved recording of free-text comments and additional familiarity with the tool, may be crucial in influencing the team’s adoption of the digital form in place of the paper instrument. Additional benefits included a reduction in paper wastage and printing costs and improved data security.

Arguably the most significant conclusion that can be drawn from this study is that ‘perceived ease-of-use’ continues to be a dominant influence on the adoption of digital tools, even amongst technologies that may already be familiar to users (Davis, 1989, 1993). One proposed solution is that improving examiners’ confidence in using Microsoft Forms for this alternative purpose may improve perceptions of ease-of-use and subsequent utilisation as a marking and feedback tool (Joo et al., 2018). For instance, trialling the use of the digital form for marking an exam with fewer components or criteria may allow examiners to gain confidence in using the tool for marking, thereby improving its perceived ease-of-use. Hesitancies suggested by students around the receipt of digital feedback may also be alleviated through the greater confidence gained by examiners. As argued by Greener and Wakefield (2015), educators’ confidence in utilising digital learning tools may have an important impact on students’ engagement with those tools.

One clear limitation of this study is the relatively small number of participants, particularly those examiners who participated in trialling Microsoft Forms during a live exam. A greater number of test examiners would undoubtedly be able to provide a richer variety of feedback to inform the development of the digital form and to judge whether it could viably replace a paper marking form in all circumstances. Factors identified within the literature review, including efficiency, administrative workload, possible transcription errors and the comprehensiveness of feedback, might be more thoroughly evaluated with a greater sample of test-examiners utilising the form as primary or second markers whilst completing the formalised marking and feedback procedures of the OSCE.

Further developments to the Microsoft Form may also aim to address limitations identified by the participants. For instance, a bank of common feedback comments from which examiners could select several options may reduce the need to hastily type free-text feedback comments alongside each criterion, though this might create a risk of further generalising feedback.

It is evident that the utilisation of digital assessment tools has the potential to improve the effectiveness of marking and feedback provision, particularly in the context of an OSCE, where it is essential that students are evaluated based on their performance in a live simulated environment. The results of this pilot study indicate that Microsoft Forms does, indeed, have the potential to meet the criteria of an OSCE assessment instrument. Notably, Microsoft Forms’ role as a non-specialist application means that there is a potential for reduced development time and user training requirements once the application has been proven effective as an assessment tool in this context. This research demonstrates the value of exploring such non-specialist applications for use in Higher Education assessment, particularly as practical, simulated and authentic assessment becomes more prevalent (Sambell et al., 2019).

Acknowledgements

The authors would like to acknowledge the contribution of Edward Bolton, Solent University, who supported the investigators during the testing phases of the project.

Conflict of interest and funding

The authors declare no funding or conflicts of interest. The use of Microsoft Forms is enabled via an institutional licence purchased by the participating university.

References

Ashby, S. E. et al. (2016). Factors shaping e-feedback utilization following electronic objective structured clinical examinations. Nursing and Health Sciences, 18(3), 362–369. https://doi.org/10.1111/nhs.12279
Bennett, S. et al. (2017). How technology shapes assessment design: Findings from a study of university teachers. British Journal of Educational Technology, 48(2), 672–682. https://doi.org/10.1111/bjet.12439
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Cham, K. M., & Cochrane, A. L. (2020). A digital resource to assess clinical competency. The Clinical Teacher, 17(2), 153–158. https://doi.org/10.1111/tct.13030
Daniels, V. J. et al. (2019). Impact of tablet-scoring and immediate score sheet review on validity and educational impact in an internal medicine residency Objective Structured Clinical Exam (OSCE). Medical Teacher, 41(9), 1039–1044. https://doi.org/10.1080/0142159X.2019.1615609
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Davis, F. D. (1993). User acceptance of information technology: System characteristics, user perceptions and behavioral impacts. International Journal of Man-Machine Studies, 38(3), 475–487. https://doi.org/10.1006/imms.1993.1022
Greener, S., & Wakefield, C. (2015). Developing confidence in the use of digital tools in teaching. The Electronic Journal of e-Learning, 13(4), 260–267.
Harden, R. M. et al. (1975). Assessment of clinical competence using objective structured examination. British Medical Journal, 1(5955), 447–451. https://doi.org/10.1136/bmj.1.5955.447
Harrison, C. J. et al. (2015). How we give personalised audio feedback after summative OSCEs. Medical Teacher, 37(4), 323–326. https://doi.org/10.3109/0142159X.2014.932901
Joo, Y. J., Park, S., & Lim, E. (2018). Factors influencing preservice teachers’ intention to use technology: TPACK, teacher self-efficacy, and technology acceptance model. Journal of Educational Technology & Society, 21(3), 48–59. Retrieved from http://www.jstor.org/stable/26458506
Judd, T. et al. (2017). If at first you don’t succeed… adoption of iPad marking for high-stakes assessments. Perspectives on Medical Education, 6(5), 356–361. https://doi.org/10.1007/s40037-017-0372-y
Kristina, S. A., & Wijoyo, Y. (2019). Assessment of pharmacy students’ clinical skills using objective structured clinical examination (OSCE): A literature review. Systematic Reviews in Pharmacy, 10(1), 55–60. https://doi.org/10.5530/srp.2019.1.9
Luimes, J. D., & Labrecque, M. E. (2018). Implementation of electronic objective structured clinical examination evaluation in a nurse practitioner program. The Journal of Nursing Education, 57(8), 502–505. https://doi.org/10.3928/01484834-20180720-10
Meskell, P. et al. (2015). Back to the future: An online OSCE Management Information System for nursing OSCEs. Nurse Education Today, 35(11), 1091–1096. https://doi.org/10.1016/j.nedt.2015.06.010
Microsoft. (2021). Forms [computer software]. Redmond.
Moodle. (2020). Assignment activity. Moodle.org. Retrieved from https://docs.moodle.org/39/en/Assignment_activity
Rushforth, H. E. (2007). Objective structured clinical examination (OSCE): Review of literature and implications for nursing education. Nurse Education Today, 27(5), 481–490. https://doi.org/10.1016/j.nedt.2006.08.009
Sambell, K., Brown, S., & Race, P. (2019). Assessment as a locus for engagement: Priorities and practicalities. Italian Journal of Educational Research, (Special Issue), 45–62.
Snodgrass, S. J. et al. (2014). Implementation of an electronic objective structured clinical exam for assessing practical skills in pre-professional physiotherapy and occupational therapy programs: Examiner and course coordinator perspectives. Australasian Journal of Educational Technology, 30(2), 152–166. https://doi.org/10.14742/ajet.348
Swift, M., Spake, E., & Kohia, M. (2016). Examiner fatigue and ability to concentrate in objective structured clinical examinations for physical therapist students. Journal of Allied Health, 45(1), 62–70.
Van Der Vleuten, C. P. M. (1996). The assessment of professional competence: Developments, research and practical implications. Advances in Health Sciences Education, 1(1), 41–67. https://doi.org/10.1007/BF00596229
Wardman, M. J., Yorke, V. C., & Hallam, J. L. (2018). Evaluation of a multi-methods approach to the collection and dissemination of feedback on OSCE performance in dental education. European Journal of Dental Education, 22(2), e203–e211. https://doi.org/10.1111/eje.12273