ORIGINAL RESEARCH ARTICLE

Students’ perspectives of a study support (Studiosity) service at a University

David Pike*

Head of Digital Learning, University of Bedfordshire, Luton, Bedfordshire, UK

Received: 5 February 2023; revised: 29 October 2023; accepted: 12 January 2024; Published: 21 May 2024

Supporting students’ success and achievement is a key mission of Widening Participation (WP) institutions such as the University of Bedfordshire. An essential step in ensuring students succeed is the development of academic writing skills – these are vital during students’ studies and when students leave university to undertake further study or enter graduate-level employment. During the 2021–2022 academic year, the University of Bedfordshire implemented a study support service called Studiosity, designed to provide students with formative feedback on drafts of their assessment tasks. This study utilises a survey instrument exploring Studiosity’s Writing Feedback (WF) service and addresses a gap in the literature, where there is very little understanding of the details of students’ engagement with the system. The survey’s results indicate a mismatch between students’ assumptions about the formative feedback Studiosity provides and the service’s actual scope. However, when students utilise Studiosity’s WF service, the personalised and specific feedback raises students’ confidence in their ability to write academically.

Keywords: study support; Studiosity; evaluation; survey; Writing Feedback

*Corresponding author. Email: David.pike@beds.ac.uk

Research in Learning Technology 2024. © 2024 David Pike. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2024, 32: 3015 - http://dx.doi.org/10.25304/rlt.v32.3015

Introduction

This study examines students’ experiences of a study support service called Studiosity, which the University of Bedfordshire implemented during the 2021–2022 academic year (August 2021–August 2022), using a survey instrument containing qualitative and quantitative questions. Within this study, I explore the first iteration of a survey instrument which directly examines students’ priorities and perceptions of Studiosity’s Writing Feedback (WF) service. A Thematic Analysis (Braun & Clarke, 2006) examines the qualitative responses to the survey, and quantitative questions are utilised to help provide an understanding of students’ priorities and to support the qualitative questions. The research question addressed is: What are students’ priorities, experiences, and reflections of their engagement with Studiosity?

Data from the quantitative sections include: students’ motivations for using Studiosity (QG1), confidence (QG2), common sources of help (QG3), timing (QG4), post-use confidence (QG5) and potential improvements (QG6). The themes identified from the qualitative sections of the survey include: students’ focus upon assessment practices (T1, T5, T9 and T10), confidence and the action of academic writing (T2, T7 and T8), referencing practice (T6), understanding and making changes (T3 and T4) and utilisation approach (T11 and T12). When asked to reflect upon the advantages Studiosity offered, themes included a focus upon students’ language choices, referencing and the methodology of Studiosity’s utilisation. The Results and discussion section provides a comprehensive overview of all 12 themes.

Institutional context

As a Widening Participation (WP) institution, the University enrols students who need support adjusting to the culture of a UK university, learning the nuances of academic writing and developing their academic voice. The University must demonstrate it is managing students’ experiences and monitoring and responding to changes in key metrics. The metrics are defined by a UK government agency, the Office for Students (OfS), which evaluates individual institutions and compares all UK universities in an exercise called the TEF (Teaching Excellence Framework; OfS, 2020a) and through an annual analysis which tracks: completion – ensuring enrolled students complete a programme, and attainment – comparing the academic performance of different student groups based on demographics such as ethnicity. As part of an APP (Access and Participation Plan; OfS, 2020b; UoB-APP, 2022), institutions must promote and target interventions which engage student groups where completion and attainment are lower than pre-set targets. The University’s initial and ongoing implementation of Studiosity is an example of an intervention intended to improve student outcomes.

Studiosity and its role in developing academic writing

Studiosity is a service which provides students with formative and developmental feedback on draft versions of their assignments. Students submit their draft work via an online interface and, as part of Studiosity’s WF service, a writing specialist evaluates a student’s use of spelling and grammar, referencing, choice of language and structure. The feedback does not address the academic content of students’ assessments, and this is made clear to students pre- and post-submission. The University’s working assumption was that feedback from Studiosity would support the development of students’ academic writing, allow students to enhance their assessments before final submission and consequently impact students’ attainment. Studiosity is available 24 hours a day, 7 days a week, and the service does not require prior booking. Feedback is returned to students within at most 24 hours, but typically within 2–4 hours. In contrast, the University’s services operate 5 days a week from 9 am to 5 pm and require advance booking. Thus, it was assumed that students would benefit from having on-demand support rather than waiting for in-person appointments. The University was also attempting to understand how Studiosity could influence students’ academic writing skills and address the University’s APP and, therefore, TEF outcomes.

Literature review

There is little published research regarding Studiosity and, at the time of writing, very limited assessment and examination of students’ experiences with Studiosity’s services. It is in this area that there is a significant gap in existing knowledge. The review focuses upon the importance of academic writing, the role of formative feedback as it relates to the development of academic writing and an evaluation of the available, but limited, Studiosity research.

Academic writing is a form of academic practice which includes developing arguments which demonstrate objectivity and criticality (Elander et al., 2006; Sultan, 2013) – in this article, the collective term for this is developing an academic voice. However, students do not suddenly develop an academic voice without the input of colleagues, feedback and the experience of success or failure. The high-level principles relevant to this development are threefold: firstly, for some students only summative assessment has meaning (Gedye, 2010) – with the pressures of modern life, students operate in the short term and consequently prioritise grades; secondly, the OfS (2021) has indicated that UK universities must ensure assessment processes take into account students’ proficiency in English and balance this with the expression of ideas; finally, given my previous point, there is a delicate balance to be struck between developing students’ agency (Boyle et al., 2019; McCarthy, 2017) and responding to the requirements of the OfS. This is where formative feedback has a role to play in developing, checking and reiterating the importance of a student’s academic voice.

The literature concerning formative feedback does not contain a unified definition, but to support the later Results and discussion section, three areas of interest relate to Studiosity’s purpose.

Firstly, identifying and clarifying the requirements of academic writing to enhance students’ assessment performance (Nicol & Macfarlane-Dick, 2006). This position relies upon two important ideas: students must be able to express themselves competently, and students must develop the ability to contextualise and respond to feedback in a timely manner. Writing and improving an assignment requires students to take responsibility for their own development (Boyle et al., 2019; McCarthy, 2017; Murray & Moore, 2006). However, too much prompting and clarification risks teaching to the assessment, as it places more emphasis upon outcome than development. The action of producing an assessment must also contain some risk and reward for students, but some students fail to recognise that understanding and interpreting assessment tasks are vital skills for which they are ultimately responsible (Aiken, 2023; Jessen & Elander, 2009).

Secondly, if feedback is not personalised, then students are likely to ignore it (Bennett, 2011), and engagement with feedback can be improved if students believe it is personalised (Planar & Moya, 2016). This assumes that the reason students do not utilise feedback is a lack of personalisation, but for some of the University’s students, the challenges of academic writing lie in identifying, translating and utilising feedback (Bacha, 2002; Strobl et al., 2019). Personalised feedback is an aspiration if group sizes are large – the provision of feedback is subject to the finite resources available.

Thirdly, self-direction to obtain feedback, where students engage with tasks and prompts to develop their assessment (Leenknecht et al., 2021; Owen, 2016) – this assumes that students are positioned to engage, that there are no conflicting personal or work demands upon them and that a suitable study environment exists when students are away from the classroom. Prior experiences are also a source of student expectations; for example, the close-quarters support found in FE (Further Education) and school environments cannot be resourced in a university environment (Jessen & Elander, 2009). Students may seek help and advice from supporting networks. For example, Wilcox et al. (2005) and Wanner and Palmer (2018) acknowledge the importance of surrounding social networks, which include peers, support services and families. However, support services are not always available on demand, students’ peers may be equally unsure of the requirements that must be met, and not everyone understands the nuances of academic writing. It is this last point which bears importance for services such as Studiosity: they provide some degree of assurance to students that they are at least expressing themselves correctly, and importantly allow students to focus upon subject knowledge.

Research into the role and impact of Studiosity

The previous section identified three important points: clarifying the requirements of academic writing, the personalisation of feedback and self-direction in obtaining feedback. The available literature focuses upon quantitative outcomes and students’ satisfaction and, to a much more limited extent, the qualitative experiences of students.

Thomas’s (2020) report details Studiosity’s positive impact upon progression and continuation among first-year undergraduates who utilise Studiosity, with non-engaged students exhibiting lower progression and continuation. This raises two questions: firstly, is Studiosity the only causal factor in students’ success, or are there other influences upon students’ development; secondly, were there other factors or barriers that impacted the students who did not progress and continue? The latter question is a problem which vexes institutions and is beyond the scope of this study – the emphasis here is upon students who do engage; the former is a subject which other Studiosity-focused studies address indirectly by attempting to understand students’ experiences. Brodie et al.’s (2022) and Wilson et al.’s (2020) evaluations of Studiosity utilise data from a post-submission satisfaction survey provided to users of the system – Brodie et al.’s evaluation of these data is at a multi-institutional level, identifying four high-level categories: ‘confidence’, ‘improvement’, ‘understanding’ and ‘reinforcement’, whereas Wilson et al. focus upon their own much smaller-scale institutional implementation. Brodie et al.’s analysis is best placed as a framework for further investigation. In both cases, the post-use survey provides students with a single Likert rating question – satisfaction – and a single qualitative feedback question. In both cases, the survey data are a subset of the users of Studiosity’s service, so it is unclear how frequently a student utilised the service, and the data do not take into account the missing voices of students who did not provide feedback. Dollinger et al.’s (2020) study is much smaller in scale and identifies two points: students believed they would obtain a higher grade if they used Studiosity, and students indicated they felt more confident after engaging with Studiosity. Though offering slightly improved specificity with five Likert questions, Dollinger et al.’s evaluation is limited by its small size, with at most 40 responses. It also has the disadvantage of using convenience sampling, which makes it difficult to generalise the findings beyond the original context (Bornstein et al., 2013). In Bornschlegl and Caltabiano (2022), Studiosity forms part of a wider examination of students’ help-seeking behaviour, but there is no direct evaluation of Studiosity’s impact, only passing mentions of the utility of the service.

There is evidence to suggest students find significant utility in Studiosity, but there is an opportunity to further develop the evidential base demonstrating the service’s impact by exploring how students approach its utilisation.

Methodology

Students utilising Studiosity’s service were distributed among the University’s UK-based campuses: Luton, Bedford, Milton Keynes, Aylesbury, London and Birmingham. In the literature review, a critique was offered of Dollinger et al.’s (2020) and Wilson et al.’s (2020) use of a survey and Brodie et al.’s (2022) larger-scale analysis of Studiosity’s survey data. In wanting to provide a more comprehensive examination of student experiences, and considering the resource implications of visiting six different campuses, a survey was the most efficient method of data collection. Though a survey can theoretically reach a large number of students (Bartram, 2019), there was a concern that only engaged students would complete it; there was also the consideration that the language used should utilise terminology familiar to students from Studiosity (Lune & Berg, 2017).

The survey was delivered mid-academic year and utilised a combination of quantitative and qualitative questions, with students asked to explain their quantitative selections in more detail. The questions focused upon an enhanced investigation of Brodie et al.’s categories: ‘confidence’ – examining students’ pre- and post-Studiosity experiences; ‘improvement’ – identifying students’ understanding of how Studiosity’s feedback helped them make improvements; ‘understanding’ – testing whether students could comprehend the University’s intentions for their utilisation of Studiosity and ‘reinforcement’ – asking students for examples of the utility of feedback and to reflect on how the system may assist others. In utilising both question types, the intention was to model and improve upon Dollinger et al.’s work and to utilise the advice of Smyth et al. (2009) and Singer and Couper (2017), where specific prompts for open-text questions enhance responses and offer complementary insight. The survey questions were tested on a small group of local colleagues and with students to ensure the questions and structure could be followed.

The survey was made available to students who had accessed Studiosity’s WF service – in totality, there were potentially 1500 students, ranging from foundation year to postgraduate. A specific announcement directed to this group of students was placed within the University’s Virtual Learning Environment (VLE), followed by an email campaign. Participation in the survey was voluntary, and the responses represent a self-selection or convenience sample (Cohen et al., 2007). The risks of the sampling approach mean it may prove difficult to generalise the results to other practitioners’ contexts, as it is difficult to analyse the subgroups present (Bornstein et al., 2013). The approach reflects the same methodological approach that Dollinger et al. (2020), Wilson et al. (2020) and Brodie et al. (2022) employed.

Ethical approval was sought and gained from the University of Bedfordshire’s Institute for Research in Applicable Computing (IRAC). The results of the survey were anonymous, and students were able to enter their student ID number if they wanted to enter a prize draw. Data collected from students did not include any other personally identifiable information, and students were assured that taking part, or not, would have no impact upon their grades and that they would not be personally identified.

Method of analysis

A TA (Thematic Analysis; Braun & Clarke, 2006) was utilised for the qualitative questions within the survey, supported by data from the quantitative questions. The short nature of the survey responses meant the semantic form of TA (which focuses upon description) was the most appropriate, as this would allow students’ views and feedback to be visible to other practitioners and provide a baseline for future investigations. The stages of the TA process I utilised were: 1 – data familiarisation; 2 – initial code generation; 3 – searching for themes; 4 – review of themes; 5 – defining themes and 6 – write-up. In line with Maguire and Delahunt’s (2017) advice on conducting TA, I merged stages 2 and 3 owing to the shortness of the responses in the survey. In stages 4 and 5, I compared the supporting quantitative data to determine their alignment with the initial codes and themes; my analysis and outcomes were examined and explored with a reviewer.

Results and discussion

QG1 – Students’ motivations for utilising Studiosity – Table 1

The survey provided students with a pre-prepared list of items to rank in order of importance (items with a * are features provided by Studiosity): feedback to improve confidence*, receiving a high grade, focus upon answering assignment questions, passing, structure*, language choices*, grammar and punctuation*, and referencing*.

Table 1. Overall ranking for individual elements.
Response Rank
I wanted feedback to help me feel confident about my writing 1
I wanted to make sure my work would receive a high grade 2
I wanted to make sure my work answered my assignment questions 3
To check I had structured my work properly 4
I wanted to make sure I passed the assessment 5
To check my choice of grammar and punctuation 6
To check that my choice of language was appropriate 7
To check my referencing 8
Note: Students were required to drag and drop the statements listed above into the order reflecting their priorities; the overall ranking was calculated by determining the frequency of positions, for example, the first-placed item was ranked first 124 times.
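To make the note concrete, the following is a minimal Python sketch of how a frequency-of-positions ranking could be computed from drag-and-drop responses. The item labels, the data and the tie-breaking rule (first-place counts, then second-place counts, and so on) are illustrative assumptions consistent with the note, not the survey platform’s documented algorithm.

from collections import Counter

def aggregate_ranks(responses):
    """Order items by how often they were placed first, breaking ties by
    second-place counts, and so on (a frequency-of-positions ranking)."""
    items = responses[0]
    positions = range(len(items))
    counts = {item: Counter() for item in items}  # counts[item][p] = times item sat at position p
    for response in responses:
        for p, item in enumerate(response):
            counts[item][p] += 1
    return sorted(items, key=lambda i: tuple(counts[i][p] for p in positions), reverse=True)

# Illustrative data only - three shortened responses:
responses = [
    ['confidence', 'high grade', 'answered question'],
    ['confidence', 'answered question', 'high grade'],
    ['high grade', 'confidence', 'answered question'],
]
print(aggregate_ranks(responses))  # -> ['confidence', 'high grade', 'answered question']

A weighted (Borda-style) aggregation would be an equally plausible reading; the frequency-based approach is sketched here because the note reports first-place counts.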

Students’ motivations did not reflect the primary purpose of Studiosity and appeared more aligned with Gedye’s (2010) view that summative grades are of the most significance. Confidence in writing was ranked by students as the top priority, but it was followed closely by attaining a high grade and the need to answer the assessment question. For students to develop their academic voice (Elander et al., 2006; Sultan, 2013) and to meet the OfS (2021) requirement of proficiency in English, the lowest-placed group of priorities would serve both students and the University better in the long term: checking the choice of grammar and punctuation (6th place), checking that the choice of language was appropriate (7th place) and checking referencing (lowest – 8th place). Further evidence to support the quantitative data can be found in the themes from the open-text question asking students to identify any other motivations the pre-set list did not include. The two themes are: Assessment Outcomes (T1) and Writing Confidence (T2).

Assessment outcomes (T1)

Students’ priorities in this theme focused upon the outcome of summative assessment and did not seem to differ much from the preceding quantitative question. Studiosity’s feedback does not comment upon potential grades, but students could be attempting to determine what is acceptable (passing) and whether a submission reflects the assignment brief they are working to:

‘[I] wanted to make sure my work receive a high grade, [I] want to make sure I passed the assignment’

‘I wanted to make sure I passed the assessment’

‘To make sure i was on the right track with my assignment’

This again reflects Gedye’s (2010) point about summative assessment, and echoes Aiken (2023) and Jessen and Elander (2009): students think the system will position them to understand the assessment task, having missed the importance of taking responsibility for their own learning (Boyle et al., 2019; McCarthy, 2017).

Writing confidence (T2)

Writing confidence refers to the assurance in their own writing that students must build as part of their development. Studiosity’s feedback indicates all areas of a student’s work that require corrections, and it is for students to make choices about the areas in their writing that must be corrected and improved. This theme demonstrates the positive potential when students choose to engage, and there are echoes of Wilcox et al. (2005) and Jessen and Elander (2009), where there is a focus upon transition and development. It also reflects Brodie et al.’s (2022) ‘understanding’ category, as students demonstrate a grasp of how the system should be utilised:

I did not know how to write a essay/assignment properly. I am a mature student and had not studied in over 20 years.

Studiosity tend to analyse to me my areas of strengths and weaknesses prior to submitting my actual work.

It gave me confidence to submit my work knowing that i have corrected the mistakes that were in my work.

This experience is typical of cohorts with larger numbers of mature students (for example, Nursing and Midwifery within the University’s HSS faculty). Students have to learn to write in a new context and to develop new skills, but this requires support and help to ensure they are able to convey meaning with confidence.

QG2 – Students’ confidence before using Studiosity – Table 2

The question asked students about their confidence levels – ‘Before using Studiosity the first time I was confident with my abilities in the following areas’ – with four items linked to Studiosity’s functionality (choice of language, grammar and spelling, referencing and structuring my writing) plus a fifth covering the act of assessment writing itself. A five-point Likert scale was utilised with a ‘not sure’ option. The data suggest that students possessed a moderate degree of confidence. Only a limited number of responses were received for the supporting qualitative question, and those responses focused upon the criteria for the assessment.

Table 2. Confidence before Studiosity utilisation – Q9.
Name N Strongly agree (%) Agree (%) Neither agree nor disagree (%) Disagree (%) Strongly disagree (%) Not sure (%)
Assessment writing 247 16.2 44.1 23.5 9.3 4.0 2.8
Choice of language 222 12.6 45.0 25.2 12.6 3.6 0.9
Grammar and spelling 223 13.9 40.4 23.8 17.0 4.0 0.9
Referencing 222 16.2 36.0 20.3 18.9 6.8 1.8
Structuring my writing 224 13.8 42.0 23.7 15.2 3.6 1.8

QG3 – Sources of help writing assignments – Table 3

As Wilcox et al. (2005) indicated, it is not just the institution but the surrounding support which is important to students. Question 11 was designed to identify students’ preferred sources of help and to respond to Thomas’s (2023) research, which positions Studiosity as a causal factor influencing students’ activities. The question focuses upon frequency and excludes Studiosity because a student’s use of the system was highly focused. Students indicated that the Internet was the most frequent source of help (n = 168/223 – Very Frequently (VF) and Frequently (F): 41.7% + 33.6%), but the nature of the question precludes identifying the exact methodologies students employ. Students also indicated that lecturers and teaching staff (n = 151/247 – VF and F: 30.4% + 30.8%) and feedback from prior assignments (n = 135/227 – VF and F: 30.4% + 29.1%) were frequent sources of help (these counts are cross-checked in the sketch following Table 3). The former is reasonably expected, as students will invariably ask teaching staff assessment-related questions. However, the latter may be a peculiarity of the assessment feedback mechanism used by the University – the minimum expectation for feedback includes the requirements to explain to a student what they could improve next time and what worked well in the current assignment.

Table 3. Sources of help writing assignments – Q11.
Source n Very frequently (%) Frequently (%) Occasionally (%) Rarely (%) Very rarely (%) Not at all (%)
Lecturers and teaching staff 247 30.4 30.8 22.7 9.3 2.4 4.5
Studyhub and PAD online 224 9.8 29.9 25.4 13.8 6.3 14.7
Studyhub and PAD in person 213 5.6 13.1 22.5 19.7 11.3 27.7
Students in your class 215 15.8 25.1 30.7 9.8 8.8 9.8
Students who have studied the subject before 220 7.3 14.1 16.4 15.0 7.3 40.0
Resources from the Internet 223 41.7 33.6 17.5 1.3 2.2 3.6
Feedback from prior assessments 227 30.4 29.1 26.0 7.5 4.0 3.1
Family or friends 217 8.3 12.9 26.3 15.2 10.1 27.2
Online proofreading services 223 14.3 14.3 18.4 12.6 6.7 33.6
Services such as Grammarly or Prowriting aid 221 21.7 23.5 16.7 8.6 5.9 23.5
Note: All of the items were displayed to students and they had to select the frequency of their engagements.
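As a transparency check on the counts quoted in the preceding paragraph, the short Python sketch below recomputes each quoted n from the row base and the combined Very Frequently + Frequently percentages; it uses only values printed in Table 3.

# Recompute the combined VF + F counts from Table 3's row bases and percentages.
rows = {  # source: (row base n, very frequently %, frequently %)
    'Resources from the Internet': (223, 41.7, 33.6),
    'Lecturers and teaching staff': (247, 30.4, 30.8),
    'Feedback from prior assessments': (227, 30.4, 29.1),
}
for source, (n, vf, f) in rows.items():
    count = round(n * (vf + f) / 100)
    print(f'{source}: {count}/{n} ({vf + f:.1f}%)')
# -> 168/223 (75.3%), 151/247 (61.2%), 135/227 (59.5%)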

QG4 – Utilisation of Studiosity – Figure 1

Students indicated they typically submitted work to Studiosity more than 1 week before (n = 83/34.2%) or a week before assessments were due (n = 41/16.9%) – overall this represents over half the total responses (total n = 124/51.1%). Students responding to the survey were providing themselves with time to address the prompts Studiosity supplies. The prompts are effectively tasks, an approach Owen (2016) advocates as an important part of the development of academic writing skills. However, data from institutional-level Studiosity reports indicated that students’ submissions rose steadily from Saturday and peaked on Thursday (Friday is usually a submission day). This may indicate that the students responding to the survey were more engaged with the development of their assignments than the general population. The claim that the survey respondents are more engaged derives from Q14, which explored how many times students utilised Studiosity, with 63.5% (n = 155) of students indicating they had used the service at least three times. In most cases (Q15), 56.4% (n = 136) of students’ submissions were indicated as ‘Mostly ready to be handed in, but you wanted Studiosity’s feedback to check before final submission’. This supports the earlier themes T1 – Assessment outcomes (wanting to check work before the final submission) and possibly T2 – Writing confidence (students’ desire to be confident about the work they had produced).

Figure 1. Studiosity utilisation timing – Q13, Q14 and Q15.

QG5 – Post-Studiosity use and students’ confidence – Table 4

The question’s text focused specifically upon Studiosity’s utilisation: ‘Once you received feedback from Studiosity did you feel more confident about your [area of interest]’. An increase in confidence was found across all the measures and is listed in Table 4 (Strongly Agree and Agree responses combined; n = 241). Students indicated the greatest increase in confidence with grammar and spelling and their choice of language. Table 4 provides an overview of the results post-Studiosity usage, indicating the shift in students’ responses. Table 5 demonstrates that students were able to apply the feedback provided by Studiosity, with all measures demonstrating strong agreement.

Table 4. Post-Studiosity use – changes in students’ confidence levels.
Area Q9 – Strongly agree + agree (%) Q16 – Strongly agree + agree (%) Change (pp)
Assessment writing 60.3 89.7 29.4
Choice of language 57.6 87.6 30.0
Structuring my writing 55.8 85.5 29.7
Grammar and spelling 54.3 89.0 34.7
Referencing 52.2 76.7 24.5
Note: The Strongly Agree and Agree percentages have been combined for each question; the change is expressed in percentage points (pp).
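The arithmetic behind Table 4 can be reproduced directly from Table 2 and the reported Q16 results. The Python sketch below, using only percentages reported in this article, combines the two agreement categories for each area and expresses the change in percentage points.

# Reproduce Table 4: combine 'Strongly agree' + 'Agree' from Table 2 (Q9),
# then compare with the combined post-use (Q16) agreement percentages.
pre_q9 = {  # Table 2: (strongly agree %, agree %)
    'Assessment writing': (16.2, 44.1),
    'Choice of language': (12.6, 45.0),
    'Structuring my writing': (13.8, 42.0),
    'Grammar and spelling': (13.9, 40.4),
    'Referencing': (16.2, 36.0),
}
post_q16 = {  # combined Q16 agreement %, as reported in Table 4
    'Assessment writing': 89.7,
    'Choice of language': 87.6,
    'Structuring my writing': 85.5,
    'Grammar and spelling': 89.0,
    'Referencing': 76.7,
}
for area, (sa, agree) in pre_q9.items():
    pre = sa + agree
    print(f'{area}: {pre:.1f}% -> {post_q16[area]:.1f}% (+{post_q16[area] - pre:.1f} pp)')
# e.g. Grammar and spelling: 54.3% -> 89.0% (+34.7 pp)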

 

Table 5. Applying Studiosity’s feedback – Q19.
Question part Strongly agree (%) Agree (%) Total (%)
Understand where I could make changes to my assessment 55.70 39.20 94.90
Make changes to my assessment 52.50 41.50 94.00
Understand the feedback given by Studiosity 51.10 45.70 96.80
Agree with the points raised in the feedback 47.50 46.10 93.60
Apply the feedback to future assessments 52.30 42.60 94.90

Question 17 asked students to comment on any other areas they considered relevant but not covered by the five areas listed in Table 4. Students’ improved confidence was expressed in two ways: Understanding Change (T3) and Specific Changes (T4).

Understanding change (T3)

This theme captures students grasping the importance of the feedback Studiosity provided and the ways in which they comprehended the feedback they received:

I was able to understand why I [received] comments on assignment feedback to improve my academic writing skills and referencing. I hope my next assignments will be much better now that I tried [S]tudiosity for the first time this week with three of my assignments.

General confidence, when I receive the feedback and fix the areas I need to fix, I feel less worried about submitting my [assignment].

The written feedback, rather than just the analysis of the document provides a personal confidence boost.

The manner in which feedback is provided is very positive – even when there is a lot to consider. You get the sense they are trying to help rather than criticise.

Specific changes (T4)

When considering what to improve based on Studiosity’s feedback, students could identify the areas where they needed to develop. Referencing was a common topic; in many cases, students referred to it alongside their use of language, punctuation and other elements of writing:

Referencing was an area of weakness for me, but with Studiosity I was able to learn how I’m expected to reference a source.

I always think I make a lot of mistakes when it come to my English, as it Is not my first language.

Style of writing and grammar.

Q18 asked students to identify where they thought Studiosity did not help improve their confidence. Students’ responses fell into two themes: T5 – Assessment Success and T6 – Referencing Practice. Students’ responses indicated a desire for Studiosity to examine the assessment brief, and students wanted highly specific feedback for their entire assignment. In Q7 and Q8, students indicated assessment outcomes as a priority, and the repetition of the same message demonstrates that students’ responses are consistent. Though not made in the same context, the points raised by Planar and Moya (2016) bear relevance – some students expect a degree of personalisation which will not help them develop their skills. Referencing is a different issue: in Q7, students rated referencing as their lowest motivation for using Studiosity. It is likely that academic colleagues’ explanations of the importance of referencing have prompted students’ attention.

Assessment success (T5)

Studiosity should read assignment brief so he/she know what the question asking about and the way we write is much with assignments brief or not

I know Studiosity is very busy, but I quite often got “you must check this throughout the rest of your assignment”. If they are reading the rest of the document anyways I would’ve preferred more comments rather than me having so search for certain issues again.

To give feedback on the content of the assignment in relation to the brief.

Referencing practice (T6)

In the case of referencing, students mentioned the problem often in very few words:

Referencing according to my university’s requirements: Harvard referencing from Citethemrightonline.

Referencing check.

They couldn’t help with referencing in the uni of beds way.

The University’s academic integrity policy indicates the importance of referencing the ideas and words of other authors. It may be that students have seized upon this message as they have heard it repeated by academic colleagues, or they are concerned they may fall foul of the University’s policy.

Most useful elements of feedback from Studiosity (Q20)

Students’ responses focus upon two themes: T7 – The Mechanics of Writing and T8 – Structure, Grammar and Language Choices. In the former case, students’ responses indicated that Studiosity’s specialists had highlighted the mechanics of their writing and demonstrated where it could be improved.

Mechanics of writing (T7)

The entire feedback was amazing and it was structured in steps, taking me through what was necessary to be done and included.

How to break down paragraphs into smaller chunks and understand how commas are used in complex long sentences.

The feedback with clear explanations and examples on how my writing could improve.

Structure, grammar and language choices (T8)

The second theme, structure, grammar and language choices, had more internal variance, with students focusing upon sentence structure, the use of particular phrases and how work was referenced:

Findings referencing errors i would not of been able to pick up on. Also, rephrasing my sentences to make it easier for reader to [understand].

My punctuation when to use semi colons.

Grammar support and language choices.

Feedback on punctuation and structure.

QG6 – Improving Studiosity’s feedback

The question asked students to identify ways in which Studiosity’s feedback could be improved. Given that Studiosity’s business model focuses upon returning feedback to students as quickly as possible, students may assume that feedback should also serve to quickly address problems. If students are operating to tight timescales, they may not take the opportunity to reflect upon the feedback and may not be able to integrate it into their assessment (Bacha, 2002; Strobl et al., 2019). These two points are evident in the two themes: T9 – Assessment Requirements and T10 – Feedback Specificity. In this question, students once again focused upon subjects which are not in the functional domain of Studiosity. It may be that this is a problem with managing expectations, or it could be that students’ focus upon performance in assessments is a further example of the need for students to take more responsibility for their own development.

Assessment requirements (T9)

Could have said the expected mark.

More applicable to the specific assignment subject content, but this would need to be a specialist not just an academic review.

It could provide some information about the quality of critical analysis, most students struggle with that and would like to get some feedback

identify the [length] of the paragraph to enable correction and gain more marks.

Feedback specificity (T10)

Feedback specificity focused heavily upon referencing. Referencing generally accounts for 10% of an assessment’s overall outcome. Students’ concerns could be ascribed to the University’s strict approach to academic integrity and the reminders that students are expected to highlight where they have used words or ideas from other sources:

Be more specific with referencing – they often wrote refer to institution guidelines.

Referencing feedback was poor!

If specialists knew what the university’s referencing style were so better referencing advice could be given.

Along with critiques of the feedback provided:

Be more specific to the format of writing.

If they gave examples of what not to do.

In asking students to reflect upon their engagement with Studiosity, two themes emerged: T11 – Timing of Use and T12 – Opportunities to Submit.

Timing of use (T11)

For timing of use, students’ responses included:

By trying to submit one or two weeks before submission

Submitted before it was nearly ready for hand in

I wish I started using Studiosity from a lot earlier. I never thought I needed Studiosity’s support. But now that I tried it, it turns out I did!

Make it a requirement for students to use. Because I feel the students that don’t use it are the ones that need it.

Opportunities to submit (T12)

Students’ comments focused upon the limited number of available submissions, which are rationed so there is an equal opportunity for all students to submit. Some examples included:

Allow for more submissions

Not ration my submissions

More submissions allowed

Studiosity allowed students to submit up to 10 documents, but this was subject to the system’s overall availability, which was a finite resource – not all students were able to utilise the full 10 submissions as capacity ran out.

A limited number of students submitted the same work repeatedly, and some students took the opportunity to submit several pieces of work simultaneously. Each submission uses part of the institution’s overall capacity in the system. Rationing and the effective use of the system’s resources are two areas of which other practitioners should take careful note, as repeated resubmission can generate a dependency culture among students.

Reflecting upon the research question – what are students’ priorities, experiences and reflections of their engagement with Studiosity?

Students who responded to the survey in this study prioritised elements of feedback that were not deliverable by Studiosity. Despite the efforts of the team implementing Studiosity to explain the role of the system, this did not appear to be reflected in the priorities of the students. This is not an issue with Studiosity, but more about students’ understanding of the ways in which they can improve their academic writing. The answer lies in directing students towards a better understanding of how feedback can inform assessment development. Given that students who responded to the survey were motivated by grading and assessment performance, it would make sense to position Studiosity as an earlier intervention in the assessment process. The University’s approach to utilising Studiosity was, at the time, in a phase where the pedagogy of use was not fully developed. Based on students’ experiences, an improved pedagogical approach may be to separate the questions in the survey which focus upon priorities and use these to illustrate the construction of an assignment from first principles. Students’ use of Studiosity could then be aligned with Planar and Moya’s (2016) suggestion that personalisation enhances engagement by making it a standard part of the assessment development process. The difficulty may be that not all students can and will engage – this assertion is drawn from institutional data about utilisation: most students use Studiosity a few days before final submission, whereas the students who responded to the survey indicated their use was often more than a week before the due date.

Students claimed that Studiosity made them feel more confident and that they were confident in their ability to make changes to their assessment (Tables 4 and 5 and T3 – Understanding Change and T4 – Specific Changes), but this article is limited in that it cannot test students’ claims, only report that they exist and mark them for further exploration and evaluation. This confirms Brodie et al.’s (2022) findings of students indicating improved confidence, but, as the authors point out, confidence and competence are not synonymous. The theme T5 – Assessment Success demonstrates that some students expect that everything needing correction should be provided to them, or that the rationing of submissions (T12 – Opportunities to Submit) should be addressed. Like Jessen and Elander (2009), I would partly suggest this is because students need to develop as independent learners (Bacha, 2002; Strobl et al., 2019), but also because the novel nature of the service will cause the University and students to revisit roles and responsibilities in developing academic practice (Boyle et al., 2019; McCarthy, 2017).

Asking students to reflect upon their own usage (T11 – Timing of Use and T12 – Opportunities to Submit) provides an interesting insight into how students operate. It is a key advantage for the University to be able to see how and in what ways students are progressing with their assessments. Our approach to implementing Studiosity will change for the 2023–2024 academic year, as we will be using language which presents assessment and the use of Studiosity as part of a learning-to-learn toolkit.

Limitations of this study

There are three areas of limitation that other implementers and practitioners should consider: firstly, the sample size was relatively small at around 240 students, and students tended to ignore later questions in the survey; secondly, students’ responses to the open-text questions were in some cases very short, and it might be better to prompt for further examples or greater specificity; finally, the survey asked students to focus upon their overall usage (some students used the system more than once), and it may be better to link evaluation to a specific utilisation of Studiosity. The last of these may have the advantage of providing longitudinal insight into utilisation. An alternative method, which was not viable in this study, would be the use of focus groups or individual interviews.

Conclusion

Within this study, I have demonstrated how it is possible to explore the underlying narrative of students’ engagement with Studiosity. The institution’s efforts to position Studiosity as supporting students with their assessments may have been misinterpreted, and this became apparent when considering students’ priorities for Studiosity. Considering the totality of the Studiosity research explored in the literature review, this study has indicated a need to examine the link between a submission to Studiosity and a student’s outcomes. In this way, the research by Thomas (2020) can be realised at a local level and used to provide a stronger evidence base for encouraging students and institutions to utilise Studiosity. There is also an opportunity to engage students in a deeper discussion, for example via interviews and focus groups, to determine the meaning behind their responses to the survey’s questions. This may make it possible to find ways to moderate students’ expectations of Studiosity’s service.

Acknowledgements

The author wishes to note the work of Isabel Aruna, who made sure that our efforts to implement, evaluate, develop and enhance students’ use of Studiosity were a key part of our ongoing project.

References

Aiken, V. (2023). Academic literacies and the tilts withIn: the push and pull of student writing. Teaching in Higher Education, 28(8), 2104–2120. https://doi.org/10.1080/13562517.2021.1952565
Bacha, N. N. (2002). Developing learners’ academic writing skills in higher education: A study for educational reform. Language and Education, 16(3), 161–177. https://doi.org/10.1080/09500780208666826
Bartram, B. (2019). Using questionnaires. In Practical research methods in education (1st ed.). Taylor & Francis Group.
Bennett, R. E. (2011). Formative assessment: a critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5–25. https://doi.org/10.1080/0969594X.2010.513678
Bornschlegl, M., & Caltabiano, N. J. (2022). Increasing accessibility to academic support in higher education for diverse student cohorts. Teaching and Learning Inquiry, 10, 1–18. https://doi.org/10.20343/teachlearninqu.10.13
Bornstein, M. H., Jager, J., & Putnick, D. L. (2013). Sampling in developmental science: Situations, shortcomings, solutions, and standards. Developmental Review, 33(4), 357–370. https://doi.org/10.1016/j.dr.2013.08.003
Boyle, J., Ramsay, S., & Struan, A. (2019). The academic writing skills programme: A model for technology-enhanced, blended delivery of an academic writing programme. Journal of University Teaching & Learning Practice, 16(4), 41–53. https://doi.org/10.53761/1.16.4.4
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Brodie, M., Tisdell, C., & Sachs, J. (2022). Online writing feedback: A service and learning experience. In Student support services (pp. 1–18).
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). Routledge. https://doi.org/10.4324/9780203029053
Dollinger, M., Cox, S., Eaton, R., Vanderlelie, J., & Ridsdale, S. (2020). Investigating the usage and perceptions of third-party online learning support services for diverse students. Journal of Interactive Media in Education, 2020(1), 14. https://doi.org/10.5334/jime.555
Elander, J., Harrington, K., Norton, L., Robinson, H., & Reddy, P. (2006). Complex skills and academic writing: A review of evidence about the types of learning required to meet core assessment criteria. Assessment & Evaluation in Higher Education, 31(1), 71–90. https://doi.org/10.1080/02602930500262379
Gedye, S. (2010). Formative assessment and feedback: A review. Planet, 23(1), 40–45. https://doi.org/10.11120/plan.2010.00230040
Jessen, A., & Elander, J. (2009). Development and evaluation of an intervention to improve further education students’ understanding of higher education assessment criteria: Three studies. Journal of Further and Higher Education, 33(4), 359–380. https://doi.org/10.1080/03098770903272461
Leenknecht, M., Wijnia, L., Köhlen, M., Fryer, L., Rikers, R., & Loyens, S. (2021). Formative assessment as practice: The role of students’ motivation. Assessment & Evaluation in Higher Education, 46(2), 236–255.
Lune, H., & Berg, B. L. (2017). Qualitative research methods for the social sciences. SAGE Publications, Inc.
Maguire, M., & Delahunt, B. (2017). Doing a thematic analysis: A practical, step-by-step guide for learning and teaching scholars. All Ireland Journal of Higher Education, 8(3), 1–14. Retrieved from https://ojs.aishe.org/index.php/aishe-j/article/view/335/553
McCarthy, J. (2017). Enhancing feedback in higher education: Students’ attitudes towards online and in-class formative assessment feedback models. Active Learning in Higher Education, 18(2), 127–141. https://doi.org/10.1177/1469787417707615
Murray, R., & Moore, S. (2006). The handbook of academic writing. Open University Press.
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
Office for Students. (2020a). About the TEF – Office for Students. Retrieved from https://www.officeforstudents.org.uk/advice-and-guidance/teaching/about-the-tef/
Office for Students. (2020b). Access and participation plans. Retrieved from https://www.officeforstudents.org.uk/advice-and-guidance/promoting-equal-opportunities/access-and-participation-plans/
Office for Students. (2021). Assessment practices in English higher education providers: Spelling, punctuation and grammar. Retrieved from https://www.officeforstudents.org.uk/publications/assessment-practices-in-english-higher-education-providers/
Owen, L. (2016). The impact of feedback as formative assessment on student performance. International Journal of Teaching and Learning in Higher Education, 28(2), 168–175.
Planar, D., & Moya, S. (2016). The effectiveness of instructor personalized and formative feedback provided by instructor in an online setting: Some unresolved issues. Electronic Journal of E-Learning, 14(3), 196–203.
Singer, E., & Couper, M. P. (2017). Some methodological uses of responses to open questions and other verbatim comments in quantitative surveys. Methods, Data, Analyses: A Journal for Quantitative Methods and Survey Methodology, 11(2), 115–134.
Smyth, J. D., Dillman, D. A., Christian, L. M., & McBride, M. (2009). Open-ended questions in web surveys: Can increasing the size of answer boxes and providing extra verbal instructions improve response quality? Public Opinion Quarterly, 73(2), 325–337.
Strobl, C., Ailhaud, E., Benetos, K., Devitt, A., Kruse, O., Proske, A., & Rapp, C. (2019). Digital support for academic writing: A review of technologies and pedagogies. Computers & Education, 131, 33–48.
Sultan, N. (2013). British students’ academic writing: Can academia help improve the writing skills of tomorrow’s professionals? Industry and Higher Education, 27(2), 139–147. https://doi.org/10.5367/ihe.2013.0145
Thomas, L. (2020). UK Studiosity users: Participation and persistence. Retrieved from https://www.studiosity.com/hubfs/Studiosity/Downloads/Research/Participation%20and%20Persistence%20in%20Studiosity%20users%20-%20Liz%20Thomas%20Associates.pdf
Thomas, L. (2023). A review of the experience and impact of Studiosity’s writing development service in UK universities, 2017–2022. Retrieved from https://www.studiosity.com/2023-impact-research.
UoB-APP. (2022). Access and participation plan university of Bedfordshire. Retrieved from https://www.beds.ac.uk/about-us/our-governance/public-information/access/
Wanner, T., & Palmer, E. (2018). Formative self-and peer assessment for improved student learning: the crucial factors of design, teacher participation and feedback. Assessment & Evaluation in Higher Education, 43(7), 1032–1047. https://doi.org/10.1080/02602938.2018.1427698
Wilcox, P., Winn, S., & Fyvie-Gauld, M. (2005). ‘It was nothing to do with the university, it was just the people’: The role of social support in the first-year experience of higher education. Studies in Higher Education, 30(6), 707–722. https://doi.org/10.1080/03075070500340036
Wilson, G., McAuley, A., Ashton-Hay, S., & Van Eyk, T. (2020). Just when I needed you most: Establishing on-demand learning support in a regional university. Australasian Journal of Educational Technology, 36(5), 46–57. https://doi.org/10.14742/ajet.6117

Footnotes

1. QG – Question Group.

2. T – Theme.