Making connections: technological interventions to support students in using, and tutors in creating, assessment feedback

Ian Glover a,*, Helen J. Parkin b, Stuart Hepplestone a, Brian Irwin a and Helen Rodger a

a Learning Enhancement and Academic Development, Sheffield Hallam University, Sheffield, UK; b Student Engagement, Evaluation and Research, Sheffield Hallam University, Sheffield, UK

Abstract

This paper explores the potential of technology to enhance the assessment and feedback process for both staff and students. The ‘Making Connections’ project aimed to better understand the connections that students make between the feedback that they receive and future assignments, and explored whether technology can help them in this activity. The project interviewed 10 tutors and 20 students, using a semi-structured approach. Data were analysed using a thematic approach, and the findings have identified a number of areas in which improvements could be made to the assessment and feedback process through the use of technology. The findings of the study cover each stage of the assessment process from the perspective of both staff and students. The findings are discussed in the context of current literature, and special attention is given to projects from the UK higher education sector intended to address the same issues.

Keywords: feed-forward; assessment; practices; technology; technology-enhanced learning

Citation: Research in Learning Technology 2015, 23: 27078 - http://dx.doi.org/10.3402/rlt.v23.27078

Responsible Editor: Meg O’Reilly, Southern Cross University, Australia.

Copyright: © 2015 I. Glover et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License, allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Received: 23 December 2014; Accepted: 27 September 2015; Published: 27 October 2015

*Correspondence to: Ian Glover. Email: i.glover@shu.ac.uk

Introduction

Feedback has become one of the major focal areas for research and policy in UK higher education in recent years. Part of the reason for this is the prominence of student satisfaction with their feedback in the widely publicised National Student Survey (NSS). Since the NSS began in 2005, aggregate satisfaction levels for assessment and, particularly, feedback have consistently been significantly lower than for all other aspects of the survey, with the exception of the Students’ Union (NSS 2005–2014; www.unistats.com). Reasons proposed for this lower satisfaction include that students:

This shows that there may be an underlying issue for students based on a lack of understanding of what constitutes feedback. A useful definition of feedback comes from the biological sciences and states that it is information about the gap between actual performance and a reference, or benchmark, level that is used to reduce the gap (Ramaprasad 1983). The implication here is that unless the information is used to alter future activity it does not constitute feedback, which means that educators need to ensure that the information they return to students (feed-forward) provides guidance on ways to improve performance in the future. However, studies have shown that this is not always the case (Duncan 2007; Hattie and Timperley 2007) and that students will often only fully engage with their assessment feedback if they have a deep interest in the topic; otherwise, they will refer to it only when their actual grade does not match their expectations (Carless 2006; Higgins, Hartley, and Skelton 2002; Jones and Gorra 2013).

In addition, the increasingly modular nature of curricula has also been proposed as a barrier to students making effective use of their assessment feedback (Boud 2000; Boud and Molloy 2013). This, combined with the trend towards a reduced number of assessment activities, has resulted in a bias towards assessment at the end of a module, further reducing the ability of students to make use of feedback (Deepwell and Benfield 2012; Irons 2008; Higgins, Hartley, and Skelton 2002; Yorke 2001).

Technological interventions have frequently been used to attempt to address issues related to feedback and mitigate their effects (Hepplestone et al. 2011; Ferrell and Sheppard 2013). For example, audio feedback has become increasingly common as a method to provide timely, yet rich, feedback (Lunt and Curran 2010), while standardised rubrics have been used to support students in identifying areas of weakness across modules (Glover 2012; Crotwell Timmerman et al. 2011). While many of these interventions relate primarily to the form of the feedback given to students, other areas of significant work have been in the design of the assessment tasks themselves, such as the University of Exeter’s Collaborate project (www.blogs.exeter.ac.uk/collaborate/), and in encouraging and supporting students in structured reflection about their feedback (Kerrigan et al. 2011). This paper outlines a research project to investigate how students at Sheffield Hallam University make use of feedback. It reports the project’s findings and relates them to specific technological interventions, primarily from the UK higher education sector, that have sought to address the issues identified.

Project

This paper discusses a project which was a continuation of prior research at the institution into students’ engagement with their summative assessment feedback (Hepplestone and Chikwa 2014; Hepplestone et al. 2009). This prior research showed that students at Sheffield Hallam University generally claimed to understand the purpose of feedback and felt able to identify when they had received it. However, a need to further investigate the feedback practices of students and staff, particularly around how students make use of past feedback in subsequent assessments, was highlighted as a result of this earlier research. The objectives of the ‘Making Connections’ project were to:

  1. understand the intended purpose and meaning of feedback given by tutors,
  2. investigate student understanding, intended use and actual use of feedback received by students,
  3. explore the connections that students are able to make between the feedback they receive and future assignments, and
  4. identify any technological interventions that might help students to make connections between feedback that they receive and future assignments.

This paper focuses on the findings from objective 4, but references aspects of each of the objectives. The paper reports on existing technological interventions identified in the literature and supported by this study that may be used to support students in using, and tutors in creating, feedback and considers the implications for the sector.

Method

The project ran between January 2013 and April 2013 and used qualitative methods to explore how students use previous feedback in their assignments and how technology can be used to facilitate this process.

Five groups were involved in the research: a cohort from each faculty, consisting of a tutor and between three and six of their students, and a final group of six tutors teaching on other modules. In total, 10 tutors and 20 students were interviewed as part of the project. Participants were recruited on a self-selecting basis, with the criterion for participation by tutors being that they must be teaching a second-year undergraduate (Level 5) cohort of six or more students to whom they had recently delivered feedback. The authors acknowledge that, due to the qualitative nature of the study, the small sample size and the self-selecting nature of participants, the findings presented below are not representative of or generalisable to the wider population. As discussed previously, it is our intention to strengthen the validity of our findings with supporting literature.

Semi-structured interviews were initially held with the tutor participants. These interviews firstly explored the assessment and feedback process generally and then specifically looked at the feedback they had given to a particular Level 5 cohort of students. Interviews were conducted using an ‘Interview Plus’ (JISC 2009) approach in which an artefact, in this case a copy of the feedback provided to students, was used as the basis for discussion. From the ten tutors interviewed, four were selected for further investigation based on a range of factors including discipline and feedback practices. The cohorts selected were from Engineering, Law, Health and Management – one area from each of the four faculties at Sheffield Hallam University.

Following on from tutor interviews, individual semi-structured interviews were conducted with between three and six students of the selected tutors. Similar to the staff interviews, the student interviews firstly explored their approach to assignments, following the entire process from the issue of an assignment to the receipt of feedback and beyond. The second part of the interviews specifically discussed the feedback provided by the relevant tutor participant, again using an ‘Interview Plus’ approach that focused on the piece of the student’s feedback that had previously been discussed with the tutor.

All interviews were transcribed by an experienced professional external transcriber. These data were subsequently collated and analysed by experienced educational researchers during two full-day workshops in which themes, trends and potential technology interventions were identified. Sheffield Hallam University has a strong track record of using data analysis workshops, an approach whereby a team of people led by the principal researcher engages in in-depth thematic analysis of a data set, with each person responsible for engaging with a subset of the data. This enabled the team members to develop a deep understanding of the data and informed subsequent discussions used to identify themes and commonalities in the data.

Summary findings

The full findings of the research are reported in Parkin (2013), and a paper exploring what students do with the feedback that they receive has been presented and submitted for inclusion in the conference proceedings (Hepplestone and Parkin 2015). The following is a summary of the key findings of the project, with specific emphasis on those aspects related to the use of technology in the assessment and feedback process. The interviews with staff and students were structured so that all aspects of the assignment process were covered in a logical order, from how students prefer to work on their assignments through to how they store and subsequently make use of received feedback. This made it possible to explore how technology might be used at each stage of the process to enhance student engagement with feedback. The same structure is used in presenting the findings.

Student perspective

Working on assignments

A variety of different locations were suggested by the students as their preferred place to work on assignments, ranging from designated quiet areas in the university library to a more relaxed environment at home. However, the common factor among all of the students was that access to necessary resources was essential. For some this meant mainly electronic materials online but for others it meant hard-copy information such as hand-written notes or previous assignments and feedback.

Students’ use of prior feedback while working on their assignments proved to be extremely limited and generally superficial. That is, students were more likely to use the feedback when there were obvious links between the previous and current assessments. For example, where an assignment was in a standard format such as a report, they would make use of feedback from previous assessments using the same structure; however, they would typically be using feedback that related to formatting, referencing, etc. rather than the deeper, content-focused feedback. Where two assignments were explicitly linked, such as the second being a direct follow-up to the first, students were more likely to refer to the deeper feedback from the first when working on the second. This need for explicit links between assignments also showed itself in that students typically struggled to make connections between feedback and assignments in different modules.

The likelihood of a student referring to previous feedback did increase if they found themselves struggling with a particular assignment. However, some students acknowledged that they had never considered that previous feedback could be useful when working on other assignments.

Submitting assignments

The submission of assignments also revealed a range of different practices and personal preferences. Students who preferred to submit hard copies rather than electronic ones stated that their main reason was an increased confidence that the work had been received, evidenced by a physical receipt, implying an element of mistrust in the online submission processes and related systems. Students who preferred online submission, by contrast, cited convenience as their primary reason, particularly that they could submit at a time and place that suited them and so could work right up to the deadline.

Students stated that they felt more confident about the assignment submission process when they had a specific receipt to record and, more importantly, prove that a piece of work had been submitted and received. This is standard practice for hard-copy submissions, and although the same information was available in their submission history in the Blackboard virtual learning environment (VLE), students wanted a similar receipt to be issued when submitting online.

On some occasions, students were able to submit hard copy and/or electronic versions of an assignment, with the deadline for the electronic version often being several hours later than the hard copy one, midnight rather than 5 p.m., for example. While this is not a recommended practice, some of the affected students were supportive of it because this allowed them extra time to submit their assignment and they often found themselves working right up to the deadline, even where the deadline was midnight.

While students appeared to have difficulty making connections between assignments and prior feedback, they valued early feedback on drafts particularly highly. They also valued the use of the Turnitin plagiarism detection service as a formative tool to help identify incorrect referencing and cases of accidental plagiarism prior to final submission.

Receiving feedback

The feedback that students received came in a variety of formats and media, such as hand-written annotations on hard-copy submissions, or worked examples as part of a subsequent lecture. Preferences varied among the students, but there was a general consensus that feedback is most valuable when it is directly linked to the assessment criteria rather than being more abstract and conceptual. Some students expressed a preference for hard-copy feedback, but this preference stemmed from how they manage and store their feedback, practices that could largely also be achieved electronically. However, most students stated that the convenience of anytime, anywhere access, combined with the ability to read their feedback in private, was a major advantage of online feedback.

Any separation between receiving the grade and the feedback, whether a delay in time or a difference in medium (for example, grades released online but feedback returned in hard copy), reduced the likelihood that the student would engage with their feedback. High-achieving students were the most likely to read their feedback regardless of the grade, whereas most students stated that they would generally only read the feedback if they were unsatisfied with the grade that they had received.

Storing feedback

All of the students stated that they place significant value on feedback and would not simply discard it, even when they had not actually read it, with some students having created processes for storing the feedback – such as printing out electronic feedback and keeping it in ordered files. This reflects a general view from the students that they are more likely to use previous feedback with their assignments if all of it is held in one place and is available when and where they need it. Electronic storage methods, such as those in the VLE, were particularly valued for this because they allowed students to quickly find the desired feedback when it was required.

Using feedback

Students were more likely to refer back to feedback when there were clear parallels with the assignment being worked on, such as the form or format. In most cases this would involve reading through previous feedback at some point while working on another assignment. However, some students did state that they took a more systematic approach and would look through multiple pieces of feedback to try to identify patterns and highlight specific areas to work on.

Feedback on generic aspects of assignments, such as structure, layout, referencing, etc., was seen as transferable to future assignments. The feedback related to the content of an assignment was typically seen as isolated and students struggled to connect it to other assignments unless there was a clear, explicit connection.

Staff perspective

Course design

While tutors were fully aware of the range and details of assessments within their own modules, it was clear that there was less understanding of the assessments across the entire course.

There were concerns that recently introduced policies could have a negative effect on the assessment and feedback process, specifically:

Assignment submission

While there was a general acknowledgement that electronic submission brings benefits to tutors and students, the method of assignment submission was decided by the module leader and, in some cases, the major factor appears to be based around preferences for marking. This was particularly evident in situations where the students are required to submit an electronic copy to Turnitin to be checked for plagiarism and a hard copy to be marked by the tutor.

Marking and feedback generation

Many tutors stated that marking is an intensive and time-consuming activity, and they had devised a wide range of techniques to help increase their efficiency. These ranged from personally developed technical solutions through to preferences about the environment in which they marked.

There was a strong feeling among a small number of tutors that they did not want to mark on screen, and a larger group stated that they were frustrated by the limitations of current methods of online marking, such as those provided by Turnitin and the Blackboard VLE.

Issuing feedback

The way in which an assignment was marked, that is, hard copy or electronic, was the determining factor in how the feedback was returned to students, with students being required to collect hard-copy feedback or receiving it during class. Concerns were raised that students frequently did not collect their feedback, especially when the grades had been released online.

Future use of feedback

Most of the tutors were creating formal opportunities to discuss feedback with students, such as working through common mistakes during a lecture or holding specific ‘surgery’ sessions, but they were also offering informal opportunities, including setting aside time after class to talk to students. Despite creating these opportunities and informing students, the students did not always make use of them.

Discussion and recommendations

This section covers three to four issues each for students and staff. For each issue, we recommend within the discussion where the application of specific technologies, combined with refined processes, can assist students in making connections between previous feedback and subsequent assessments.

Students

Clear transferability

The interviews revealed that students tend to use feedback when it is general and obviously transferable, such as comments on structure, language, referencing, etc. They struggle with the process of identifying assessment-specific feedback that can be generalised and applied to other assessments. Tools such as the ‘Programme Overview Browser’ (Toner and Soanes 2011) can help students, and staff members, to identify the common aspects and topics of different modules on a programme. This information would support both groups in making connections between modules and assessments.
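As a rough illustration of the kind of cross-module connection such a tool can surface, the sketch below compares module descriptors by simple keyword overlap. It is a minimal sketch only: the module names, descriptor text and the overlap heuristic are invented assumptions for illustration, not the Programme Overview Browser itself.

```python
# Illustrative sketch only: a naive keyword-overlap view of module descriptors,
# loosely inspired by the idea behind the Programme Overview Browser.
# The module names and descriptor text are invented for illustration.
from itertools import combinations

STOPWORDS = {"and", "of", "the", "to", "in", "for", "a", "with"}

modules = {
    "Research Methods": "qualitative and quantitative methods, report writing, referencing",
    "Marketing Principles": "consumer behaviour, market research, report writing",
    "Business Law": "contract law, case analysis, referencing, essay writing",
}

def keywords(text: str) -> set[str]:
    """Reduce a free-text descriptor to a crude set of keywords."""
    return {w.strip(",.").lower() for w in text.split()} - STOPWORDS

# Report the topics shared by each pair of modules, so that staff and students
# can see where feedback on one module is likely to transfer to another.
for (name_a, text_a), (name_b, text_b) in combinations(modules.items(), 2):
    shared = keywords(text_a) & keywords(text_b)
    if shared:
        print(f"{name_a} <-> {name_b}: {sorted(shared)}")
```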

An alternative approach is being used by the consortium behind the ‘Transforming the Education of Students through Assessment’ (TESTA) project (www.testa.ac.uk). Rather than attempt to identify commonalities between existing modules, many of which have been developed in relative isolation, the TESTA methodology encourages programme-level assessment. This offers a way to design assessment and feedback practices to support the creation of clearly transferable feedback by ensuring that assessments cover multiple elements from a programme, rather than a single topic or module.

Feedback on draft work

A further example of how students struggle to generalise their feedback is evident in the value placed on being able to submit draft assessments and receive feedback on them. Feedback on draft work has clear applicability and can be readily used to adapt an assessment submission as it is being developed, yet it is seldom available because it drastically increases the amount of time tutors need to spend on each submission. Review of drafts by peers or ‘near-peers’, such as students who previously took the module, can provide a mechanism for students to receive useful feedback on their work before their final submission (Coit 2004). Tools for online peer assessment, such as Turnitin’s PeerMark or WebPA (Loddington et al. 2009), can help facilitate this process; however, to counter the possibility of plagiarism, assessments may need to be designed in such a way that there is little direct overlap between the work that different students are undertaking.
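The sketch below shows one way reviewers for draft work might be allocated so that no student reviews their own submission. It is a minimal sketch under stated assumptions (an invented student list and an assumed two reviews per draft); it is not WebPA or PeerMark.

```python
# Illustrative sketch only: allocating anonymous peer reviewers for draft work.
# This is not WebPA or PeerMark; the student list and the number of reviews per
# draft are invented assumptions.
import random

def allocate_reviewers(students: list[str], reviews_per_draft: int = 2) -> dict[str, list[str]]:
    """Give each draft `reviews_per_draft` reviewers using a rotated list,
    which guarantees nobody reviews their own work."""
    order = students[:]
    random.shuffle(order)
    n = len(order)
    allocation = {}
    for i, author in enumerate(order):
        # Take the next `reviews_per_draft` students in the rotation as reviewers.
        allocation[author] = [order[(i + offset) % n] for offset in range(1, reviews_per_draft + 1)]
    return allocation

if __name__ == "__main__":
    cohort = ["Student A", "Student B", "Student C", "Student D", "Student E"]
    for author, reviewers in allocate_reviewers(cohort).items():
        print(f"Draft by {author} -> reviewed by {reviewers}")
```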

Directly linked to criteria

Students highlighted that they often have difficulty in interpreting how the feedback they receive relates to the assessment criteria, and therefore their grade, unless explicitly stated. This is especially the case with in-context feedback, such as comments on a document submitted as part of the assignment, but is also evident in general feedback about the submission as a whole. In order to assist the students to make these connections between their work and the assessment criteria, tutors should link their feedback to specific criteria where possible. One way to do this would be to give each criterion a code and use this within the feedback where appropriate. Tools that allow the easy reuse of standard pieces of feedback, such as the GradeMark tool in Turnitin, make this very straightforward by enabling the criteria to be quickly referenced throughout the feedback. Marking rubric tools can also be used to help link general feedback to the assessment criteria and overall grade (Glover 2012; Parkin et al. 2012; McGoldrick and Peterson 2013) and are increasingly becoming an integrated part of the assignment features of VLEs, such as Blackboard and Moodle.
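As a simple illustration of the criterion-coding approach described above, the sketch below stores reusable comments keyed by criterion code so that every comment returned to a student is explicitly tied to a criterion. The criteria, codes and comments are invented assumptions, not the GradeMark comment bank or any institutional rubric.

```python
# Illustrative sketch only: reusable feedback comments keyed by criterion codes,
# so each comment returned to a student is explicitly tied to an assessment
# criterion. The criteria, codes and comments are invented for illustration.
CRITERIA = {
    "C1": "Structure and organisation",
    "C2": "Use of evidence and referencing",
    "C3": "Critical analysis",
}

# A bank of frequently used comments, each tagged with the criterion it addresses.
COMMENT_BANK = {
    ("C2", "citation-format"): "Check the required citation style for direct quotations.",
    ("C3", "description-vs-analysis"): "This section describes the source rather than analysing it.",
}

def render_comment(criterion_code: str, comment_key: str) -> str:
    """Prefix a reusable comment with its criterion code and name, so the student
    can see which criterion (and hence which part of the grade) it affects."""
    text = COMMENT_BANK[(criterion_code, comment_key)]
    return f"[{criterion_code} – {CRITERIA[criterion_code]}] {text}"

print(render_comment("C3", "description-vs-analysis"))
```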

Wasted opportunities

Many of the students stated they would initially check the grade that they had received for a particular assessment and only read the feedback if it was either not as they anticipated or fell outside their usual achievement range. This means that students are unlikely to identify trends throughout their work that, if addressed, would improve their grades. This is especially important in modular programmes because the only person with complete access to a student’s feedback is likely to be the student themselves. A potential solution is to provide students and personal tutors with a ‘dashboard’ that allows them to monitor progress across a whole programme and identify trends as they develop (Dietz-Uhler and Hurn 2013).
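In the spirit of such a dashboard, the sketch below aggregates criterion-level marks across modules to flag recurring weaknesses that no single piece of module feedback would reveal. The modules, criteria, scores and threshold are invented assumptions for illustration, not data from this study.

```python
# Illustrative sketch only: aggregating criterion-level feedback across modules to
# surface recurring weaknesses, in the spirit of a simple student-facing dashboard.
# The modules, criteria and scores are invented assumptions.
from collections import defaultdict
from statistics import mean

# (module, criterion, score out of 100) for one student across a programme.
records = [
    ("Research Methods", "Referencing", 52),
    ("Marketing Principles", "Referencing", 48),
    ("Business Law", "Referencing", 55),
    ("Research Methods", "Critical analysis", 68),
    ("Marketing Principles", "Critical analysis", 72),
]

by_criterion = defaultdict(list)
for _module, criterion, score in records:
    by_criterion[criterion].append(score)

# Flag criteria where the student is consistently below an (assumed) threshold,
# i.e. a trend that individual module feedback alone would not reveal.
THRESHOLD = 60
for criterion, scores in sorted(by_criterion.items()):
    avg = mean(scores)
    flag = "  <-- recurring weakness" if avg < THRESHOLD else ""
    print(f"{criterion}: mean {avg:.0f} across {len(scores)} assessments{flag}")
```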

Several potential methods exist to encourage students to read and, crucially, act upon their assessment feedback. One approach is to withhold the grade from students until they have submitted a reflection on their feedback, including how they will act upon it in future (Parkin et al. 2012; Taras 2001). However, this is problematic because, unless changes are made to the formal assessment guidelines to incorporate this approach, it would not be possible to withhold the grade indefinitely from those students who do not engage with this reflective practice. A slight variation on this approach is to include the reflection as part of the assessment submission, with students being required to include details of how their past feedback has influenced the current submission, though this would suffer from similar issues to the reflective example above (Ajjawi and Schofield 2013).
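One way to reconcile withholding the grade with the impossibility of withholding it indefinitely is a simple release rule, sketched below: the grade becomes visible once a reflection is submitted, or after a fallback date passes. The dates and field names are invented assumptions, not a real VLE API.

```python
# Illustrative sketch only: a release rule that shows the grade once a student has
# submitted a reflection on their feedback, with a fallback date after which the
# grade is released anyway (since, as noted above, it cannot be withheld indefinitely).
# The dates and field names are invented assumptions, not a real VLE API.
from datetime import date

def grade_visible(reflection_submitted: bool, today: date, fallback_release: date) -> bool:
    """Grade is shown if the student has reflected on their feedback,
    or once the fallback release date has passed."""
    return reflection_submitted or today >= fallback_release

# Example: reflection not yet submitted and fallback date not reached -> grade hidden.
print(grade_visible(False, date(2015, 3, 1), date(2015, 3, 15)))  # False
print(grade_visible(True, date(2015, 3, 1), date(2015, 3, 15)))   # True
```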

A separate issue is that students do not always use opportunities to discuss their grades and feedback with their tutors, even when they are not satisfied with or do not understand what they have received. This appears to be the case whether these opportunities were informal, such as directly after a teaching session, or more formal, such as by arranging appointments with the tutor or their personal tutor. The University of Dundee’s ‘interACT’ project (Ajjawi and Schofield 2013) aims to increase the quality of feedback given to students by making the process an electronic dialogue between the tutor and student, rather than the more traditional monologue from the tutor. This approach helps address the issue of students wasting opportunities to discuss their feedback by making the discussion a central part of the feedback process.
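To illustrate what a dialogic, rather than one-way, feedback record might look like in data terms, the sketch below models feedback as a thread of student and tutor turns attached to an assignment. This is a minimal sketch under invented assumptions (the dataclass fields and sample exchange); it is not the interACT system itself.

```python
# Illustrative sketch only: modelling feedback as a threaded dialogue between
# student and tutor rather than a one-off comment, in the spirit of a dialogic
# approach such as interACT. The fields and sample exchange are invented assumptions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Post:
    author: str       # 'student' or 'tutor'
    text: str
    posted: datetime

@dataclass
class FeedbackThread:
    assignment: str
    posts: list[Post] = field(default_factory=list)

    def reply(self, author: str, text: str) -> None:
        """Append a turn to the dialogue; feedback can be questioned or acted upon
        in the same place it was given."""
        self.posts.append(Post(author, text, datetime.now()))

thread = FeedbackThread("Law case study")
thread.reply("student", "I would particularly like feedback on my use of case law.")
thread.reply("tutor", "Your case selection is sound, but link each case back to the question.")
thread.reply("student", "Thanks - I will restructure the analysis section to do this.")
for post in thread.posts:
    print(f"{post.author}: {post.text}")
```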

Staff

Concerns about the use of technology

A common theme among the tutors was an active avoidance of on-screen marking on desktop computers, such as using Microsoft Word’s ‘Track Changes’ feature or annotating PDF documents. The main reasons stated, that the available tools are physically uncomfortable to use and that they are too limited when compared to marking on paper, while valid, suggest that there is a general resistance to the use of electronic marking. Tutors adopted different strategies to limit the need to mark electronically, ranging from a complete rejection of electronic submission by students through to a mostly electronic workflow in which submissions were printed and the hard copies marked. One example of particularly poor practice was requiring students to submit both an electronic copy and a hard copy for marking, often with different submission times.

The student interviews showed that students value electronic feedback for its generally higher legibility, the increased efficiency it brings to the end-to-end assessment process, and the ease and flexibility with which they can store their annotated work and their feedback. This suggests that the provision of electronic methods of marking that model traditional methods more closely could help bridge the gap between student and tutor preferences. Potential tools that fit this criterion include:

However, the overarching theme evident from the interviews was that the preference of the module leader is the main driver of the use of technology in submission and feedback, even when they will be taking little part in the process. This is primarily because they make many of the decisions about the content of the module and the assessment activities, resulting in the inconsistency across programmes noted by students. It is likely that there will continue to be resistance to the introduction of technology into the feedback process unless the individual module leaders are in full support; therefore, gaining this support should be a priority for e-assessment projects.

Time pressures

As with many other UK universities, Sheffield Hallam University has recently instituted a regulation that students should receive their mark and/or feedback within 3 weeks of the submission deadline. While this change is intended to make the feedback more timely, giving students a greater opportunity to act upon the tutors’ comments, the research showed that some tutors felt that the reduction in time available to assess submissions had resulted in a reduction in the overall quality of feedback.

The University of Huddersfield’s ‘EBEAM’ project provides a detailed analysis of the impact of electronic assessment and feedback processes within the university, specifically the use of Turnitin’s GradeMark tool (Ellis and Reynolds 2013). A key finding of the project was that, while there was an initial increase in the time taken to assess an assignment, this quickly became a net decrease as tutors became more familiar with the system and adapted to the new processes. Increased efficiency was evident in both the marking process itself and the related administrative processes (Ellis and Reynolds 2013). This suggests that while the overall amount of time available to mark and produce feedback on each submission may be reduced, more efficient electronic workflows can help to counteract any potential negative impact in the quantity and quality of feedback.

The ‘interACT’ project at the University of Dundee aims to evaluate the impact on learning of technology-supported dialogic approaches to feedback (Ajjawi and Schofield 2013). One aspect of this project that could be readily applied in other contexts is that of encouraging students to state the particular aspects of their submission on which they would like feedback. This allows tutors to provide feedback that is more focused on the students’ stated needs, increasing the likelihood that it will be acted upon, and raising the perceived value and quality of the information. The project also encourages students to reflect on their previous feedback and state how it has influenced their current submission, therefore providing another way to engage students with their feedback.

Narrow view of assessment timetable

The interviews revealed that tutors often have a limited understanding of the full range of assessments that students are undertaking on a particular programme. This means that, in addition to staff being unable to assist students by making connections between assessments, students frequently have to submit assessments for several modules within a few days (so-called ‘assessment bunching’). As a result, they will typically prioritise their workload accordingly, which may prevent them from taking a considered approach to their work that makes full use of previous feedback. Providing mechanisms that allow tutors to see the entire assessment calendar of particular student cohorts would therefore increase their ability to make informed decisions about deadlines, and so assist students in managing their workloads.

The introduction of simple online ‘Assessment Diaries’ at the University of Glamorgan resulted in tutors being better able to manage their own and their students’ workloads (Fitzgibbon 2013). This in turn led to students being able to dedicate more time to each assessment and tutors having more time available to mark each submission. In a similar way, the University of Greenwich’s ‘Map My Programme’ (www.mapmyprogramme.com) provides a graphical overview of the assessment points within a programme and allows a tutor to visualise the workloads of students across the programme, and so make informed decisions when setting their own assessments.
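The sketch below shows, at a very small scale, how such a course-wide deadline overview might flag assessment bunching by grouping deadlines into calendar weeks. The module names, dates and weekly grouping rule are invented assumptions; it is not the Assessment Diaries or Map My Programme tools.

```python
# Illustrative sketch only: flagging 'assessment bunching' from a course-wide list of
# deadlines, in the spirit of an assessment diary or programme map. The module names
# and dates are invented assumptions.
from collections import Counter
from datetime import date

deadlines = {
    "Research Methods report": date(2015, 5, 8),
    "Marketing Principles essay": date(2015, 5, 7),
    "Business Law case study": date(2015, 5, 22),
    "Statistics exam": date(2015, 5, 6),
}

# Group deadlines by ISO week and flag any week with more than one submission,
# giving tutors the overview they need before setting their own deadlines.
per_week = Counter(d.isocalendar()[1] for d in deadlines.values())
for name, due in sorted(deadlines.items(), key=lambda item: item[1]):
    week = due.isocalendar()[1]
    warning = "  <-- bunching" if per_week[week] > 1 else ""
    print(f"{due.isoformat()}  {name}{warning}")
```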

Subsequent developments at Sheffield Hallam University

The findings of this research have fed into the Assessment Journey Programme, a 3-year, institution-wide project to develop both the technological underpinnings and the staff and student culture for an end-to-end, seamless, online assessment experience. In particular, the programme will develop mechanisms to provide a course-wide view of assessments for staff and students, support a complete online submission and marking workflow, and implement a centralised, online space for all assessed work and feedback for students. In addition, it will also incorporate the creation of new policies and guidance for staff related to good practices around assessment and provide professional development opportunities aimed at addressing many of the issues identified through the Making Connections research.

Conclusion

Feedback is a fundamental part of any learning process – it is what allows us to learn from our mistakes and so perform better in the future. Likewise, the production and assimilation of feedback is a core activity for teachers and students, yet its value, both real and perceived, can be limited by the way it is produced by teachers and processed by students. In particular, an inability to make connections between feedback for a specific assessment and feedback that is generally applicable can have serious detrimental effects on a student’s future performance.

Projects to address these issues have frequently made use of technology, sometimes with considerable success; however, it is clear that changing the practices of both students and tutors is necessary to ensure wide-ranging impact in this area. This research has shown that a large-scale, institution-wide, technological intervention, such as the work being undertaken as part of the Assessment Journey Programme, could help facilitate students in making connections between past feedback and future assessments.

References

Ajjawi, R. & Schofield, S. (2013) InterACT Evaluation Report, [online] Available at: http://repository.jisc.ac.uk/5408/1/AF_Strand_A_Final_Evaluation_Report_interACT_06092013.doc

Boud, D. (2000) ‘Sustainable assessment: rethinking assessment for the learning society’, Studies in Continuing Education, vol. 22, no. 2, pp. 151–167.

Boud, D. & Molloy, E. (2013) ‘Rethinking models of feedback for learning: the challenge of design’, Assessment & Evaluation in Higher Education, vol. 38, no. 6, pp. 698–712.

Carless, D. (2006) ‘Differing perceptions in the feedback process’, Studies in Higher Education, vol. 31, no. 2, pp. 219–233.

Chanock, K. (2000) ‘Comments on essays: do students understand what tutors write?’, Teaching in Higher Education, vol. 5, no. 1, pp. 95–105.

Coit, C. (2004) ‘Peer review in an online college writing course’, in Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT’04), pp. 902–903, [online] Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1357712&isnumber=29792

Crotwell Timmerman, B. E., et al., (2011) ‘Development of a “universal” rubric for assessing undergraduates’ scientific reasoning skills using scientific writing’, Assessment & Evaluation in Higher Education, vol. 36, no. 5, pp. 509–549.

DeBourgh, G. A. (1999) ‘Technology is the tool, teaching is the task: student satisfaction in distance learning’, in Proceedings of Society for Information Technology & Teacher Education International Conference 1999, eds J. Price et al., Association for the Advancement of Computing in Education (AACE), Chesapeake, VA, pp. 131–137.

Deepwell, F. & Benfield, G. (2012) ‘Evaluating assessment practices: the academic staff perspective’, in Improving Student Engagement and Development through Assessment: Theory and Practice in Higher Education, eds L. Clouder et al., Routledge, Abingdon, pp. 59–72. ISBN: 978-0415618199.

Dietz-Uhler, B. & Hurn, J. E. (2013) ‘Using learning analytics to predict (and improve) student success: a faculty perspective’, Journal of Interactive Online Learning, vol. 12, no. 1, pp. 17–26.

Duncan, N. (2007) ‘“Feed-forward”: improving students’ use of tutors’ comments’, Assessment & Evaluation in Higher Education, vol. 32, no. 3, pp. 271–283.

Ellis, C. & Reynolds, C. (2013) EBEAM Final Report. Project Report, [online] Available at: http://repository.jisc.ac.uk/5331/1/EBEAM_Project_report.pdf

Ferrell, G. & Sheppard, M. (2013) ‘Supporting assessment and feedback practice with technology: a view of the UK landscape’, in Proceedings of 19th European University Information Systems (EUNIS) Congress, [online] Available at: https://eunis2013-journals.rtu.lv/article/view/eunis.2013.025/170

Fitzgibbon, K. (2013) Evaluation of Assessment Diaries and GradeMark at the University of Glamorgan: Final Report, [online] Available at: http://repository.jisc.ac.uk/4993/1/JISC_Assessment_and_Feedback_-_Glamorgan_Final_Project_Report_post_JISC_feedback_Nov_2012.docx

Glover, I. (2012) ‘SCOF: a standardised, customisable online feedback tool’, in Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications (ED-MEDIA) 2012, eds T. Amiel & B. Wilson, Association for the Advancement of Computing in Education (AACE), Chesapeake, VA, pp. 1805–1812. Available at: http://www.editlib.org/p/40993

Hattie, J. & Timperley, H. (2007) ‘The power of feedback’, Review of Educational Research, vol. 77, pp. 81–112.

Hepplestone, S., et al., (2009) ‘Technology, feedback, action!: the impact of learning technology upon students’ engagement with their feedback’, [online] Available at: http://evidencenet.pbworks.com/Technology%2C-Feedback%2C-Action!%3A-Impact-of-Learning-Technology-on-Students%27-Engagement-with-Feedback

Hepplestone, S., et al., (2011) ‘Using technology to encourage student engagement with feedback: a literature review’, Research in Learning Technology, vol. 19, no. 2, pp. 117–127.

Hepplestone, S. & Chikwa, G. (2014) ‘Understanding how students process and use feedback to support their learning’, Professional Research in Higher Education, vol. 8, no. 1, pp. 41–53.

Hepplestone, S. & Parkin, H. J. (2015) ‘From research to practice: the connections students make between feedback and future learning’, paper presented at the 5th Assessment in Higher Education Conference, 24–25 June 2015, Birmingham, UK.

Higgins, R., Hartley, P. & Skelton, A. (2002) ‘The conscientious consumer: reconsidering the role of assessment feedback in student learning’, Studies in Higher Education, vol. 27, no. 1, pp. 53–64.

Irons, A. (2008) Enhancing Learning through Formative Assessment and Feedback, Routledge, Abingdon. ISBN: 978-0203934333.

JISC. (2009) Learners’ Experiences of e-Learning: Methods, [online] Available at: https://radar.brookes.ac.uk/radar/file/f401572b-3b1f-6e14-35c8-8d7cca50dac4/1/Interview plus.pdf

Jones, O. & Gorra, A. (2013) ‘Assessment feedback only on demand: supporting the few not supplying the many’, Active Learning in Higher Education, vol. 14, no. 2, pp. 149–161.

Kerrigan, M., et al., (2011) ‘The making assessment count (MAC) consortium – maximising assessment and feedback design by working together’, Research in Learning Technology, vol. 19, 7782, doi: http://dx.doi.org/10.3402/rlt.v19i3.7782

Liversidge, T., et al., (2010) Using e-Book Readers in Student Assessment. Project Report. JISC TechDis, [online] Available at: http://www.jisctechdis.ac.uk/assets/Documents/HEAT/MAN301.doc

Loddington, S., et al., (2009) ‘A case study of the development of WebPA: an online peer-moderated marking tool’, British Journal of Educational Technology, vol. 40, no. 2, pp. 329–341.

Lunt, T. & Curran, J. (2010) ‘“Are you listening please?” The advantages of electronic audio feedback compared to written feedback’, Assessment & Evaluation in Higher Education, vol. 35, no. 7, pp. 759–769.

Malan, D. J. (2009) ‘Grading qualitatively with tablet PCs in CS 50’, Paper Presented at the Workshop on the Impact of Pen-Based Technology on Education, Blacksburg, VA, 12–13 Oct., 2009.

McGoldrick, K. & Peterson, B. (2013) ‘Using rubrics in economics’, International Review of Economics Education, vol. 12, no. 1, pp. 33–47.

Parkin, H. J. (2013) ‘Making connections: using technology to improve student engagement with feedback’, Available at: https://blogs.shu.ac.uk/telteam/files/2013/05/FINAL-REPORT-Making-Connections.pdf

Parkin, H. J., et al., (2012) ‘A role for technology in enhancing students’ engagement with feedback’, Assessment & Evaluation in Higher Education, vol. 37, no. 8, pp. 963–973. Available at: http://www.tandfonline.com/doi/abs/10.1080/02602938.2011.592934

Price, M., et al., (2010) ‘Feedback: all that effort, but what is the effect?’, Assessment & Evaluation in Higher Education, vol. 35, pp. 277–289.

Ramaprasad, A. (1983) ‘On the definition of feedback’, Behavioral Science, vol. 28, pp. 4–13.

Tague, J., et al., (2013) ‘Choosing and adapting technology in a mathematics course for engineers’, in Proceedings of 120th Annual Meeting of the American Society of Engineering Education, Atlanta, GA. [online] Available at: http://www.asee.org/public/conferences/20/papers/7445/download

Taras, M. (2001) ‘The use of tutor feedback and student self-assessment in summative assessment tasks: towards transparency for students and for tutors’, Assessment & Evaluation in Higher Education, vol. 26, no. 6, pp. 605–614.

Toner, J. & Soanes, M. (2011) ‘The personalisation of the curriculum: the Programme Overview Browser on the City Law School Bar Professional Training Course’, Presented at Learning in Law 2011, [online] Available at: http://www.ukcle.ac.uk/resources/enhancing-learning-through-technology/toner/

Weaver, M. R. (2006) ‘Do students value feedback? Student perceptions of tutors’ written responses’, Assessment & Evaluation in Higher Education, vol. 31, no. 3, pp. 379–394.

Yorke, M. (2001) ‘Formative assessment and its relevance to retention’, Higher Education Research and Development, vol. 20, no. 2, pp. 115–126.