ORIGINAL RESEARCH ARTICLE

Stakeholder perspectives on graphical tools for visualising student assessment and feedback data

Luciana Dalla Vallea*, Julian Standera, Karen Grestyb, John Ealesa and Yinghui Weia

aSchool of Computing, Electronics and Mathematics, University of Plymouth, Plymouth, UK;

bFaculty of Science and Engineering, University of Plymouth, Plymouth, UK

(Received 6 August 2017; final version received 23 January 2018; Published 30 July 2018)

Abstract

This paper contributes to the development of learning and academic analytics in Higher Education (HE) by researching how four graphical visualisation methods can be used to present student assessment and feedback data to five stakeholder groups, including students, external examiners and industrialists. The visualisations and underlying data sets are described, together with the results of a questionnaire designed to elicit the perspectives of the stakeholder groups on the potential value of the visualisations. Key findings of this study are that external examiners agree that the visualisations help them to carry out their role and students concur that they can assist with study organisation, relative performance assessment against the wider cohort and even module choice. All stakeholder groups were positive about the benefits of graphical visualisations in this HE context and supported an increased use of visualisations to assist with data interpretation.

Keywords: academic analytics; learning analytics; data visualisation; external examiners; feedback; higher education; student perspectives

*Corresponding author. Email: luciana.dallavalle@plymouth.ac.uk

Research in Learning Technology 2018. © 2018 L. Dalla Valle et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2018, 26: 1997 - http://dx.doi.org/10.25304/rlt.v26.1997

Introduction

This study fills a gap in the academic and assessment analytics literature by responding to the need to unlock the value of Big Data in Higher Education (HE) (Daniel 2015). It explores, from the perspective of five stakeholder groups, how graphical visualisations can be used to extract value from readily available information. Visualisation can, for example, offer ways of extracting meaning from student assessment data so that performance can be assessed quickly, and the results can be shared with different target audiences, thereby informing academic planning and educational decision-making. However, visualisations of data need to be used with caution, as they can pose certain challenges. Melero et al. (2015), for example, describe visualisations that include ‘too much information’ and that teachers therefore found difficult to analyse when limited time was available. It is therefore important to evaluate visualisations from the point of view of the stakeholders who may potentially use them.

The 1st Learning Analytics and Knowledge Conference (2011) proposed a definition of learning analytics as ‘the measurement, collection, analysis and reporting of data about learners and their contexts for the purposes of understanding and optimising learning and the environment in which it occurs’. Long and Siemens (2011) discuss this definition and contrast it with that of academic analytics, which according to them ‘is the application of business intelligence in education and emphasizes analytics at institutional, regional, and international levels’. Their Table 1 shows that learning analytics applies at the course and departmental level to learners and staff, while academic analytics applies at the faculty, institutional, regional, national and international level to academic leaders and administrators, funders, education authorities and national governments. A useful and wide-ranging discussion about academic analytics was offered by Campbell and Oblinger (2007), partly motivated by the need to identify students facing difficulties. Ellis (2013) subsequently made a strong case for moving towards assessment analytics, aimed at broadening the utility and scope of learning analytics. She describes how the expansion of emphasis from supporting ‘at-risk’ students (Essa and Ayad 2012) to understanding assessment data could bring benefits to more students, especially the ‘overlooked middle’, and to lecturers. She argues that assessment analytics offers the potential for students to compare their attainment with their peers or against benchmarks, and suggests that assessment analytics can realise learning optimisation. Ferguson and Shum (2012) discuss five forms of social learning analytics, a subset of learning analytics which focuses on how learners build knowledge together in their cultural and social settings. They then consider a sophisticated implementation of these analytics. Although Ferguson and Shum (2012) encourage learners to respond to and help shape the analytics, they do not study stakeholder responses. Similarly, Williams (2017) advocates using social learning analytics to support and evaluate students’ collaborative learning in realistic contexts but does not discuss detailed stakeholder reactions. Long and Siemens (2011) list groups who may benefit from different analytics but do not quantify these benefits. They also discuss the value of analytics to decision-making and resource allocation. The study presented here considers the value of four visualisations to decision-making from the perspective of possible stakeholders.

Table 1. The number of respondents in each group.
Group Number
External examiners 5
Learning developers 10
Industrialists 7
Academics 30
Students 35 (Foundation 12, Undergraduates 11, Postgraduates 12)

Assessment is a central feature of the curriculum and of teaching practice (Imrie et al. 2014), and has an overwhelming influence on what, how and when students study and learn. It provides such an important student learning support tool (Gibbs and Simpson 2004) that HE institutions are continually seeking assessment practice improvements (Astin 2012; Boud and Dochy 2010). To support this, and in accordance with the United Kingdom’s Quality Assurance Agency Code of Practice (Quality Assurance Agency 2015), subject-specific external examiners are employed to assure comparative academic standards across institutions. The Higher Education Academy’s external examining handbook (Higher Education Academy 2012) notes that external examiners’ duties include ‘identifying good practice and providing advice for the enhancement of modules and programmes’. Part of the role of external examiners is to ensure that assessments that cover related subjects taken by similar groups of students yield comparable and reliable results year on year. Bloxham et al. (2013) describe external examiners as moderators and note that they are part of a university’s academic standards decision-making process. They further recommend the development of external examiners’ knowledge, skills and judgement in HE assessment, which together could be termed assessment literacy. In support of this, and in order to make the best use of assessment data, academics and external examiners need simple techniques for quickly assimilating information on student performance and feedback. This study provides a contribution to the literature in the form of appraised data visualisations that are relevant to the area of assessment analytics and literacy. The findings also contribute to learning analytics and, because they inform course management and decision-making, to academic analytics.

Aims and stakeholders

The main aim of this paper is to understand how different stakeholders perceive the value of four visualisations of student assessment and feedback data and whether these visualisations make a useful contribution to the wider field of learning and academic analytics. Five stakeholder groups who may benefit from engaging with the visualisations were identified: external examiners, learning (education) developers, industrialists (employers), academics and students. Although the visualisations are designed to be of general use, it is anticipated that there may be specific benefits for each group.

The pressure on external examiners is considerable and increasing, with a report from HEFCE (2015) reviewing UK external examining arrangements calling for a strengthening of the role and for additional training. External examiners have to judge and compare academic standards, often across a broad range of modules and assessment types. The visualisations presented here provide simple means by which external examiners can rapidly engage with and assimilate the growing amount of information that they must consider. The visualisations may be included in the induction and training of external examiners to improve assessment literacy in judging examination questions, comparing modules and ensuring reliability. External examiners can also appraise the visualisations in the context of both learning and academic analytics.

Similar considerations apply to academics, who are typically tasked with setting many assessments in a short time frame and who need to identify enhancements to improve both the teaching and learning experience. Some of the visualisations discussed here could provide academics with additional skills in, and support for, examination and assessment preparation, allowing them to make better use of performance data in decision-making. A visualisation tool is also provided that could give a better understanding of module feedback questionnaire results collected from students, showing how feedback changes over time. This allows the impact of any modifications, for example, to module delivery, to be assessed. Academics can appraise the visualisations in the context of learning and wider academic analytics.

Learning (education) developers are scholars with particular expertise and experience in education, typically concerned with enhancing academic practice. These stakeholders may benefit from engaging with visualisation tools due to their role in disseminating good practice when drawing general, data-based conclusions about the suitability of assessments and associated impact on the student experience. Learning developers can appraise the visualisations in the context of both learning and academic analytics.

Industrialists are employers and innovators working in a range of companies with national and international reach. These stakeholders are interested in the quality of assessment processes and academic standards, as are other stakeholders, and may even be able to make use of the visualisations to monitor their own processes and staff development. Industrialists can appraise the visualisations mainly in the context of academic analytics.

The visualisations presented here can potentially enhance the student experience by improving the consistency and reliability of future assessments. Students may make use of them directly to organise their studies, to assess their own relative performance against the wider cohort and to inform their module choices. Students can appraise the visualisations mainly in the context of learning analytics.

Research questions and methodology

The study was steered by two related research questions:

Q1: Are the four visualisations perceived differently by the five stakeholder groups?

Q2: Do the four visualisations make a contribution to the field of assessment, learning and academic analytics?

In order to respond to these questions, a simple questionnaire was designed to obtain the views of external examiners, learning developers, industrialists, academics and students. The questionnaire was administered to a range of participants from these stakeholder groups during an academic year. The external examiners were highly experienced and associated with programmes in Mathematics and Statistics (at both foundation and undergraduate levels). They were invited to take part as they were linked to the subject group of four of the authors and travelled to the University as part of their normal external examining duties. The learning developers worked in the Teaching and Learning Support unit of the host institution and with its associated Pedagogic Research Institute. The industrialists came from the actuarial, computing, consulting and finance sectors, and from national government, and were based in the European Union (EU) and Israel. Academic participants were subject lecturers (excluding learning or education developers) with a variety of backgrounds, including business, computing, economics, mathematics and statistics, from institutions in the EU and the United States. Student participants were studying at foundation, undergraduate and postgraduate levels across a range of science and computing-related programmes. The number of respondents in each group is given in Table 1. Overall, responses were obtained from 87 participants. Questionnaire results and analysis are discussed in detail in the ‘Questionnaire to evaluate the visualisations’ section.

Description and aims of visualisations

Data for visualisations

The four visualisations considered here are based on data from five study modules, referred to as Modules 1 through 5, delivered at a UK HE institution. Modules 1, 2 and 3 are all optional Stage 4 (Level 6) BSc Hons modules, delivered to similar groups of students in Mathematics and Statistics over a recent academic year. Each of these three modules was assessed by one piece of open-ended coursework worth 30% of the overall module mark and a three-hour examination worth 70%. Although these modules are offered to the same student cohort, only a few students took all three. The numbers of students who took the three modules were 22, 22 and 20, respectively. Module 4 is a Stage 2 (Level 5) core module on a BA Hons Accounting and Finance programme, with 30% coursework and 70% examination assessment weightings. The numbers of students on Module 4 in two consecutive years were 69 and 51. Module 5 is a Stage 1 (Level 4) optional module taught to a range of computing and data science students. It was assessed by a report worth 70% of the overall mark and a presentation worth 30%, completed by 36 students in 11 self-assigned groups.

Visualisations and their aims

The four visualisations used in this study are produced in R (R Core Team 2018) using simple ggplot2 (Wickham 2016) code. R is now long-established and widely used software in education. Badge, Saunders, and Cann (2012), for example, used R to produce visualisations of student social network contributions aimed at encouraging a more collaborative approach to scientific education. Stander and Dalla Valle (2017), Stander, Dalla Valle, and Cortina Borja (2018) and Stander and Eales (2011), amongst many others, also discuss a range of R-based applications in HE. Detailed R instructions for producing Visualisation 1 are provided in the Supplementary File. Visualisations 2 and 4 were created in a similar way to Visualisation 1. Visualisation 3 can be produced in a relatively straightforward manner using R’s likert package (Bryer and Speerschneider 2016). Comparable graphs and visualisations can also be produced using other software.

Visualisation 1, shown in Figure 1, compares the results of three modules using boxplots. Each box shows the median, and the lower and upper quartiles of the marks. The median is used instead of the mean because it is a robust summary measure of location. The distance between the lower and upper quartiles provides a measure of spread known as the sample interquartile range. The whiskers indicate the highest/lowest values, with distance from the upper/lower quartile no more than one-and-a-half times the sample interquartile range. Values beyond these whiskers are sometimes referred to as outliers and are indicated separately using dots, as is the case for Module 2. In Figure 1, a separate vertical panel or facet is used for each module. The class boundaries have also been indicated using horizontal lines: to pass the module, students require minimum marks of 30% in both the coursework and examination and 40% or more overall; the 50%, 60% and 70% boundaries can be thought of as corresponding to the thresholds for lower second-class, upper second-class and first-class performances. The aims of Visualisation 1 are to provide an immediate impression of student performance on a set of modules, particularly to help external examiners compare standards across modules, and to assist students in organising their potential future studies in the light of past performance.

Figure 1. A graphical representation of the coursework, examination and overall marks obtained by students in Modules 1, 2 and 3 based on boxplots.

The data that Visualisation 1 presents were supplied by a stage tutor in the form of a comma-separated variable file similar to the one included in the Supplementary File. These data comprise an easily created, accurately transcribed record of student performance on each module assessment component.
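As an illustration, a minimal sketch of how a graph in the spirit of Visualisation 1 could be produced with ggplot2 is given below; the file name marks.csv and the column names Module, Component and Mark are assumptions made for this sketch rather than the exact format of the Supplementary File.

```r
# A minimal, illustrative sketch in the spirit of Visualisation 1 (not the exact
# code of the Supplementary File). Assumed input: one row per student and
# assessment component, with columns Module, Component ("Coursework",
# "Examination" or "Overall") and Mark (a percentage).
library(ggplot2)

marks <- read.csv("marks.csv")  # hypothetical file name

ggplot(marks, aes(x = Component, y = Mark)) +
  geom_boxplot() +
  # Dashed horizontal lines for the pass and classification boundaries
  geom_hline(yintercept = c(40, 50, 60, 70), linetype = "dashed") +
  facet_wrap(~ Module) +  # one panel (facet) per module
  labs(x = NULL, y = "Mark (%)")
```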

Visualisation 2, shown in Figure 2, records the marks achieved by every student on each question part of Module 2’s examination. The size of the plotting symbol depends on the number of overlapping points or students that it represents. Square plotting symbols are used because their sizes are more easily identifiable than round symbols. The marks for each question part are at the bottom, while the marks for each whole question are at the top. The average mark for each question part (black line) and for each question (horizontal line) are also shown. The average examination mark and its standard deviation are given in the title. From the horizontal lines, it can be seen that the average question performance has decreased from just over 50% for Question 1 to just over 40% for Question 4. Parts 2 and 6 from Question 1; 3 and 6 from Question 2; 3 and 6 from Question 3; and 2 and 3 from Question 4 seem to have caused some difficulties, with students scoring on average below 30%. Most question parts provide performance discrimination, except possibly part 5 of Question 1 and part 1 of Question 2. The aims of Visualisation 2 are to provide a tool for students to assess their performance relative to that of the group, for academics and students to understand areas of strength and weakness in learning and for external examiners to pin-point problems in student performance.

Figure 2. Student results from each part of Module 2’s examination. There were four questions each comprising six parts. The size of the square symbol depends on the number (n) of overlapping points or students that it represents.

The data that Visualisation 2 presents were supplied by a module leader in the form of a comma-separated variable file. These data comprise an easily created, accurately transcribed record of student performance on each part of Module 2’s examination.
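A simplified sketch of how the question-part panel of such a graph could be built is shown below, assuming hypothetical column names Question, Part and Mark; square symbols sized by the number of overlapping students can be obtained with ggplot2’s geom_count().

```r
# A simplified, illustrative sketch in the spirit of Visualisation 2.
# Assumed input: one row per student and question part, with columns
# Question ("Q1" to "Q4"), Part ("1" to "6") and Mark (a percentage).
library(ggplot2)

parts <- read.csv("exam_parts.csv")  # hypothetical file name

ggplot(parts, aes(x = Part, y = Mark)) +
  # Square symbols (shape 15) sized by the number of overlapping students
  geom_count(shape = 15) +
  # Average mark for each question part, drawn as a short horizontal black line
  stat_summary(fun = mean, fun.min = mean, fun.max = mean,
               geom = "crossbar", width = 0.6, colour = "black") +
  facet_wrap(~ Question, nrow = 1) +
  labs(y = "Mark (%)", size = "n")
```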

Visualisation 3, shown in Figure 3, was produced using R’s likert package (Bryer and Speerschneider 2016) and summarises the Module Feedback Questionnaire responses provided by two student cohorts on Module 4, where each response can be one of strongly disagree, disagree, neutral, agree and strongly agree. The percentages of negative (strongly disagree and disagree), neutral and positive (agree and strongly agree) responses are shown. Bars that extend to the right/left of the central 0% line indicate positive/negative responses. Students seemed very satisfied with their overall experience, with 90% (85%) agreeing or strongly agreeing with the statement ‘Overall I was satisfied with my experience of this module’ in Year 1 (Year 2). In Year 2, 10% of students were neutral about this question, while 5% disagreed. No student strongly disagreed with any statement.

Figure 3. Module Feedback Questionnaire responses provided by two student cohorts on Module 4.

The data that Visualisation 3 presents were obtained by electronically reading student responses made on a paper-based Module Feedback Questionnaire.
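A short sketch of how such a summary could be produced with the likert package is given below; the file name module_feedback.csv and the cohort column name Year are illustrative assumptions.

```r
# An illustrative sketch in the spirit of Visualisation 3, using the likert
# package (Bryer and Speerschneider 2016). Assumed input: one row per student,
# a Year column identifying the cohort and one column per questionnaire item.
library(likert)  # also loads ggplot2

responses <- read.csv("module_feedback.csv", stringsAsFactors = TRUE)

# Every item must be a factor with the same five ordered response levels
lvls  <- c("Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree")
items <- as.data.frame(lapply(responses[, -1], factor, levels = lvls))

# Group the item responses by cohort and draw the centred stacked bar chart
fit <- likert(items, grouping = responses$Year)
plot(fit)
```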

As part of the assessment of Module 5, students were required to make a group presentation. Three markers assessed 11 groups in the following categories: content (assessment weight 50%), quality of presentation (30%) and responses to questions (20%). Visualisation 4, shown in Figure 4, presents the percentage mark awarded by three markers to 11 groups in three assessment categories, together with the corresponding overall marks, which are used to order the groups. The mean mark across the three markers for each group in each category is also shown by a black line. There appears to be some variation in the markers’ content, presentation and question marks, but no marker stands out as being substantially different from the others. Because the overall mark is a weighted average of the marks in the three categories, the variation between markers is noticeably less. It is not expected that Visualisation 4 would normally be shown to students, as it is a management quality assessment metric.

Figure 4. The percentage marks awarded by three markers to 11 groups in three assessment categories, together with the corresponding overall marks. The mean mark across the three markers for each group in each category is also shown by a black line.

The data that Visualisation 4 presents were supplied by a module leader in the form of a comma-separated variable file. These data comprise an easily created, accurately transcribed record of the marks awarded by three markers.
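A simplified sketch of a comparable marker-comparison graph is given below; the column names Group, Marker, Category and Mark are assumptions, and the groups are not ordered by overall mark as they are in Figure 4.

```r
# A simplified, illustrative sketch in the spirit of Visualisation 4.
# Assumed input: one row per group, marker and assessment category, with
# columns Group, Marker, Category and Mark (a percentage).
library(ggplot2)

pres <- read.csv("presentation_marks.csv")  # hypothetical file name

ggplot(pres, aes(x = Group, y = Mark, colour = Marker)) +
  geom_point(size = 2) +
  # Mean mark across the markers for each group, shown as a black line;
  # aes(group = Group) makes the mean pool the three markers' marks
  stat_summary(aes(group = Group),
               fun = mean, fun.min = mean, fun.max = mean,
               geom = "crossbar", width = 0.6, colour = "black") +
  facet_wrap(~ Category) +
  labs(y = "Mark (%)")
```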

All four visualisations can bring advantages to learning analytics, with Visualisations 1, 2 and 4 potentially making a transformative contribution to assessment analytics and Visualisations 1 and 3 making a decision-informing contribution to academic analytics.

An external examiner, a learning developer, an industrialist and an academic, different from those who responded to the questionnaire, were asked to provide detailed feedback about an earlier iteration of the visualisations. Changes were made in the light of this feedback. They included: the introduction of the horizontal lines to mark class boundaries in Visualisation 1, the use of a different colour palette and square symbols instead of round plotting characters in Visualisation 2 (so area can be more easily discerned), not splitting the title of each panel in Visualisation 3 and the inclusion of the mean mark across the three markers in Visualisation 4. In this way, we incorporated stakeholders’ feedback during the development of the visualisations.

Questionnaire to evaluate the visualisations

Questionnaire design

Four versions of the questionnaire were produced, with minor variations as discussed below: one for external examiners, one for learning developers and academics, one for industrialists and one for students. The questionnaire for students was shorter than the others because Visualisation 4 was not designed to be shared with this group. Space was provided at the end of the questionnaire for extra comments. The questionnaire for external examiners is provided in the Supplementary File. The visualisation-specific questions, quoted in full in the ‘Results for visualisation-specific questions’ section, asked how easy each graph is to understand, whether it provides an immediate impression and whether it helps respondents in their particular role, for example as an external examiner or as a student organising future studies.

In addition, three general No/Yes questions were asked: whether respondents would favour an increased use of visualisations of assessment results, whether they would favour an increased use of visualisations of student feedback results and whether they agree that student performance visualisations help to monitor the quality of assessment processes and academic standards.

Results for visualisation-specific questions

In this section, responses to the questionnaire for each visualisation and each stakeholder group are discussed. These responses provide valuable evaluative feedback about the visualisations, although caution is required due to the small size of some groups.

Figure 5 presents the responses to the question ‘How easy do you find this graph to understand?’ for each of the four visualisations, broken down by respondent group. In order to obtain a parsimonious representation, Very Easy and Easy were combined into one response category ‘Easy’, and Hard and Very Hard into ‘Hard’. Most respondents said that Visualisations 1, 3 and 4 were easy to understand. However, the majority of respondents across all groups found Visualisation 2 less straightforward. One academic said that ‘Visualisation 2 allows the bimodal mark distribution of some question parts to be well understood’. An industrialist remarked that it ‘contains a great deal of information. However, once it is understood, it shows exactly where the problems are. Congratulations!’.
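The collapsing of response categories described above could be carried out in R along the following lines; the file name questionnaire.csv and the column name Ease are hypothetical.

```r
# A sketch of the category recoding used for Figure 5 (hypothetical names):
# "Very Easy"/"Easy" are collapsed into "Easy" and "Hard"/"Very Hard" into "Hard".
library(dplyr)
library(forcats)

answers <- read.csv("questionnaire.csv", stringsAsFactors = TRUE)

answers <- answers %>%
  mutate(Ease = fct_collapse(Ease,
                             Easy = c("Very Easy", "Easy"),
                             Hard = c("Hard", "Very Hard")))
```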

Figure 5. Responses to the question ‘How easy do you find this graph to understand?’, split by group.

Figure 6 presents the responses to the question ‘Does this graph provide you with an immediate impression of student performance/student feedback/the differences between markers for these assessments?’ for Visualisations 1, 3 and 4, split by group. Most respondents found that these visualisations did give an immediate impression, although some students did not agree with the immediacy of the feedback interpretation provided by Visualisation 3.

Figure 6. Responses to the question ‘Does this graph provide you with an immediate impression of student performance/student feedback/the differences between markers for these assessments?’

Figure 7 presents the responses to the question ‘Does this graph help you to do your job as an external examiner?’ For Visualisations 1, 3 and 4, there was complete agreement that these graphs helped. However, one of the five external examiners did not find that Visualisation 2 helped.

Figure 7. Responses to the question ‘Does this graph help you to do your job as an external examiner?’.

The numbers and percentages of students replying No/Yes to the question ‘Would this graph help you to organise your future studies?’ about Visualisation 1 are given in Table 2. The majority of students in each student group agreed that Visualisation 1 would help them to organise their future studies.

Table 2. Student responses to the question ‘Would this graph help you to organise your future studies?’ for Visualisation 1.
Student group Response Number Percentage
Foundation students No 2 17
  Yes 10 83
Undergraduates No 3 27
  Yes 8 73
Postgraduates No 2 17
  Yes 10 83

The numbers and percentages of students replying No/Yes to the question ‘If you were to have access to your examination marks, would this graph allow you to assess your performance relative to other students?’ about Visualisation 2 are given in Table 3. The majority of current students agreed that Visualisation 2 would help them to assess their relative performance. This question was less relevant for the postgraduate students in the sample, who may be more focused on their future research or working career and, consequently, less interested in past examination question results.

Table 3. Student responses to the question ‘If you were to have access to your examination marks, would this graph allow you to assess your performance relative to other students?’ for Visualisation 2.
Student group Response Number Percentage
Foundation students No 2 17
  Yes 10 83
Undergraduates No 3 27
  Yes 8 73
Postgraduates No 6 50
  Yes 6 50

The numbers and percentages of each group replying No/Yes to the question ‘Would this graph allow you to understand your strengths and weaknesses/the strengths and weaknesses of students in this module?’ about Visualisation 2 are given in Table 4. The majority of respondents in each group agreed that Visualisation 2 would help them to identify strengths and weaknesses.

Table 4. Responses to the question ‘Would this graph allow you to understand your strengths and weaknesses/the strengths and weaknesses of students in this module?’ for Visualisation 2.
Group Response Number Percentage
Learning developers No 3 30
  Yes 7 70
Industrialists No 1 14
  Yes 6 86
Academics No 11 37
  Yes 18 60
  No response 1 3
Students No 15 43
  Yes 20 57

The numbers and percentages of external examiners replying No/Yes to the question ‘Would this graph help you to pin-point problems with student performance on this module?’ about Visualisation 2 are given in Table 5. The majority of external examiners replied Yes.

Table 5. External examiner responses to the question ‘Would this graph help you to pin-point problems with student performance on this module?’ for Visualisation 2.
Group Response Number Percentage
External examiners No 1 20
  Yes 4 80

The numbers and percentages of students replying No/Yes to the question ‘Would similar graphical presentations of Module Feedback Questionnaire responses influence your module choices?’ about Visualisation 3 are given in Table 6. This question was slightly modified for postgraduate students, who were asked if Module Feedback Questionnaire responses would have influenced their module choices, if a similar graphical representation had been made available to them. The majority of students in each student group agreed that Visualisation 3 would influence their module choices.

Table 6. Student responses to the question ‘Would similar graphical presentations of Module Feedback Questionnaire responses influence your module choices?’ for Visualisation 3.
Student group Response Number Percentage
Foundation students No 2 17
  Yes 10 83
Undergraduates No 4 36
  Yes 7 64
Postgraduates No 3 25
  Yes 9 75

Results about general questions

The numbers and percentages of each group replying No/Yes to the question ‘Would you favour an increased use of visualisations of assessment results?’ are given in Table 7. Almost all respondents favoured an increased use of assessment result visualisations. Although the questionnaire was designed to be short to reduce the burden on busy participants and maximise completion, there were unfortunately a few non-responses to the general questions on the last two pages.

Table 7. Responses to the question ‘Would you favour an increased use of visualisations of assessment results?’.
Group Response Number Percentage
External examiners No 0 0
  Yes 5 100
  No response 0 0
Learning developers No 0 0
  Yes 9 90
  No response 1 10
Academics No 1 3
  Yes 27 90
  No response 2 7
Students No 0 0
  Yes 33 94
  No response 2 6

The numbers and percentages of each group replying No/Yes to the question ‘Would you favour an increased use of visualisations of student feedback results?’ are given in Table 8. The majority of respondents favoured an increased use of feedback result visualisations.

Table 8. Responses to the question ‘Would you favour an increased use of visualisations of student feedback results?’.
Group Response Number Percentage
External examiners No 0 0
  Yes 5 100
  No response 0 0
Learning developers No 0 0
  Yes 9 90
  No response 1 10
Academics No 1 3
  Yes 29 97
  No response 0 0
Students No 1 3
  Yes 32 91
  No response 2 6

The numbers and percentages of each group replying No/Yes to the question ‘Do you agree that student performance visualisations help to monitor the quality of assessment processes and academic standards?’ are given in Table 9. The majority of respondents agreed. A few academics suggested that Visualisations 1 and 2 could be of use to them when calibrating and preparing assessment but queried their general contribution to the quality process.

Table 9. Responses to the question ‘Do you agree that student performance visualisations help to monitor the quality of assessment processes and academic standards?’.
Group Response Number Percentage
External examiners No 0 0
  Yes 5 100
  No response 0 0
Learning developers No 1 10
  Yes 8 80
  No response 1 10
Industrialists No 1 14
  Yes 5 71
  No response 1 14
Academics No 3 10
  Yes 27 90
  No response 0 0

Discussion

This paper provides a contribution to the assessment and the learning analytics literature (Ellis 2013; Sclater et al. 2016) by presenting appraised assessment and feedback visualisations that offer potential benefits to a range of stakeholders. It also makes a useful contribution to academic analytics (using the definition of Long and Siemens 2011).

Visualisation 1 was the preferred graph in terms of ease of understanding and immediacy of impression for every group of respondents, suggesting that it does not require users to have advanced skills to understand it. It provides a quick and efficient comparison of results across modules, thereby allowing assessment standards to be monitored, and presents information for student decision-making. Visualisation 1 would not increase in complexity as the number of students increases but would become more complicated as the number of modules grows.

Visualisation 2 was generally viewed as the most difficult to understand. Some stakeholders were not used to extracting meaning from a graph that presents a lot of highly detailed information and may need guidance to appreciate its potential. As Visualisation 2 is built from scatter plots, support and guidance should be available in all HE institutions so that this is not a huge barrier to its use. In contrast, the assessment analytics methodology presented by Romero et al. (2013) requires a degree of sophistication and support that may not be available in every establishment. As Visualisation 2 allows students to assess their relative performance, lecturers to understand learning strength and weakness, and external examiners to pin-point student performance problems, it cannot be expected to give an immediate impression. One of the external examiners commented that academics and learning developers, particularly those involved in course teams, are the stakeholders who can benefit most from Visualisation 2, since it allows a thorough analysis of student performance. Visualisation 2 would not significantly increase in complexity as the number of students increases but would become more complicated as the number of question parts grows. The use of panels arranged in both rows and columns could help to handle such additional complexity.
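A sketch of how such a grid of panels could be arranged is shown below, reusing the hypothetical column names of the earlier Visualisation 2 sketch.

```r
# An illustrative sketch of arranging panels in both rows and columns when the
# number of question parts grows (hypothetical columns as in the earlier sketch).
library(ggplot2)

parts <- read.csv("exam_parts.csv")  # hypothetical file name

ggplot(parts, aes(x = Part, y = Mark)) +
  geom_count(shape = 15) +
  # facet_wrap() fills a rectangular grid of panels; facet_grid(rowvar ~ colvar)
  # could be used instead when two classifying variables are available
  facet_wrap(~ Question, ncol = 2)
```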

Visualisation 3 was generally considered easy to understand, particularly by external examiners, learning developers and academics. These stakeholders are the most familiar with student feedback data. They agreed that Visualisation 3 gives an immediate impression of students’ feedback and that it also facilitates the comparison of the student experiences over time, as desired by Brožová and Rydval (2014). Visualisation 3 was well received by students, especially those at the foundation level, who suggested that similar graphs could assist them in module choice. Although Visualisation 3 does not require advanced skills to understand it, a few industrialists and students found some difficulties in interpreting it, suggesting that it may not be suitable for a broad audience. Visualisation 3 would not increase in complexity as the number of students increases but would become somewhat more complicated as the number of years grows.

Visualisation 4 was judged to be easy to understand, especially by external examiners, learning developers and academics. These stakeholders are the most accustomed to dealing with student assessment data and agreed that Visualisation 4 gives an immediate impression of the differences between markers. One of the academics commented that it could be particularly useful for academics and learning developers when discussing specific modules. Visualisation 4 does not require users to have advanced skills to appreciate its meaning and therefore could be very suitable for a broad range of stakeholders. Visualisation 4 does become somewhat more complicated as the number of student groups and the number of markers increase. Again, the use of panels could help.

All stakeholders were generally very supportive of the visualisations and their benefits. There was a considerable desire for an increased use of assessment and feedback visualisations to help to monitor the quality of assessment processes and academic standards. Industrialists also appreciated their value as tools to facilitate decision-making, a key feature recognised by Daniel (2015).

Students agreed that the visualisations can assist with study organisation, relative performance assessment and module choice. The performance aspect was also recognised by Melero et al. (2015), who discussed how a visualisation of student responses to questions can allow students to diagnose their own performance by comparing it with that of others, albeit in a secondary school and not in a HE context. Visualisations 1 and 2 could serve a similar learning enhancement purpose, if they were made available to students.

The visualisations were very well received by learning developers and academics, who viewed them as useful tools to monitor students’ assessment performance. Examination papers should consist of varied questions that assess the module learning outcomes and permit all students to demonstrate the full range of their abilities and achievements, thereby allowing accurate and effective discrimination between them that may ultimately be in the form of a degree classification (Quality Assurance Agency 2015). Visualisations 1 and 2 can be used by academics and learning developers to improve examination questions and assessment practices.

Two external examiners commented that the visualisations helped them to carry out the part of their job that concerns assessment standards across modules:

EE1: I have experienced first-hand the usefulness of the visualisations. I found that the presentation of module results provided by the boxplots of coursework, examination and overall marks facilitated comparisons between modules. In addition, graphs that allow visualisation of examination question results can aid and inform future examination setting. I believe that exposure to such visualisations can enhance external examiners’ skills in HE assessment literacy and judgement.

EE2: These visualisation techniques provide powerful, yet simple, tools to facilitate the interpretation and discrimination of students’ examination performances. They can be employed to facilitate the enhancement of modules and programmes, and therefore, can play an important role during the university decision-making process regarding academic standards. External examiners can directly benefit from them by getting an immediate impression of detailed assessment data across modules.

These comments confirm that the visualisations make a useful contribution to the field of learning and academic analytics.

Table 10 summarises the questionnaire responses of the five different stakeholder groups. Specific and general questions related to the four visualisations are listed in the rows, while stakeholder groups are listed in the columns. Positive responses are denoted by ‘+’, negative responses by ‘–’ and mixed responses by ‘+/–’. Mention is also made of specific, positive stakeholder comments. Table 10 shows that the visualisations are perceived differently by the five stakeholder groups. However, learning developers and academics showed similar enthusiasm about the introduction of the assessment and feedback visualisations. These two stakeholder groups have, of course, similar HE backgrounds, although learning developers have particular expertise and experience in education.

Table 10. Summary of the responses to the questionnaire by the different stakeholder groups.
Question            External examiners      Learning developers   Industrialists            Academics                 Students
Visualisation 1     + (positive comment)    +                     +                         +                         +
Visualisation 2     Positive comment        +/–                   +/– (positive comment)    +/– (positive comment)    +/–
Visualisation 3     +                       +                     +/–                       +                         +/–
Visualisation 4     +                       +                     +/–                       +                         Not asked
General questions   +                       +                     +/–                       +                         +
Positive responses are denoted by ‘+’, negative responses by ‘–’ and mixed responses by ‘+/–’. Mention is also made of specific, positive stakeholder comments.

Table 10 indicates that there is considerable positivity amongst the five stakeholder groups towards the four visualisations, although some differences in perception are clearly discernible. Overall, there is evidence that all visualisations make a contribution to learning analytics, that Visualisations 1, 2 and 4 make a contribution to assessment analytics and that Visualisations 1 and 3 make a contribution to academic analytics.

Conclusions, limitations and further research

An enormous amount of student performance and feedback data exists in HE institutions, and these data have the potential to monitor, inform and improve assessment processes and the overall student experience. A questionnaire was used to evaluate four visualisations designed to provide simple techniques for engaging with and assimilating such data. The evaluation offered here establishes the benefits that the visualisations provide to five groups of stakeholders. An increase in the use of assessment and feedback visualisations was strongly favoured across all groups. The visualisations, which could be projected onto a screen at a formal assessment panel, can assist external examiners to compare modules, academics to set future assessments or students to self-diagnose their own learning and make important study decisions. It may be concluded that the visualisations make a contribution to assessment, learning and academic analytics, in response to the request for a ‘more scientific’ and ‘evidence-based’ education (Davies 1999; Slavin 2002) by providing considered and appraised tools for increasing the use of student assessment and feedback data.

Although the questionnaire contained a space for additional comments at the end, its questions were closed in nature. Some participants did make extensive comments, especially external examiners, and key points have been reported here. Detailed feedback about an earlier version of the visualisations was also obtained from other stakeholders and acted upon. However, considerable value could be gained from focus group discussions with stakeholders both within and beyond the host institution. In addition, the scope of the questionnaire could be extended to investigate specific visualisation issues. Due to the increased time required from stakeholders, it is anticipated that additional incentives would have to be made available to encourage participation if focus groups or longer questionnaires were planned.

The ethics approval statement that underpins this study is given at the end of this section. It should be noted there may be confidential or data protection issues involved in sharing or releasing assessment or module feedback questionnaire responses, as in many HE institutions such data are only typically seen by the module staff and the relevant School and Faculty quality leads. Policies about who can access such data vary between institutions and should always be checked if similar investigations are planned.

As any visualisation is only as good as the original data set from which it is generated, it would be beneficial to appraise the visualisations using other data sets. The production and comparison of visualisations of similar data from other institutions would be an interesting area of further research, echoing Romero and Ventura’s (2013) call, in the context of educational data mining, for more studies and for the sharing of data and models.

Receiving information about their relative performance can motivate and encourage most students (Long and Siemens 2011), but such information may also demoralise students who are experiencing difficulties with the material. It would therefore be of interest to study carefully the impact that wider sharing of assessment data may have on student confidence and self-esteem. Properly evaluated strategies that enable students to use such assessment data to improve performance need to be developed.

Visualisation 1 could be extended to explore attainment gap differences between groups of HE students, such as traditional or non-traditional (e.g. lower socio-economic) learners, by using a different boxplot for each group. If such a tool could be used to identify and improve the performance of under-performing groups, it could be a potentially transformative exercise for wider student support and progression activities.
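As a sketch of how this extension could look, assuming the marks data gained an extra (hypothetical) column called Background recording the student group of interest, side-by-side boxplots could be produced as follows.

```r
# An illustrative sketch of extending Visualisation 1 to compare student groups.
# The Background column is a hypothetical demographic indicator added to the
# assumed marks data of the earlier Visualisation 1 sketch.
library(ggplot2)

marks <- read.csv("marks_with_background.csv")  # hypothetical file name

ggplot(marks, aes(x = Component, y = Mark, fill = Background)) +
  geom_boxplot() +  # one boxplot per student group within each component
  geom_hline(yintercept = c(40, 50, 60, 70), linetype = "dashed") +
  facet_wrap(~ Module) +
  labs(x = NULL, y = "Mark (%)")
```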

Ethics Approval Statement

The data associated with this research may be made available in anonymous/randomised form upon request, provided that the guidelines agreed by both the authors’ and the requester’s institutional ethics panels are followed.

The study was conducted with approval from the Faculty Research Ethics panel at the host institution. The primary mechanism for protecting participants in the study was the removal of any direct identifiers from their data. In addition, results are presented for groups of participants rather than for individuals. Combined, these measures protect participants from having their identities revealed. Neither the names of the modules (which are referred to simply as Modules 1 to 5) nor the academic years from which the data are taken are reported.

Conflicts of Interest

The authors declare there are no conflicts of interest.

Author biographies

Luciana Dalla Valle (MSc, Pavia, PhD, Milan-Bicocca) is a Lecturer in Statistics in the School of Computing, Electronics and Mathematics (SoCEM), University of Plymouth (UoP), UK. Her main research interests include data visualisation, higher education assessment evaluation and enhancement, multivariate modelling and social media information extraction. She has recently developed and delivered modules on a range of Data Science topics. Orcid ID: orcid.org/0000-0001-7506-5712

Julian Stander (MA, Oxford, PhD, Bath) is Associate Professor in Statistics in SoCEM. He is interested in the development of software tools for education and of methodology for applied and computational statistics. Email: J.Stander@plymouth.ac.uk. Telephone: +44 1752 586850. Orcid ID: orcid.org/0000-0002-1429-9862

Karen Gresty (BSc, Liverpool, PhD, Exeter) is Associate Dean Teaching and Learning, Faculty of Science and Engineering, UoP. She has a quality assurance and enhancement brief, alongside a pedagogic interest in research-informed teaching. Email: K.Gresty@plymouth.ac.uk. Telephone: +44 1752 584628. Orcid ID: orcid.org/0000-0003-4429-7873

John Eales (MSc, Kent, PhD, Bath) is Associate Head for Teaching and Learning in SoCEM. He has research interests in statistics education. E-mail: J.Eales@plymouth.ac.uk. Telephone: +44 1752 586882. Orcid ID: orcid.org/0000-0002-2122-8587

Yinghui Wei (MSc, PhD, Manchester) is a Lecturer in Statistics in SoCEM. Her research interests include statistical methods for evidence synthesis. E-mail: yinghui.wei@plymouth.ac.uk. Telephone: +44 1752 586331. Orcid ID: orcid.org/0000-0002-7873-0009

Acknowledgements

Luca Bolognesi provided us with considerable help with the design, production and management of the questionnaires. We are grateful to Philip Archer-Lock, Matthew Craven and Neville Davies for very useful discussions.

References

Astin, A. W. (2012) Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education, Rowman & Littlefield Publishers, Lanham, USA.

Badge, L. J., Saunders, F. W. N. & Cann, J. A. (2012) ‘Beyond marks: new tools to visualise student engagement via social networks’, Research in Learning Technology, vol. 20, no. 1, pp. 16283.

Bloxham, S., et al., (2013) External Examiners’ Understanding and Use of Academic Standards, [online] Available at: https://www.heacademy.ac.uk/system/files/downloads/external-examiners-report.pdf

Boud, D. & Dochy, F. (2010) Assessment 2020. Seven Propositions for Assessment Reform in Higher Education, Australian Learning and Teaching Council, Sydney.

Brožová, H. & Rydval, J. (2014) ‘Analysis of exam results of the subject “Applied Mathematics for IT”’, Journal on Efficiency and Responsibility in Education and Science, vol. 7, no. 3–4, pp. 59–65.

Bryer, J. & Speerschneider, K. (2016) likert: Analysis and Visualization Likert Items. R package version 1.3.5, [online] Available at: https://CRAN.R-project.org/package=likert

Campbell, J. & Oblinger, D. (2007) ‘Academic analytics’, EDUCAUSE Review, vol. 42, no. 4, pp. 40–57.

Daniel, B. (2015) ‘Big data and analytics in higher education: opportunities and challenges’, British Journal of Educational Technology, vol. 46, no. 5, pp. 904–920.

Davies, P. (1999) ‘What is evidence-based education?’, British Journal of Educational Studies, vol. 47, no. 2, pp. 108–121.

Ellis, C. (2013) ‘Broadening the scope and increasing the usefulness of learning analytics: the case for assessment analytics’, British Journal of Educational Technology, vol. 44, no. 4, pp. 662–664.

Essa, A. & Ayad, H. (2012) ‘Improving student success using predictive models and data visualisation’, Research in Learning Technology, vol. 20, sup1, pp. 19191.

Ferguson, R. & Shum, S. B. (2012) ‘Social learning analytics: five approaches’, in Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, eds S. B. Shum, D. Gasevic & R. Ferguson, ACM, Vancouver, BC, New York, pp. 23–33.

Gibbs, G. & Simpson, C. (2004) ‘Does your assessment support your students’ learning?’, Journal of Teaching and Learning in Higher Education, vol. 1, no. 1, pp. 1–30.

HEFCE (2015) A Review of External Examining Arrangements across the UK, [online] Available at: http://www.hefce.ac.uk/pubs/rereports/year/2015/externalexam/

Higher Education Academy (2012) A Handbook for External Examining. ISBN 978-1-907207-40-2, [online] Available at: https://www.heacademy.ac.uk/system/files/downloads/HE_Academy_External_Examiners_Handbook_2012.pdf

Imrie, B. W., et al., (2014) Student Assessment in Higher Education: A Handbook for Assessing Performance, Routledge, London.

Long, P. & Siemens, G. (2011) ‘Penetrating the fog: analytics in learning and education’, Educause Review, vol. 46, no. 5, pp. 30–32.

Melero, J., et al., (2015) ‘How was the activity? A visualisation support for a case of location-based learning design’, British Journal of Educational Technology, vol. 46, no. 2, pp. 317–329.

Quality Assurance Agency (2015) The UK Quality Code for Higher Education, Overview and the Expectations, [online] Available at: http://www.qaa.ac.uk/docs/qaa/quality-code/quality-code-overview-2015.pdf?sfvrsn=d309f781_6

R Core Team (2018) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, [online] Available at: https://www.R-project.org

Romero, C. & Ventura, S. (2013) ‘Data mining in education’, WIREs Data Mining and Knowledge Discovery, vol. 3, pp. 12–27.

Romero, C., et al., (2013) ‘Association rule mining using genetic programming to provide feedback to instructors from multiple-choice quiz data’, Expert Systems, vol. 30, pp. 162–72.

Sclater, N., Peasgood, A. & Mullan, J. (2016) Learning Analytics in Higher Education. A Review of UK and International Practice, Full Report, JISC. Published under licence CC BY, 4.22.04, 2016. [online] Available at: https://www.jisc.ac.uk/sites/default/files/learning-analytics-in-he-v2_0.pdf

Slavin, R. E. (2002) ‘Evidence-based education policies: transforming educational practice and research’, Educational Researcher, vol. 31, pp. 15–21.

Stander, J. & Dalla Valle, L. (2017) ‘On enthusing students about Big Data and social media visualisation and analysis using R, RStudio and RMarkdown’, Journal of Statistics Education, vol. 25, no. 2, pp. 60–67.

Stander, J., Dalla Valle, L. & Cortina Borja, M. (2018) ‘A Bayesian survival analysis of a historical dataset: how long do popes live?’, The American Statistician.

Stander, J. & Eales, J. (2011) ‘Using R for teaching financial mathematics and statistics’, MSOR Connections, vol. 11, pp. 7–11, Spring Term.

Wickham, H. (2016) ggplot2: Elegant Graphics for Data Analysis, 2nd edn, Springer-Verlag, New York.

Williams, P. (2017) ‘Assessing collaborative learning: big data, analytics and university futures’, Assessment & Evaluation in Higher Education, vol. 42, pp. 978–989.

1st International Conference on Learning Analytics and Knowledge, Banff, Alberta, February 27–March 1, 2011. Available at: https://tekri.athabascau.ca/analytics/