ORIGINAL RESEARCH ARTICLE

Exploring learning analytics practices and their benefits through the lens of three case studies in UK higher education

Neil Dixona*, Rob Howeb and Uwe Matthias Richtera

aAnglia Learning and Teaching, Anglia Ruskin University, Cambridge, UK; bLearning Technology, Library and Learning Services, University of Northampton, Northampton, UK

Received: 4 July 2023; Revised: 3 May 2024; Accepted: 21 May 2024; Published: 10 February 2025

Learning analytics (LA) provides insight into student performance and progress, allowing for targeted interventions and support to improve the student learning experience. Uses of LA are diverse, including measuring student engagement, retention, progression, student well-being and curriculum development. This article provides perspectives on the uses of LA in the UK through the analysis of an expert-led panel discussion held in June 2022. Two institutional case studies and a general overview from an LA service are presented, outlining examples of LA from both an institutional and national viewpoint. The article then analyses the panel discussion themes in relation to the literature, covering both data quality procedures and practices for learning, teaching and assessment. Outcomes and benefits from the case studies are highlighted, which can serve as examples of good practice for other Higher Education institutions.

Keywords: learning analytics; student data view; data infrastructure; data accuracy

*Corresponding author. Email: neil.dixon@aru.ac.uk

Research in Learning Technology 2025. © 2025 N. Dixon et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2025, 33: 3127 - http://dx.doi.org/10.25304/rlt.v33.3127

Introduction

Data and analytics have become integral to Higher Education institutions, facilitating informed decision-making regarding learning and teaching activities. Learning analytics (LA) enhances understanding of student learning practices, enabling targeted adjustments to the curriculum and course content (Tippens Reinitz et al., 2022), and can highlight issues around student dropout and retention (Aldowah et al., 2019; Hernández-de-Menéndez et al., 2022), although identifying these issues is only the first step towards addressing them.

As online systems capture new forms of engagement data, LA practices must evolve to ensure ethical and transparent data use. This article aims to further understand the range of LA practices that are being used in UK Higher Education (HE). To investigate and compare perspectives, researchers held an online event with three HE experts, including short presentations and a facilitated discussion. The results are presented as case studies of experts’ views, followed by an analysis of the themes that emerged from the facilitated expert discussion.

Literature review

LA has evolved as a field of interest since 2008. One of the earliest accepted definitions of LA was proposed at the 2011 Society for Learning Analytics Research (SOLAR) conference: LA is ‘the measurement, collection, analysis and reporting of data about learners and their contexts, for the purposes of understanding and optimising learning and the environments in which it occurs’ (Long & Siemens, 2011, p. 34).

Banihashem et al. (2018) reviewed LA studies between 2014 and 2017 and found that many of them focused on the benefits of LA in education. These benefits include a deeper understanding of student behaviour and interaction with teaching materials, which can help institutions develop interventions to improve student engagement and retention. Other literature identified potential problems and challenges for LA, such as ethics and privacy, storage, data quality and alignment with educational foundations, such as learning theories, pedagogical and learning design considerations (Banihashem et al., 2018; Francis et al., 2020). In this article, we focus on two of these aspects: data quality and alignment with learning, teaching and assessment.

Challenges for LA systems

Data quality

Effective utilisation of data is not always possible as data may be held separately in different applications or platforms, creating data silos (Ifenthaler & Yau, 2020). A strategic approach to data use is needed (e.g. prioritising the data that will be most useful for the purpose required). This will optimise the quality of interventions based on data-informed decisions (Macfadyen & Dawson, 2012).

Artificial Intelligence overlays on ‘big data’ (i.e. where data have been aggregated into a data warehouse or data storage facility) offer analysts the ability to identify patterns within the information. Nevertheless, the risks of cumulative data inaccuracies increase if the algorithms have not been carefully developed. As Tippens Reinitz et al. (2022) note ‘all people have implicit biases and different ways of interpreting the world, [and] these biases and differences are baked into analytics processes’ (p. 23).

Alignment with learning, teaching and assessment

LA should support learning, teaching and assessment processes rather than being reduced to measuring performance and engagement. LA tools, however, have not always been developed with educational aspects in mind or with a strong link to learning design (Banihashem et al., 2018). Studies such as Hernández-de-Menéndez et al. (2022) make this link by measuring student usage of online learning systems, where higher usage may correlate with, and help predict, student grades. Ifenthaler and Yau (2020) also found that visualisations through dashboards were beneficial in promoting learning but should include information about learning tasks and progress towards specific goals.

LA dashboards have been developed to summarise these visualisations and can be staff- or student-facing. Both are intended to show student usage of online systems (Macfadyen & Dawson, 2012), incorporating data such as time spent in the virtual learning environment (VLE), log-in frequency and regularity, attendance and library use. For staff-facing dashboards, the benefit is that staff such as Personal Tutors can use these data as the basis for a discussion with the student, providing transparency around any decisions based on LA data. Student-facing LA dashboards can motivate learners by incentivising them to change their engagement goals (Kim et al., 2016). However, there are ethical challenges in configuring these dashboards to show useful and meaningful data, as the information may be perceived differently depending on the student’s achievement level (Kim et al., 2016). For example, the measurement of interaction data from the VLE does not necessarily equate to student engagement. Therefore, collaboration with students is essential to ensure that dashboards are configurable so that students see the data that best serve their specific learning goals (Macfadyen & Dawson, 2012).
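To make concrete the kind of data such dashboards typically aggregate, the following minimal Python sketch defines an illustrative per-student record combining the signals mentioned above (VLE time, log-in frequency, attendance and library use). The field names and figures are hypothetical and are not drawn from any particular institutional system.

```python
from dataclasses import dataclass

@dataclass
class EngagementRecord:
    """Illustrative engagement signals a dashboard might aggregate per student."""
    student_id: str
    vle_minutes_per_week: float   # time online in the virtual learning environment
    logins_per_week: int          # log-in frequency
    attendance_rate: float        # proportion of timetabled sessions attended (0-1)
    library_visits_per_week: int  # library use data

    def summary(self) -> str:
        """A staff-facing one-line summary suitable for a tutor conversation."""
        return (f"{self.student_id}: {self.vle_minutes_per_week:.0f} min/week VLE, "
                f"{self.logins_per_week} logins, "
                f"{self.attendance_rate:.0%} attendance, "
                f"{self.library_visits_per_week} library visits")

# Example usage with hypothetical figures
record = EngagementRecord("S001", vle_minutes_per_week=145, logins_per_week=6,
                          attendance_rate=0.82, library_visits_per_week=2)
print(record.summary())
```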

Furthermore, the reasons for study success may vary significantly between learners (Tinto, 2017). The mechanism for making interventions on the basis of LA needs to be embedded and supported within the institution’s learning and teaching processes. For instance, some students may need discussions with a Personal Tutor in response to LA reports to structure a way forward in their progression and/or performance. Other students may benefit from tasks such as assignment deadline reminders to help them manage their work and study balance. Those using LA systems also need to consider whether they are using the tools to identify potential underperformance and risks to student retention, or alternatively for positive reinforcement, such as encouraging students who are already performing well.

Research methods

This paper investigates how specific institutions are meeting both data quality challenges and learning, teaching and assessment challenges through their LA practices. This article uses a case study methodology to inform practice. ‘A case study can be defined as an intensive study about a person, a group of people or a unit, which is aimed to generalise over several units’ (Gustafsson, 2017, quoted in Heale & Twycross, 2018, p. 7). Case studies were based on two UK Higher Education institutions (University of Bedfordshire and University of Hertfordshire) that have implemented LA and a general overview from a UK non-profit LA service agency (Jisc). The research questions are:

RQ1: How can the findings of the case studies be used to inform practice around LA data quality processes?

RQ2: How can the findings of the case studies be used to inform practice around student engagement (such as the use of dashboards and personal tutoring for student engagement)?

RQ3: How can the findings of the case studies be used to inform practice around institutional reporting (such as retention)?

The research method involved data collected from a recorded panel discussion on LA held on 28 June 2022, organised by the Association for Learning Technology East England (ALT, 2023). Ethics approval was obtained from the School of Education and Social Care at Anglia Ruskin University to undertake the research (application number ETH2122-0378). Panel members were provided with an outline of the themes to be discussed and asked for consent to record the discussion and use the data for research purposes and publication prior to the panel.

The panel discussion lasted 90 min and consisted of three 5-min presentations offering different perspectives on LA, followed by a facilitated discussion led by one of this paper’s co-authors (a member of ALT East England’s organising committee). The panel discussion was recorded online via Microsoft Teams, and the audio was transcribed using Microsoft Stream and edited for accuracy by a member of the research team. The transcript was then subject to a thematic analysis, which is a qualitative research method suited ‘for identifying, analysing and reporting patterns (themes) within data’ (Braun & Clarke, 2006, p. 79).

The authors first used the facilitated discussion questions as an initial thematic structure, and relevant passages in the transcript were coded under each theme. Other transcript passages that were deemed relevant but did not fit under one of these themes were coded separately. This resulted in data infrastructure and processes, predictive analytics, alignment with learning and teaching, engaging staff training and support, and data accuracy as initial themes. This process diverges from Braun and Clarke’s (2006, p. 94) recommendation to avoid ‘using of the data collection questions (such as from an interview schedule) as the themes that are reported’. However, after further analysis, data accuracy and data infrastructure and processes were grouped into data quality processes (RQ1). Alignment with learning and teaching, and engaging staff, were subsequently divided into student engagement and institutional reporting measures (RQ2 and RQ3). This demonstrates that repeated thematic analysis is likely to change the themes and subthemes depending on how they relate to each other to form coherent patterns. Furthermore, whilst the questions provided an initial structure, the authors had to code the responses to these questions to identify coherent and consistent patterns by grouping related codes. These patterns subsequently became subthemes. The main challenge to coherence and consistency (Braun & Clarke, 2006) was to delineate patterns unique to individual case studies from those themes and subthemes that were common across the case studies. There were also codes that went beyond the panel discussion and were predominantly introduced by audience comments and questions. Part of the analysis consisted of deciding how relevant codes and emerging themes related to the research questions and whether there was enough data to constitute a theme or subtheme. For example, predictive analytics and engaging staff had insufficient data richness, so they were not included in this paper.

Once the coding was done and the themes and subthemes were identified, selected themes were allocated to each author for analysis and interpretation, supporting the findings with the literature and including transcript quotes to illustrate the points made; these are reported in the Results and discussion section. Quotes are attributed to the presenters and facilitator according to their respective organisations and are cited in the article as Jisc (JiscRep), University of Hertfordshire (Herts), University of Bedfordshire (Beds) and the ALT East England facilitator (Author). Images in this article are either reproduced with the authors’ permission (Jisc Learning Analytics, personal communication 2022) or covered by a Creative Commons licence (Sclater et al., 2016, p. 19).

Results and discussion

This section is structured according to the research questions. For the analysis of RQ1, a general overview is provided of the overarching themes found for data quality processes in the three case studies, before examining the outcomes for each of the respective institutional case studies. For the analysis of RQ2 and RQ3, a general overview of learning, teaching and assessment is given in the context of the three case studies. Following this, the benefits and outcomes of the institutional case studies are explored separately for RQ2 and RQ3.

General overview of data quality processes (RQ1)

According to JiscRep, institutions adopt LA for different reasons. The three main drivers are using data to inform retention, learner engagement and progression. JiscRep stated that increasingly universities are also using LA to inform well-being and curriculum revision.

For these data to be available to inform different audiences and processes, institutions need a data architecture to support LA. Sclater et al. (2016) described and illustrated the architecture as comprising three levels: data input, data warehouse, and data processing and output in the form of staff dashboards, student apps, and alerts and interventions (see Figure 1):

Figure 1. Jisc’s learning analytics architecture (Sclater et al., 2016, p. 19).

This shows how data from sources such as the VLE, the SIS [Student Information System], library systems and students’ own ‘self-declared’ data feed into the learning analytics warehouse. At the heart of the architecture is the learning analytics processor where predictive analytics are carried out, and lead to action coordinated by the alert and intervention system. Visualisations of the analytics for staff are available in a series of dashboards, and a student app allows learners to view their own data and compare it with others. (Sclater et al., 2016, p. 18, emphasis in original)

Similarly, JiscRep described an institutional data architecture comprising three layers. The first is the data layer, consisting of structured and unstructured data from different systems, including student records and education systems such as the library, attendance, media and the learning management system or VLE. The other two layers are data storage and analysis, and data presentation and action (see Figure 2).

Figure 2. Overview of the processes from data collection to presentation (Jisc Learning Analytics, personal communication, 2022).

Depending on the available data sources and processing, data can provide hindsight as descriptive (what happened) and diagnostic (why did it happen) analytics, or foresight as predictive (what will happen) and prescriptive (how to make it happen) analytics (see Figure 3).

Figure 3. Types of analytics in education (Jisc Learning Analytics, personal communication, 2022) based on Gartner’s Analytics Ascendancy Model (Maoz, 2013; McNellis, 2019) – recoloured.
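As a purely illustrative sketch (not Jisc’s implementation), the following Python example applies the four types of analytics to a hypothetical series of weekly VLE activity counts: a descriptive average, a simple diagnostic check for decline, a naive predictive extrapolation and a prescriptive suggestion based on the forecast. All data, thresholds and rules are assumptions for illustration.

```python
from statistics import mean

# Illustrative weekly VLE activity counts for one student (hypothetical data).
weekly_activity = [42, 38, 35, 30, 24, 18]

# Descriptive: what happened?
average = mean(weekly_activity)

# Diagnostic: why did it happen? Here, simply check for a sustained decline.
declining = all(b <= a for a, b in zip(weekly_activity, weekly_activity[1:]))

# Predictive: what will happen? A naive linear extrapolation of the recent trend.
recent_change = weekly_activity[-1] - weekly_activity[-2]
predicted_next_week = max(0, weekly_activity[-1] + recent_change)

# Prescriptive: how to make it happen? Suggest an action if the forecast is low.
action = "refer to Personal Tutor" if predicted_next_week < 20 else "no action needed"

print(f"average={average:.1f}, declining={declining}, "
      f"forecast={predicted_next_week}, action={action}")
```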

Central to data quality is the system architecture. Figure 1 illustrates how the Jisc LA systems operate. Structured data are collected from institutional student record systems, alongside unstructured data brought in via Application Programming Interfaces (APIs), plugins or direct database connections from educational systems. These systems include the VLE, library systems, attendance monitoring systems, lecture capture and other applications. Data are stored and analysed in Jisc’s ‘learning data hub’, and after aggregation, the data are visible on specific dashboards for both staff and student users.
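The following minimal Python sketch illustrates, in highly simplified form, the flow from separate source systems into an aggregated store and onwards to a staff-facing view. The data, function names and aggregation rules are hypothetical and do not represent the Jisc learning data hub or its APIs.

```python
from collections import defaultdict

# Hypothetical extracts from separate source systems, keyed by student ID.
vle_events = [("S001", "page_view"), ("S001", "quiz_attempt"), ("S002", "page_view")]
library_loans = [("S001", "loan"), ("S002", "loan"), ("S002", "loan")]
attendance = [("S001", True), ("S002", False)]

def aggregate(vle, loans, attendance_records):
    """Combine per-system extracts into one record per student (the 'warehouse' step)."""
    hub = defaultdict(lambda: {"vle_events": 0, "loans": 0, "sessions_attended": 0})
    for student, _event in vle:
        hub[student]["vle_events"] += 1
    for student, _loan in loans:
        hub[student]["loans"] += 1
    for student, present in attendance_records:
        hub[student]["sessions_attended"] += int(present)
    return dict(hub)

def dashboard_view(hub):
    """The 'presentation and action' step: render one line per student for staff."""
    for student, counts in sorted(hub.items()):
        print(f"{student}: {counts['vle_events']} VLE events, "
              f"{counts['loans']} library loans, "
              f"{counts['sessions_attended']} sessions attended")

dashboard_view(aggregate(vle_events, library_loans, attendance))
```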

Outcome of the case studies to inform practice around data quality processes

Data accuracy is critical to the success of all LA systems and the decision-making processes that rely on it. Author noted that achieving total accuracy is difficult due to both the potential for human error and system issues. This causes further challenges when actions are based on the data (Gupta et al., 2021).

Beds noted that systems which lack data entry validation, or which contain information that is not regularly checked, may produce inconsistencies between different information repositories. These problems are common to most educational institutions and may also include issues with the interoperability of data formats and overall data quality (Arroway et al., 2016). Arroway et al. (2016) also note that two-thirds of respondents believed that data used for analytics are not always accurate. On some occasions, particular tools simply do not aggregate analytics data correctly.

We did work with [our VLE] because the mobile app was giving inaccurate data to us, and 70% of our students use the [VLE] app, so we have very high usage of it. So, we needed to ensure that that data was accurate. (Herts)

As a consequence, robust checks were required to investigate how these data inaccuracies arose and to prevent them in future. Herts mentioned how they worked with Instructure, the provider of their VLE (Canvas), to ensure the data were accurately captured, though they did not outline the specific actions. Problems may also occur when incorrect data are entered manually into the system:

We do have issues with it where it’s academic staff inputting assignments incorrectly… especially because students take notice of [the grades]… So, when they’re seeing the total module grade as being inaccurate, it really then does [start], a conversation or two with the students and with the staff. (Herts)

Institutions may address the accuracy of manually inputted data through support and training:

We’re running lots of CPD for our staff, so I’m running assessment surgeries and we’re doing training for new staff where we can sit with the staff members and make sure that their assignments are set up correctly, talk them through the process. (Herts)

Additionally, addressing data accuracy required communication with staff and students, which was done through a mix of methods:

… we did a video that went out onto the program sites for the students. We did lots of communications […] so we can send them announcements and things like that out to them. (Herts)

In summary, the case studies suggest that the data quality processes were largely a matter of culture change, incorporating communications, promotion and training to raise awareness of the LA system among academic staff. Additionally, institutions that have put processes in place to assure the quality of their data will benefit from the LA systems that rely on them. Herts described how the introduction of the LA system was gradual, beginning with a pilot of 15 programmes. Arroway et al. (2016) describe this approach and the benefits of getting people to report issues or using other staff to cross-check data. Cleaner data will, in turn, improve data-informed decision-making and deliver overall benefits for staff and students.
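One such cross-check can be illustrated with a minimal Python sketch that compares records of the same students held in two hypothetical repositories and flags inconsistencies for human follow-up. The systems, fields and rules shown are assumptions for illustration only, not any institution’s actual procedure.

```python
# Hypothetical records of the same students held in two separate systems.
student_records = {"S001": {"status": "enrolled"}, "S002": {"status": "withdrawn"},
                   "S003": {"status": "enrolled"}}
vle_accounts = {"S001": {"active": True}, "S002": {"active": True}}

def cross_check(records, accounts):
    """Flag students whose status disagrees between the two repositories."""
    issues = []
    for student, record in records.items():
        account = accounts.get(student)
        if account is None:
            issues.append(f"{student}: in student records but has no VLE account")
        elif record["status"] == "withdrawn" and account["active"]:
            issues.append(f"{student}: withdrawn in student records but VLE account still active")
    return issues

for issue in cross_check(student_records, vle_accounts):
    print(issue)
```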

General overview of LA practice around learning, teaching and assessment (RQ2 and RQ3)

Jisc LA systems primarily focus on measuring student engagement, retention, progression, student well-being and curriculum development (Jisc, 2023). According to JiscRep, a major driver for measuring retention is the reporting required by the Office for Students (OfS). The OfS regulates UK higher education, with responsibilities such as distributing funding, granting degree-awarding powers, and collecting and analysing different types of official statistics related to performance monitoring, including student retention measures (Office for Students, 2024a). Furthermore, institutions are motivated to measure student engagement because higher engagement is a factor in student retention and progression. As stated by JiscRep, Higher Education institutions want insights into students’ progression and completion of their courses as these are performance indicators reported to the UK Higher Education Statistics Agency (HESA). HESA, now part of Jisc, collects and analyses data on universities and colleges in the UK to inform policy and decision-making in higher education (HESA, 2024a). HESA statistics are also a statutory requirement (HESA, 2024b), though broader in scope than OfS performance measures, providing a detailed picture of the Higher Education landscape.

Dashboards supporting Personal Tutors and module leaders are the main means of measuring engagement. LA supports Personal Tutors by providing evidence of how students are performing and engaging with their learning, thereby affording the basis of a discussion with students. Module leaders use LA to understand how students used the course material and engaged in activities, and to identify potential areas where their teaching provision could improve. Furthermore, a student dashboard can encourage students ‘to take control and champion their own learning by setting their own goals and benchmarking themselves against their peers’ (Jisc, 2023).

Herts has a bespoke system, built in-house, which is used for personal tutoring and student engagement. A ‘Programme level’ display is included, which shows student enrolment and assignment submission information. For example, students with missing summative assignments are highlighted.

Further information shown by the ‘Programme level’ display includes the average summative and formative score from all the modules upon which the student is enrolled, reading list views for all modules and the last date the student swiped their library card on campus. In addition, there is an ‘engagement status’ column, which shows whether student engagement has fallen below a threshold and whether an email has been sent to the student.
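As an illustration of how such a display row, engagement threshold and email flag might be represented, the following Python sketch uses hypothetical field names and a hypothetical threshold; it is not a description of Herts’ actual system.

```python
from dataclasses import dataclass

ENGAGEMENT_THRESHOLD = 1  # hypothetical cut-off on a 0-3 engagement score

@dataclass
class ProgrammeRow:
    """One row of an illustrative programme-level display."""
    student_id: str
    avg_summative: float      # average summative score across enrolled modules
    avg_formative: float      # average formative score across enrolled modules
    reading_list_views: int
    last_library_swipe: str   # date of last library card swipe
    engagement_score: int     # 0 (low) to 3 (high)
    email_sent: bool = False

    @property
    def engagement_status(self) -> str:
        return "below threshold" if self.engagement_score < ENGAGEMENT_THRESHOLD else "ok"

row = ProgrammeRow("S001", avg_summative=58.0, avg_formative=61.5,
                   reading_list_views=14, last_library_swipe="2022-06-20",
                   engagement_score=0)
if row.engagement_status == "below threshold" and not row.email_sent:
    row.email_sent = True  # in practice this would trigger a supportive email
print(row.student_id, row.engagement_status, "email sent:", row.email_sent)
```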

Herts noted that ‘all our referrals are done by humans. We don’t have any systems [automatically] referring our students on, it’s the personal tutor…’. A Personal Tutor would know the personal circumstances of the student and have more insight into any changes in activity over a particular period. Herts said that ‘programme leaders only have access to their own programme of study and then they give access to personal tutors who need to see students on their particular programme’. Restricting access in this way preserves confidentiality and ensures that information is not revealed or disclosed inadvertently.

Herts explained the engagement status as:

We then have a module engagement score and now this is engagement with our virtual learning environment, and this is a score out of three, three being high, zero being low. And it is done by our VLE algorithm, and it looks at the different participation with page views put together and it gives them a score. (Herts)

There is also an ‘Individual Student’ view, which shows more specific information around the factors contributing to their engagement status. Herts explains that ‘all the different modules will be listed. We can then see which modules [the student is] actually missing summative assignments and their current summative score. We could then look at missing formative and average formative scores’.

We have a participation level, and this again is to do with our VLE and this is how many times they are engaging with things like quizzes and discussion boards, submitting assignments. We then have the last time that they’ve gone into these particular modules, the total activity time they’ve spent inside the module. (Herts)

One issue is that a decision needs to be made about how to calculate what constitutes the level of student engagement. Herts described one example of the process used, showing how the score differs for each context and module:

[The system] …pulls in the stuff for example like clicking on different items inside the VLE, engaging with quizzes, submitting assignments, how many page views, how many hours have they spent… then it compares it to the rest of the cohort. So, that score of three is like 3 tiers. So the top tier would be closer to three, the lowest tier down to 0, and that’s how it scores and sorts them. (Herts)

However, over time, Herts realised that missed assignment hand-ins and students not logging in at all were the indicators of the students most at risk.
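A minimal Python sketch of a cohort-relative 0–3 score of the kind described in the quote above is given below. The combination of activity counts and the quartile-style tier split are assumptions for illustration and are not the actual VLE algorithm.

```python
def engagement_scores(activity_by_student):
    """Assign each student a 0-3 score by ranking combined activity within the cohort.

    activity_by_student maps student IDs to a combined activity count
    (e.g. page views + quiz attempts + submissions). The four-tier split
    below is an illustrative assumption, not the actual VLE algorithm.
    """
    ranked = sorted(activity_by_student, key=activity_by_student.get)
    cohort_size = len(ranked)
    scores = {}
    for position, student in enumerate(ranked):
        # Position in the cohort mapped onto four tiers: 0 (lowest) to 3 (highest).
        scores[student] = min(3, (position * 4) // cohort_size)
    return scores

# Hypothetical combined activity counts for a small cohort
cohort = {"S001": 12, "S002": 87, "S003": 45, "S004": 3,
          "S005": 60, "S006": 30, "S007": 70, "S008": 20}
print(engagement_scores(cohort))
```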

Herts also described how the system offers module analytics where the engagement status of a single student can be compared against the rest of the cohort. In addition, students can see their own data through a visual dashboard called ‘My Learning’. The student dashboard is more visual, showing data such as summative and formative grades, classroom and VLE engagement in a graphical format. A summary was given:

For us, it ties in nicely with our personal tutor framework that goes hand-in-hand. We are able to identify early who are our at-risk students. Emails are triggered to our non-engaging students, which are very much supportive emails, asking the students to go to their personal tutors. (Herts)

The emails provided a way to contact students. Herts did not say whether this intervention increased student engagement subsequently, or whether there were increased meetings with Personal Tutors.

Yousuf and Conlan (2015) found that enabling peer comparisons amongst learners can support motivation and engagement. Herts described how their VLE allows students to compare themselves to other students in relation to both formative and summative assessments. This feature has always been available to students. Herts did not say whether students could opt out or decline to have their data included. However, Herts did comment that when students were questioned in focus groups about the ability to compare their marks with other students, most stated they wanted to know how they compared to their peers. Herts said that although the assignment grades gave students some indication of their study performance, the VLE data analytics offered them more detailed, comparative information. Herts noted that their bespoke LA tool emphasises competitiveness between students. The LA tool also highlights to students when they are underperforming and suggests when they might need to approach a Personal Tutor for further support. Herts stated that staff appeared more anxious than the students about this function being visible to learners.

Burleson et al. (2005) and Harvey and Keyes (2019) found that the way students utilised social comparison to better-performing peers impacted the interpretation of the comparison. For example, those students who aspired to be like their better-performing peers used the comparison to improve their self-perception. In contrast, those who felt they performed worse than their peers experienced a negative effect on their self-perception. This suggests that students should be coached on constructively using the LA comparisons available to them. In addition, although students liked being able to compare themselves to their anonymised peers, due to concerns around privacy they did not want this information to be available to those outside their course (Santos et al., 2012).

Outcomes for the case studies to inform practices around student engagement

A benefit of the system is that it gives module leaders a way to develop their assignments, activities and content purposefully, and to further improve or streamline the curriculum design. Herts mentioned that the LA dashboard served as an additional verification of what worked in a module, alongside student feedback. The outcome was that module leaders could make evidence-informed decisions about how to design their teaching content in the future. For example, a long video could be made shorter or split into multiple videos if students did not engage with the longer version.

Outcomes from the case studies to inform practices around institutional reporting measures

Beds stated that their dashboard, which was built in-house, enables comparison against institutional benchmarks such as continuation rates, which are reported to the OfS (Office for Students, 2024b). The benefit of this LA system to Beds is that it indicates the extent to which a course performs against HESA reporting measures such as continuation rates, which are indicators of course quality and viability. These benchmarks are required for the Condition B3 indicator, which is designed to measure whether students achieve outcomes recognised for employment or further study (Office for Students, n.d.), and for the Teaching Excellence Framework (TEF) indicators of student outcomes and experience (Office for Students, 2023). The TEF is the UK university assessment scheme, run by the OfS, that evaluates the quality of undergraduate teaching in universities and other higher education providers (Office for Students, 2023). ‘The TEF does this by assessing and rating universities and colleges for excellence above a set of minimum requirements for quality and standards’ (Office for Students, 2023). Beds’ dashboard is designed to show whether the University meets, exceeds or falls below the benchmark indicator threshold, and what data contribute to this indicator:

The dashboard is designed to present the institutional continuation rate against the threshold and to provide incrementally detailed breakdowns all the way down to course level. This enables us to drill down to the required levels so that we can identify areas of good practice or to target any areas that may be cause for concern and require intervention. (Beds)

The data can be viewed at three levels: at the top level, 4-year aggregate figures are shown at an institutional level, then broken down by course type/delivery, campus and OfS subjects grouped according to Level 2 of the Common Aggregation Hierarchy (CAH2). The Common Aggregation Hierarchy (CAH) is a standardised system for grouping subject codes and terms for UK universities, which allows for comparison between institutional subjects (HESA, 2024c). There are three levels (CAH1, CAH2 and CAH3) (HESA, 2024c). CAH1 is the broadest and comprises major subject areas (e.g. Engineering and Business), CAH2 provides additional groupings (e.g. Mechanical Engineering) and CAH3 gives the most specific groupings (e.g. Aerospace Engineering).

For Beds, the second level of the dashboard shows the CAH2 subjects, which are then broken down by course type and campus, and further divided into more granular OfS subjects at Level 3 of the Common Aggregation Hierarchy (CAH3). Finally, the dashboard can break the CAH3 subjects down by course type, campus and individual course code. With the three levels of data, the dashboard can show how the data contribute to each institutional benchmark: a course like Biology may meet the OfS benchmark and threshold for continuation, but further analysis may reveal that its foundation year has an underperforming level of continuation. For example, Beds described:

We can estimate our continuation rate to within two or three percent. So we know it is generally going up. So we almost predict what the OfS are going to tell us in a year’s time just to give us a heads up on what to expect. (Beds)

This level of detail allows Beds to identify areas of best practice in certain subjects or courses, or to intervene if performance falls below the OfS benchmark threshold. Beds illustrated a fictional example of an issue such as falling student numbers, and how the data could be interrogated to extrapolate trends and determine whether they reflected a continuing pattern or an outlier. It showed how different courses (such as foundation degrees) fed into these data, and how a particular course showing negative trends could be isolated for reporting purposes. According to Beds, these data could then be used to investigate good practices in the courses that exceeded the benchmarks.
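The drill-down and benchmark comparison described above can be illustrated with a minimal Python sketch that aggregates hypothetical course-level continuation figures up through CAH3 and CAH2 and compares each rate against an illustrative threshold. The CAH codes, figures and threshold value are invented for this example and are not OfS data.

```python
from collections import defaultdict

# Illustrative course-level continuation data: (CAH2 group, CAH3 subject, course,
# students continuing, students in cohort). Codes and figures are hypothetical.
courses = [
    ("CAH10 Engineering", "CAH10-01 Mechanical", "BEng Mechanical", 88, 100),
    ("CAH10 Engineering", "CAH10-03 Aerospace", "BEng Aerospace", 72, 90),
    ("CAH03 Biosciences", "CAH03-01 Biology", "BSc Biology", 85, 95),
    ("CAH03 Biosciences", "CAH03-01 Biology", "BSc Biology (Foundation Year)", 22, 30),
]

THRESHOLD = 0.80  # illustrative continuation threshold, not the actual OfS figure

def continuation_by_level(rows, level):
    """Aggregate continuation rates at one level of the hierarchy (0=CAH2, 1=CAH3, 2=course)."""
    totals = defaultdict(lambda: [0, 0])
    for row in rows:
        key = row[level]
        totals[key][0] += row[3]  # continuing students
        totals[key][1] += row[4]  # cohort size
    return {key: continuing / cohort for key, (continuing, cohort) in totals.items()}

for level in (0, 1, 2):
    for key, rate in continuation_by_level(courses, level).items():
        flag = "below threshold" if rate < THRESHOLD else "meets threshold"
        print(f"{key}: {rate:.1%} ({flag})")
```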

Conclusion

This article discussed the benefits, challenges and opportunities LA may provide for Higher Education. Data from the various electronic systems that students engage with, such as the VLE, attendance monitoring, library systems and the student information system, can draw a picture of learner engagement and performance at individual, module, programme and institutional levels, thus enabling the monitoring of student progression and performance, as well as the institutional performance of academic programmes. LA provides opportunities for positive interventions to support students’ academic and social well-being, to monitor institutional benchmarks and to highlight student engagement with modules and programmes. With appropriate actions, this can lead to change and improved outcomes.

The discussion elicited several challenges around data quality (accuracy and consistency) and around using data to inform best practices in learning, teaching and assessment (Sclater & Mullan, 2017). For data quality, some of the issues concern the institutional data infrastructure, the need for training and effective procedures for using LA so that the systems are integrated as effectively as possible. Whilst some of the data infrastructure can be outsourced (e.g. to Jisc’s learning analytics services), data use requires institutions to invest in expertise and procedures, such as different dashboards for different data, data officers and access permissions. For learning, teaching and assessment practices, whilst most institutions use data to monitor performance against data reported to HESA, efforts to use data to inform the curriculum and student support (such as personal tutoring) are ongoing (Ahern, 2018, 2020; Cormack & Reeve, 2020; Newham & Francis, 2021).

Acknowledgements

The authors would like to thank the panel members from Jisc, the University of Bedfordshire, and the University of Hertfordshire for their valuable discussion contributions and insights. We would also like to thank Jennie Dettmer for their contributions as a participatory investigator during the start of the research process.

Declaration of interest statement

The authors report there are no competing interests to declare.

References

Ahern, S. J. (2018). The potential and pitfalls of learning analytics as a tool for supporting student wellbeing. Journal of Learning and Teaching in Higher Education, 1(2), 165–172. https://doi.org/10.29311/jlthe.v1i2.2812

Ahern, S. J. (2020). Making a #Stepchange? Investigating the alignment of learning analytics and student wellbeing in United Kingdom higher education institutions. Frontiers in Education, 5, 531424. https://doi.org/10.3389/feduc.2020.531424

Aldowah, H., Al-Samarraie, H. & Fauzy, W. M. (2019). Educational data mining and learning analytics for 21st century higher education: a review and synthesis. Telematics and Informatics, 37, 13–49. https://doi.org/10.1016/j.tele.2019.01.007

ALT (2024). ALT East England. Association for Learning Technology. Retrieved from https://www.alt.ac.uk/groups/members-groups/alt-east-england

Arroway, P. et al. (2016, July 29). Learning Analytics in Higher Education. EDUCAUSE. Retrieved from https://library.educause.edu/~/media/files/library/2016/2/ers1504la

Banihashem, S. K. et al. (2018). Learning analytics: a systematic literature review. Interdisciplinary Journal of Virtual Learning in Medical Sciences, 9(2), 63024. https://doi.org/10.5812/ijvlms.63024

Braun, V. & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Burleson, K., Leach, C. W. & Harrington, D. M. (2005). Upward social comparison and self-concept: inspiration and inferiority among art students in an advanced programme. British Journal of Social Psychology, 44(1), 109–123. https://doi.org/10.1348/014466604X23509

Cormack, C. & Reeve, D. (2020, July 22). Code of Practice for Wellbeing and Mental Health Analytics. Jisc. Retrieved from https://www.jisc.ac.uk/guides/code-of-practice-for-wellbeing-and-mental-health-analytics

Francis, P. et al. (2020). Thinking critically about learning analytics, student outcomes, and equity of attainment. Assessment and Evaluation in Higher Education, 45(6), 811–821. https://doi.org/10.1080/02602938.2019.1691975

Gupta, N. et al. (2021, August 14–18). Data quality for machine learning tasks. In, Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD ‘21), 4040–4041. https://doi.org/10.1145/3447548.3470817

Heale, R. & Twycross, A. (2018). What is a case study? Evidence Based Nursing, 21(1). https://doi.org/10.1136/eb-2017-102845

Harvey, A. J. & Keyes, H. (2019). How do I compare thee? An evidence-based approach to the presentation of class comparison information to students using Dashboard. Innovations in Education and Teaching International, 57(2), 163–174. https://doi.org/10.1080/14703297.2019.1593213

Hernández-de-Menéndez, M. et al. (2022). Learning analytics: state of the art. International Journal on Interactive Design and Manufacturing, 16(3), 1209–1230. https://doi.org/10.1007/s12008-022-00930-0

HESA. (2024a). About HESA. HESA. Retrieved from https://www.hesa.ac.uk/about

HESA. (2024b). Designated Data Body for England. HESA. Retrieved from https://www.hesa.ac.uk/about/what-we-do/designated-data-body

HESA. (2024c). HESA Collections. HESA. Retrieved from https://www.hesa.ac.uk/collection/coding-manual-tools/hecoscahdata/cah

Ifenthaler, D. & Yau, J. Y. K. (2020). Utilising learning analytics to support study success in higher education: a systematic review. Educational Technology Research and Development, 68(4), 1961–1990. https://doi.org/10.1007/s11423-020-09788-z

Jisc. (2023). Learning Analytics. Jisc. Retrieved from https://www.jisc.ac.uk/learning-analytics

Kim, J., Jo, I. H. & Park, Y. (2016). Effects of learning analytics dashboard: analyzing the relations among dashboard utilization, satisfaction, and learning achievement. Asia Pacific Education Review, 17, 13–24. https://doi.org/10.1007/s12564-015-9403-8

Long, P. & Siemens, G. (2011, September 12). Penetrating the Fog: Analytics in Learning and Education. Educause. Retrieved from https://er.educause.edu/-/media/files/article-downloads/erm1151.pdf

Macfadyen, L. P. & Dawson, S. (2012). Numbers are not enough: why e-Learning analytics failed to inform an institutional strategic plan. Educational Technology & Society, 15(3), 149–163.

Maoz, M. (2013, June 26). How IT should deepen big data analysis to support customer-centricity. Gartner Research. Retrieved from https://www.gartner.com/en/documents/2531116

McNellis, J. (2019, November 5). You’re likely investing a lot in marketing analytics, but are you getting the right insights? Gartner Blog. Retrieved from https://blogs.gartner.com/jason-mcnellis/2019/11/05/youre-likely-investing-lot-marketing-analytics-getting-right-insights/

Newham, J., & Francis, P. (2021, August 17). Mental health analytics: An innovative approach to understanding students’ wellbeing. Office for Students. Retrieved from https://www.officeforstudents.org.uk/for-providers/equality-of-opportunity/effective-practice/mental-health-analytics-an-innovative-approach-to-understanding-students-wellbeing/

Office for Students (OfS). (2023, 28 September). The TEF – A Guide for Students. Retrieved from https://www.officeforstudents.org.uk/for-students/teaching-quality-and-tef/the-tef-a-guide-for-students/

Office for Students (OfS). (2024a). What We Do. OfS. Retrieved from https://www.officeforstudents.org.uk/about/what-we-do/

Office for Students (OfS). (2024b). Key Performance Measures. OfS. Retrieved from https://www.officeforstudents.org.uk/about/key-performance-measures

Office for Students (OfS). (n.d.). Condition B3: Baselines for Student Outcomes Indicators. OfS. Retrieved from https://www.officeforstudents.org.uk/media/490d884f-03aa-49cf-907d-011149309983/condition_b3_baselines.pdf

Santos, J. L. et al. (2012). Goal-oriented visualizations of activity tracking: a case study with engineering students. In, Buckingham Shum, S., Gasevic, D. & Ferguson, R. (Eds.). Proceedings of the 2nd International Conference on Learning Analytics and Knowledge. Association for Computing Machinery, 143–152.

Sclater, N. & Mullan, J. (2017, January). Jisc Briefing: Learning Analytics and Student Success – Assessing the Evidence. Jisc. Retrieved from https://repository.jisc.ac.uk/6560/1/learning-analytics_and_student_success.pdf

Sclater, N., Peasgood, A. & Mullan, J. (2016, April). Learning analytics in higher education. A review of UK and international practice. Jisc. Retrieved from https://www.jisc.ac.uk/sites/default/files/learning-analytics-in-he-v3.pdf

Tinto, V. (2017). Through the eyes of students. Journal of College Student Retention: Research, Theory & Practice, 19(3), 254–269. https://doi.org/10.1177/152102511562

Tippens Reinitz, B. et al. (2022, July 28). 2022 EDUCAUSE Horizon Report. Data and Analytics Edition. EDUCAUSE. Retrieved from https://library.educause.edu/-/media/files/library/2022/7/2022hrdataandanalytics.pdf?la=en&hash=9FA4BFE5CDA22F19AEB4F7B46F8F1AAC6206BE3F

Yousuf, B. & Conlan, O. (2015). VisEN: motivating learner engagement through explorable visual narrative. In, Conole, G., Klobucar, T., Rensing, C., Konert, J. & Lavoue, E. (Eds.), Design for teaching and learning in a networked world. Springer, 367–380.