ORIGINAL RESEARCH ARTICLE

The effects of interactive mini-lessons on students’ educational experience

Lindsay D. Richardson*

Department of Psychology, Carleton University, Ottawa, ON, Canada

(Received: 19 September 2022; Revised: 2 February 2023; Accepted: 4 March 2023; Published: 6 April 2023)

With the shift to online learning, many instructors have been forced into course delivery that involves educational lecture videos. There are a number of different elements that impact the quality of educational videos and overall student experience (e.g. instructor eye gaze, audio levels, screen sizing). More specifically, research has demonstrated that segmented videos have educational benefits over the traditional didactic ones. The present experiment aimed to examine whether interspersed interactive content could increase post-secondary students’ retention and engagement above simple segmentation. As such, young adults experienced one of four lesson types: didactic video, segmented videos, segmented videos with interactive content, and a condensed version of the interactive segmented videos. Then, they were asked to complete an engagement scale, an online learning experience questionnaire, and a surprise test. The results demonstrated a performance benefit to segmented videos for post-secondary students who prefer to learn in person as opposed to online.

Keywords: education; online learning; lectures; lessons; interactive

The appendices for this paper are available as supplementary material

*Corresponding author. Email: lindsayrichardson@cunet.carleton.ca

Research in Learning Technology 2023. © 2023 L.D. Richardson. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2023, 31: 2900 - http://dx.doi.org/10.25304/rlt.v31.2900

Introduction

Online learning is hardly a new concept, as students have had the choice to learn from a distance for many years (Beatty 2002). However, the onset of a global pandemic (i.e. March 2020) turned learning from home from a choice into a requirement. Similarly, instructors were forced to teach online courses, and many of them had little-to-no experience or guidance in doing so. Some instructors found themselves teaching through web-conferencing tools such as Zoom and Microsoft Teams. Others turned to creating asynchronous courses, wherein most of the course material was delivered via pre-recorded lecture videos. Hence, all instructors were required to learn potentially new skills: media production and multimedia presentation.

Unfortunately, much of the research on multimedia (e.g. Mayer 2009) comes from the late 1990s and early 2000s, coinciding with the surge of technology within the classroom. Moreover, much of it pertains to technology within a face-to-face (F2F) classroom and may not always be applicable to an e-learning environment. Research that pertains specifically to e-learning remains sparse. This might be due, at least in part, to the challenge of managing courses with different requirements. Bates’ (2015) book, entitled Teaching in a Digital Age, exemplifies the difficulty of disseminating best practices in online teaching. Bates discussed many general guidelines for teaching and learning online, and instructors might find this to be a great starting point. Still, many questions have not been answered concretely, likely because teaching online involves so many nuances that the topic itself is extensive. While all of these nuances are important factors related to online teaching and learning, the present research aimed to investigate one key component: creating engaging educational videos for asynchronous dissemination.

While media production practices, such as lighting techniques and sound checks, can vastly increase students’ attention, there are other factors that influence cognitive engagement and learner retention in online lecture videos. In his influential work, Mayer (2009) outlined 12 principles of multimedia use in learning that are meant to reduce students’ cognitive load (e.g. excluding extraneous material, highlighting main points, and adding relevant pictures). The present article is interested in learner-centred post-secondary education, wherein instructors are meant to facilitate the learning of young adults. This area of research, interested in adult learning, has been termed andragogy (Knowles 1984). Andragogy, or adult learning theory, posits that many factors influence the learner’s ability to effectively incorporate new information in a meaningful way: working memory, learner engagement, and motivation to listen. Knowles argued that the adoption of the term andragogy acknowledges the difference between teaching children and adults; that is, where pedagogy refers to teacher-centred learning, andragogy encompasses learner-centred designs, wherein the teacher acts as a facilitator of learning. This approach adds another layer of complexity to the issue: creating asynchronous lecture videos that encompass learner-centred course design. Creating engaging lecture videos that help learners meet the learning objectives for the course without the presence of the learning facilitator (i.e. instructor) is likely to be a difficult task.

To add to this intricacy, the literature is mixed with regard to researchers’ understanding of empirically based practices surrounding online lecture video creation. For instance, some researchers have advocated for shorter lecture videos due to learners’ relatively short attention span (e.g. Cooper and Richards 2017; Jeffries 2010; Mayer et al. 1996). Jeffries argued for a lecture model in which lectures should comprise 15- to 20-min segments. However, Bradbury (2016) argued that evidence suggesting a short attention span during lectures is lacking in the primary literature. In fact, Bradbury argued that adhering to the 15-min lecture model, with no empirical evidence to support it, implies that we, as instructors of science, ‘don’t really care about evidence’ (p. 513). Bradbury’s stance highlights the importance of conducting empirical investigations to determine whether shorter and segmented videos can lead to academic benefits over the typical 90-min didactic lecture model.

Recently, Humphries and Clark (2021) aimed to examine the effect of video length on learner satisfaction and retention. The researchers hypothesised that segmented videos would be perceived as more satisfying by learners compared with typical didactic lecturing. In addition, they posited that segmented videos would lead to performance benefits such as higher grades. Humphries and Clark measured students’ preference for didactic videos versus segmented ones over the course of an academic term and collected students’ final grades. Their results demonstrated a strong preference towards segmented recordings compared with didactic lecture videos. Moreover, learners who watched segmented video lectures outperformed their peers academically in the course. This study demonstrates an empirical benefit to shorter lecture videos on measures of student preference and performance.

While this research offers a great contribution to the literature, Humphries and Clark (2021) do not differentiate between active and passive learning in their model. That is, all participants acted as passive learners, as opposed to active ones, in both the segmented and didactic lecture videos. One method for allowing learners to be active in their learning, while watching lecture videos, is to add a component of interactivity to them; that is, allow students a chance to do something with what they are learning. In a F2F lecture, it is not uncommon for an instructor to stop lecturing and pose a question to the class. This simple change in teaching style can allow for students in the classroom to become active learners rather than passive consumers of knowledge. While the focus may still be on the content, this teaching strategy allows the instructor to shift from an instructor-centred approach to a learner-centred one. Unfortunately, research on interactive components within educational lecture videos is non-existent. However, some researchers have investigated interactivity in the context of online textbooks.

For instance, Sommers et al. (2018) investigated the efficacy of interactive textbooks used in post-secondary education. Participants were randomly assigned to one of three different textbook conditions: print book, on-screen portable document format (PDF), or interactive eBook. They were then surveyed on satisfaction and were tested on retention. The results demonstrated that students reported increased levels of satisfaction with eBooks compared with both print and PDF documents. Additionally, students retained more information from the eBooks compared with print and PDF. This suggests that interactive content may be superior to static content either in print or online. While it has not yet been applied to online lecture videos, it is plausible to expect that the addition of interactive components to an educational video might increase retention and cognitive engagement as well.

Creating a series of segmented lecture videos with interspersed interactive components might be a challenge, as there are many components that influence students’ engagement and retention. For example, research on conducting online surveys has highlighted the importance of minimising scrolling, zooming, and pinching. More specifically, Bacon et al. (2017) found that these actions were related to lower levels of participant engagement in online surveys. Since the platform for online surveys is similar to online learning systems in look and feel, this notion can be reasonably applied to online learning as well. Accordingly, it would be cognitively taxing to have several lecture components on one page, ultimately forcing students to continue scrolling to finish the lecture. Just as PowerPoint animations can be either engaging or distracting in lectures (Schmaltz and Enström 2014), it is likely that other factors contribute to the engaging or distracting nature of an interactive lesson. For instance, the number and type of interactive components between content videos might either engage students or distract from consolidating the information from the previous video. This should be taken into consideration when designing interactive lessons.

The goal of this experiment was to investigate the influence of interactivity and video length on students’ engagement and retention. As such, participants experienced one of four different types of lessons, depending on condition: a 20-min didactic lecture (Lecture); a 20-min series of segmented videos (Series); the 20-min segmented videos interspersed with interactive content (Lesson); and a 16-min compact version of the interactive lesson (Compact Lesson). They then completed a modified version of O’Brien et al.’s (2018) Engagement Scale. After completing a survey about their recent online learning experiences, participants were asked to complete a surprise retention test. It was hypothesised that participants would report higher levels of engagement for segmented and interactive lessons compared with the typical didactic lecture model. It was also expected that participants in the interactive and segmented lesson would outperform their peers on the test.

Methods

Participants

Participants were 440 undergraduate students from Carleton University enrolled in introductory level psychology courses. They were recruited via Carleton’s SONA system. As such, they were compensated with partial course credit for participation.

Training

The stimuli for this experiment were mini-lessons. Participants experienced one of four lessons, depending on condition: 20-min video (Lecture), five 4-min videos (Series), five 4-min videos with interactive content interspersed (Interactive Lesson), or five 3-min videos with interactive content interspersed (Compact Interactive Lesson). The content was the same for all lessons; the topic was The Neural Basis of Addiction.

Lecture

The Lecture video was 20 min and 18 s in length and comprised a narrated PowerPoint presentation with the occasional appearance of the instructor on screen. The average pace of the lecture was 140 words per minute. For a transcript of the lecture, please see Appendix A (supplementary material).

Series

The series was constructed by cutting the Lecture video into five videos of approximately equal length. Videos were 3:57, 4:01, 4:30, 3:37, and 4:11 min in length. Videos varied slightly in length as it was ensured that they were segmented at a natural stopping point (e.g. at the end of a sentence). Between each video, participants needed to press a ‘Next’ button to advance the lesson.

Interactive lesson

The interactive lesson was created using the same five videos from the Series. This was done to eliminate the possibility that video length or modularity could confound the results. Just like the Series, participants needed to press ‘Next’ to advance the lesson. However, for this condition, participants were given tasks (e.g. ‘[…] rank order the following items by how often you would experience that “dopamine dump” in the nucleus accumbens’) between the lecture videos. For a full list of the interactive content (and where each item appeared within the lesson), please see Appendix B (supplementary material).

Necessarily, the Interactive Lesson would take more time to complete than the other two lessons. Hence, it would be plausible to argue that time-on-task could drive any effects that were found in the Interactive Lesson condition. In an attempt to rule out that possibility, a fourth condition was created: Compact Interactive Lesson.

Compact interactive lesson

This lesson was an exact replica of the Interactive Lesson but with extraneous information cut from the videos. Thus, the total video length was 16 min rather than 20. Note that no content was eliminated and all interactive materials were retained.

Filler tasks

To provide participants with some time in between training and testing, all questionnaires, including demographic information, were administered after the lesson and before the test. Firstly, participants were asked to provide the following demographic information: sex, gender, age, ethnicity, first language, English fluency, program major, year of study, and GPA. They were also asked to list their favourite course and the reasoning behind that decision. After providing this information, participants were asked to complete two surveys: an Engagement Scale and an Online Lecture Experience Survey.

Engagement scale

O’Brien et al.’s (2018) Engagement Scale was used to assess participants’ engagement with the simulated lesson. Participants were presented with 29 statements (e.g. ‘I lost myself in this experience’; ‘I felt frustrated while watching this lesson’; ‘This lesson was attractive’) and asked to rate them on a 5-point Likert scale (i.e. Strongly Disagree to Strongly Agree). The entire 29-item scale is presented in Appendix C (supplementary material).

Online lecture experience survey

Participants were asked about their recent online lecture experiences in the Online Lecture Experience Survey. Firstly, they were asked how many courses they had taken online. Then, they were instructed to think about their recent online learning experiences when answering two sets of statements. For the first set, participants indicated their level of agreement on a 5-point Likert scale with statements such as ‘I lack the self-regulatory skills needed to succeed in online learning’ and ‘I appreciate the freedom that online learning provides’. For the second set, they were asked to rate their level of satisfaction with certain learning experiences such as ‘video quality’, ‘level of engagement’, and ‘access to internet’. Again, a 5-point Likert scale was implemented. The full survey is presented in Appendix D (supplementary material).

Testing

Participants completed a 15-item retention test that comprised multiple-choice questions. The questions were randomly selected from a pool of 30 questions. The presentation order of the questions was randomised. Thus, each participant received a unique version of the test. All 30 questions are presented in Appendix E (supplementary material).
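As an illustration only, this per-participant test assembly can be sketched in a few lines of Python; the question pool and item count come from the description above, while the variable names are hypothetical:

    import random

    question_pool = list(range(1, 31))  # the 30-question pool (Appendix E)

    # Each participant receives a unique test: 15 questions drawn at random;
    # random.sample also returns them in a random presentation order
    test_items = random.sample(question_pool, 15)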

General procedure

This experiment was conducted online. Qualtrics (Qualtrics, Provo, UT) was used to present all stimuli and instructions and to record all responses and response times. After first providing informed consent, participants immediately experienced one of four lessons, depending on condition: Lecture, Series, Interactive Lesson, or Compact Interactive Lesson. After the lesson, participants were asked demographic questions before completing the engagement scale and the online lecture experience survey. Participants were then surprised with a 15-item multiple-choice test before viewing the debriefing form. In total, the experiment lasted about 65 min.

Results

Firstly, participant compliance was investigated. Since the length of the video was 20 min, participants who increased the playback speed to the maximum (i.e. 2.5x) could have finished the lesson in 480 s. Participants who did not take at least 480 s to complete the lesson were removed from the dataset due to non-compliance (n = 130, 29.5%). Furthermore, 27 participants’ durations were more than two standard deviations above the mean, so their data were also eliminated from the dataset (8.7%). Next, participants in the interactive conditions who failed to complete the interactive components were removed (n = 5). Remaining were 278 participants with unequal sample sizes among conditions: Lecture (n = 62), Series (n = 63), Lesson (n = 74), Compact Lesson (n = 79). Therefore, participants’ data were randomly selected to remain in the dataset to achieve equal sample sizes (n = 60) across conditions.
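For illustration, this exclusion pipeline can be sketched in Python with pandas; the file name and column names (duration_s, condition, interacted) are assumptions, not part of the original materials:

    import pandas as pd

    df = pd.read_csv("responses.csv")  # hypothetical export of the raw data

    # Fastest compliant completion: a 20-min (1200 s) lesson at 2.5x speed = 480 s
    df = df[df["duration_s"] >= 480]

    # Remove durations more than two standard deviations above the mean
    cutoff = df["duration_s"].mean() + 2 * df["duration_s"].std()
    df = df[df["duration_s"] <= cutoff]

    # Remove participants in the interactive conditions who skipped the activities
    interactive = df["condition"].isin(["Lesson", "Compact Lesson"])
    df = df[~interactive | df["interacted"]]

    # Randomly downsample each condition to equal cell sizes (n = 60)
    df = (df.groupby("condition", group_keys=False)
            .apply(lambda g: g.sample(n=60, random_state=0)))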

Time on task

Recall that there were four conditions, whereby the Compact Lesson was created to account for the extra time participants might have spent completing the interactive components of the Lesson compared with both the Series and the Lecture. Accordingly, time on task (i.e. total time completing the lesson) was compared across all conditions. The results are presented in Figure 1 and demonstrated that there was a difference in the amount of time it took to complete the different lessons, F(3, 236) = 18.61, p < 0.001. The Lesson took the longest to complete (M = 1845.35, SD = 518.20) compared with the other conditions: Lecture (M = 1242.33, SD = 513.88), Series (M = 1346.99, SD = 523.91), Compact Lesson (M = 1650.97, SD = 426.64). The Compact Lesson took significantly longer to complete than the Lecture, Δ = 408.65, p < 0.001, and the Series, Δ = 303.99, p = 0.001. Importantly, the Lesson also took significantly longer to complete than the Compact Lesson, Δ = 90.79, p = 0.03. Still, the possibility that any benefit to retention or engagement is driven by time on task cannot be ruled out entirely. Consequently, duration was considered in all subsequent analyses.
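The omnibus comparison here is a one-way ANOVA on duration; a minimal sketch with SciPy, reusing the hypothetical df from the cleaning step above, would be:

    from scipy import stats

    conditions = ["Lecture", "Series", "Lesson", "Compact Lesson"]
    groups = [df.loc[df["condition"] == c, "duration_s"] for c in conditions]
    F, p = stats.f_oneway(*groups)  # yields the F(3, 236) test reported above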

Fig 1
Figure 1. The time participants spent completing the different lessons: 20-min video (Lecture), five 4-min videos (Series), five 4-min videos with interactive content interspersed (Interactive Lesson), or five 3-min videos with interactive content interspersed (Compact Interactive Lesson). Error bars represent ± 2 standard error. The difference between Lecture and Lesson is significant. So too is the difference between Series and Lesson.

Learner engagement

Next, participants’ scores on the Engagement Scale were analysed. There was no significant correlation between Duration and Engagement; thus, Duration was not considered as part of the Engagement analyses. Results also demonstrated that participants were generally quite engaged (M = 3.15, SD = 0.68). However, lesson type did not have a significant influence on participants’ self-reported engagement with the lesson, F(3, 236) = 0.64, p = 0.59.

Learner retention

Before analysing participants’ scores on the surprise test, the test itself was examined because it had never been validated. Firstly, questions 2, 5, 14, 15, 18, and 30 were removed, as the majority of participants chose the wrong answer. Secondly, each remaining question was evaluated using a discrimination index, one method for analysing assessment quality. Specifically, this index evaluates the validity of a multiple-choice question by examining its ability to discriminate between high and low performers on the same test. As a result, question 28 was removed from further analyses.
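One common formulation, assumed here for illustration, compares item accuracy between the top and bottom scorers (often the top and bottom 27%):

    import numpy as np

    def discrimination_index(item_correct, total_scores, frac=0.27):
        """p(correct | top scorers) minus p(correct | bottom scorers)."""
        order = np.argsort(total_scores)    # participants sorted by total score
        k = max(1, int(len(order) * frac))  # size of each comparison group
        bottom, top = order[:k], order[-k:]
        return item_correct[top].mean() - item_correct[bottom].mean()

Items with an index near zero or below fail to separate high performers from low performers and are typically removed.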

Finally, participants’ scores on the surprise test were analysed. Participants generally performed well (M = 60.66, SD = 18.98), and Duration was not significantly correlated with Retention. Then, an ANOVA was conducted on Score with Condition as the independent variable with four levels: Lecture, Series, Lesson, Compact Lesson. The results were not significant, F(3, 236) = 0.86, p = 0.46. A priori predictions were that participants in the Series would outperform their peers from the Lecture condition, as per Humphries and Clark’s (2021) results. However, this difference was not significant, Δ = 5.14, p = 0.14. Similarly, even though participants in the Lesson condition scored higher on the test (M = 62.31, SD = 18.57) compared with those from the Lecture condition (M = 57.43, SD = 18.97), this difference was not significant, Δ = 4.88, p = 0.16.

Online learning preferences

To explore the data more granularly, participants’ responses on the Online Learning Questionnaire were examined. The two subscales (i.e. preference towards online learning and satisfaction with their recent experience) were analysed separately. While the majority of participants reported that they would rather take part in F2F courses (M = 4.00, SD = 1.18), 40% of them reported that they enjoyed learning online. Moreover, as seen in Figure 2, some participants reported that they appreciate the freedom that online learning provides while others reported that they lack the self-regulatory skills needed to succeed in online learning. These two items were negatively correlated, r = −0.30, p < 0.001.

Fig 2
Figure 2. Participants’ self-reported beliefs toward their recent online learning experience. A rating of 5 indicated that they strongly agreed with the statement, while a rating of 1 indicated that they strongly disagreed.

In terms of feeling equipped and satisfied, the majority of participants reported that they were satisfied with video quality, audio quality, instructors’ ability to use technology to teach online, presentation quality, level of understanding, access to internet, and access to a learning space. However, Figure 3 shows that participants’ satisfaction with their own level of engagement was mixed (M = 2.83, SD = 1.17).

Fig 3
Figure 3. Participants’ self-reported level of engagement in their most recent online learning experiences. A rating of 1 indicated that they were extremely dissatisfied, while a rating of 5 indicated that they were extremely satisfied.

Subsequently, a composite variable was created to denote participants’ relative preference towards online learning. This was done using questions from the Online Learning Experience Survey (presented in Appendix D). Specifically, questions 2, 3, 5, 8, 9, and 10, along with reverse-coded questions 4 and 7, were averaged to provide one variable representing a preference for online learning (sketched below). This new variable suggested that some students preferred online learning and others did not (M = 2.78, SD = 0.84). Thus, a median split was conducted that separated the sample into two distinct Groups: Online Learners and Non-Online Learners. Including this grouping variable changed the pattern of results for the effect of lesson type (Condition) on learner retention (Score).
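A minimal sketch of this composite and median split, with hypothetical question column names (reverse-coding a 5-point Likert item is 6 minus the raw rating):

    import numpy as np

    # Reverse-code questions 4 and 7, then average the eight items
    df["q4_r"], df["q7_r"] = 6 - df["q4"], 6 - df["q7"]
    items = ["q2", "q3", "q5", "q8", "q9", "q10", "q4_r", "q7_r"]
    df["online_pref"] = df[items].mean(axis=1)

    # Median split: Online Learners vs. Non-Online Learners
    df["group"] = np.where(df["online_pref"] > df["online_pref"].median(),
                           "Online", "Non-Online")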

More specifically, an ANOVA was conducted to determine whether there was an overall difference in participants’ scores based on Condition (Lecture, Series, Lesson, Compact Lesson) and Group (Online Learners vs. Non-Online Learners). The results of the omnibus test were significant, F(7, 232) = 2.46, p = 0.02. Thus, the dataset was split based on Group and two separate ANOVAs were conducted. The results are shown in Figure 4. While there were no significant differences across Conditions for Online Learners, F(3, 116) = 0.77, p = 0.52, there were significant differences for Non-Online Learners, F(3, 116) = 5.03, p = 0.003. Bonferroni-adjusted post hoc comparisons demonstrated that the only significant pairwise difference was between Lecture and Series, Δ = 18.02, p = 0.001. No other differences reached significance (all p-values > 0.06). In sum, students who preferred F2F learning (as opposed to online learning) scored better on the test when they had engaged with a series of short videos than when they had watched one long lecture.
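The follow-up analysis for Non-Online Learners amounts to a one-way ANOVA within the subgroup plus Bonferroni-adjusted pairwise tests. One way to sketch it, continuing the hypothetical df from above:

    from itertools import combinations
    from scipy import stats

    sub = df[df["group"] == "Non-Online"]
    conditions = ["Lecture", "Series", "Lesson", "Compact Lesson"]
    scores = {c: sub.loc[sub["condition"] == c, "score"] for c in conditions}

    F, p = stats.f_oneway(*scores.values())  # omnibus test within the subgroup

    # Bonferroni adjustment: multiply each raw p by the number of comparisons
    pairs = list(combinations(conditions, 2))
    for a, b in pairs:
        t, p_raw = stats.ttest_ind(scores[a], scores[b])
        print(a, "vs.", b, "adjusted p =", min(p_raw * len(pairs), 1.0))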

Fig 4
Figure 4. Non-online learners’ and online learners’ test score (as a percentage) after completing the different lessons: 20-min video (Lecture), five 4-min videos (Series), five 4-min videos with interactive content interspersed (Interactive Lesson), or five 3-min videos with interactive content interspersed (Compact Interactive Lesson). Error bars represent ± 2 standard error. The difference between Lecture and Series is significant only for non-online learners.

Discussion

Recently, many instructors have been required to create more intensive educational media, including asynchronous lecture videos. There are several ways to engage students in online synchronous lectures that might mirror F2F delivery (e.g. creating smaller discussion groups, allowing for live polling, responding to student questions in real-time). However, for asynchronous lectures, ensuring that videos are engaging and informative can be a difficult feat. The purpose of this experiment was to determine whether there are engagement and retention benefits to interactive segmented lessons when compared with didactic lecture videos. As such, participants experienced one of four types of lessons, depending on condition: Lecture, Series, Interactive Lesson, and Compact Interactive Lesson. After completing a modified version of O’Brien et al.’s (2018) Engagement Scale and an online learning experience survey, participants were asked to complete a surprise test.

Two hypotheses were made. Firstly, it was expected that participants would report being more engaged in an interactive lesson compared with a full video lecture. Additionally, it was expected that those who experienced the interactive lesson would outperform those who watched the full video lecture on the surprise multiple-choice test. These hypotheses were not fully supported, however. Students reported similar levels of engagement for all four conditions. Similarly, it did not seem as though interactivity and segmentation had an effect on learner retention. However, once the sample was split based on participants’ preference for online learning (i.e. Online Learners and Non-Online Learners), an effect of segmentation emerged.

It seems as though students who had a tendency to prefer some online learning aspects (e.g. flexibility in learning) were unaffected by interactivity and segmentation of online lecture videos. Conversely, students who reported a general dislike for online learning were affected by segmentation. The results demonstrated that Non-Online Learners from the Series outperformed their peers from the Lecture. This suggests that the simple act of segmenting the lecture, and pressing ‘Next’ between segments, improved learning but not self-reported engagement. This effect cannot be explained by duration because there was no difference in the amount of time it took participants to complete the Series compared with the Lecture.

Limitations and future research

Even though the results demonstrated an effect of segmentation on learning for students who report a general dislike for the online learning environment, no effect of interactivity was found. Moreover, interactivity and segmentation did not affect students’ self-reported level of engagement. A ceiling effect might explain the latter. More specifically, it is plausible that students were highly engaged by an inherently interesting topic (i.e. The Neural Basis of Addiction), leaving little room for differences between conditions. Future research might investigate topics that are more variable in level of interest. Another explanation might be the length of the lessons. Regardless of segmentation and interactivity, the stimuli used in this experiment were lessons that took approximately 20 min to complete. It could be that students can sustain engagement across this relatively short period. Future research might consider increasing the amount of time participants spend learning a subject. Segmentation might become more meaningful when the learning phase becomes long enough to disengage students at certain points throughout learning.

This present research has provided an initial basis upon which to further examine the effectiveness of interactivity and segmentation. Even though no support has been found for a benefit to interactive lessons, there is some indication that segmentation can increase learner retention.

References

Bacon, C., et al., (2017) ‘How effective are emojis in surveys taken on mobile devices? Data-quality implications and the potential to improve mobile-survey engagement and experience’, Journal of Advertising Research, vol. 57, no. 4, pp. 462–470. doi: 10.2501/JAR-2017-053
Bates, A. W. (2015) Teaching in a Digital Age: Guidelines for Designing Teaching and Learning, Tony Bates Associates, Vancouver, BC.
Beatty, B. J. (2002) Social Interaction in Online Learning: A Situationalities Framework for Choosing Instructional Methods (Order No. 3054431), ProQuest Dissertations & Theses Global: Social Sciences, (305512502), [online] Available at: https://proxy.library.carleton.ca/login?url=https://www-proquest-com.proxy.library.carleton.ca/dissertations-theses/social-interaction-online-learning/docview/305512502/se-2?accountid=9894
Bradbury, N. A. (2016) ‘Attention span during lectures: 8 seconds, 10 minutes, or more?’, Advances in Physiology Education, vol. 40, pp. 509–513. doi: 10.1152/advan.00109.2016
Cooper, A. & Richards, J. (2017) ‘Lectures for adult learners: breaking old habits in graduate medical education’, The American Journal of Medicine, vol. 130, no. 3, pp. 376–381. doi: 10.1016/j.amjmed.2016.11.009
Humphries, B. & Clark, D. (2021) ‘An examination of student preference for traditional didactic or chunking teaching strategies in an online learning environment’, Research in Learning Technology, vol. 29, pp. 1–12. doi: 10.25304/rlt.v29.2405
Jeffries, W. B. (2010) ‘Teaching large groups’, in An Introduction to Medical Teaching, eds W. Jeffries & K. Huggett, Springer, Dordrecht, pp. 11–26.
Knowles, M. (1984) The Adult Learner: A Neglected Species, 3rd edn, Gulf Pub. Co., Book Division, Houston, TX.
Mayer, R. E. (2009) Multimedia Learning, 2nd edn, Cambridge University Press, Cambridge.
Mayer, R. E., et al., (1996) ‘When less is more: meaningful learning from visual and verbal summaries of science textbook lessons’, Journal of Educational Psychology, vol. 88, no. 1, pp. 64–73. doi: 10.1037/0022-0663.88.1.64
O’Brien, H. L., Cairns, P. & Hall, M. (2018) ‘A practical approach to measuring user engagement with the refined user engagement scale (UES) and new UES short form’, International Journal of Human-Computer Studies, vol. 112, pp. 28–39. doi: 10.1016/j.ijhcs.2018.01.004
Qualtrics. (2020) Qualtrics (March, 2020) [Computer software], Provo, UT, [online] Available at: https://www.qualtrics.com
Schmaltz, R. M. & Enström, R. (2014) ‘Death to weak PowerPoint: strategies to create effective visual presentations’, Frontiers in Psychology, vol. 5, p. 1138. doi: 10.3389/fpsyg.2014.01138
Sommers, S. R., et al., (2018) ‘Quasi-experimental and experimental assessment of electronic textbook experiences: student perceptions and test performance’, Scholarship of Teaching and Learning in Psychology, vol. 4, no. 1, pp. 11–22. doi: 10.1037/stl0000129