ORIGINAL RESEARCH ARTICLE

The E-Design Assessment Tool: an evidence-informed approach towards a consistent terminology for quantifying online distance learning activities

Helen Walmsley-Smitha*, Lynn Machinb and Geoff Waltonc

aAcademic Development Unit, Staffordshire University, Stoke-on-Trent, Staffordshire, UK;

bSchool of Life Sciences and Education, Staffordshire University, Stoke-on-Trent, Staffordshire, UK;

cDepartment of Languages, Information & Communications, Manchester Metropolitan University, Manchester, UK

(Received: 10 July 2018; final version received: 31 October 2018; Published 08 February 2019)

Abstract

Online distance learning (ODL) continues to expand rapidly, despite persistent concerns that student experience is poorer and retention lower than for face-to-face courses. Various factors affect ODL quality, but the impact of recommended learning activities, such as student interaction activities and those involving feedback, has proven difficult to assess because of challenges in definition and measurement. Although learning design frameworks and learning analytics have been used to evaluate learning designs, their use is hampered by this lack of an agreed terminology. This study addresses these challenges by initially identifying key ODL activities that are associated with higher quality learning designs. The learning activity terminology was tested using independent raters, who categorised the learning activities in four ODL courses as ‘interaction’, ‘feedback’ or ‘other’, with inter-rater reliability near or above recommended levels. Whilst challenges remain for consistent categorisation, the analysis suggests that greater clarity in the wording of learning activities will aid categorisation. As a result of this analysis, the E-Design Assessment Tool (eDAT) has been developed to incorporate this key terminology and enable improved quantification of learning designs. This can be used with learning analytics, particularly retention and attainment data, thus providing an effective feedback loop on the learning design.

Keywords: learning activity; technology enhanced learning; terminology; online learning; learning design

*Corresponding author. Email: h.walmsley-smith@staffs.ac.uk

Research in Learning Technology 2019. © 2019 H. Walmsley-Smith et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2019, 27: 2106 - http://dx.doi.org/10.25304/rlt.v27.2106

Introduction

Higher education students are increasingly combining face-to-face learning with online distance and blended courses. In the United States, 6 million students, 30% of the student population, took at least one online course as part of their degree in 2015 (Allen and Seaman 2017). In the UK, 10% of students in 2012–2013 were distance learners (Garrett 2015). The University of Edinburgh intends to include at least one fully online course in every undergraduate programme by 2025 (Haywood 2016), and demand for more flexible learning means this growth is likely to continue.

Online distance learning has its critics. High retention rates are often used as a measure of overall course quality (Lenert and Janes 2017), but retention is of concern, often being much lower than for the equivalent face-to-face version (Simpson 2013). For example, the UK Open University retention rate was 22% in 2010 despite its specialism in distance learning (Simpson 2010). A range of possible factors affecting retention has been examined, from learner-specific factors, including age, gender, prior educational experience, levels of motivation and self-efficacy, to institutional and course-specific factors, including the support available, course structure and the development of a learning community (e.g. Bawa 2016). A study of distance learning course designs identified that some courses did not contain quality course features, for example, synchronous activities or projects (Lenert and Janes 2017). Furthermore, ‘[w]hat is missing is the trajectory that would complete the feedback loop: the built-in evaluation of designs to see whether they achieved the expected outcomes’ (Mor, Ferguson, and Wasson 2015, p. 224). A feedback loop would enable exploration of the specific impact online learning designs have on students’ learning and make possible recommendations for effective learning activities to enhance learning and retention.

Evaluation of learning designs is hampered by a lack of shared vocabularies for pedagogic practice (Currier et al. 2006, section 2.2, no pagination). To achieve effective evaluation through a feedback loop requires ‘a more widely used language or framework for sharing Learning Designs’ (Dalziel et al. 2016, p. 260). For Laurillard (2012) it is an educational imperative to describe and represent online learning designs so that they can provide feedback to tutors about their effectiveness.

Research objectives

Tutors use a variety of common educational terms to describe learning activities, but the extent to which they agree on the meaning and application of those terms is not known. This study therefore aimed to provide a reliable quantitative framework for categorising online activities by means of the following:

Objective 1: identifying types of effective online learning activities that support retention

Objective 2: testing terminology used to describe learning activities to identify the extent to which different users agree

Objective 3: developing the e-Design Assessment Tool (eDAT) utilising this terminology to describe and quantify learning activities

Literature review: effective online learning activities

Levels of feedback and interaction are two course design features often cited as having a significant impact on retention, and each is discussed in the following sections.

Interaction

Support for interaction in learning comes from social constructivist learning theory (Vygotsky and Cole 1978). Moreover, Croxton’s (2014) meta-analysis indicates that both level and quality of interaction influence online retention.

The literature includes different ways to define and measure interaction (Wanstreet 2006). The theory of ‘transactional distance’ (Moore and Kearsley 2011) suggests that the physical and psychological distance between tutor and student is the main difficulty of distance learning. Moore (1989) identified three types of interaction: student–student, student–tutor and student–content. A fourth type, student–interface interaction, has been proposed (Hillman, Willis, and Gunawardena 1994). Despite the wide use of Moore’s interaction types, there is no clear agreement on how to measure them (Ekwunife-Orakwue and Teng 2014). The following examples demonstrate how different surveys and data have been used to explore the impact of interaction on student retention.

The Community of Inquiry model (Garrison 2011) for online learning emphasises interaction between students and tutors, referred to as ‘social presence’. Liu, Gomez, and Yen (2009) used the Social Presence and Privacy Questionnaire to measure social presence and identified it as a significant predictor of course retention and final grade. ‘Resonance’ was used as a way to increase social presence by the use of video lectures, and analysis of the video access data suggested that this increased retention (Geri 2012).

Rienties and Toetenel (2016) examined 151 ODL courses, calculating the time students were expected to spend on ‘communication’ using Conole’s learning activity taxonomy (Fill and Conole 2005), and found that the number of communication activities designed into a course was the primary predictor of retention.

Fasse, Humbert, and Rappold (2009) combined data mining of forum posts with their own student survey and showed a positive correlation between student satisfaction and interaction rates. However, the challenge of isolating individual features of online courses to assess their impact on retention was highlighted by Godwin, Thorpe, and Richardson (2008), who found no significant difference in retention and attainment between courses with a variety of interaction patterns.

Ekwunife-Orakwue and Teng (2014) found a positive correlation between tutor–student interaction and retention by using student satisfaction and computer self-efficacy surveys. Hawkins et al. (2013), using their own survey, found that feedback, procedural interaction and social interaction positively impacted on course completion.

A web-based peer-tutoring system called Online Peer-Assisted Learning, which enhanced interaction by supporting students tutoring each other, also resulted in improved retention (Evans and Moore 2013). The study used social network analysis and the Student Assessment of Learning Gains survey. The use of web-conferencing and structured group tasks achieved high retention as measured by course data and a course experience survey (Thorpe 2008). Interaction in collaborative group assignments using synchronous and asynchronous discussion as well as social media activities increased retention, according to data in the virtual learning environment (VLE) student activity log (Fisher and Baird 2005). Furthermore, frequency, rather than degree, of student interaction was identified as a positive marker for retention when VLE data was analysed (Shelton, Hung, and Lowenthal 2017).

Few studies have explored the impact of student–content interaction in online learning, making this an area for possible further development (Xiao 2017). The use of the eDAT, discussed below, could enable further research in this area.

Feedback

Assessment and feedback activities are common in online learning. There are a variety of types, including formative individual and group tasks, online quizzes and tests, simulations, provision of model answers and summative assignments. Hattie’s (2003) meta-analysis of teacher effectiveness identified giving students feedback as a highly effective intervention.

The impact of regular feedback to student postings was highlighted by Stott’s (2016) case study, suggesting that low levels of student engagement and satisfaction may be the result of a lack of tutor feedback. A series of analytical writing assignments with feedback increased retention on a PhD programme by 39% (Sutton 2014). A cross-unit diagnostic that gave feedback to online learners from different learning units also had a positive effect on retention (Lin et al. 2014).

Bonk and Khoo (2014) highlighted the negative impact on online retention when prompt and individual feedback was not given. Choi et al.’s (2013) survey identified that a lack of feedback from tutors was a key reason for students not re-enrolling.

A systematic review of the impact of peer-assessment in online learning indicated that this ‘improves performance of students in learning environments in over 60% of the evaluated articles’ (Tenório et al. 2016, p. 103). A course redesigned to include regular tests with automatic feedback increased attainment and reduced withdrawal (Sancho-Vinuesa, Escudero-Viladoms, and Masià 2013).

Interaction and feedback are inherently linked: a tutor giving feedback to students is a form of interaction, and interactions with students provide feedback to tutors on how students are progressing (Hatzipanagos and Warburton 2009).

Representing learning designs

The impact of course design features on retention can be investigated using the Learning Design Conceptual Framework (Dalziel et al. 2013). Dalziel argues that Learning Design can be used in fine-grained comparisons in educational research and that there is a need ‘to keep trying to develop a broadly accepted representational framework(s)’ (Dalziel et al. 2016, p. 256). Laurillard agrees:

Perhaps the attempt is doomed. But without it there is no basis for the comparative analysis of the range of conventional and digital teaching methods that will tell us how they may best be used to support student learning. That is an imperative for our education systems now, so we have to try.

(Laurillard 2012, Chapter 5, no pagination)

Learning design representations are ways to represent or ‘codify’ learning designs: they help online tutors and learning designers analyse and innovate, help software developers instantiate lessons in software, and allow designs to be shared with others (Conole 2013). Representations can include practice-based, conceptual, abstract or technical learning designs and those based on a specific theoretical approach. They can represent individual lessons or whole courses and provide different lenses to explore specific features, including the nature of the task, the tools, resources or pedagogic principles. The most common type of representation is textual; other examples include content and course maps, pedagogy profiles, task swim lanes (visualisations) and learning outcome maps (Conole 2013). However, each representation uses different terminology and formats, some embedding pedagogic guidance and others not. The learning design representations in Table 1 illustrate the variety of terminology used by different tools to describe learning activities.

Table 1. Learning activity taxonomies.
Name of learning design framework or tool | Terminology for learning activities
Updated Bloom’s taxonomy (Anderson and Krathwohl 2001) | Type of activity: remember, understand, apply, analyse, evaluate, create
Australian Universities Teaching Committee (AUTC) Learning Design (Agostinho et al. 2002) | Elements of online learning design: resources, tasks, supports
Ulster hybrid (University of Ulster 2008) | Online learning events: receives, debates, experiments, creates, explores, practises, imitates, meta-learns
Open University Learning Design Initiative (OULDI) project (Cross et al. 2012) | Online learning activity types: assimilative, finding and handling information, communication, productive, experiential, interactive or adaptive, assessment
7Cs framework (Conole 2014) | Online learning activities: capture, communicate, collaborate, consider
Learning Designer online design tool (London Knowledge Lab 2016), based on the Conversational Framework (Laurillard 2002, 2012) | Online learning activities: read, watch, listen (acquisition); collaborate; discuss; investigate; practice; produce
E-Design Template (Walmsley 2017), based on Stephenson and Coomey (2001) | Online learning activity types: student-managed, tutor-managed, open activity, closed task

This variety of learning activity terminology is challenging for learning designers when evaluating the effectiveness of learning designs. For example, the Open University mapping project used Conole’s taxonomy (Cross et al. 2012) to create a learning activity map across many courses. However, the authors commented on the difficulty of applying these terms, saying the process was ‘subjective’ and that they held ‘regular meetings to improve consistency’ (Rienties, Toetenel, and Bryan 2015, p. 316). Swan edited and applied six of Reeves’ (1996) 14 pedagogical dimensions to her work describing MOOC pedagogies; she also commented that raters needed a number of discussions to agree on their application (Swan et al. 2015). Similarly, Laurillard observed that although tutors were able to map their own activities to a taxonomy, they were unable to agree when asked to map another tutor’s task (Charlton, Magoulas, and Laurillard 2012). An analysis of a number of US online courses, in which raters used a rubric to score each of four key elements on a three-point scale, encountered similar difficulties (Jaggars and Xu 2016). Even very simple terms seem to cause difficulties; for example, some users thought there was ambiguity between ‘resource’ and ‘support’ in the AUTC representation (Agostinho 2011). A group of learning designers applied a range of learning design tools to a single lesson plan to ‘represent’ the design; their challenges and varied results highlight the lack of consistency in learning design tools (Persico et al. 2013). This variety of disparate terms makes consistent analysis of learning activities difficult.

The eDAT, described below, utilises the two commonly used terms (‘interaction’ and ‘feedback’) that are associated with higher retention in ODL. The consistent use of these terms, as suggested by the following analysis, could enable a more accurate and effective way for tutors and learning designers to describe learning activities. When learning activities can be accurately described, they can be quantified and used with learning analytics to provide evidence for effective learning designs that increase retention (Bakharia et al. 2016).

Methods

The literature discussed above suggests that retention is increased when ODL includes interaction and feedback activities. However, tutors may not use these terms in the same way, so the terms were tested using content analysis to identify the extent to which they are applied consistently.

Content analysis

Content analysis is a method of quantifying text, through a process of ‘coding’ or categorising, to enable statistical analysis. It is a ‘research technique for making replicable and valid inferences from texts (or other meaningful matter) to the context of their use’ (Krippendorff 2013, p. 24). It has been used in a variety of educational settings, for example to analyse the impact of tutors’ roles in online discussions (Dubuclet, Lou, and MacGregor 2015). To carry out a valid and reliable content analysis for this study, the following steps were taken:

  1. specifying the units of analysis
  2. identifying learning activity vocabulary to test
  3. recruiting raters
  4. calculating inter-rater reliability (IRR)

(adapted from Neuendorf 2002, p. 50)

Specifying the units of analysis

For this study the specific learning activities or task descriptions written by tutors and presented in the VLE for students were analysed. A convenience sample of four distance learning modules from one Higher Education (HE) institution was chosen to represent a variety of courses. The modules came from different subject areas (law, politics, games and sport), were aimed at different levels (undergraduate and postgraduate) and included a total of 215 learning activities of different types and lengths.

Identification of units of analysis is critical but also challenging (Gorsky and Blau 2009). If the unit of analysis is too general, it may be easy to categorise but hard to analyse; if too small, it may be difficult to categorise reliably. For this study the units of analysis were prepared by splitting activities into multiple parts based on the learning activity ‘verbs’. For example, a typical student activity was as follows:

  1. Read xx, answer the following [structured] question and then post your response to the forum.

This was divided into the following for analysis:

(i) read xx,

(ii) answer the following [structured] question and then

(iii) post your response to the forum.
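In the study this splitting was done by hand, using the activity ‘verbs’ as boundaries. The following minimal sketch (Python, our own illustration and not part of the study method) simply reproduces that segmentation for the worked example above; the boundary markers are a simplifying assumption.

```python
import re

# Illustrative only: the actual splitting relied on researcher judgement of
# the activity 'verbs'. Here we assume commas and 'and then' mark boundaries.
activity = ("Read xx, answer the following [structured] question "
            "and then post your response to the forum.")

units = [u.strip() for u in re.split(r",\s*|\s+and then\s+", activity) if u.strip()]
for i, unit in enumerate(units, start=1):
    print(f"({i}) {unit}")
# (1) Read xx
# (2) answer the following [structured] question
# (3) post your response to the forum.
```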

Some courses included ‘optional activities’, for example, extended reading or open forums. These were also included as units of analysis because the impact of voluntary participation may be significant (So 2009).

Identifying learning activity terminology to test

Based on the literature reviewed above, analysis was conducted on the learning activity terms ‘interaction’, ‘feedback’ and ‘other’. Activity types and examples were provided to assist raters when categorising each activity, as shown in Table 2.

Table 2. Terminology tested.
Activity terminology | Activity type | Example
Interaction with … | A. the tutor | online webinar/lecture, 1–1 tutorial, coaching session, email, phone
Interaction with … | B. other students | forum discussion (may include tutor), group work, peer assessment, adding comments to peer wikis or blogs
Interaction with … | C. (interactive) content | computer simulation, multimedia interactions (excludes interaction with text or video)
Feedback from … | 1. the tutor | formative or summative feedback or grades
Feedback from … | 2. peers | structured peer-assessment exercise, grading activity
Feedback from … | 3. self | self-feedback using model answers, self-reflection, trial and error exercises
Feedback from … | 4. computer (automatic) | from computer simulation, computer-marked test
Other activities | – | reading or watching, research, creating
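For analysis, the terminology in Table 2 can also be held as a simple data structure so that raters’ codes can be checked against the scheme. The sketch below is our own illustration, not part of the published eDAT; the dictionary layout and the helper function are assumptions.

```python
from typing import Optional

# The category labels and sub-types mirror Table 2; the dictionary itself is
# our own illustration of how the scheme might be stored for analysis.
CODING_SCHEME = {
    "interaction": {                    # Interaction with ...
        "A": "the tutor",               # webinar/lecture, 1-1 tutorial, coaching, email, phone
        "B": "other students",          # forum discussion, group work, peer assessment
        "C": "(interactive) content",   # computer simulation, multimedia interactions
    },
    "feedback": {                       # Feedback from ...
        "1": "the tutor",               # formative or summative feedback or grades
        "2": "peers",                   # structured peer-assessment, grading activity
        "3": "self",                    # model answers, self-reflection, trial and error
        "4": "computer (automatic)",    # computer simulation, computer-marked test
    },
    "other": ["reading or watching", "research", "creating"],
}

def is_valid_code(category: str, subtype: Optional[str] = None) -> bool:
    """Check that a rater's code exists in the scheme above."""
    if category not in CODING_SCHEME:
        return False
    if category == "other" or subtype is None:
        return True
    return subtype in CODING_SCHEME[category]

# Example: a forum posting activity coded as interaction with other students.
assert is_valid_code("interaction", "B")
```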

Recruiting raters

In many studies only two raters are used, when a larger number would produce greater validity. Independent raters may be unbiased, but in many studies raters are either the researchers or the researchers’ assistants (e.g. Rienties and Toetenel 2016). Raters require familiarity with the language and context for analysis but not overfamiliarity with specialised vocabulary, which may reduce the universality of their analysis (Krippendorff 2013). Here, all four raters were academic colleagues familiar with educational terminology; each completed the content analysis task independently following training.

Calculating inter-rater reliability

When raters all agree, this increases confidence that the analysis is consistent and objective and that other raters would be likely to obtain the same result. However, even high reliability scores do not guarantee validity. For example, raters may all display the same prejudice or use the same concepts as others in a specialised community. High reliability may also indicate a loss of validity; for example, the categories may be oversimplified or superficial (Krippendorff 2013). In addition, high agreement between raters may simply mean that a particular item is missing from the content being analysed or that there is a high degree of similarity between the items being rated.

Inter-rater reliability is often measured using Cohen’s kappa, but this has been criticised because it encourages the use of just two raters when more raters would provide more robust findings (Krippendorff 2013). Krippendorff’s alpha (α) is a more flexible measure of IRR: it can be applied to any number of observers, any number of categories, any metric or level of measurement, and to incomplete data and both large and small sample sizes (Krippendorff 2011). It was therefore used in this study to calculate IRR.

There is no statistical rationale presented in the literature for acceptable levels of IRR. Krippendorff (2004) suggests that where the analysis is critical, a level of α ≥ 0.800 should be considered necessary, and in situations where conclusions may be more tentative an IRR of α ≥ 0.667 may be acceptable.

All 215 learning activities from four courses were categorised by four raters. Each course was rated independently, and each activity was categorised as ‘interaction’ and/or ‘feedback’ or ‘other’.
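For reference, the calculation behind the IRR figures reported in the next section can be sketched as follows. This is a minimal nominal-data implementation of Krippendorff’s alpha using the standard coincidence-matrix formulation; the function and the toy data are our own illustration (the study’s coding data are not reproduced here), and the krippendorff package on PyPI offers a fuller implementation.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` is a list of coding units (here, learning activities); each unit
    is a list of category labels, one per rater (None where a rating is
    missing). A minimal sketch of the coincidence-matrix calculation.
    """
    coincidences = Counter()            # ordered pairs of values within a unit
    for unit in units:
        values = [v for v in unit if v is not None]
        m = len(values)
        if m < 2:
            continue                    # unpairable units are ignored
        for c, k in permutations(values, 2):
            coincidences[(c, k)] += 1.0 / (m - 1)

    marginals = Counter()
    for (c, _k), weight in coincidences.items():
        marginals[c] += weight
    n = sum(marginals.values())         # number of pairable values

    observed = sum(w for (c, k), w in coincidences.items() if c != k)
    expected = sum(marginals[c] * marginals[k]
                   for c, k in permutations(marginals, 2))
    if expected == 0:
        return 1.0                      # degenerate case: only one category used
    return 1.0 - (n - 1) * observed / expected

# Hypothetical toy data: 4 raters coding 5 activities (not the study data).
toy = [
    ["interaction", "interaction", "interaction", "other"],
    ["feedback", "feedback", "feedback", "feedback"],
    ["other", "other", "interaction", "other"],
    ["feedback", "feedback", "other", "feedback"],
    ["interaction", "interaction", "interaction", "interaction"],
]
print(round(krippendorff_alpha_nominal(toy), 3))
```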

Results

The raters’ overall categorisations of ‘interaction’ or ‘feedback’ for each activity were compared, and IRR was calculated with Krippendorff’s alpha. There was some disagreement among raters: although the ‘interaction’ category reached an acceptable level of agreement, the ‘feedback’ categorisations approached but did not reach an acceptable level of IRR, as shown in Table 3.

Table 3. Inter-rater reliability results.
Category | Inter-rater reliability (Krippendorff’s α; 4 raters, 215 activities)
‘Interaction’ | 0.815
‘Feedback’ | 0.612

Discussion

The IRR figures show the difficulties in categorising learning activities even when using the commonly used terms ‘interaction’ and ‘feedback’.

Of the 308 possible discussion-type categorisations, 285 were categorised as peer interaction and 197 as peer feedback. A significant issue was the way discussion forum activities were written; for example, discussion-type activities included five different terms: ‘discuss’, ‘post’, ‘comment’, ‘post & comment’ and ‘post & discuss’. Raters categorised both ‘discuss’ and ‘post’ activities as including feedback when this was not indicated in the task; sixteen ‘discussions’ were rated as peer feedback. In addition, discussion activities were sometimes categorised as ‘other’, perhaps because raters thought that posting on a forum did not constitute interaction. Within this variety of categorisations, a lack of consistency within individual raters was also noted. The highest level of agreement was for activities that specified both posting and commenting, or posting and discussing, suggesting that greater clarity in the task aids categorisation.

There were noticeable differences between raters when categorising feedback activities. For example, one rater categorised the activity ‘Students access Blackboard for topic lecture notes, videos, etc. Try to apply these techniques to your own work’ as feedback when no other rater had categorised it as such. Another rater categorised the activity ‘Please post … on the discussion board’ as feedback 22 times when the other raters did not. Assessment activities were not consistently categorised as feedback, presumably because this was not specified in the activity.

Some learning activities that were inconsistently categorised did not conform to good practice recommendations for interaction activities (e.g. Akin and Neal 2007; Salmon 2004) or recommendations for feedback (e.g. Nicol and Macfarlane-Dick 2006). However, a good practice example – ‘Students post questions/comments in bulletin board for peer and tutor discussion’ – was categorised the same way by all raters.

The selection of courses for this study included a variety of subject disciplines, and the raters were from different disciplines. This may have impacted on the ways the learning activities were written and also on the individual ways that the raters interpreted both the learning activity and the terms in the eDAT when completing the content analysis task. Further research in this area is needed.

Conclusion

Feedback on the effectiveness of learning designs is needed to improve ODL, but this is difficult to obtain without a consistent way to describe learning activities. Two types of activities are highlighted in the literature as having the potential to improve retention and quality of online learning: interaction with tutor and peers and feedback on learning. However, despite these terms being commonly used, they were difficult to apply consistently to the learning activities in this study. The eDAT utilises this terminology to help improve categorisation and quantification.

The difficulties in using common terms to categorise learning activities were surprising. The IRR for interaction was acceptable, but the IRR for feedback did not reach an acceptable level, suggesting that this is a complex term that is difficult to use consistently. These terms, as used by the online course designers and by the raters, have different implicit meanings and reflect different teaching perspectives (Trigwell, Prosser, and Ginns 2005). However, the example given of an activity categorised consistently suggests that increased clarity about opportunities for interaction and feedback in a task will improve consistent use of these terms.

The eDAT has been developed to attempt to address these issues. It builds on the other Learning Design representation tools mentioned but focusses on two key online learning activities that are associated with higher retention. The eDAT enables tutors and designers to carry out the analysis described, that is, to categorise their learning activities using the terms ‘interaction’ and ‘feedback’ and to quantify them. Interaction activities can be categorised with some confidence, but feedback activities may be less easy to identify and require review and editing for clarity. Further analysis of the effectiveness of the tool is being conducted and will be reported separately.

Using the eDAT to categorise learning activities helps to provide quantitative data about the learning design. It also highlights to tutors the need to specify clearly to students when and how they will be interacting with others and when they can expect to receive feedback on each of their activities, thus potentially improving the learning design.

Appendix 1: The E-Design Assessment Tool

The E-Design Assessment Tool (Walmsley 2017) employs the tested terminology in both a Word template and an Excel spreadsheet for use by tutors and designers, together with examples and a guide to quantifying learning activities. A sample follows, and both versions are freely available for download from the eDAT site: http://blogs.staffs.ac.uk/bestpracticemodels/edat/.

E-Design Assessment Tool

Tutor instructions: Add your activities below and indicate where you have specifically included interaction and/or feedback activities. Calculate the % of each activity type to help you reflect on your learning design. Use retention and attainment rates to evaluate the quality of the learning design.

No | Specific learning activities/tasks (you may need to split activities that include separate parts) | Interaction with… (A. tutor; B. peers; C. (interactive) content) | Feedback from… (1. tutor; 2. peers; 3. self; 4. computer (automatic)) | Other content or activities
 | Activity text here … | [add interaction type here if present in activity] | [add feedback type here if present in activity] |
[Insert additional rows as required]
Total activities: __ | _% with interaction | _% with feedback | _% other
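To illustrate the totals row, the sketch below (our own, assuming each template row has already been reduced to simple fields; the field names are hypothetical) computes the percentages the tutor instructions ask for. Because an activity may be coded for both interaction and feedback, it counts towards both percentages, so the figures need not sum to 100%.

```python
# Hypothetical rows reduced from the eDAT template; field names are our own.
activities = [
    {"text": "Read xx", "interaction": None, "feedback": None},
    {"text": "Post your response to the forum", "interaction": "B", "feedback": None},
    {"text": "Complete the computer-marked test", "interaction": "C", "feedback": "4"},
]

total = len(activities)
pct_interaction = 100 * sum(a["interaction"] is not None for a in activities) / total
pct_feedback = 100 * sum(a["feedback"] is not None for a in activities) / total
pct_other = 100 * sum(a["interaction"] is None and a["feedback"] is None
                      for a in activities) / total

print(f"Total activities: {total}")
print(f"{pct_interaction:.0f}% with interaction, "
      f"{pct_feedback:.0f}% with feedback, {pct_other:.0f}% other")
```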

References

Agostinho, S. (2011) ‘The use of a visual learning design representation to support the design process of teaching in higher education’, Australasian Journal of Educational Technology, vol. 27, no. 6, pp. 961–978. http://doi.org/10.14742/ajet.923

Agostinho, S., et al., (2002) ‘A tool to evaluate the potential for an ICT-based learning design to foster “high-quality learning”’, Winds of Change in the Sea of Learning. Proceedings of the 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education, Auckland, pp. 29–38, [online] Available at: http://ro.uow.edu.au/cgi/viewcontent.cgi?article=1128&context=edupapers

Akin, L. & Neal, D. (2007) ‘CREST+ model: writing effective online discussion questions’, Journal of Online Learning and Teaching, vol. 3, no. 2, pp. 191–202, [online] Available at: http://jolt.merlot.org/vol3no2/akin.htm

Allen, I. E. & Seaman, J. (2017) Distance Learning Compass: Distance Education Enrollment Report, [online] Available at: https://onlinelearningsurvey.com/reports/digtiallearningcompassenrollment2017.pdf

Anderson, L. W. & Krathwohl, D. R. (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, Longman, New York.

Bakharia, A., et al., (2016) ‘A conceptual framework linking learning design with learning analytics’, in Proceedings of the 6th International Conference on Learning Analytics & Knowledge, Edinburgh, pp. 329–338.

Bawa, P. (2016) ‘Retention in online courses: exploring issues and solutions-A literature review’, SAGE Open, vol. 6, no. 1, pp. 1–11. http://doi.org/10.1177/2158244015621777

Bonk, C. J. & Khoo, E. (2014) Adding Some TEC-Variety: 100+ Activities for Motivating and Retaining Learners Online, Open World Books, Bloomington, IN.

Charlton, P., Magoulas, G. & Laurillard, D. (2012) ‘Enabling creative learning design through semantic technologies’, Technology, Pedagogy and Education, vol. 21, no. 2, pp. 231–253. http://doi.org/10.1080/1475939X.2012.698165

Choi, H., et al., (2013) ‘The extent of and reasons for non re-enrollment: a case of Korea National Open University’, International Review of Research in Open & Distance Learning, vol. 14, no. 4, pp. 19–35. http://dx.doi.org/10.19173/irrodl.v14i4.1314

Conole, G. (2013) Designing for Learning in an Open World, Springer Science & Business Media, Milton Keynes.

Conole, G. (2014) ‘The 7Cs of learning design – A new approach to rethinking design practice’, in Proceedings of the 9th International Conference on Networked Learning, pp. 502–509, [online] Available at: http://www.lancaster.ac.uk/fss/organisations/netlc/past/nlc2014/abstracts/pdf/conole.pdf

Cross, S., et al., (2012) Challenge and Change in Curriculum Design Process, Communities, Visualisation and Practice, [online] Available at: http://www.open.ac.uk/blogs/OULDI/wp-content/uploads/2010/11/OULDI_Final_Report_Final.pdf

Croxton, R. A. (2014) ‘The role of interactivity in student satisfaction and persistence in online learning’, Journal of Online Learning and Teaching, vol. 10, no. 2, pp. 314–325, [online] Available at: http://jolt.merlot.org/vol10no2/croxton_0614.pdf

Currier, S., et al., (2006) ‘Vocabularies for describing pedagogical approach in e-learning: a scoping study’, in Proceedings of the International Conference on Dublin Core and Metadata Applications: Metadata for Knowledge and Learning, Colima, Oct 2006.

Dalziel, J. R., et al., (2013) The Larnaca Declaration on Learning Design, [online] Available at: https://larnacadeclaration.wordpress.com/

Dalziel, J. R., et al., (2016) ‘Learning design: where do we go from here?’, in Learning Design: Conceptualizing a Framework for Teaching and Learning Online, ed J. R. Dalziel, Routledge, Abingdon.

Dubuclet, K. S., Lou, Y. & MacGregor, K. (2015) ‘Design and cognitive level of student dialogue in secondary school online courses’, American Journal of Distance Education, vol. 29, no. 4, pp. 283–296. http://doi.org/10.1080/08923647.2015.1085722

Ekwunife-Orakwue, K. C. V. & Teng, T. L. (2014) ‘The impact of transactional distance dialogic interactions on student learning outcomes in online and blended environments’, Computers and Education, vol. 78, pp. 414–427. http://doi.org/10.1016/j.compedu.2014.06.011

Evans, M. J. & Moore, J. S. (2013) ‘Peer tutoring with the aid of the Internet’, British Journal of Educational Technology, vol. 44, no. 1, pp. 144–155. http://doi.org/10.1111/j.1467-8535.2011.01280.x

Fasse, R., Humbert, J. & Rappold, R. (2009) ‘Rochester Institute of Technology: analyzing student success’, Journal of Asynchronous Learning Networks, vol. 13, no. 3, pp. 37–48, [online] Available at: https://files.eric.ed.gov/fulltext/EJ862354.pdf

Fill, K. & Conole, G. (2005) ‘A learning design toolkit to create pedagogically effective learning activities’, Journal of Interactive Media in Education, vol. 1, no. 9, pp. 1–16. http://doi.org/10.5334/2005-8

Fisher, M. & Baird, D. E. (2005) ‘Online learning design that fosters student support, self-regulation, and retention’, Campus-Wide Information Systems, vol. 22, no. 2, pp. 88–107. http://doi.org/10.1108/10650740510587100

Garrett, R. (2015) Up, Down, Flat: Distance Learning Data Collection and Enrollment Patterns in Australia, UK, and USA, [online] Available at: http://www.obhe.ac.uk/documents/download?id=1023

Garrison, D. R. (2011) Community of Inquiry Model, [online] Available at: https://coi.athabascau.ca/coi-model/an-interactive-coi-model/

Geri, N. (2012) ‘The resonance factor: probing the impact of video on student retention in distance learning’, Interdisciplinary Journal of E-Learning and Learning Objects, vol. 8, [online] Available at: https://www.learntechlib.org/p/44757/

Godwin, S. J., Thorpe, M. & Richardson, J. T. E. (2008) ‘The impact of computer-mediated interaction on distance learning’, British Journal of Educational Technology, vol. 39, no. 1, pp. 52–70. http://doi.org/10.1111/j.1467-8535.2007.00727.x

Gorsky, P. & Blau, I. (2009) ‘Online teaching effectiveness: a tale of two instructors’, International Review of Research in Open & Distance Learning, vol. 10, no. 3, pp. 1–28. http://dx.doi.org/10.19173/irrodl.v10i3.712

Hattie, J. (2003) ‘Teachers make a difference: what is the research evidence?’, in Proceedings of the Australian Council for Educational Research Annual Conference: Building Teacher Quality, Melbourne, Oct 2003.

Hatzipanagos, S. & Warburton, S. (2009) ‘Feedback as dialogue: exploring the links between formative assessment and social software in distance learning’, Learning, Media and Technology, vol. 34, no. 1, pp. 45–59. http://doi.org/10.1080/17439880902759919

Hawkins, A., et al., (2013) ‘Academic performance, course completion rates, and student perception of the quality and frequency of interaction in a virtual high school’, Distance Education, vol. 34, no. 1, pp. 64–83. http://doi.org/10.1080/01587919.2013.770430

Haywood, J. (2016) ‘Learning from MOOCs: lessons for the future’, in From Books to MOOCs? Emerging Models of Learning and Teaching in Higher Education, eds E. De Corte, L. Engwall & U. Teichler, Portland Press, London, pp. 69–80.

Hillman, D., Willis, D. & Gunawardena, C. (1994) ‘Learner-interface interaction in distance education: an extension of contemporary models and strategies for practitioners’, American Journal of Distance Education, vol. 8, no. 2, pp. 30–42. http://doi.org/10.1080/08923649409526853

Jaggars, S. S. & Xu, D. (2016) ‘How do online course design features influence student performance?’, Computers & Education, vol. 95, pp. 270–284. http://doi.org/10.1016/j.compedu.2016.01.014

Krippendorff, K. (2004) ‘Reliability in content analysis: some common misconceptions and recommendations’, Human Communication Research, vol. 30, no. 3, pp. 411–433. http://doi.org/10.1111/j.1468-2958.2004.tb00738.x

Krippendorff, K. (2011) Computing Krippendorff’s Alpha-Reliability, Annenberg School for Communication Departmental Papers, Philadelphia.

Krippendorff, K. (2013) Content Analysis: An Introduction to its Methodology, 3rd edn., Sage, London.

Laurillard, D. (2002) Rethinking University Teaching: A Conversational Framework for the Effective Use of Educational Technology, 2nd edn., Routledge, London.

Laurillard, D. (2012) Teaching as a Design Science: Building Pedagogical Patterns for Learning and Technology [Vitalsource e-book], Routledge, Abingdon.

Lenert, K. A. & Janes, D. P. (2017) ‘The incorporation of quality attributes into online course design in higher education’, International Journal of E-Learning & Distance Education, vol. 33, no. 1, pp. 1–14, [online] Available at: http://www.ijede.ca/index.php/jde/article/view/987/1658

Lin, J.-W., et al., (2014) ‘Development and evaluation of across-unit diagnostic feedback mechanism for online learning’, Journal of Educational Technology & Society, vol. 17, no. 3, pp. 138–153, [online] Available at: https://www.jstor.org/stable/jeductechsoci.17.3.138

Liu, S., Gomez, J. & Yen, C.-J. (2009) ‘Community college online course retention and final grade: predictability of social presence’, Journal of Interactive Online Learning, vol. 8, no. 2, pp. 165–182, [online] Available at: https://pdfs.semanticscholar.org/3bea/7b0a25381625b933f0d91f6e3a5286ff9ac2.pdf

London Knowledge Lab. (2016) Learning Designer, UCL Institute of Education, [online] Available at: https://www.ucl.ac.uk/learning-designer/index.php

Moore, M. G. (1989) ‘Three types of interaction’, American Journal of Distance Education, vol. 3, no. 2, pp. 1–7. http://doi.org/10.1080/08923648909526659

Moore, M. G. & Kearsley, G. (2011) Distance Education: A Systems View of Online Learning, 3rd edn., Wadsworth, Belmont.

Mor, Y., Ferguson, R. & Wasson, B. (2015) ‘Editorial: learning design, teacher inquiry into student learning and learning analytics: a call for action’, British Journal of Educational Technology, vol. 46, no. 2, pp. 221–229. http://doi.org/10.1111/bjet.12273

Neuendorf, K. A. (2002) The Content Analysis Guidebook, Sage, London.

Nicol, D. J. & Macfarlane-Dick, D. (2006) ‘Formative assessment and self-regulated learning: a model and seven principles of good feedback practice’, Studies in Higher Education, vol. 31, no. 2, pp. 199–218. http://doi.org/10.1080/03075070600572090

Persico, D., et al., (2013) ‘Learning design Rashomon I – Supporting the design of one lesson through different approaches’, Research in Learning Technology, vol. 21. http://doi.org/10.3402/rlt.v21i0.20224

Reeves, T. (1996) Evaluating What Really Matters in Computer-Based Education, [online] Available at: http://eduworks.com/Documents/Workshops/EdMedia1998/docs/reeves.html

Rienties, B. & Toetenel, L. (2016) ‘The impact of learning design on student behaviour, satisfaction and performance: a cross-institutional comparison across 151 modules’, Computers in Human Behavior, vol. 60, pp. 333–341. http://doi.org/10.1016/j.chb.2016.02.074

Rienties, B., Toetenel, L. & Bryan, A. (2015) ‘“Scaling up” learning design: impact of learning design activities on LMS behavior and performance’, in Proceedings of the 5th International Conference on Learning Analytics And Knowledge, Poughkeepsie, March 2015.

Salmon, G. (2004) E-Moderating: The Key to Online Teaching and Learning, Routledge, London.

Sancho-Vinuesa, T., Escudero-Viladoms, N. & Masià, R. (2013) ‘Continuous activity with immediate feedback: a good strategy to guarantee student engagement with the course’, Open Learning: The Journal of Open, Distance and e-Learning, vol. 28, no. 1, pp. 51–66. http://doi.org/10.1080/02680513.2013.776479

Shelton, B. E., Hung, J.-L. & Lowenthal, P. R. (2017) ‘Predicting student success by modeling student interaction in asynchronous online courses’, Distance Education, vol. 38, no. 1, pp. 59–69. http://doi.org/10.1080/01587919.2017.1299562

Simpson, O. (2010) ‘22% – can we do better?’ – The CWP Retention Literature Review, Centre for Widening Participation, The Open University, Milton Keynes.

Simpson, O. (2013) ‘Student retention in distance education: are we failing our students?’, Open Learning: The Journal of Open, Distance and e-Learning, vol. 28, no. 2, pp. 105–119. http://doi.org/10.1080/02680513.2013.847363

So, H.-J. (2009) ‘When groups decide to use asynchronous online discussions: collaborative learning and social presence under a voluntary participation structure’, Journal of Computer Assisted Learning, vol. 25, no. 2, pp. 143–160. http://doi.org/10.1111/j.1365-2729.2008.00293.x

Stephenson, J. & Coomey, M. (2001) ‘Online learning: It is all about dialogue, involvement, support and control – According to the research’, in Teaching and Learning Online: New Pedagogies for New Technologies (Creating Success), ed J. Stephenson, Kogan Page, London. pp. 37–52.

Stott, P. (2016) ‘The perils of a lack of student engagement: reflections of a “lonely, brave, and rather exposed” online instructor’, British Journal of Educational Technology, vol. 47, no. 1, pp. 51–64. http://doi.org/10.1111/bjet.12215

Sutton, R. (2014) ‘Unlearning the past: new foundations for online student retention’, Journal of Educators Online, vol. 11, no. 3, pp. 1–30, [online] Available at: https://www.thejeo.com/archive/archive/2014_113/suttonpdf

Swan, K., et al., (2015) ‘Metaphors for learning and the pedagogies of MOOCs’, in Meeting of the American Educational Research Association, Chicago, IL, April 2015.

Tenório, T., et al., (2016) ‘Does peer assessment in on-line learning environments work? A systematic review of the literature’, Computers in Human Behavior, vol. 64, pp. 94–107. http://doi.org/10.1016/j.chb.2016.06.020

Thorpe, M. (2008) ‘Effective online interaction: mapping course design to bridge from research to practice’, Australasian Journal of Educational Technology, vol. 24, no. 1, pp. 57–72. http://doi.org/10.14742/ajet.1230

Trigwell, K., Prosser, M. & Ginns, P. (2005) ‘Phenomenographic pedagogy and a revised “Approaches to teaching inventory”’, Higher Education Research & Development, vol. 24, no. 4, pp. 349–360. http://doi.org/10.1080/07294360500284730

University of Ulster. (2008) Hybrid Learning Model, [online] Available at: http://addl.ulster.ac.uk/odl/hybridlearningmodel

Vygotsky, L. S. & Cole, M. (1978) Mind in Society: The Development of Higher Psychological Processes, Harvard University Press, Cambridge, MA.

Walmsley, H. (2017) Best Practice Models for e-Design, Stoke-on-Trent, [online] Available at: http://blogs.staffs.ac.uk/bestpracticemodels/

Wanstreet, C. E. (2006) ‘Interaction in online learning environments: a review of the literature’, The Quarterly Review of Distance Education, vol. 7, no. 4, pp. 399–411.

Xiao, J. (2017) ‘Learner-content interaction in distance education: the weakest link in interaction research’, Distance Education, vol. 38, no. 1, pp. 123–135. http://doi.org/10.1080/01587919.2017.1298982