Designing and evaluating representations to model pedagogy

Elizabeth Mastermana* and Brock Craftb

aIT Services, University of Oxford, Oxford, UK; bDepartment of Computing, Goldsmiths, University of London, London, UK

Abstract

(Received 3 December 2012; final version received 19 June 2013; Published 16 September 2013)

This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit that attend to: (1) the underlying ontology of the domain, (2) the purpose of the task that the representation is intended to facilitate, (3) how best to support the cognitive processes of the users of the representations, (4) users’ differing needs and preferences, and (5) the tool and environment in which the representations are constructed and manipulated.

Through showing how epistemic efficacy can be applied to the design and evaluation of representations, the article presents the Learning Designer, a constructionist microworld in which teachers can both assemble their learning designs and model their pedagogy in terms of students’ potential learning experience. Although the activity of modelling may add to the cognitive task of design, the article suggests that the insights thereby gained can additionally help a lecturer who wishes to reuse a particular learning design to make informed decisions about its value to their practice.

Keywords: representations; epistemic efficacy; Learning Design; evaluation

*Corresponding author. Email: liz.masterman@it.ox.ac.uk

Research in Learning Technology 2013. © 2013 E. Masterman and B. Craft. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 3.0 Unported (CC BY 3.0) Licence (http://creativecommons.org/licenses/by/3.0/) permitting use, reuse, distribution and transmission, and reproduction in any medium, provided the original work is properly cited.

Citation: Research in Learning Technology Supplement 2013, 21: 20205 - http://dx.doi.org/10.3402/rlt.v21i0.20205

Introduction

Learning Design research has resulted in the development of a number of software tools to support teachers in designing learning experiences for their students. These tools include Phoebe (Masterman and Manton 2011), the London Pedagogy Planner (San Diego et al. 2007), Compendium LD (Conole 2013) and the Learning Designer, which is the subject of this article.

Design support tools are intended to help teachers through the series of decisions involved in bringing together into a learning design the aims, learning outcomes, teaching approach, method of assessment and the activities that learners will carry out in a particular sequence of learning, together with the resources needed and the constraints on the learning situation such as the learning environment and learner characteristics. The learning in question may occupy a single session (for example, tutorial, lecture, seminar or practical class), or it may extend across a module (i.e. a series of sessions related through an overarching topic of study and learning outcomes) or an entire programme.

Design can be characterised as an individual or collective cognitive activity that is normally externalised through a series of intermediate representations, each defined by its underlying structure, or form. The purpose of these representations is to facilitate the designer's thinking and, in collaborative design activities, to share the emerging learning design with others in a process that culminates in a final assemblage of artefacts: traditionally, a tabular representation of the learning activities and the resources that will be required during the learning session.

Learning Design research has proposed several representations, with differing forms, to support both the process and the product of the design activity. They include flowchart-style visualisations of learning activities (LAMS: Dalziel 2003), a columnar arrangement of resources, tasks and supports (Agostinho 2006), and representations of the learning design in concept-map form (Conole 2013). However, what is missing from their descriptions is a principled account of the rationale behind the chosen form of representation and its associated notation: that is, how they are intended to facilitate the cognitive tasks involved in planning students’ learning experiences. This is not to suggest that these representations are ineffective in terms of their usability and value to lecturers. Rather, we propose that adopting an explicitly theory-informed approach to the design and evaluation of such representations is advantageous in two ways. Firstly, it should optimise the user's task and, thereby, maximise the usability and usefulness of the representations. Secondly, it should contribute to our understanding of users’ differing – and sometimes contradictory – reactions to the representations, as revealed by evaluation data. This should equip us to address any shortcomings in the representations more effectively.

This paper puts forward one possible theoretical basis for designing and evaluating such representations: the framework of epistemic efficacy proposed by Peterson (1996) in his editorial introduction to a collection of papers authored largely by researchers from the cognitive science field. The framework draws on a number of schemes for classifying representations according to the factors that account for their effectiveness in supporting cognitive tasks, and combines these factors under a unified set of “general headings” that nonetheless acknowledge the “web of relations and trade-offs” entailed. The article begins by outlining the five dimensions of fit that constitute epistemic efficacy, illustrating each one with examples drawn from the initial research and prototyping phases of the Learning Designer project. It moves on to show how these dimensions are manifested in representations within the Learning Designer tool that enable teachers to model their pedagogy in a constructionist microworld. It then uses epistemic efficacy as a framework for analysing evaluation data collected by the project, discusses its contribution to answering the questions addressed in the evaluation of the Learning Designer and appraises its overall value to the design community.

Epistemic efficacy: a theoretical framework for analysing, designing and evaluating representations

If representations are to be offered to teachers specifically to facilitate the pedagogic design process, then the developers of supportive digital tools should have a theory-informed understanding of what forms of representation are more, or less, conducive to this purpose. Previous research by the first author has drawn from the cognitive science literature in order to analyse lecturers’ relationships to existing forms of representation (Masterman 2009) and to design alternative forms where the existing ones fall short (for example, representations to support students’ reasoning about problems of historical causation: Masterman 2004). The particular theoretical framework adopted in those studies and, hence, in the present paper, is epistemic efficacy, operationalised through its five dimensions of fit (Peterson 1996): ontology, task, process, user and circumstance.

In outlining the framework here, we draw on data collected from lecturers during the requirements-elicitation phase of the Learning Designer project, when we interviewed them about their current design practice. A fuller account of this initial phase and a summary of the findings from the interviews can be found in Laurillard et al. (2013) and Masterman (2013). However, many of the quotations from the data are published here for the first time.

Domain-fit

Peterson uses the term “ontology-fit” for the extent to which a representation can capture all of the elements of the domain being represented, together with the relationships between them, for the purposes of problem-solving (cf. also Reusser 1992). However, to avoid possible confusion with the ontology that underpins the Learning Designer (to be discussed later), we have chosen instead to use the term “domain-fit.” Typical elements of the Learning Design domain include the topic, number of students, level of study, a code to identify the design, intended learning outcomes, learning activities, their lengths, the resources required for the activities and the method of assessment. In Figure 1, which reflects the predominant (tabular) layout of lesson plans in UK schools and Further Education, it is clearly possible to show these elements. However, the only relationship that can readily be expressed is the temporal contiguity of the learning activities. In order to link, for example, each learning outcome with the learning activity that supports it, the learning outcomes have to be re-specified on the relevant rows. This introduces redundancy into the representation.

Figure 1.  A simple lesson plan in tabular format, showing the activities, learning outcomes and resources needed to complete the plan.
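
To make the redundancy problem concrete, the following is a minimal sketch of a hypothetical data model (ours, not the Learning Designer's ontology or published code) in which each learning outcome is a single object referenced by the activities that support it, so that linking one outcome to several activities requires no re-typing:

```python
# A minimal sketch (our own illustration, not the Learning Designer's
# ontology): outcomes are single objects referenced by activities, so
# linking one outcome to several activities adds no redundant copies.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Outcome:
    code: str          # e.g. "LO1"
    description: str

@dataclass
class Activity:
    name: str
    duration_mins: int
    resources: List[str] = field(default_factory=list)
    outcomes: List[Outcome] = field(default_factory=list)  # references, not copies

# One outcome shared by two activities: change its wording once,
# and every activity that supports it stays in step.
lo1 = Outcome("LO1", "Collect and analyse field data")
plan = [
    Activity("Initial briefing", 10, ["slides"], [lo1]),
    Activity("Data collection", 30, ["sampling kit"], [lo1]),
]
```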

Teachers do use other forms of representation during the design process, but these may have different shortcomings in terms of domain-fit. For example, an interviewee who was a keen user of mind maps found it difficult to represent the element of time:

mind maps are very good at dealing with the spatial layout… […] but to actually deal with, with putting time on each activity, it gets quite complicated. […] You can do it, but it becomes quite difficult to track them through.

Task-fit

The dimension of task-fit relates to how useful and appropriate the abstract form of the representation is to the purpose of the task for which it is being used. Therefore, it may be necessary to change the form of representation according to the nature of the task. For example, physicists will use mathematical symbols for stating laws and deriving predictions, diagrams for plotting data for analysis and computational models for simulating the behaviour of phenomena (Cheng 1996).

Our research has shown that the design process can start with a “brainstorming” activity, in which the teacher will jot down ideas for activities and resources in an unordered fashion, either as lists of notes or in mind-map format, before assembling them into a more coherent text-based plan such as Figure 1. It is in this initial phase of rapid, fluid thinking that mind maps demonstrate their advantage over tabular plans, as this lecturer commented when outlining his own vision for a design support tool:

… something that maybe supports […] brainstorming your ideas about a session. […] something […] mind-mappy [sic] where you can just, you know, throw your ideas into something and work those things around and then maybe extract those into aims and outcomes, or something like that.

Process-fit

Process-fit addresses the extent to which a representation facilitates – or, conversely, impedes – reasoning and problem-solving in the domain as the user manipulates the elements in an interactive representation or interprets a ready-made representation (Larkin and Simon 1987). Two techniques that can contribute to this “computational offloading” are graphical constraining and re-representation (Rogers and Scaife 1998).

Graphical constraining refers to the way in which graphical elements and their position relative to each other constrain the kinds of inferences that can be made about the concept being represented. For example, in Figure 1, the heights of the rows representing the learning activities are determined by the cell containing the largest number of lines of text on each row. If row height were determined by the duration of each activity instead, then it would be possible to make inferences about the relative proportions of the lesson spent doing the different activities.
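The duration-based alternative just described can be sketched in a few lines; the pixel-per-minute scale and the activity list are assumptions of ours for illustration:

```python
# Sketch of the duration-proportional layout suggested above (the
# pixel-per-minute scale is an arbitrary assumption of ours).
activities = [("Initial briefing", 10), ("Data collection", 30),
              ("Data analysis", 20), ("Group discussion", 20)]
PIXELS_PER_MINUTE = 4

total = sum(minutes for _, minutes in activities)
for name, minutes in activities:
    height = minutes * PIXELS_PER_MINUTE   # row height now encodes duration...
    share = minutes / total                # ...so relative time is readable at a glance
    print(f"{name:<18} {height:>3}px  ({share:.0%} of the lesson)")
```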

Re-representation opens different “windows” on the problem/task at hand in order to illuminate different aspects of the problem space (Norman 1993; Stenning and Oberlander 1995). Re-representation is more easily achieved by digital means, especially where the user may need to switch rapidly between, for example, a pictorial representation of feeding relationships in a pond and a diagram of the underlying food web (Rogers and Scaife 1998).

User-fit

User-fit is perhaps the most problematic dimension to address when designing a particular representation, as it depends on an individual's characteristics: for example, their capacity for different kinds of reasoning; their level of expertise in the domain (Peterson 1996); their familiarity with the particular form of representation; or the relative ease with which they handle, say, information presented in graphical format versus textual information. Thus, the epistemic efficacy of a representation may vary considerably from person to person. For example, one of the participants in the Learning Designer project commented, “I am quite visual and some of my ideas I think about in terms of pictures. […] I might not be thinking in words.”

Circumstance-fit

The final dimension relates to the tool and the physical environment in which the representation is constructed and manipulated. It is closely associated with usability and learnability, and with the extent to which the tool – as much as the representation – is appropriate to the other four dimensions of fit. Software that is both designed for the purpose and well designed in terms of usability can make a considerable difference to the cognitive burden of the design task. For example, one of our interviewees felt that “you get a more creative result […] by doing it on a piece of paper that doesn't look like a PowerPoint story.”

Applying epistemic efficacy to representations of pedagogy in the Learning Designer

Having outlined the five dimensions of fit with illustrations from teachers’ design practice, we now turn to their application in the design and evaluation of representations of pedagogy in the Learning Designer tool. In this section of the article, we describe its intended purpose and the representations that were designed in order to enable teachers to model their pedagogy.

The Learning Designer tool has been developed to help teachers advance their practice within a knowledge-building community of educators. In so doing, it goes beyond merely supporting tasks in the design process such as specifying learning outcomes, durations, learning activities and resources, and so forth. Rather, it provides an environment, or microworld, in which teachers can manipulate representations containing these elements in order to explore, reflect on and adapt their ideas until they have created a learning design which meets their objectives, and which they can share with others (Laurillard et al. 2013).

The computational model underlying the microworld is an ontology of Learning Design, which was developed by the project team through a lengthy and detailed collaborative process of abstraction from a wide range of artefacts. These included lesson plans and module design documents collected from interviewees and from the six institutions involved in the project. The ontology itself was implemented in the software using a standard methodology, described by Charlton, Magoulas and Laurillard (2012).

“Pedagogy” is broadly interpreted here as “learning in the context of teaching, and teaching that has learning as its goal” (Beetham and Sharpe 2007, p. 2). The modelling functionality in the Learning Designer enables teachers to model the kind of learning that their students might experience when the learning design is realised in a learning session, and to explore how changes in the type of learning activity, its duration and/or the use of digital technology could affect that learning experience. This entailed extending the Learning Designer's ontology with two additional sets of concepts, which are defined as properties (attributes) of each type of learning activity:

  1. the types of cognitive activity in which students engage (acquisition, discussion, inquiry, practice and production), together with the relative proportion of each within the learning activity; and
  2. the types of learning experience that the activity affords (social, individualised and “one-size-fits-all”).

The concepts were derived from Laurillard's (2002) Conversational Framework, and the relative proportions of cognitive activity were based on data collected over a number of years at one of the partner institutions in the project. However, users can modify these default values according to their context. It is important to emphasise that the Learning Designer only provides an indication of the learning experience that might ensue from these changes since learning design is, ultimately, design for learning (Beetham and Sharpe 2007).
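
One way to picture how such defaults might be attached to each type of teaching and learning activity (TLA) and then overridden per context is sketched below. The function name and structure are ours, not the tool's API; the 45/25/30 default echoes the worked example discussed later in this section:

```python
# Hypothetical sketch (ours) of default cognitive-activity proportions
# attached to a TLA type and overridden by the user; the 45/25/30 mix
# matches the "TEL resource-based group activity" example in the text.
DEFAULTS = {
    "TEL resource-based group activity":
        {"acquisition": 0.45, "discussion": 0.25, "inquiry": 0.30},
    "Teacher-led briefing":
        {"acquisition": 1.00},
}

def proportions_for(tla_type, overrides=None):
    """Return the cognitive-activity mix for a TLA, renormalised after overrides."""
    mix = dict(DEFAULTS[tla_type])
    if overrides:
        mix.update(overrides)
        total = sum(mix.values())
        mix = {kind: value / total for kind, value in mix.items()}
    return mix

# A teacher who runs more discussion-heavy group work adjusts the default:
print(proportions_for("TEL resource-based group activity", {"discussion": 0.40}))
```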

To support the new modelling task and expose the additional elements from the ontology, we selected four forms of representation with which users would be broadly familiar in their professional and everyday lives: timeline, bar chart, table and pie-chart. These are deployed in two linked “views”: Timeline and Analysis.

The Timeline view (Figure 2) is where the teacher assembles the teaching and learning activities (TLAs) and determines their duration, group size (in collaborative learning) and the resources required. The timeline addresses the graphical constraining issues of the tabular representation, in that the relative durations of the different learning activities are now clear. For example, in comparison with Figure 1, the teacher's initial briefing can now clearly be seen to be one-third the length of data collection. However, although the timeline exploits the familiar linear conceptualisation of time, it serves a slightly different purpose in that it shows the accumulation of learning time, not elapsed time. This means that all of the learning activities must appear on the timeline contiguously with each other, even if they are to be carried out on, say, different days. In this way, the Learning Designer functions as a design tool, rather than as a planning or scheduling tool – i.e. the purpose that might more usually be associated with a timeline. Our selection of this form of representation, therefore, carried potential implications for process-fit.

Figure 2.  Timeline view of the lesson described in Figure 1. The teacher drags each learning activity from the palette of generic descriptions at the right, drops it onto the timeline, renames it and makes the required alterations to the default values.
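
The distinction between elapsed time and accumulated learning time amounts to simple arithmetic, sketched here with invented durations (our illustration, not project code):

```python
# Sketch (ours) of the timeline's cumulative arithmetic: each activity
# starts where the previous one's learning time ends, regardless of the
# calendar days on which the activities actually take place.
from itertools import accumulate

durations = [10, 30, 20, 20]          # minutes of learning time per activity
starts = [0, *accumulate(durations)]  # running totals; last value is lesson length

for start, minutes in zip(starts, durations):
    print(f"cumulative minute {start:>3} -> runs for {minutes} min")
print(f"total learning time: {starts[-1]} min")
```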

The cognitive activities are represented in this view in two ways. The bar chart within each learning activity on the timeline shows how that activity is composed of the different types of cognitive activity. For example, students merely listen and watch (“acquisition”) during the teacher's initial briefing, but collecting data in collaboration with other students is a more varied exercise that includes a substantial element of active learning through inquiry. However, the bar chart is actually a re-representation of numeric information, which is shown in tabular form in the “Properties” pane on the right: namely, the proportions for the learning activity currently highlighted on the timeline. For example, data collection, an activity of the type “TEL resource-based group activity,” is estimated to involve 45% acquisition, 25% discussion and 30% inquiry. By seeing the proportions expressed quantitatively, the teacher can decide whether they are appropriate to the way in which she normally designs such activities, and modify them. The changes will be reflected in the bar chart accordingly.

Although the teacher can make an overall inference regarding the proportion of students’ learning taken up in different cognitive activities from the bar charts in the timeline, this requires some cognitive effort. Moreover, the timeline does not give any visual indication of the learning experience. For a cumulative representation of these aspects of learning, the teacher can switch to the Learning Designer's Analysis view, shown in Figure 3.

Figure 3.  The Learning Designer's analysis of the activities from the lesson in Figure 1.

The cognitive activities have been aggregated into a single pie-chart for the whole lesson, using the same colour coding as on the timeline. The different types of learning experience supported by the learning activities are represented as a stacked bar. Unsurprisingly, the dominance of discussion has resulted in an overwhelmingly social learning experience, but there is no individualisation.
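
The article does not spell out the aggregation rule the tool uses, but one plausible reading, offered here purely as an assumption, is a duration-weighted average of each activity's cognitive mix:

```python
# One plausible aggregation (an assumption on our part; the actual rule
# is not specified in the text): weight each activity's cognitive mix by
# its share of total learning time, then sum per category for the pie chart.
def aggregate(activities):
    """activities: list of (duration_mins, {cognitive_activity: proportion})."""
    total_time = sum(duration for duration, _ in activities)
    lesson_mix = {}
    for duration, mix in activities:
        for kind, share in mix.items():
            lesson_mix[kind] = lesson_mix.get(kind, 0.0) + share * duration / total_time
    return lesson_mix

lesson = [
    (10, {"acquisition": 1.00}),                                       # briefing
    (30, {"acquisition": 0.45, "discussion": 0.25, "inquiry": 0.30}),  # data collection
]
print(aggregate(lesson))
# {'acquisition': 0.5875, 'discussion': 0.1875, 'inquiry': 0.225}
```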

The modelling functionality can be seen at work by comparing Figures 3 and 4. Figure 4 assumes that the group activities to plan, collect and analyse the data have been replaced by, respectively, a teacher-led discussion, and data collection and analysis conducted by students working individually. Moreover, the final group discussion has been replaced by an individual essay, which is marked by the teacher.

Figure 4.  Effect on the Learning Designer's analysis resulting from the replacement of collaborative activities with individual ones.

The cognitive activity of discussion has been much reduced, and there is a greater element of practice and production (writing the essay). The learning experience has also changed, with “social” learning making way for “one-size-fits-all” and “individualised” learning (the latter is accounted for by the feedback that the student will receive on their essay). Neither of these two models is right or wrong: what the Learning Designer has done is to help the teacher to see in advance how variations in learning activities can alter the students’ experience. The teacher can thus fine-tune the design on the timeline, checking the outcomes in the Analysis view, until it approximates to the kind of learning that they want their students to have.

The principal dimensions which have been considered in this section are task-fit, process-fit, domain-fit and, to a lesser extent, user-fit. This is not to say that circumstance-fit (usability) was not considered in the design of these representations; however, apart from adding learning activities from the palette to the timeline and editing the values in the table of learning activity properties, users do not directly manipulate the representations themselves. Moreover, circumstance-fit was compromised to some extent by the incorporation of ready-made open-source code for the timeline for the sake of efficiency. This limited the extent to which the timeline could be modified, both to meet our initial user requirements and to address subsequent suggestions by evaluation participants.

Teachers’ evaluation of the representations: an analysis in terms of epistemic efficacy

The Learning Designer was evaluated with experienced and early-career lecturers in several UK universities as part of an iterative cycle of design, development, evaluation and redesign. To concentrate on specific aspects of the tool, we conducted guided walkthroughs with individual lecturers; to try it out with particular demographic groups, we ran workshops. We collected qualitative data regarding the modelling functionality and its representations in four discrete events as follows:

The walkthroughs were audio-recorded. Qualitative data were gathered in the workshops through free-text responses to reflective questionnaires and through audio-recorded plenary discussions facilitated by the researchers.

In relation to the modelling functionality and representations, we wanted to find out:

  1. Do lecturers understand the concepts and terminology?
  2. Do the representations make sense to them?
  3. Do they find the ability to model these aspects of their pedagogy useful?

In this section, we analyse data from these evaluations in terms of the five dimensions of fit.

Domain-fit

As noted in the previous section, providing the functionality for modelling in the Learning Designer entailed extending the ontology to embrace concepts and terminology that teachers might not have hitherto encountered in the context of designing for their students’ learning.

Some participants contested our choice of concepts; for example: “inquiry and discussion are so linked together, and even acquisition, when you are talking to other people you are acquiring, inquiring, and acquiring through discussion […] Discussing could be also production […] write things down, come up with new ideas.”

The terminology, too, was questioned, with one graduate teaching assistant feeling as though “I was forced to translate what I was going to do to an alien language back and forth.” However, we were responsive to feedback received in early evaluations. For example, the “individualised” category in the stacked bar chart was originally labelled “personalised,” but was changed after participants noted the potential for confusion with more commonly established usage: “personalised learning is where the learner is […] making their choices” (lecturer).

Task-fit

In relation to task-fit, the evaluations sought to establish whether lecturers found modelling students’ likely learning experience to be a useful task. The results were encouraging; for example: “I'd set myself up a perfect pie-chart outcome […] and then […] I would just use it over and over and over again for that model to see if it worked” (lecturer). Another lecturer discerned the potential to improve their overall approach to design:

It is rather a haphazard approach I have at the moment […] this will certainly help to structure it and record my design and then be able to modify it and then, after I have presented the session, be able to reflect and redesign certain aspects.

In contrast, another lecturer seemed to feel that the design process was being subordinated to the exigencies of the tool – and that, for her, the modelling task was redundant: “But why I am doing that [i.e. mapping the learning activities in her draft plan to the Learning Designer's TLAs]? So that [the] metrics work or so that I can teach this session?”

Process-fit

As suspected, some participants initially interpreted the timeline as a schedule, not as a representation of cumulative learning time. However, they seemed able to adjust their thinking when the researcher explained the difference; for example: “It's a new concept because we work with schedules and dates […] I can see that it would be useful because it's making you focus on the learning for that module” (lecturer).

The pie-chart analysis of cognitive activities was generally well received, with a number of lecturers expressing appreciation at the new insights that the Learning Designer offered into their designs, as these two quotations show:

…each of these colours represents a different learning [i.e. cognitive] activity that's taking place, […] it's multiple ways of students acquiring learning, so they've got the acquisition – there's a little bit of chalk and talk, which students actually quite like, they think they're getting something specialist. There's lots of discussion going on, so there's lots and lots of […] self-explanations; there's inquiry-based learning…. (lecturer)
Some of the assumptions built in are quite useful in making you think […] For example, most of my classes are based on a small group discussion and it just makes you think about how much actual inquiry is in there, how much acquisition there is…. (graduate teaching assistant)

However, at least one lecturer questioned the helpfulness of numeric data in judging the proportions of the different cognitive activities comprising each learning activity: “I don't think it is actually important to have accurate percentage figures and absolute accurate representation. I'd be quite happy as a teacher to have sort of ballpark graphical representation of what's likely to happen to get me to think about my practice.”

User-fit

Evidence of user-fit can be found in self-reports in which participants explicitly contextualised their experience of the Learning Designer in relation to their personal needs and preferences. For example – and in contrast to the final quotation in the preceding section – one lecturer said of the pie-chart analysis: “I am [a] quantitative kind of person, so I like the fact that it's quantified in such a way that you can see the distribution in the pie-chart for those different types of learning.” However, one of her colleagues felt that, because her teaching approach was contingent on a number of variables, the pie-charts were of little value in suggesting learners’ likely experience: “I present in many, many different ways to many, many different audiences, the funnier I am, the perkier I am or the level of preparation I've done, so if I am setting the thing [i.e. proportions] in advance […] I wouldn't even know on a day….”

Circumstance-fit

Although we did not conduct rigorous usability evaluations, issues relating to both usability and learnability nevertheless emerged in the walkthroughs and workshops. Where feasible, individual issues were resolved in subsequent prototypes. For example, a “zoom” feature was added to the timeline in response to a request for all of the learning activities in a session to be visible at once.

Snapshots of the overall usability and learnability of the tool were captured at the two workshops. Most participants in the first workshop reported that learning how to use the Learning Designer properly would require a considerable investment of time, and 10 out of the 16 indicated in a post-workshop survey that they would want support in exploring it further. However, participants in the second workshop, who worked with a revised version of the tool, seemed to find it easier to use. Of the 10 responses to a free-text survey question “How easy was it to use the Learning Designer to create a teaching plan in the way you have been taught?”, nine included the phrases “fairly easy,” “easy,” “very easy” or “not difficult.” Nevertheless, five people felt they would need some support in exploring the tool further.

Discussion: mapping dimensions of fit to evaluation questions

To answer our first evaluation question, “Do lecturers understand the concepts and terminology?”, we can draw on data collected in relation to domain-fit. In the main, lecturers understood the Learning Designer's terminology. However, it was not wholly surprising that they contested the categories of cognitive activity, since not only do alternative typologies exist (such as Conole's pedagogy profile: Conole 2013), but the disagreement also reflected a broader issue in our research. Although we were able to identify a core set of commonly used concepts when developing the ontology, we found that they can be named differently; for example, a “module” can also be known as a “unit” or “course.” Moreover, concepts can be operationalised in different ways: for example, the duration of a learning session can be measured in minutes or in hours (and fractions thereof). These differences emphasise the need for users to customise the ontology to their needs.

Data in the categories of process-fit and user-fit help to address question 2, “Do the representations make sense to lecturers?” We noted that lecturers in our evaluations were able to adapt to the unexpected (to them) cognitive operations supported by the timeline; however, they had the benefit of a researcher to point out this novel use to them. The question therefore remains how quickly a lecturer coming to the tool without such support would become aware of it. This situation is also reflected in the comments of lecturers under the heading “circumstance-fit” in the previous section, and suggests that the Learning Designer is best introduced to lecturers in the context of staff development initiatives.

Another question raised in relation to process-fit concerned the quantification of the different proportions of cognitive activities comprising a learning activity, which at least one lecturer found unnatural. This suggests that a semi-quantitative approach to adjusting the proportions might be desirable, possibly by concealing the numeric data beneath “sliders” in the user interface. In manipulating these, the lecturer would merely need to think in terms of, for example, “mostly discussion,” “very little acquisition” or “roughly equal proportions of inquiry and practice.” These approximations would then be translated into numeric values by the tool, but the user need never be exposed to them. A sketch of this idea follows.
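
The slider idea might work along the following lines; the anchor labels and their weights are invented purely for illustration:

```python
# Sketch of the semi-quantitative sliders proposed above; the anchor
# labels and their weights are our own invention, not a tool feature.
ANCHORS = {"none": 0, "very little": 1, "some": 2, "mostly": 4}

def mix_from_sliders(settings):
    """Translate qualitative slider positions into proportions the user never sees."""
    weights = {kind: ANCHORS[label] for kind, label in settings.items()}
    total = sum(weights.values()) or 1   # guard against all-zero settings
    return {kind: weight / total for kind, weight in weights.items()}

print(mix_from_sliders({"discussion": "mostly", "acquisition": "very little",
                        "inquiry": "some", "practice": "some"}))
# -> discussion ~44%, acquisition ~11%, inquiry ~22%, practice ~22%
```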

User-fit is closely associated with process-fit, since the ease with which users read off the information in a representation depends on how comfortable they are with different representational forms. However, our criterion for classifying data in this category depends on participants’ level of self-awareness, and so we have to take their contributions at face value. An appreciation of user-fit can help to identify those issues that can reasonably be attributed to individual differences: in other words, we must acknowledge that some lecturers will simply not be won over to the tool.

Question 3, “Do lecturers find the ability to model these aspects of their pedagogy useful?”, invited an analysis of the data categorised under task-fit. Lecturers’ mixed opinions regarding the introduction of a new task (modelling their pedagogy) suggest that an additional cognitive burden is acceptable provided that the tool offers them something in return. In the case of the Learning Designer, it offers insights into students’ possible learning experience that enable a lecturer to review their design and to make adjustments that have a positive impact on students’ satisfaction as well as their performance. This notion of repayment for additional effort also applies to the intended role of the Learning Designer in supporting knowledge-building among the teaching community. As Falconer and Littlejohn (2009) note, a teacher contemplating reusing a particular learning design needs sufficient information about the pedagogic intention underlying that design in order to make an informed decision about its usefulness. This requires the person who is sharing the design to make explicit some aspects of their pedagogy that are normally tacit, and so they may be more motivated to do so if they too can benefit.

Conclusion

In this paper, we have sought to demonstrate how epistemic efficacy can be applied both as a prescriptive framework for designing representations and as an analytical framework for evaluating those representations. To illustrate our case we have used the “modelling” feature of the Learning Designer. In appraising the extent to which the goals in designing the representations have been met, we have also been able to understand the reasons why certain users have found them wanting and, hence, how we might address those shortcomings.

We should emphasise that the framework is not intended to act as a guide to deploying existing representations in particular design tasks, and so it stands in contrast to the typology of representations offered by Conole (2013).

An additional strength of epistemic efficacy – and, hence, an argument for using it for the purposes described here – is that each dimension is reflected in respected empirical research into the cognitive aspects of representations (some of which was cited in our initial outline earlier in the article). This means that we can bring together, into a coherent and cohesive whole, findings from different pieces of research, each of which on its own can only offer partial insights into the properties and affordances of representations. Of course, one must take care that these different research findings are compatible with each other and avoid a “pick-and-mix” approach that disregards inconvenient contradictions. We trust that future applications of the framework will reinforce its value to the design community.

Acknowledgements

The Learning Designer project was funded from 2008 to 2011 by the ESRC-EPSRC TLRP TEL programme, grant ref. RES-139-25-0406. The partners were: Institute of Education (lead), Birkbeck University of London, University of Oxford, London School of Economics & Political Science, Royal Veterinary College, and London Metropolitan University. The authors acknowledge the contribution made to the research described here by the following team members: Diana Laurillard, Patricia Charlton, Dejan Ljubojevic, Roser Pujadas and Carrie Roder.

References

Agostinho, S. (2006) ‘The use of a visual learning design representation to document and communicate teaching ideas’, ascilite 2006, Who's learning? Whose technology?, Sydney, pp. 3–8.

Beetham, H. & Sharpe, R., (eds) (2007) Rethinking Pedagogy for the Digital Age, 1st edn, Routledge, London.

Charlton, P., Magoulas, G. & Laurillard, D. (2012) ‘Enabling creative learning design through semantic technologies’, Technology, Pedagogy and Education, vol. 21, no. 2, pp. 231–253.

Cheng, P. C.-H. (1996) ‘Problem solving and learning with diagrammatic representations in physics’, in Forms of Representation: An interdisciplinary theme for Cognitive Science, ed. D. Peterson, Intellect, Exeter, UK, pp. 47–66.

Conole, G. (2013) Designing for Learning in an Open World, Springer, New York.

Dalziel, J. R. (2003) ‘Implementing learning design: the Learning Activity Management System (LAMS)’, ascilite 2003, ‘Interact, Integrate, Impact’, Adelaide, Australia.

Falconer, I. & Littlejohn, A. (2009) ‘Representing models of practice’, in Handbook of Research on Learning Design and Learning Objects: Issues, Applications, and Technologies (vol. 1), eds L. Lockyer, et al., Information Science Reference, Hershey, PA, pp. 20–40.

Larkin, J. H. & Simon, H. A. (1987) ‘Why a Diagram is (Sometimes) Worth Ten Thousand Words’, Cognitive Science, vol. 11, no. 1, pp. 65–100.

Laurillard, D. (2002) Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies, 2nd edn, RoutledgeFalmer, London.

Laurillard, D., et al., (2013) ‘A constructionist learning environment for teachers to model learning designs’, Journal of Computer Assisted Learning, vol. 29, no. 1, pp. 15–30.

Masterman, E. (2004) Representation, Mediation, Conversation: Integrating Socio-Cultural and Cognitive Perspectives in the Design of a Learning Technology Artefact for Reasoning about Historical Causation. Doctoral Thesis …, University of Birmingham.

Masterman, E. (2009) ‘Activity theory and the design of pedagogic planning tools’, in Handbook of Research on Learning Design and Learning Objects: Issues, Applications, and Technologies (vol. 1), eds L. Lockyer, et al., Information Science Reference, Hershey, PA, pp. 209–227.

Masterman, E. & Manton, M. (2011) ‘Teachers’ perspectives on digital tools for pedagogic planning and design’, Technology, Pedagogy and Education, vol. 20, no. 2, pp. 227–246.

Masterman, L. (2013) ‘The challenge of teachers’ design practice’, in Rethinking Pedagogy for the Digital Age, 2nd edn, eds H. Beetham & R. Sharpe, Routledge, London, pp. 64–77.

Norman, D. A. (1993) Things That Make Us Smart: Defending Human Attributes in the Age of the Machine, Addison-Wesley, Reading, MA.

Peterson, D., (ed) (1996) Forms of Representation: An interdisciplinary theme for Cognitive Science, Intellect, Exeter, UK.

Reusser, K. (1992) ‘Tutoring systems and pedagogical theory: representational tools for understanding, planning, and reflection in problem solving’, in Computers as Cognitive Tools, eds S. Lajoie & S. Derry, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 143–177.

Rogers, Y. & Scaife, M. (1998) ‘How can interactive multimedia facilitate learning?’, in Intelligence and Multimodality in Multimedia Interfaces: Research and Applications, ed. J. Lee, AAAI Press, Menlo Park, CA.

San Diego, J. P., et al., (2007) ‘The feasibility of modelling lecturers’ approaches to learning design’, ALT-J, vol. 16, no. 1, pp. 15–29.

Stenning, K. & Oberlander, J. (1995) ‘A cognitive theory of graphical and linguistic reasoning: logic and implementation’, Cognitive Science, vol. 19, no. 1, pp. 97–140.