Evaluating complex digital resources

Squires (1999) discussed the gap between the HCI (Human Computer Interaction) and educational computing communities in their very different approaches to evaluating educational software. This paper revisits that issue in the context of evaluating digital resources, focusing on two approaches to evaluation: an HCI and an educational perspective. Squires and Preece's HCI evaluation model is a predictive model: it helps teachers decide whether or not to use educational software, whilst our own concern is with evaluating the use of learning technologies. It is suggested that in part the different approaches of the two communities relate to the different focus that each takes: in HCI the focus is typically on development and hence usability, whilst in education the concern is with the learner and teacher use.


Foreword
David Squires had an important influence on the work of all the authors and on that of the Computers and Learning Research Group (CALRG) at the Open University, where four of the authors are located. Ann first met David some twenty-five years ago when one of his roles was working with teachers to develop educational software. Over the years his work has often taken a similar route to the work of the CALRG and we have been indebted to his contributions. In the early days of educational 'microcomputer' use (as it was known in the 1980s) David provided invaluable advice to our Micros in Schools project at the OU, and we also drew on his work with teachers which developed models and guidelines for their review and use of educational software.
Erica felt honoured that David Squires examined her Ph.D. thesis and valued his insightful comments and the discussion he encouraged and supported. In 1996, David ran a British Computer Society Human Computer Interaction Special Interest Group meeting to explore the issues of joint concern to educational and HCI evaluators and invited Ann along. This paper and the work reported here were influenced by the lively discussion at the workshop, the special issue of 'Interacting with Computers' that followed and the continuing debate.

Introduction: digital resources for teaching
There has recently been a rapid increase in the development and use of digital resources for teaching and learning. 'Digital resources' is a broad term: it can include electronic books, online journals, movies, reference texts such as dictionaries, as well as audio or image files; it is used to cover material created digitally or by scanning analogue resources. In this paper we use the term 'complex digital resources' to mean any or all of the above, but we have a particular interest in learning environments which combine text and graphics, especially maps. This increase in use has been accompanied by a high level of concern by government and other funding bodies, both in the UK and the US, about the impact of such developments. For example, a recent programme jointly funded by the Joint Information Systems Committee (JISC) in the UK and the National Science Foundation (NSF) in the US focuses on how innovative applications of emerging IT and digital resources may transform teaching and learning. It is clear that there is an unprecedented level of interest in and use of such digital resources, and a clear political wish to encourage and foster this. Both JISC and NSF expressed their wish to promote 'effective use of large scale distributed digital content and advanced networking technologies in the context of the (higher education) classroom'. In particular, they emphasize the availability of the combination of state-of-the-art digital and Internet-based services, as well as digital content available globally, for emerging applications in undergraduate education.
The particular focus of this paper is the evaluation of such complex digital resources. It is only through such evaluation that we begin to understand whether using such resources does transform teaching and learning. Evaluations of learning technologies have been a continuing part of the work of the Computers and Learning Research Group (CALRG) at the Open University for some twenty-five years. Whilst the group has been involved with evaluating the use of IT in education in the broadest sense, this paper will focus on digital resources and, in particular, a project on digital maps.
The outline of this paper is as follows. The next two sections describe the resources that have been developed in this area and their advantages for learning. Next we outline the CALRG's approach to evaluation, explain its rationale, and discuss how HCI and educational concerns may differ. This section also describes an HCI evaluation model developed for evaluating digital library services and compares the two approaches. Following this we outline how the framework was applied to evaluating an external project before drawing our conclusions.

Resources for learning with ICT in geography and cognate areas
Within geography and cartography, and related subjects such as geology, meteorology, environmental science and earth science, a range of educational resources has been developed (for example, CTIGGM, 1998). The Higher Education Funding Councils' Fund for the Development of Teaching and Learning (FDTL), the Teaching and Learning Technology Programme (TLTP) and the DeLiberations Geography project within the JISC Electronic Libraries programme have all funded efforts to increase access to computer-based teaching resources for this group of disciplines. For example, with TLTP funding, a group of UK earth science departments formed the UK Earth Science Courseware Consortium (UKESCC), which developed a set of computer-aided course modules for use by consortium members and for purchase by non-members. Similarly, seventy-two university and college geography departments formed a consortium to produce the GeographyCal software course modules (GeographyCal, see http://www.geog.le.ac.uk/cti/Tltp/intro.htm), which like the UKESCC resources are aimed at first- or second-year undergraduates. GeographyCal contains modules on map design and on introductory Geographical Information Systems (GIS), as well as incorporating digital maps into other modules.
Less ambitiously, the 'Virtual Fieldtrips' section of the 'Virtual Geography Department' at the University of Wisconsin encourages geographers to follow a simple template to produce a field trip primer for a given area used in their teaching, and to include location maps (Ritter, 1997). The site then acts as a gateway to all such 'virtual field trips', giving access to basic details of, and exercises about, areas all over the world.
Another recent development is the growth of the use of GIS. In the strictest sense, a GIS is a computer system capable of assembling, storing, manipulating and displaying geographically referenced information, that is, data identified according to their locations. More simply, we can view a GIS as the digital equivalent of a map. In the same way that individual maps contain a wealth of information and are used in diverse ways by different individuals and organizations, GIS are also used in diverse applications, ranging from databases of electricity networks to aid maintenance and supply, to displaying the extent of deforestation in the Brazilian Amazon (see, for example, http://www.geo.ed.ac.uk/home/research/whatisgis.html). Given such diverse applications, GIS are now used by many institutions including local and national governments, research institutions, businesses and industry. For example, planning offices might use GIS to keep records of property boundaries, and they could be used in market analysis where it is necessary to know the location of customers, the distance they have to travel, the best places to advertise and the location of competitors. The wide application area of GIS requires many subject areas to incorporate the teaching of spatial skills and data manipulation into their programmes to assist graduates with career options. These users may not need a deep understanding of all of the elements of spatial data, such as its data structures, analysis and visualization, but they need to have sufficient knowledge to make appropriate and valid use of these data (Purves, Medycki-Scott, Blake, Fairbain and Mackaness, forthcoming).
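The defining characteristic of GIS data described above, namely attribute records tied to locations, can be sketched in a few lines. This is a purely illustrative toy (all names and values are invented, and no real GIS library is involved), showing the kind of spatial query a planning office or market analyst might run: which records fall within a given distance of a point?

```python
import math
from dataclasses import dataclass

# Hypothetical illustration: each record pairs attribute data with a location,
# which is what makes it 'geographically referenced information'.
@dataclass
class Feature:
    name: str
    kind: str   # e.g. 'customer', 'competitor', 'substation'
    x: float    # easting, in map units (e.g. metres)
    y: float    # northing

def within_radius(features, cx, cy, radius):
    """Return the features whose location lies within `radius` of (cx, cy),
    a minimal example of a spatial query on geographically referenced data."""
    return [f for f in features
            if math.hypot(f.x - cx, f.y - cy) <= radius]

features = [
    Feature("Shop A", "customer", 100.0, 200.0),
    Feature("Shop B", "competitor", 500.0, 900.0),
]
# Customers within 50 map units of the point (120, 210): Shop A only.
nearby = within_radius(features, 120.0, 210.0, 50.0)
```

A real GIS adds spatial indexing, coordinate systems and rich geometry types, but the coupling of attributes to locations shown here is the core idea.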
The Open University has begun in recent years to use digital maps not only within geography and related disciplines but also within other less predictable contexts. An early example of this is the use of maps for historical research by students in a fourth-level course entitled 'Charles Booth and Social Investigation in Britain 1850-1914'. The CD-ROM developed for this course includes a section labelled the 'map room', which includes monochrome Ordnance Survey maps as well as the social maps drawn up by Booth, and the opportunity for students to plot data for themselves. The aim is as much to make students question the decisions and value systems reflected in the maps as to use them for study of the actual phenomena: Booth's work, like that of every other cartographer, reflected his own agenda and social context. Thus maps are being studied as visual artefacts in their own right, with students encouraged to consider critically their context and interpretation, in a similar vein to OU developments such as Art Explorer and its successors (Durbridge and Stratford, 1996). A Research Libraries Support Programme-funded project based at the London School of Economics has created free online digital versions of these maps (http://booth.lse.ac.uk/).
The project whose evaluation we consider in this paper, 'e-MapScholar' (http://edina.ac.uk/projects/mapscholar/), also arose out of the need to enable students to evaluate and apply map resources appropriately even in non-geographic disciplines. In this case, however, the project arose out of an online data provision service funded by the JISC and managed at the University of Edinburgh: EDINA Digimap. This service provides current Ordnance Survey digital map data to subscribing UK higher education institutions so that academics and students can use the data in their projects. Evaluations of the service showed that this was encouraging the use of such data beyond the geographic disciplines for which most of the above learning resources were developed. It was felt that delivering learning resources to help non-geographers learn about map use alongside the data would be the best way to help this audience to appreciate the digital mapping techniques and skills required both within and beyond the academic environment. The next section will consider the concepts and difficulties faced by such learners.

Digital maps and learning
Digital maps can provide the learner with a number of advantages over using conventional paper-based maps (Davies, 1998): the learner can edit and change the appearance of the information they contain, has more control over the information and can choose what to display. Equally importantly from an educational viewpoint, far more information can be made available to the user than could be fitted onto a paper map.
• Learners can hide or display different combinations of 'layers', showing different feature types or variables, and can thus observe various different views or relationships. 'Layers' can include a reference grid, text labels and other explanatory features, as well as actual geographical entities.
• Learners may be given the choice over some or all aspects of the map's appearance: symbolization, categorization, colour, texture, scale, projection, label placement, generalization and description.
• Spatial correlations and other statistical relationships between features or variables can be calculated and displayed, to test whether apparent effects are really significant.
• Particular phenomena (such as floods, emigration or erosion) can be modelled and animated to show changes of extent or distribution over time.
• A digital map can be continuous and can be much larger than the screen at a given scale: the user can 'zoom out', 'zoom in' and 'pan' across the map to change the area displayed at any given moment.
• A database can be linked to the map so that displayed objects (such as a building) can be selected with a mouse click, and further information displayed (for example, about the building's history or owners) in a pop-up window. The data linked to the map may include more than simple text records: aerial or other photographs, numeric tables or spreadsheets, and hypermedia entities such as video clips or hypertext could also be included.
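The layer mechanism in the first point above can be made concrete with a small sketch. The class and layer names here are invented for illustration and do not correspond to any particular digital map product: the point is simply that a digital map is an ordered collection of layers, each of which the learner can show or hide independently, so that different combinations reveal different views.

```python
# Illustrative sketch only: a digital map as a set of toggleable layers.
class Layer:
    def __init__(self, name, features):
        self.name = name
        self.features = features   # geographical entities, grid lines, labels, ...
        self.visible = True

class DigitalMap:
    def __init__(self, layers):
        # Insertion order is preserved, mirroring the drawing order of layers.
        self.layers = {layer.name: layer for layer in layers}

    def toggle(self, name):
        self.layers[name].visible = not self.layers[name].visible

    def displayed_features(self):
        # Only visible layers contribute to the rendered view, so toggling
        # layers lets the learner observe different combinations of features.
        return [f for layer in self.layers.values() if layer.visible
                for f in layer.features]

m = DigitalMap([Layer("grid", ["1km grid"]),
                Layer("roads", ["trunk road", "street"]),
                Layer("labels", ["place names"])])
m.toggle("grid")                 # hide the reference grid
view = m.displayed_features()    # roads and labels remain visible
```

Real digital maps attach symbolization, scale rules and database links to each layer, but the show/hide structure is essentially this.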
In other words, besides the visible design of the map, digital map-based multimedia has a complex information structure. The structure also differs between different digital maps, even from the same supplier. For example, the users of EDINA Digimap have to learn that different Ordnance Survey datasets are designed for use at different scales of accuracy and detail and cannot be effectively overlaid upon each other (for example, the extra-thick green line drawn to depict a trunk road at one scale looks nonsensically massive when overlaid on a street-scale image; individual buildings appear to be 'swallowed' by it). This focus on the information structure necessitates specific tutoring.
The obvious flexibility and depth of information provide potential benefits for learners, but also risk misleading or confusing them. The sections that follow describe the CALRG's approach to evaluation in order to investigate the extent to which such potential benefits can become reality.

An educational approach to evaluation: the CIAO! framework
At the Open University (OU) we have developed the CIAO! model to evaluate learning technology in context. This framework has been applied internally to evaluate learning technology applications developed at the university and has also been the basis for evaluating external projects (for example, Scanlon, Jones, Calder, Barnard and Thompson, 2000).
The framework outlines three dimensions to evaluation: (i) context, (ii) interactions and (iii) attitudes and outcomes. The framework is described elsewhere (Jones, Scanlon, Tosunoglu, Ross, Butcher, Murphy and Greenberg, 1996), as are the issues in evaluating learning technologies that led to the emphasis on these three areas (Jones, Scanlon and Blake, 1998). Here we will briefly discuss the reasons for the emphasis on two of the dimensions, those of context and interactions, and the implications of this emphasis for evaluating digital resources. We will also discuss how such a model relates to other approaches to evaluating digital resources, and to educational computing and usability issues more generally.

Context
In all our studies we have found it particularly important to pay attention to context. In the CIAO! framework, context refers to a number of things, ranging from the wider context, such as the 'framing' of the teaching (for example, its location and who is involved, and whether the use is individual or collaborative), to a finer-grained level (such as the context of the digital resource within the course and the components of the resource itself). This allows us to focus at a detailed level on how learners use information technology resources without losing the broader frame in which the learning is situated. It also includes the rationale for using or developing the particular application or resource; this is one of the most important aspects as it emphasizes the need to understand the intention of the designer or educator. Whether such intentions are realized is a matter for the evaluation to establish.
Context was a major concern in early evaluation literature (for example, Kemmis, Atkins and Wright, 1977) and has been emphasized more recently in the evaluation of TLTP projects and the many technology initiatives in the 1990s (for example, Draper, Brown, Henderson and McAteer, 1996). Oliver and Conole (1998) also emphasize context, although here it is part of what they refer to as authenticity, which 'describes the notion of how closely an evaluation captures the context of an existing course' (Oliver and Conole, 1998: 4). However, with online learning resources such as those developed in 'e-MapScholar', the context of learning will vary unpredictably, from classroom-based group teaching with formal progress assessments to lone self-motivated browsing for the sake of understanding the maps used in a practical project.
A crucial aspect of context is the designer's rationale in introducing the technology.
Analysis and understanding of this pedagogical rationale is essential in determining the evaluation questions to be asked. For example, specific educational software may be designed to help learners understand concepts that are known to be difficult, perhaps by offering a different representation or, in subjects such as biology, by demonstrating a dynamic process (such as relationships between parts of the circulatory system). One way to reflect this is to ask the designers and teachers what benefits they are expecting from using learning technology. In cases such as the examples above, this might be that it will help learners in their understanding.
We are not suggesting that the evaluation questions are driven only by the teacher's or designer's view, but that understanding both is a crucial element. It is also true that learners may benefit in ways that were not anticipated at all by the designers and may have expectations that have not been considered. For example, Chan, Jones, Joiner and Scanlon (forthcoming) have been studying learners using Teach Me Piano (TMP), a program designed to support the learning of generic performance skills. However, learners' specific motivation for using such a program cannot be assumed. It may be that some of the young students using it have aspirations towards playing their favourite band's current 'hit'; aspirations that may not be realistic. Such expectations can be revealed through interviewing and observing the learners. Paying attention to context can also help to bridge the gap between the different approaches in HCI and education evaluation. It provides one way of taking different perspectives into account; in this case the perspective of the software developer, whose evaluation concerns may be around HCI issues, and the teacher, who is concerned with pedagogical issues.
Squires (1999) elaborated on the different traditions of research and evaluation in educational computing and HCI, and discussed ways that each community could benefit from the work of the other. As part of this, Squires and Preece (1999) developed a predictive model of evaluation which recasts an HCI evaluation paradigm in terms of a socio-constructivist view of learning to produce a set of 'learning with software' heuristics. These attend to the integration of usability and learning issues. It is this integration that has been a particular concern for Squires. However, this is a predictive evaluation model; that is, it allows teachers to decide which software to use with their students. The CIAO! framework complements such an approach by concentrating on the software in use by learners, which Squires and Preece have referred to elsewhere (1996) as an interpretive model. The approach overall is educational. However, we would agree with Squires that it is necessary to attend to both usability and educational issues. The framework allows us to do that because it offers a range of approaches rather than a prescribed method, and so the evaluator has the flexibility to choose the most appropriate methods.
It is also worth noting that the two concerns should dovetail together, since digital resources increasingly integrate the interface with the educational content. In the online mapping tutorial in 'e-MapScholar', to give two examples, the user can see separated types (layers) of geographical data and examine the meanings of and relationships between map symbols. This prepares students conceptually, as well as procedurally, for handling the same issues of geographical structure and semiotics when using a real GIS to examine their own project data.

Collecting information about the process: tracking interactions
As well as valuing context, the CIAO! framework emphasizes studying the learning process in addition to learning outcomes. The problems of evaluating innovative technologies in learning through assessing outcomes are well debated in the general literature (for example, McFarlane, Harrison, Somekh, Scrimshaw, Harrison and Lewin, 2000; Jones, Scanlon and Blake, 1998). Not surprisingly, the same finding occurs in the literature on using digital maps in learning (Proctor and Richardson, 1997). In any case, evaluation is usually intended not just to provide information about success but also to suggest improvements (see, for example, Calder, 1994). Hence, in addition to any outcome data, it is at least as important to try to understand the learning process.
The OU's approach to this has been to look in detail at students' use of the software by analysing their interactions. This can provide information on why and how particular elements work (or not), rather than just finding out whether something works. Where possible, students are observed working with the software: this has included inviting students to come to the campus where members of the evaluation team can observe them. When these sessions have involved software developers, they have found the process of watching learners and their problems to be particularly insightful. In one such formative evaluation the software developer was part of a team observing the use of computer-assisted learning materials in algebra for undergraduates (Jones, Scanlon and Blake, 1998); in this case, she found the process so helpful that she initiated and incorporated such evaluation trials in developing other materials and also developed some innovative collaborative development processes (Shipp, 2002). Such interactions can be recorded by audio or video, in order to provide protocol data for later analysis. Computer logs can also be collected of all key presses and the routes that students take through the materials.
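The kind of computer log mentioned above can be sketched very simply. This is an invented illustration, not the logging used in the studies cited: it records timestamped events so that an evaluator can later reconstruct the route a student took through the materials, alongside the individual interactions.

```python
import time

# Hypothetical sketch of an interaction log for evaluation purposes:
# every key press, click or page change is recorded with a timestamp.
class InteractionLog:
    def __init__(self):
        self.events = []   # list of (timestamp, event_type, detail)

    def record(self, event_type, detail):
        self.events.append((time.time(), event_type, detail))

    def route(self):
        """The sequence of pages the student visited, in order --
        the 'route through the materials' an evaluator would examine."""
        return [detail for _, kind, detail in self.events
                if kind == "navigate"]

log = InteractionLog()
log.record("navigate", "intro")
log.record("click", "zoom-in")        # an interaction within a page
log.record("navigate", "layers-exercise")
```

In practice such logs are triangulated with audio or video protocol data, since the log alone shows what a learner did but not why.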
More recently, we have been making increasing use of the Data Capture Suite, a facility which can combine video records of each user with a synchronous record of their computer screen (Blake and Scanlon, 2002). This allows four different video screens to be displayed at the same time: it might, for example, include two screens showing video of two participants and a further two screens showing their computer screens. This allows observers to track facial gestures and movements as well as participants' utterances and interactions with the computer. A more detailed account of this facility is given in Blake and Scanlon (2002).
A different evaluation model was proposed by Dorward et al. (2002). This was developed for the evaluation of a digital library services tool, and addresses both process and outcome. It views evaluation designs as combinations of different approaches that are the most appropriate for addressing the information needs of developers and other stakeholders. In this, it is similar to the CIAO! framework, which is not prescriptive but suggests a range of approaches and methods so that those that are adopted are appropriate to the study in hand. In CIAO! the focus on context requires the evaluator to assess what is needed in each particular case. Dorward et al.'s model was used to evaluate the Instructional Architect (IA), which enables users to discover, select, reuse, sequence and annotate digital library learning objects. The evaluation process here consisted of a number of phases. The first was a needs assessment, surveying teachers' use of online resources, their perceptions of IA and their needs. This was followed by an expert review of the interface design, by both an internal and an external panel. The next phase was an evaluation of the prototype design by the target audience, involving observation and a post-evaluation focus group interview with the evaluation team.
One of the findings from this evaluation phase was that, as the development team were involved in the evaluation, the problems users faced were very apparent to them and they could work on fixing these immediately. As discussed above, similar benefits arose from involving software developers in evaluations at the Open University (Jones, Scanlon and Blake, 1998). A major part of the evaluation design was a case analysis examining how teachers assess, combine and use teaching objects in the classroom. This had a particular focus on understanding the factors that enable or prevent the reuse of digital resources in education. Again, this process incorporated observations, and in the next section we consider the role of observation in tracking interactions in the CIAO! framework.

Applying the evaluation approach to 'e-MapScholar'
The 'e-MapScholar' project aims to develop customizable resources appropriate to the uses made of spatial data by a wide community. These are developed around three themes:
• working with digital data;
• integrating spatial data; and
• visualization.
All the resources produced allow interactive, customizable learning experiences, enabling effective constructivist learning that is related to prior knowledge. Both the tools and the learning materials access the Digimap map and data servers in real time.
Learning materials developed under the 'Working with Digital Map Data' strand include concepts of geographic data (extent, scale and generalization); how objects in the real world are portrayed within Ordnance Survey data; and how the student should select data based on fitness for purpose. The tools developed include simple map querying, reporting and measurement functions. Digimap supports the production of high-quality maps based on Ordnance Survey digital map data. These can be used or customized to illustrate concepts relating to the use of digital map data. However, a major aspect of this work is the deconstruction of the Digimap user interface into components (atoms) which can be recombined into simple interactive client-based tools and embedded within learning materials to provide an interactive illustration of a concept.
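A measurement function of the simple kind mentioned above might, purely illustratively, look like the following. The function name, parameters and scale value are all invented (the actual e-MapScholar tools are not described at this level of detail): the learner clicks a series of points along a feature on screen, and the tool converts the on-screen path length to a ground distance using the map's scale.

```python
import math

# Hypothetical sketch of a simple map measurement tool: convert a clicked
# path on screen into a ground distance via the map scale.
def ground_distance(points, metres_per_pixel):
    """Total along-path distance for clicked screen points [(x, y), ...],
    converted to metres on the ground using the map's current scale."""
    pixel_length = sum(math.hypot(x2 - x1, y2 - y1)
                       for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return pixel_length * metres_per_pixel

# Three clicks along a road, on a display where 1 pixel represents 0.5 m:
# the two segments are 50 px and 60 px long, so the ground distance is 55.0 m.
d = ground_distance([(0, 0), (30, 40), (30, 100)], 0.5)
```

Note that such a tool only makes sense at a fixed scale; zooming changes `metres_per_pixel`, which is exactly the kind of scale-awareness these materials aim to teach.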
Learning materials developed under the Data Integration strand focus on developing skills in, and understanding of, integrating a variety of external data (census, remote sensing, environmental) as well as user-generated data (Global Positioning System positions, other measured datasets) with the Ordnance Survey data available through Digimap. A number of client-based tools have been developed to interact with the servers, and learners and teachers will be able to upload their own data for use against the Ordnance Survey backdrop.
Learning materials developed in the Data Visualization strand focus on developing skills in, and understanding of, 2D and 3D visualization and visual problem-solving techniques. Areas addressed include fitness for purpose, collecting data for visual problem-solving, and working through the decision-making process. The tools and materials developed can be adapted to the learner's own prior knowledge and subject area, and enable the development of appropriate skills by participation. The evaluation of these resources includes both a formative and a summative component. A summative evaluation is planned for the end of 2002 and the beginning of 2003, when the resources are fully developed.
The formative evaluation is integrated into the development phase of the learning resources and, given that the materials are under development, focuses on usability issues by adopting techniques such as walk-through and expert and peer review. These lead to the feeding back of comments and recommendations to the development team. The production of learning resources was planned in such a way that, during the development phase, selected resources were made available to the evaluators. This facilitated formative evaluation and meant that the findings could be incorporated into the design of newly developed resources. This allowed modifications to be tested, in an iterative way, by the evaluators. Such iterative models of formative evaluation are also used in the Open University in the development of teaching materials. These share a number of features with Dorward et al.'s model described earlier. We would argue that formative evaluations of resources under development will necessarily place more emphasis on basic usability aspects such as consistency and clarity, as it is difficult to evaluate their educational impact, for two reasons. Firstly, unfixed usability issues 'get in the way', so learners find their attention drawn to difficulties in navigating and using the resources rather than to learning from them; secondly, it is difficult to simulate using the resource in its intended context until it is close to completion.
We have recently carried out a formative evaluation of a module in the 'Working with Digital Map Data' workpackage. The formative evaluation took place after a peer review by the members of the evaluation team. Following this we tested the basic interface to the resources, the usability of navigational aids and the help facility, the presentation of maps on screen, the usability of tools (for example, a tool to present different layers of a map to the user via automatic and user-controlled buttons, and a magnifying glass tool to examine features of a map more closely), the time taken to complete the module and the general academic content. The findings from this formative evaluation were communicated to the software developers, who incorporated the necessary modifications into the newly developed resources.

Discussion
In considering what can be evaluated and when, it is important to bear in mind the factors that affect usability. In our very early ICT evaluation work we proposed a 'Chinese box' model of ICT adoption and use (Jones and O'Shea, 1982), which views the process of adoption as a number of barriers to be surmounted. Interestingly, this model applies as much now as it did then, and very similar models have been proposed by evaluation teams investigating the take-up of ICT resources on a large scale. For example, Haywood, Anderson, Day and MacLeod (1999) list a number of contextual features that influence ICT use, including institutional and departmental characteristics. Being mindful of such factors should enable evaluators to select the most appropriate form of evaluation for their particular purpose.
Earlier we discussed the benefits of involving the development team in observing and tracking how users interact with the software, both in making any problems that the users faced apparent and because the developers can then work on fixing any problems immediately. Patton (1997) explores this issue in some detail and discusses, for example, whether particular programs or organizations are ready for evaluation, given that the 'reality testing' this involves can be both uncomfortable and threatening. In the 'e-MapScholar' evaluation, the evaluation and development teams were working at different sites. This was a challenge for the evaluation team, who needed to persuade the development team of the learners' perspective, as the developers did not have the advantage of the immediacy of observing users' reactions.
This evaluation, as we discussed earlier, studies a resource under development and thus focuses on usability aspects. In this case, as in many development projects, the target audience could not be involved in the evaluation until the final stage, and then practical constraints and timing only allowed the evaluators to run one trial with end-users near the end of the project. The walk-through and expert evaluations used by the team did identify usability issues, but these could not always be fixed quickly by the development team. We believe that this is partly an issue of timeliness: resources needed to be developed to a tight timescale. By the time the development team had the evaluation findings, they were faced with the tension between 'fixing' the modules that had been evaluated and applying the lessons to the resources they were currently working on. Thus they were engaged in a learning curve, applying lessons learnt from the earlier modules to later ones.
In evaluating 'e-MapScholar', the emphasis on context alerted the team to the importance of the pedagogical context. It could not be assumed that technical terms and specialized language introduced in one unit would be remembered and understood in later units, either because students had skipped the relevant unit or because they had forgotten the terms. So it was necessary to remind students about such terms. This issue is particularly important where stand-alone learning materials are being developed, for example given the current emphasis and interest in re-usable 'learning objects' (see, for example, Littlejohn and Campbell, 2002; Harvey, 2002). It also points to the need for learners to be engaged in applying the terms in order to help them remember them.
Another contextual issue that emerged was the students' need to appreciate why and when they might want to use a particular feature before being shown how to do it. This motivation could sometimes be provided by the learning context (such as a project they were trying to carry out with a real map, or through discussion with their tutors about how maps were used in their discipline). Arguably, given that particular users' goals for this resource are not known and cannot be anticipated, this kind of context needs to be provided exactly in this way, in situ.
Finally, early feedback from student evaluators suggested that they sometimes found it difficult to make the connection between the processes and features described in the resource and their use in real contexts. For example, although the resources showed how different features could be represented on the maps, they did not include photographs of such features, nor did they include images of people using such resources (for example, digitizing). Again, the students emphasized the need to relate what they were learning back to the context in which such resources were used. At a more general level this is a reminder of the importance of making full and appropriate use of any particular medium. However, with customizable resources like 'e-MapScholar', teachers would be able to add such context-relevant information and images, as suited their intentions for the unit's use, to overcome the necessarily context-independent nature of the core resource itself.

Conclusions
Both the CIAO! framework and the Digital Library Services Tool evaluation model are non-prescriptive frameworks that allow different approaches to be combined according to the needs of the particular evaluation. In applying these approaches to IA and 'e-MapScholar', there are interesting similarities: both involved iterative testing by different groups of participants, ranging from experts in the field to, eventually, the end users. Both include formative evaluations, which we have argued focus on usability issues and are needed for development work. Both frameworks also highlight the value of the developers being part of the evaluation team, in order to appreciate the kinds of problems that end users have and how they use the resources.
We would argue that it is important to have a range of instruments/approaches in tightly timetabled development projects, and the flexibility to replace one with another if the planned evaluation is not possible (for example, if the development is not finished on time). Our experience has been that this is not an uncommon situation in evaluating externally funded projects that involve working to very tight timescales.
We have also argued that different forms of evaluation are appropriate in different circumstances. For example, when focusing on digital resources that are being developed, usability/HCI issues are more important, not least because evaluators will not find out about educational value if the resource is not usable. It is more appropriate to focus on broader learning and educational advantages once the usability issues are resolved, and once the resource is embedded in its intended educational context.