Assessment choices to target higher order learning outcomes: the power of academic empowerment

Margot McNeill^a*, Maree Gosper^a and Jing Xu^b

^a Learning and Teaching Centre, Macquarie University, Australia; ^b Department of Statistics, Macquarie University, Australia

(Received 23 February 2012; final version received 22 August 2012; published 24 September 2012)

Abstract

Assessment of higher order learning outcomes such as critical thinking, problem solving and creativity has remained a challenge for universities. While newer technologies such as social networking tools have the potential to support these intended outcomes, academics’ assessment practice is slow to change. University mission statements and unit outlines may espouse the value of higher order skills; however, questions remain about how well academics are equipped to design their curriculum, and particularly their assessment strategies, accordingly. This paper reports on an investigation of academics’ practice in assessing higher order learning in their units. Despite their intentions towards higher order learning outcomes for their students, the results suggest academics may make decisions when planning their assessment tasks that inadvertently lead students down the path towards lower order outcomes. Among the themes to emerge from the study are the importance of academics’ confidence and their attitudes towards the role of assessment in learning, and how these perspectives, along with the types of learning outcomes they intend for their students, can influence their task design.

Keywords: assessment; higher order learning; educational technologies; curriculum alignment; confidence

*Corresponding author. Email: margot.mcneill@mq.edu.au

RLT 2012. © 2012 Margot McNeill. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons “Attribution 3.0 Unported (CC BY 3.0)” license (http://creativecommons.org/licenses/by/3.0/) permitting use, reuse, distribution and transmission, and reproduction in any medium, provided the original work is properly cited.

Citation: Research in Learning Technology 2012, 20: 17595 - http://dx.doi.org/10.3402/rlt.v20i0.17595

Introduction

Universities increasingly acknowledge the value of skills such as problem solving, critical thinking and creativity (Bath et al. 2004; Winchester-Seeto et al. 2011), yet the curriculum needs to be designed to support and scaffold the development of these skills, and integrating them into assessment strategies has proven a challenge (Astleitner 2002; Burns 2006; Clarkson and Brook 2007; Race 2003). While new technologies have sometimes been heralded as having the potential to address an apparent gap between the rhetoric of curriculum alignment and assessment practice in universities, academic practice is slow to change, and the uptake of new tools to support the development and demonstration of higher order skills remains relatively low. In a study undertaken at an Australian university, academics’ confidence in their curriculum design capabilities emerged as an important link with the types of learning they intend for their students, their assessment strategies and the technologies they choose to support assessment. Each of these themes is explored in the next section.

Assessment technology literature

Assessment is at the heart of students’ learning experiences (Brown and Knight 1994; Rust 2002), and Ramsden (1992) suggested it defines the curriculum from the students’ point of view. It serves to highlight for students what is important, how they spend their time and ultimately how they view themselves as students and graduates (Brown 1997). Among those asserting the importance of assessment in learning, Boud and Falchikov (2005) advocated development of skills for lifelong learning, encompassing the capabilities expected of graduates such as problem solving, critical thinking and metacognition (Falchikov and Thompson 2008).

Despite the increased recognition of the importance of assessment as part of an aligned curriculum to support student learning (Biggs and Tang 2007; Boud and Falchikov 2006), Bryan and Clegg (2006) lamented that the focus of much of our assessment is on “testing knowledge and comprehension and ignores the challenge of developing and assessing judgments” (p. 3). Falchikov and Thompson (2008) provided a stark reminder that traditional methods relied on a limited number of techniques, such as closed book examinations and essay-type assessments, which focused primarily on summative assessment and have been found to be unsuitable for developing these desired graduate skills. This gap between the intentions of teaching academics and their assessment strategies reinforces questions raised by Arum and Roksa (2010) about whether higher order learning is taking place at all, or whether it is simply not assessed.

A study by Samuelowicz and Bain (2002) suggested that academics’ perspectives about the role of assessment might influence whether units are designed to address higher order learning outcomes. Their work traced the effect of disciplinary traditions, pedagogical beliefs and epistemological frameworks on the types of assessment used. They analysed academics’ orientations towards assessment using a framework developed to describe their beliefs about the role of assessment and feedback, about what should be learned and assessed, and finally about the differences between good and poor answers. Academics with an orientation towards “reproduction of important bits of knowledge, procedure and skill” were likely to require such reproduction in their assessments, with tasks such as multiple choice questions testing understanding of facts or open-ended questions testing the ability to apply principles to a given, familiar situation. Conversely, if academics perceived assessment as important in “transforming conceptions of the discipline and/or world”, then they were more likely to design assessment requiring higher order tasks such as evaluation and the creation of new solutions (Samuelowicz and Bain 2002). Building on Samuelowicz and Bain's work, Northcote (2003) proposed that the beliefs held by academics about the role of assessment in learning and teaching also influenced their choices about assessment in the online learning context, lamenting that “despite all the evidence supporting the value of integrated qualitative assessment and the new affordances of the new technologies, online assessment has remained predominantly summative” (p. 68).

A study by McGoldrick (2002) of academics who were likely to introduce the development of student creativity into their curriculum found that confidence emerged as a key characteristic. Along with a sound understanding of their discipline area, the academics studied demonstrated enough self-efficacy to explore different ways of delivering their curriculum rather than limiting themselves to previously tried models. This suggests that academics’ willingness to innovate may be a factor in designing assessment tasks to target higher order outcomes and in selecting appropriate technologies to support these aims. In a recent article, Gray et al. (2012) suggest that “an academic without a sound rationale for assessing students’ Web 2.0 activities will struggle to justify the added effort flowing from the assessment (re)design”.

Jonassen and Reeves (1996) were among those who saw computers as having the potential to shift the focus of learning and assessment from lower order to higher order learning outcomes. Since then, the opportunities offered by technologies to support the design, delivery and administration of diagnostic, formative and summative assessment have been well documented in the literature (Crisp 2007; Philips and Lowe 2003). As social networking tools such as blogs and wikis emerged, their potential to capture both the processes of student learning and the final artefacts to be submitted, in either collaborative or individual contexts, was also raised (Boulos et al. 2006; Bower et al. 2009; Churchill 2007; Hewitt and Peters 2006). Bower et al. (2009) built on Anderson et al.'s A Taxonomy for Learning, Teaching and Assessing (2001) to propose a framework for conceptualising learning designs with Web 2.0 technologies, raising numerous possibilities for utilising their affordances and encouraging academics to put the whole curriculum at the core of their decisions. Shephard suggested that the use of these technologies could enable higher education to “better assess aspects of learning that have proved difficult to assess using more conventional means” (2009, p. 386).

What of academic practice in using technologies to support assessment of higher order outcomes? This article explores the extent to which academics’ choices about assessment technologies are influenced by their attitudes towards the role of assessment in learning and their confidence in their curriculum design capabilities.

The study

The study was undertaken in an Australian university to explore academic practice in using technologies to support the assessment of higher order learning. An exploratory mixed methods approach (Creswell and Clark 2007) was used, comprising a three-phase study conducted over four years.

The first two phases illuminated academic practices as largely employing technologies to support the assessment of lower order learning outcomes (McNeill 2011; McNeill et al. 2011). Among the challenges identified by participants in assessing student learning was designing assessment to target higher order learning outcomes such as problem solving, creativity and metacognition. The final survey, conducted as Phase 3 of the study, explored whether these themes were representative of academic practice in a larger sample from across the university. The survey was designed to explore academics’ use of technologies to support assessment.

This article reports aspects of the final survey results in relation to possible links between confidence, attitudes, assessment and intended learning outcomes. Results from other parts of the survey have been explored in previous publications (McNeill et al. 2010a, 2010b).

Survey

The convenors of online units using the university's learning management system (LMS) during Semester 1, 2010, were invited to participate in the survey. Since the study intended to explore possible uses of technology to address assessment challenges, only those academics using technology in their units were included. An adaptation of Anderson et al.'s (2001) taxonomy was used as a theoretical framework to explore the categories of recognise, understand, apply, analyse, evaluate, create and critique.

This framework was used as the basis for questions around curriculum design: the types of outcomes intended for students, the teaching and learning activities and the assessment tasks. In order to explore possible links between the respondents’ attitudes towards the role of assessment in learning and their choices of technology, questions were included based on Samuelowicz and Bain's (2002) orientations to assessment framework. Respondents were also asked about which technologies they used and how they used them for assessment. Demographic information about discipline, unit level and enrolment mode was also collected.

Findings

Of the 734 academics invited to participate, 180 responded to the survey (24.5%). There were respondents from a wide range of discipline areas, and all faculties were represented. Postgraduate units were most commonly represented (31.8% of respondents), followed by the middle years of undergraduate programs (29.1%), then first year (21.2%) and final year (17.9%). Regarding student enrolment modes, the highest representation was from units with a mixture of internal and external students (51.1%), followed by internal only (41.6%) and external only (7.3%) modes.

Intended learning outcomes

Respondents were asked about the types of learning outcomes they intended for their students. They were asked about their levels of agreement with a list of statements, using a five-point scale from To a large extent down to Not applicable. The results are presented in Table 1.


Table 1.  Intended learning outcomes.
The learning outcomes target the students’ ability to: n Mean Standard deviation
Recognise 131 3.702 0.848
Understand 131 4.145 0.703
Apply 131 4.573 0.621
Analyse 131 4.351 0.876
Evaluate 131 4.420 0.859
Create 131 3.832 1.046
Critique 131 4.015 1.023
Average for intended learning outcomes 131 4.148 0.492

The type of learning outcome rated most highly was apply, with more than 90% of respondents agreeing that this was targeted to a large or moderate extent. Evaluate and analyse were also highly rated. Outcomes associated with recalling information and creativity were more likely to be targeted to a small extent, not at all or viewed as not applicable.
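To make the summary statistics in Table 1 concrete, the following minimal Python sketch shows how such item means and standard deviations could be computed. The numeric coding of the five response options (5 for To a large extent down to 1 for Not applicable) and the sample responses are assumptions for illustration only, not the survey's actual coding or data.

```python
import numpy as np

# Assumed numeric coding of the five-point scale (illustrative only;
# the survey's actual coding is not stated in the paper).
SCALE = {
    "To a large extent": 5,
    "To a moderate extent": 4,
    "To a small extent": 3,
    "Not at all": 2,
    "Not applicable": 1,
}

def summarise(responses):
    """Return n, mean and sample standard deviation for one outcome item."""
    scores = np.array([SCALE[r] for r in responses], dtype=float)
    return len(scores), scores.mean(), scores.std(ddof=1)

# Invented example responses for the "Apply" item
apply_responses = ["To a large extent"] * 4 + ["To a moderate extent"] * 2
n, mean, sd = summarise(apply_responses)
print(f"Apply: n={n}, mean={mean:.3f}, SD={sd:.3f}")
```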

Technology uses

Respondents were asked about the types of technologies they used for assessment. Table 2 summarises the responses for all technologies used to a large or moderate extent to assess the various categories of learning outcomes from Anderson et al.'s A Taxonomy for Learning, Teaching and Assessing (2001). The total number of respondents who indicated that they used each technology, whether for summative or formative assessment, is tallied for each column.


Table 2.  Technologies used to target specific learning outcomes for summative or formative assessment.
Answer options Quiz Discussion forum Wikis Blogs Online portfolios Virtual worlds
Recognise 80.7% (46) 28.8% (33) 22.2% (2) 37.5% (6) 44.4% (4) 33.3% (2)
Understand 58.2% (32) 45.4% (54) 66.6% (6) 55.6% (10) 40.0% (4) 33.3% (2)
Apply 45.5% (20) 63.5% (52) 66.6% (6) 55.6% (10) 60.0% (6) 33.3% (2)
Analyse 29.7% (16) 45.3% (53) 66.7% (6) 44.5% (8) 50.0% (5) 33.3% (2)
Evaluate 33.9% (18) 52.1% (62) 60.0% (6) 55.6% (10) 60.0% (6) 60.0% (3)
Create 7.6% (4) 28.0% (32) 80.0% (8) 47.0% (8) 60.0% (6) 33.3% (2)
Critique 35.8% (19) 39.1% (45) 44.4% (4) 72.2% (13) 70.0% (7) 33.3% (2)
Responses 57 121 10 18 10 6

The two most commonly used tools, discussion forums and online quizzes, had the highest response rates for the categories of apply and recognise, respectively. Of the 57 respondents using quizzes, 46 indicated that they used them to assess whether students could recognise or recall information, concepts or procedures. Students’ ability to understand or apply information also featured highly. Discussion forums were the most widely used of all options in the survey, with 129 of the total 176 respondents (73.3%) indicating they used them.

There were examples in the sample of respondents using wikis, blogs, online portfolios and virtual worlds for higher order outcomes. Of the 10 respondents using wikis, eight indicated that they targeted creativity as a higher order learning outcome. Metacognitive knowledge, where students were assessed on their ability to critique or evaluate their own performance, featured most highly in the use of blogs and online portfolios, followed by understanding and evaluation. Evaluation was the target with the highest rating for virtual worlds, although it is difficult to draw conclusions from such small numbers of respondents. Creation tasks featured strongly for wikis and online portfolios.

While a wider range of technologies was explored in the survey, only responses for quizzes and forums were used in this analysis because of their use across all levels of learning outcomes. In addition, usage rates for the other technologies were too low for effective statistical analysis. Options for uses of these technologies included assessing content, assessing participation, providing feedback and encouraging discussion.

Since some respondents indicated more than one purpose for using a technology, Market Basket Analysis (Kachigan 1991) was used to derive decision rules describing the combinations of purposes reported. Based on these rules, most respondents used quizzes to focus on content. By rule 1, 85.1% of the respondents who used quizzes did so to assess content. By rule 2, 63.8% used quizzes for the combination of content and feedback. By rule 3, 53% used quizzes to assess both participation and content. By rule 4, only 42.6% of the respondents who used quizzes focused on all three options of content, participation and feedback.
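The proportions behind such decision rules are itemset supports in market-basket terms. The sketch below shows how they could be computed; the 0/1 indicator data and the column names are invented for illustration and are not the survey's wording or results.

```python
import pandas as pd

# Invented indicator data: one row per quiz user, one column per
# reported purpose (column names are assumptions, not the survey items).
quiz_uses = pd.DataFrame({
    "content":       [1, 1, 1, 1, 1, 0],
    "feedback":      [1, 1, 0, 1, 0, 0],
    "participation": [1, 0, 1, 1, 0, 0],
}).astype(bool)

def support(df, items):
    """Share of respondents reporting every purpose in `items`
    (the itemset's 'support' in market-basket terms)."""
    return df[list(items)].all(axis=1).mean()

# Proportions analogous to the paper's rules 1-4 for quiz users
print(support(quiz_uses, ["content"]))
print(support(quiz_uses, ["content", "feedback"]))
print(support(quiz_uses, ["content", "participation"]))
print(support(quiz_uses, ["content", "feedback", "participation"]))
```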

Of those who used forums, 92.8% of the respondents used these tools to encourage discussion among students, and 82.5% used forums for at least both participation and discussion. Only 54.6% of the respondents used forums for assessing content.

Academic confidence in their curriculum design capabilities

Respondents were asked about their level of confidence in their curriculum design capabilities, specifically in designing teaching and learning activities, assessment tasks and choosing technologies to target and assess their intended learning outcomes. They were asked about their levels of agreement, using a five-point scale from To a large extent down to Not applicable. For analysis, the two options of Not at all and Not applicable were combined. Results are presented in Table 3.


Table 3.  Academic confidence in curriculum design.
Please indicate your level of agreement with the following statements n Mean Standard deviation
I am confident in my ability to design the teaching and learning activities in my units to elicit the outcomes I intend 131 4.382 0.588
I am confident in my ability to design assessment tasks to elicit these outcomes 131 4.336 0.590
I am confident in my ability to choose appropriate technologies for use in my units 131 3.885 0.900
I am confident in my ability to choose technologies to support assessment of my intended learning outcomes 131 3.779 0.897
Average for confidence 131 4.095 0.628

While the majority of respondents agreed or strongly agreed that they were confident in their ability to design teaching and learning activities and assessment to suit their intended learning outcomes, levels of confidence decreased in relation to choosing technologies. In choosing technologies to support assessment of their intended learning outcomes, only two-thirds agreed or strongly agreed that they were confident, with just 20% agreeing strongly.

Academics’ attitudes towards assessment

Respondents’ attitudes towards assessment were explored in relation to Samuelowicz and Bain's (2002) orientations towards assessment as assessing students’ ability to:

  1. reproduce information presented in lectures and textbooks,
  2. reproduce structured knowledge and apply it to modified situations, and
  3. integrate, transform and use knowledge purposefully.

They were asked about their levels of agreement, using a five-point scale from To a large extent to Not applicable. These results are presented in Table 4 below.


Table 4.  Attitude towards assessment.
Assessment is important to: n Mean Standard deviation
Assess reproduction of information 131 2.817 1.142
Assess application of structured knowledge to modified situations 131 3.664 0.917
Assess integration, transformation and use of knowledge 131 4.550 0.635
Average for attitude 131 3.677 0.664

Over 90% of respondents agreed that assessment played an important or very important role in assessing students’ ability to integrate, transform and use knowledge purposefully. While the majority of respondents rated the assessment of students’ ability to reproduce information from lectures or textbooks as of low importance or not important at all, over one-quarter rated this lower order skill as being of very high or high importance.

Links between confidence, curriculum design and assessment

As suggested by Samuelowicz and Bain (2002), academics’ attitudes about the role of assessment can influence the types of tasks they design in their units. One question that arose from analysis of the data was whether there were links between respondents’ attitudes about assessment, the types of learning outcomes they intended for their students, the purposes for which they used assessment technologies and their levels of confidence in their curriculum design capabilities. In order to explore these links, statistical analysis was undertaken to determine whether any patterns emerged from the data.

To investigate the relationships among the continuous variables of confidence, attitude, learning outcome and assessment purpose, scatter plots with simple linear regression models were produced for each pair of these four variables. For example, the analysis based on ‘attitude’ is summarised in Table 5.


Table 5.  Simple linear regression analysis.
Attitude = 1.72 + 0.47* learning outcome
Attitude = 2.01 + 0.405* assessment purpose
Attitude = 3.102 + 0.140* confidence
*denotes p < 0.05.

Based on these estimated simple linear models, there were positive relationships between attitude and each of learning outcome, assessment purpose and confidence. These relationships were statistically significant, with the p values for the coefficients close to zero. Respondents with intentions towards higher order learning outcomes could therefore be expected to also hold attitudes towards assessment that valued integration and transformation over reproduction.
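A model of the first kind in Table 5 could be fitted as in the sketch below. The data here are simulated stand-ins chosen only to echo the reported coefficients (an assumption for illustration), not the study's data.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Simulated stand-in data on the 1-5 scale averages (not the study's data)
learning_outcome = rng.uniform(3.0, 5.0, size=131)
attitude = 1.72 + 0.47 * learning_outcome + rng.normal(0.0, 0.4, size=131)

# Simple linear regression: attitude ~ learning outcome
fit = linregress(learning_outcome, attitude)
print(f"attitude = {fit.intercept:.2f} + {fit.slope:.2f} * learning_outcome")
print(f"p-value for slope: {fit.pvalue:.3g}")  # significant if < 0.05
```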

Cluster analysis

K-means clustering (MacQueen 1967) was then applied to the data set to explore whether any predictive patterns were evident between the four elements of confidence, intended learning outcomes, assessment purposes and attitudes towards assessment. Two clusters emerged, as displayed in Table 6.


Table 6.  Differences between Clusters 1 and 2.
Cluster Average for attitude towards assessment Average for intended learning outcome Average for assessment target Average for confidence
1 3.369 3.923 3.878 3.883
2 4.208 4.539 4.536 4.464

From the analysis, two clusters of roughly equal numbers of observations emerged. With the data classified into two clusters, the mean values of attitude, learning outcome, assessment target and confidence for Cluster 1 were all lower than the corresponding mean values for Cluster 2. Respondents in Cluster 2 reported higher levels of confidence in their curriculum design capabilities and were more likely to target higher order learning in their intended outcomes. Their uses of quizzes and forums were more likely to focus on providing feedback to students on their learning than on checking whether students were keeping up with the content. These respondents were also less likely to hold lower order orientations towards assessment, such as valuing the reproduction of information from lectures or textbooks.
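For readers wanting to replicate this step, the following sketch runs K-means with two clusters on simulated stand-in data for the four scale averages. The values and group centres are invented; only the procedure mirrors the analysis reported here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Simulated stand-ins for the four per-respondent averages (1-5 scales):
# attitude, intended learning outcome, assessment target, confidence
data = np.vstack([
    rng.normal(loc=[3.4, 3.9, 3.9, 3.9], scale=0.3, size=(65, 4)),
    rng.normal(loc=[4.2, 4.5, 4.5, 4.5], scale=0.3, size=(66, 4)),
])

# K-means with two clusters, as in the study
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

# Per-cluster means, analogous to Table 6
for k in range(2):
    print(f"Cluster {k + 1}:", data[km.labels_ == k].mean(axis=0).round(3))
```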

This pattern can also be seen in the scatter plot matrix based on these four elements (Figure 1). The clusters are distinguished by different shapes, with Cluster 1 shown as circles and Cluster 2 as triangles.

Figure 1.  Scatter plot matrix for Clusters 1 and 2, depicting four elements of attitude, intended learning outcome (learning), assessment target (assessment) and confidence.

As depicted in Figure 1, relationships were evident between the four elements. As denoted by the predominance of triangles in the top right-hand quadrants, respondents in Cluster 2 reported higher levels of confidence in their curriculum design capabilities: in developing teaching activities and assessment tasks, and in choosing technologies to support assessment of the types of learning outcomes they intend. Those in Cluster 2 were also more likely to target higher order learning, such as analysis, evaluation, creativity and metacognition, in their intended outcomes. While most respondents reported using quizzes to assess whether students were keeping up with the content, those who reported higher levels of confidence were more likely to report using them to provide feedback to students on their learning. Cluster 2 respondents were likewise more likely to use forums for providing feedback to students than for checking that they were keeping up with the content. In contrast, respondents in Cluster 1 reported lower levels of confidence in their curriculum design capabilities and were more likely to target lower order learning, such as recognition and understanding, in their intended outcomes. The relationship between confidence and attitudes towards assessment is not as strong; nevertheless, the overall trend is maintained: Cluster 2 respondents were more likely to consider assessment important in supporting students to integrate, transform and use knowledge purposefully rather than to reproduce information presented in lectures and textbooks.
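A scatter plot matrix like Figure 1 could be reproduced along the lines below; seaborn's pairplot stands in here for whatever plotting tool the authors used, and the data are the same kind of invented stand-ins as in the clustering sketch above.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
cols = ["attitude", "learning", "assessment", "confidence"]

# Simulated stand-in data with two groups (all values invented)
df = pd.DataFrame(
    np.vstack([
        rng.normal([3.4, 3.9, 3.9, 3.9], 0.3, size=(65, 4)),
        rng.normal([4.2, 4.5, 4.5, 4.5], 0.3, size=(66, 4)),
    ]),
    columns=cols,
)
df["cluster"] = ["Cluster 1"] * 65 + ["Cluster 2"] * 66

# Pairwise scatter plots, clusters distinguished by marker shape
sns.pairplot(df, hue="cluster", markers=["o", "^"])
plt.show()
```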

Discussion

The survey explored whether the higher order learning espoused as central to university learning is reflected in the intended outcomes, assessment strategies and technology choices of academics. While some respondents intended higher order learning outcomes such as evaluation, creativity or metacognition for their students, many continued to target lower order outcomes such as recognition and understanding. Application of knowledge was the most common focus of curriculum designs. The challenging outcomes relating to creativity (create) and metacognition (critique of own performance) drew the most divergent responses, as indicated by the higher standard deviations, and relatively large numbers of respondents indicated that these outcomes were not important or not applicable. This mirrored findings from the interviews conducted earlier in the study, where academics found these higher order outcomes the most problematic (McNeill 2011), being unsure of how to design tasks to target them and how to allocate marks to student work.

This tendency to avoid a focus on higher order outcomes, perpetuating the challenges identified in the literature (Astleitner 2002; Burns 2006; Clarkson and Brook 2007; Race 2003), has implications for academic practice, particularly given that 50% of survey respondents described a postgraduate or final year unit. In the university's context of mandated capstone units, where students are intended to focus on integrating knowledge rather than acquiring new knowledge and skills, and on identifying gaps in their own knowledge (McNeill 2011), this suggests the need to empower academics with the knowledge and skills to make informed decisions. There is a role for academic developers in supporting convenors to adopt more of a program approach to curriculum alignment, with differentiated curriculum targets as students progress through their programs. While understanding and being able to apply foundation principles may be important for students during their learning process, they need to acquire and hone higher order skills as they enter the final stages of their program and prepare for transition into the workforce or on to further study.

The literature suggests that newer social networking technologies may have the potential to overcome some of the barriers to capturing and storing students’ development of higher order skills such as creative thinking or self-reflection; however, the level of uptake of tools such as wikis, blogs and e-portfolios remains relatively low. While newer technologies with greater scope to target higher order learning have become available to academics (Bower et al. 2010), the study suggests that curriculum design practice is slow to change: the tools with the greatest potential to support the assessment of higher order learning were used by relatively small numbers of respondents. This highlights the importance of academic development initiatives to build academics’ capability to integrate innovations, including technology, into their teaching. If academics understand the principles underpinning curriculum alignment and how to select technologies to best suit their intended learning outcomes, they are more likely to make effective choices.

Almost all respondents agreed that assessment played an important or very important role in assessing students’ ability to integrate, transform and use knowledge purposefully and to use it creatively in novel situations, the highest levels of learning outcomes according to Anderson et al.'s (2001) taxonomy. While this was a positive finding, there was also a strong focus on lower order reproduction. In interviews conducted in previous phases of the study (McNeill et al. 2011), it emerged that many academics were concerned that students gain proficiency in understanding foundation principles, for example, which, as Race (2006) suggests, are easier to assess than higher order uses of this knowledge. Despite the high proportion of final year and postgraduate units in the study, 90% of respondents agreed that applying information to structured situations was at least moderately important, and there was little evidence of a progression towards the higher order outcomes associated with transition out of university.

This illustrates a potential source of misalignment: academics may choose assessment tasks to suit their attitudes about the role of assessment rather than their intended higher order outcomes. While many academics may intend higher order outcomes, they can inadvertently select technologies that work against these outcomes if they are guided by perceptions of assessment as serving roles such as checking whether students can reproduce information, which encourages a focus on lower order outcomes instead. An example is the small number of respondents (4) who indicated that they had chosen quiz tools to assess the higher order outcome of creativity. While quizzes may be ideal for testing students’ understanding of foundation principles, other tools such as wikis or blogs are better able to capture the learning journey involved in developing creativity.

Academics’ confidence in their curriculum design capability emerged as an important factor in the level of alignment between their intentions towards higher order learning and the curriculum choices they make. The results from the cluster analysis suggest that the links McGoldrick (2002) established between academics’ self-efficacy and their targeting of creativity in their curriculum may also extend to other types of higher order learning, such as analysis, evaluation and metacognition. While many academics rated themselves as relatively confident in their ability to design their curriculum to target their intended outcomes, their levels of confidence dropped when selecting technologies, especially for assessment purposes. This lack of confidence may contribute to even the most prevalent of technologies, discussion forums, being used predominantly for formative assessment purposes. The allocation of marks for summative purposes emerged from the Phase 2 interviews as a source of confusion and uncertainty (McNeill et al. 2011), reiterating findings from a previous study (Byrnes and Ellis 2006) in which academics were found to limit the allocation of grades to lower order tasks when technologies were employed, thereby minimising risks for their students and themselves.

One way to increase the levels of confidence convenors have in curriculum design and decision making about what technologies they include in their units is to equip them with a greater understanding of the principles underpinning higher order learning, such as curriculum alignment and the role of scaffolding and feedback in learning. Frameworks for evaluating technologies are necessary to help academics determine whether the affordances of particular technologies are suitable for their curriculum.

Conclusion

In a university sector under increasing pressure for accountability about what and how students learn, the need to capture evidence of the types of higher order learning typically associated with graduate capabilities is crucial. Although technologies have been heralded as having the potential to address some of these issues, the challenge remains of changing academic practice in adopting and using these tools effectively. The significance of this study is in illuminating current gaps in curriculum designs, between what academics and indeed program leaders intend for their students and what is targeted in the assessment. The study explored the perspectives of those convenors of online units at one Australian university who used the centrally supported LMS. While there may be other views from those using different platforms, the results provide a picture of current uses of technologies to support assessment, useful as a realistic baseline when considering the rhetoric around the potential of new technologies.

Despite the hype that sometimes surrounds the potential of social networking tools such as blogs and wikis, the majority of respondents used the more traditional tools of quizzes and discussion forums, which are available in the LMS and typically focus on assessing lower order outcomes. While these results may indicate the extent of work still to be done in raising awareness amongst teaching academics of the affordances of technology, the study is also significant in affirming the value of academic and professional development. The results reinforce the importance of academics’ confidence in informing their curriculum alignment, targeting their intended learning outcomes with appropriate assessment strategies and technology choices. Given the rapid growth of technologies for possible use in education, this understanding will become increasingly important. As newer technologies become more widely available through centrally managed platforms, there are opportunities for professional development to equip academics with the confidence to integrate these tools into their curriculum to target the elusive higher order outcomes. Strategies currently under development as a result of this study include a series of online and on-campus workshops to scaffold academics through the integration of technologies into their curriculum, beginning with questions of alignment; case studies and showcases with commentary on the uses of tools to support the development and assessment of higher order learning outcomes; and faculty-based support teams to provide guidance for individuals and groups of academics. These are being conducted as part of a university-wide implementation of a new learning management system, with the aim of using technologies to drive innovation and curriculum enhancement.

The study reiterates the importance of academic development work; however, it also provides a reminder of the complexity of curriculum design and the myriad influences at play. One opportunity for further research is more qualitative exploration of the links between the curriculum elements in specific contexts, such as an investigation of the alignment between assessment tasks and grading criteria to build a holistic picture of the types of learning outcomes targeted. While technologies offer a powerful option to support the assessment of higher order learning, they need to be aligned with the whole curriculum.

References

Anderson, L., et al. (2001) A Taxonomy for Learning, Teaching and Assessing: a Revision of Bloom's Taxonomy of Educational Objectives, Longman, New York.

Arum, R. & Roksa, J. (2010) Academically Adrift: Limited Learning on College Campuses, University of Chicago Press, Chicago, IL.

Astleitner, H. (2002) ‘Teaching critical thinking online’, Journal of Instructional Psychology, vol. 29, no. 2, pp. 53–75.

Bath, D., et al. (2004) ‘Beyond mapping and embedding graduate attributes: bringing together quality assurance and action learning to create a validated and living curriculum’, Higher Education Research & Development, vol. 23, no. 3, pp. 313–328.

Biggs, J. & Tang, C. (2007) Teaching for Quality Learning at University, Open University Press, Berkshire, UK.

Boud, D. & Falchikov, N. (2005) ‘Redesigning assessment for learning beyond higher education’, Research and Development in Higher Education, vol. 28, pp. 34–41.

Boud, D. & Falchikov, N. (2006) ‘Aligning assessment with long-term learning’, Assessment & Evaluation in Higher Education, vol. 31, no. 4, pp. 399–413.

Boulos, M. K., et al. (2006) ‘Wikis, blogs and podcasts: a new generation of Web-based tools for virtual collaborative clinical practice and education’, BMC Medical Education, vol. 6, p. 41.

Bower, M., et al. (2009) Conceptualising Web 2.0 Enabled Learning Designs. Same Places, Different Spaces, Australasian Society for Computers in Learning in Tertiary Education (ASCILITE), Auckland.

Bower, M., et al. (2010) ‘A framework for Web 2.0 learning design’, Educational Media International, vol. 47, no. 3, pp. 177–198.

Brown, G. (1997) Assessing Student Learning in Higher Education, Routledge, London.

Brown, S. & Knight, P. (1994) Assessing Learners in Higher Education, Kogan Page, London.

Bryan, C. & Clegg, K., (eds) (2006) Innovative Assessment in Higher Education, Routledge, Abingdon, UK.

Burns, M. (2006) ‘Tools for the mind’, Educational Leadership, vol. 63, no. 4, pp. 48–53.

Byrnes, R. & Ellis, A. (2006) ‘The prevalence and characteristics of online assessment in Australian universities’, Australasian Journal of Educational Technology, vol. 22, no. 1, pp. 104–125.

Churchill, D. (2007) ‘Blogs, other Web 2.0 technologies and possibilities for educational applications’, 4th International Conference on Informatics, Educational Technology and New Media, Pedagoski Fakultet u Somboru, Sombor, Serbia, pp. 317–325.

Clarkson, B. & Brook, C. (2007) ‘Achieving synergies through generic skills: a strength of online communities’, Australasian Journal of Educational Technology, vol. 23, no. 4, pp. 248–268.

Creswell, J. & Clark, V. P. (2007) Designing and Conducting Mixed Methods Research, Sage, Thousand Oaks, CA.

Crisp, G. (2007) The e-Assessment Handbook, Continuum International Publishing, New York.

Falchikov, N. & Thompson, K. (2008) ‘Assessment: what drives innovation?’, Journal of University Teaching & Learning Practice, vol. 5, no. 1, pp. 49–60.

Gray, K., et al. (2012) ‘Worth it? Findings from a study of how academics assess students’ Web 2.0 activities’, Research in Learning Technology, vol. 20, 16153.

Hewitt, J. & Peters, V. (2006) ‘Using wikis to support knowledge building in a graduate education course’, Paper presented at the World Conference on Educational Multimedia, Hypermedia and Telecommunications (EDMEDIA), Chesapeake, VA.

Jonassen, D. H. & Reeves, T. (1996) ‘Learning with technology: using computers as cognitive tools’, in Handbook of Research for Educational Communications and Technology, ed D. H. Jonassen, Macmillan, New York, pp. 693–719.

Kachigan, S. (1991) Multivariate Statistical Analysis: a Conceptual Introduction, Radius Press, New York.

MacQueen, J. B. (1967) ‘Some methods for classification and analysis of multivariate observations’, 5th Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, pp. 281–297.

McGoldrick, C. (2002) ‘Creativity and curriculum design: what do academics think?’, Commissioned Imaginative Curriculum Research Study, LTSN, June 2002.

McNeill, M. (2011). ‘Technologies to support the assessment of complex learning in capstone units: two case studies’, in Multiple Perspectives on Problem Solving and Learning in the Digital Age, eds D. Ifenthaler, P. Isaias, J. M. Spector, Kinshuk & D. Sampson, Springer, New York.

McNeill, M., et al. (2010a). ‘Aligning technologies and the curriculum: a snapshot of academic practice’, IADIS International Conference: Cognition and Exploratory Learning in Digital Age (CELDA 2010), Timisoara, Romania, pp. 630–640.

McNeill, M., et al. (2010b). Technologies to Transform Assessment: a Study of Learning Outcomes, Assessment and Technology Use in an Australian University, Australasian Society for Computers in Learning in Tertiary Education (ASCILITE), Sydney.

McNeill, M., et al. (2011) ‘Academic practice in aligning curriculum and technologies’, International Journal of Computer Information Systems and Industrial Management Applications, vol. 3, pp. 679–686.

Northcote, M. (2003) ‘Online assessment in higher education: the influence of pedagogy on the construction of students’ epistemologies’, Issues In Educational Research, vol. 13, no. 1, pp. 66–84.

Philips, R. & Lowe, K. (2003) ‘Issues associated with the equivalence of traditional and online assessment’, 20th Annual Conference of ASCILITE: Interact, Integrate, Impact, Adelaide, South Australia, pp. 419–431.

Race, P. (2003) ‘Why fix assessment?’, in Seminar: Reflections on Learning and Teaching in Higher Education, eds L. Cooke & P. Smith, Chilterns University College, Buckinghamshire, UK.

Race, P. (2006) The Lecturer's Toolkit, Routledge, London.

Ramsden, P. (1992) Learning to Teach in Higher Education, Routledge, London.

Rust, C. (2002) ‘The impact of assessment on student learning: how can the research literature practically help to inform the development of departmental assessment strategies and learner-centred assessment practices?’, Active Learning in Higher Education, vol. 3, no. 2, pp. 145–158.

Samuelowicz, K. & Bain, J. (2002) ‘Identifying academics’ orientations to assessment practice’, Higher Education, vol. 43, no. 2, pp. 173–201.

Winchester-Seeto, T., et al. (2011) ‘Smoke and mirrors: graduate attributes and the implications for student engagement in higher education’, in Engaging with Learning in Higher Education, eds I. Solomonides, A. Reid & P. Petocz, Libri Publishing, Faringdon, UK.