ORIGINAL RESEARCH ARTICLE

Assessing mobile mixed reality affordances as a comparative visualisation pedagogy for design communication

James Birtᵃ* and Michael Cowlingᵇ

ᵃFaculty of Society and Design, Bond University, Gold Coast, Australia; ᵇSchool of Engineering and Technology, Central Queensland University, Brisbane, Australia

(Received: 14 August 2018; final version received: 7 November 2018; published: 27 November 2018)

Abstract

Spatial visualisation skills and interpretation are critical in the design professions but are difficult for novice designers to develop. There is growing evidence that mixed reality visualisation improves learner outcomes, but these studies often focus on a single media representation rather than on a comparison between media and the underpinning learning outcomes. Results from recent studies highlight the use of comparative visualisation pedagogy in design through learner reflective blogs and pilot studies with experts, but these studies are limited by expense and by designs already familiar to the learner. With increasing interest in mobile pedagogy, more assessment is required to understand learner interpretation of comparative mobile mixed reality pedagogy. The aim of this study is to address this by evaluating insights from a first-year architectural design classroom, studying the impact and use of a range of mobile comparative visualisation technologies. Using a design-based research methodology and a usability framework for assessing comparative visualisation, this paper studies the complexities of spatial design in the built environment. Outcomes from the study highlight the strengths of the approach but also the improvements required in the delivery of the visualisations, particularly the reduced visibility and the visual errors caused by limited mobile processing power.

Keywords: mixed reality; mobile learning; augmented reality; virtual reality; design communication

This paper is part of the Special Collection Mobile Mixed Reality Enhanced Learning, edited by Thom Cochrane, Fiona Smart, Helen Farley and Vickel Narayan.

*Corresponding author. Email: jbirt@bond.edu.au

Research in Learning Technology 2018. © 2018 J. Birt and M. Cowling. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2018, 26: 2128 - http://dx.doi.org/10.25304/rlt.v26.2128

Introduction

Three-dimensional (3D) spatial design skills (Wu and Chiang 2013) and lighting interpretations (Webb 2006) are critical in the design professions. They allow designers to make informed evaluations regarding designs in terms of quality, character, performance and user-comfort levels. However, these skills are traditionally difficult to teach to novice designers, generally requiring significant experiential development over the course of years (Birt, Hovorka, and Nelson 2015) to move the learner from shallow to deeper learning.

To assist with deeper learning, many advocate the learning benefits of using multiple modes of visualisation delivery (Bernard et al. 2014; Höffler 2010; Moreno and Mayer 2007; Sankey, Birch, and Gardiner 2010). However, these studies often focus on combinations of more traditional technologies (such as text and images) and on a single media representation, rather than on a comparison between media (Birt and Hovorka 2014).

There is growing evidence that simulation and mixed reality (MR) visualisation improve learner outcomes in design education (Dalgarno and Lee 2010; Höffler 2010), allowing for multiple visualisation comparisons of the underlying 3D model view and of simulation interactions and interpretations (Birt and Hovorka 2014; Birt, Hovorka, and Nelson 2015; Birt, Manyuru, and Nelson 2017). Results from these recent studies highlight the use of comparative visualisation pedagogy in design through learner reflective blogs and pilot studies with experts. However, these studies are limited to expensive physical buildings, desktop-driven virtual reality (VR), augmented reality (AR) and designs familiar to the learners.

With mobile device ownership becoming ubiquitous (Akçayır and Akçayır 2017) and higher education institutions exploring a bring-your-own-device (BYOD) approach to mobile learning (Birt, Moore, and Cowling 2017; Birt et al. 2018; Cochrane 2016), the implications of these tools for design communication education, and the development and translation of a comparative mobile MR visualisation pedagogy, need further understanding and assessment.

The aim of this paper is to evaluate these insights from the assessment of a first-year architectural design classroom by studying the impact and use of a range of comparative visualisation technologies, including traditional methods as well as non-mobile and mobile MR approaches. Using a design-based research (DBR) methodology and a usability framework for assessing comparative visualisation, 26 learners completed a series of assessment surveys across two cases studying the complexities of spatial design and lighting interpretations of the built environment in both familiar and non-familiar contexts, addressing the following research questions:

RQ1: How do learners perceive the multiple visualisation modes of presentation delivered through mobile MR pedagogy?

RQ2: How does this comparative MR pedagogy support learners with complex spatial and lighting interpretations in design communication in both familiar and non-familiar environments?

Background

There is ever-increasing potential in considering how technologies can enhance classrooms and enable learning (Kirkwood and Price 2014), especially in higher education (Becker et al. 2017), with a focus on increasing learner reflection (Roy and Chi 2005; Young and Nichols 2017), pedagogical awareness of technology approaches that enhance learning (Englund, Olofsson, and Price 2017; Rodríguez-Triana et al. 2017) and perceptions of technology usefulness in higher education (Henderson, Selwyn, and Aston 2017; Mayer 2017).

Learners are moving away from traditional didactic methods, looking to tackle problems in new and different ways through learner-centred interactive pedagogies (Jones et al. 2010). Learners are expecting to be engaged by their learning environments through simulation, using participatory, interactive, sensory-rich, experimental activities (either physical or virtual), and opportunities for input. It is suggested that these types of new learners are shifting the institutional view of students from didactic consumers to partners in learning (Bryson 2016; Cook-Sather, Bovill, and Felten 2014).

In this way, MR has the potential to provide a new approach for engaging these learners (Bacca et al. 2014). It is well established that visualisations can enhance the construction of spatial knowledge (Höffler 2010) and provide a framework for multidisciplinary collaborative enhanced learning (De Freitas and Neumann 2009). However, more research attention is required to identify the appropriate technology media affordances (Dalgarno and Lee 2010) and the pedagogical learning design associated with the media visualisation (Ocepek et al. 2013).

Compounding this is the assertion that new learners have different learning modalities (Moreno and Mayer 2007), learn better through multiple blended learning modalities (Bernard et al. 2014; Moreno and Mayer 2007; Sankey, Birch, and Gardiner 2010) and prefer technology-enabled learning (TEL) (Becker et al. 2017; Englund, Olofsson, and Price 2017; Kirkwood and Price 2014). With most prior work formed around explanatory words and pictures (Ayres 2015), more research is required on complex learning environments such as interactive visualisations, games and MR simulations.

Recent advances in technology including freely available game engines, VR, AR, 3D printing and mobile BYOD have resulted in the development of new visualisation tools, techniques and instrumentation that allow for effective smartphone-enabled mobile visualisations (Akçayır and Akçayır 2017; Birt, Moore, and Cowling 2017; Birt et al. 2018; Cochrane 2016) across discipline settings and at multiple scales (Magana 2014).

Mixed reality, a continuum of these innovative technologies (Figure 1), provides a framework to position real and virtual worlds (Milgram and Kishino 1994) and allows for the design and implementation of new MR pedagogies across multiple disciplines that could not have been previously envisioned (Birt et al. 2018).

Figure 1. Reality–virtuality continuum adapted from Milgram and Kishino (1994).

Yet despite the availability of MR to provide a new way to visualise spatial environments, the field of design education has yet to fully adopt this new method. The conventional way of teaching novice students about 3D spatial skills and lighting interpretations is through a series of static two-dimensional (2D) orthographic drawings, photographs, 3D model renders and in situ physical examinations (Descottes and Ramos 2013). While experienced designers are adept at performing 2D to 3D translations, this skills gap creates a communication barrier between instructor and learner. In particular, the pedagogical method of using these different types of media, with a large focus on only a single medium or on only a familiar (real) design, means that the learning environment can be lacking in navigation, manipulation and visualisation at human scale (Birt, Hovorka, and Nelson 2015). This paper will therefore detail two case studies that looked to address this using a comparative mobile MR pedagogy for teaching design students.

Methodology

The research intervention was designed in two phases: the first presenting a series of visualisations of an environment familiar to the learners, and the second a design environment that is unfamiliar. The methods of visualisation compared were: (1) 2D orthographic and photographic renders; (2) VR on the HTC VIVE™ using a PC; (3) VR on a Samsung S8 mobile smartphone using a Google Cardboard; (4) AR on a Microsoft HoloLens; and (5) the physical built environment on the learners' university campus (familiar phase only). In this way, a comparison between the traditional models (1 and 5), the non-mobile MR model (2), the BYOD mobile model (3) and the new self-contained mobile MR model (4) was achieved.

Given this desire to build the research in two compounding phases, and informed by the work of Bannan, Cook and Pachler (2016), who suggested that design research can help to inform the development of mobile learning pedagogy, a DBR methodology (Anderson and Shattuck 2012) was selected for the project. Specifically, the four steps of the DBR methodology (Figure 2) detailed by Reeves (2006) were followed, beginning with the analysis of the problem and the design of the simulation solution (either the familiar or the unfamiliar environment, as already detailed). This was followed by the iterative implementation of that solution into an architectural design classroom by a discipline expert practitioner, positioned to evaluate the effectiveness of the solution. The practitioner provided detailed feedback on the design in a pilot study (Birt, Hovorka, and Nelson 2015) informed by earlier work by Birt and Hovorka (2014), which studied the application of comparative MR in a multimedia classroom. It should be noted that ethics approval was sought and granted for this project after assessment by the Bond University Human Research Ethics Committee.

Figure 2. Design-based research model adapted from Reeves (2006).

Following this, work began with interviews with discipline staff to identify the key problems experienced in teaching architecture students. Through these interviews, the issue of presenting different aspects of architectural design through different media was raised, as well as the need for a connecting experience to bring all these different aspects together under an umbrella design. Supplementing this work were interviews conducted with students to determine the usability aspects that would translate between all designs. Using a model of qualitative data collection through learner reflective blog entries, a set of usability criteria was identified from Birt and Hovorka (2014) and Birt, Hovorka and Nelson (2015) and then correlated with outcomes from Dey et al. (2016) and Mayer (2017) to produce an instrument that could be used as part of the testing and refinement phase. Details and initial DBR testing of this instrument were outlined in Birt, Manyuru and Nelson (2017), with an outline of the instrument shown in Table 1 and a prototype model used and described in the following. It should be noted that the instrument has not been formally validated; rather, it was grounded in and refined by previous work.


Table 1. Participant survey questions of visualisation perception.
Question (learner perception statement)
1. Accessibility: The visualisation is readily accessible
2. Learnability: The visualisation is easy to learn
3. Efficiency: The visualisation is efficient to use
4. Satisfaction: The visualisation provides confidence in the design
5. Memorability: The visualisation is memorable in support of the design
6. Error free: The visualisation is free from visual and design errors
7. Manipulability: The visualisation variables can be manipulated
8. Navigability: The visualisation allows the user to change their viewpoint
9. Visibility: The visualisation provides clear detail to interpret the design
10. Real world: The visualisation provides a match to the real world
11. Communication: The visualisation aids stakeholder communication
12. Creativity: The visualisation allows user creativity with the design
13. Engaging: The visualisation is meaningful
14. Motivating: The visualisation aids acceptance of the design

Experimental design

Specifically, for this iteration of the DBR process, students in their second week were presented with the technology methods demonstrating the affordances of each MR visualisation method, both to familiarise them with the technology and to reduce novelty bias in the applied use of the technology. NVIDIA VR Funhouse was used on the HTC VIVE, RoboRaid was used on the HoloLens and a skills training app built by the lead author was used on the Google Cardboard.

The first phase of the research intervention was then provided to the students during the fourth week of a 12-week semester, with the second phase of the intervention provided in the students' eighth week. The gap between the interventions was designed to coincide with the learners' lessons on introductory 3D techniques, leading into their studio design assessment, where they were required to produce their own designs.

The authors' intention was that using a familiar environment as a baseline would lead to improved understanding of the simulation and situate the user within the simulation environment. It was therefore important for the comparative MR visualisations to be as accurate as possible, to ground the learners within the familiar context. The non-familiar environment was provided to determine whether the learners had the same perceptions about the visualisation methods when presented with a design they were not familiar with, and to highlight how the visualisations could be used in their own creative design practice to communicate new design concepts.

During each phase, learners were randomised into the visualisation groups and exposed to all visualisation methods; this was to reduce order effects (a minimal sketch of such an order randomisation is given below). Students were asked to complete their survey at the end of each visualisation method. Each student was allowed 20 min per visualisation method, timed by the lead author. It should also be noted that students were able to talk during the intervention and discuss the visualisation method with their peers, to represent a stakeholder engagement exercise.
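To illustrate how such a counterbalancing step might be implemented, the following is a minimal Python sketch (an assumption for illustration only, not the authors' procedure or code) that shuffles the order in which each learner experiences the visualisation methods so that presentation order varies across the cohort:

```python
# Illustrative sketch only: randomise the presentation order of the
# visualisation methods per learner to reduce order effects.
import random

METHODS = [
    "2D orthographic plans and photographic renders",
    "VR - HTC VIVE (PC)",
    "VR - Samsung S8 in Google Cardboard",
    "AR - Microsoft HoloLens",
    "Physical built environment",  # familiar phase only
]

def assign_orders(learner_ids, seed=None):
    """Return a dict mapping each learner to a randomised method order."""
    rng = random.Random(seed)
    return {learner: rng.sample(METHODS, k=len(METHODS)) for learner in learner_ids}

if __name__ == "__main__":
    orders = assign_orders(range(1, 27), seed=42)  # 26 learners in the study
    print(orders[1])
```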

Comparative mixed reality intervention design

All MR simulation visualisations were built using Rhino (rhino3d.com), Maya (autodesk.com/products/maya/) and Unity3D (unity3d.com) using a series of 2D orthographic plan drawings by an architect teaching at the lead author’s institution. The fundamental lighting theory of Webb (2006) and Descottes and Ramos (2013) was drawn upon for the visualisation design, with affordance and design considerations of MR drawn from Birt and Hovorka (2014), Birt, Hovorka and Nelson (2015) and Birt, Manyuru and Nelson (2017).

Figure 3 represents a sample of the familiar design using 2D orthographic plan drawings and photographic renders, with Figure 4 representing a sample of the non-familiar design. The intent of using the 2D plan drawings was to provide a baseline visualisation method and to allow students to experience the current standard method used in industry and education practice. Students were provided with 2D plan drawings and renders for the familiar intervention (n = 75) and for the non-familiar intervention (n = 20). Students were provided these renders in a folder to peruse during the visualisation time frame.

Figure 3. Sample traditional 2D orthographic plan drawings and photographic renders of built environment design on the learners’ campus, which is familiar to students.

Figure 4. Sample traditional 2D orthographic plan drawings and photographic renders not familiar to learners.

Figure 5 represents a sample of the familiar design using VR visualisation on the HTC VIVE powered through a PC, with Figure 6 representing a sample of the non-familiar design. Figure 7 represents a sample of the familiar design using mobile VR visualisation with the Samsung S8 inserted into a Google Cardboard, with Figure 8 representing a sample of the non-familiar design. The intention of the VR simulation is to provide a human-scale representation to improve understanding of the simulation and situate the user within the simulation environment.

Figure 5. Sample virtual reality visualisation using the HTC VIVE powered through a PC of built environment design on the learners’ campus, which is familiar to students.

Figure 6. Sample virtual reality visualisation using the HTC VIVE powered through a PC, not familiar to learners.

Figure 7. Sample virtual reality visualisation using a Samsung S8 inserted into a Google Cardboard of built environment design on the learners’ campus, which is familiar to students.

Figure 8. Sample virtual reality visualisation using a Samsung S8 inserted into a Google Cardboard, not familiar to learners.

Figure 9 represents a sample of the familiar design using mobile AR visualisation on a Microsoft HoloLens, while Figure 10 represents a sample of the non-familiar design. The intention is that the AR simulation would allow for orientation at scale, situated within the backdrop of a real-world environment, which would reduce disorientation and motion sickness.

Figure 9. Sample augmented reality visualisation using a Microsoft HoloLens of built environment design on the learners’ campus, which is familiar to students.

Figure 10. Sample augmented reality visualisation using a Microsoft HoloLens, not familiar to learners.

Figure 11 represents the actual physical built environment familiar to the learners on their university campus.

Figure 11. Sample photographs of the physical built environment, familiar to learners, at their university campus.

For both interventions and all visualisations, the conditions chosen can be loosely described as a sunny summer morning. For accuracy, the actual coordinates (28.073°S, 153.416°E), orientation (50°W) and date (20 December 2016) were used to gather the proper altitude and azimuth of the sun along its path. The real-time simulations cover all 24 h of the day and can be sped up or slowed down to allow users to vary their experience. The simulations also allow learners to switch between natural light conditions and luminance mapping overlays. By simulating natural light and using luminance mapping to visualise light intensity transfer, the simulation enables learners to experience this important comparison in real time at both the human and whole-system scale. This in turn allows informed evaluation of the design in terms of spatial disposition, function and user-comfort levels. This is further enhanced by allowing users to spatially navigate (move around) both the virtual and physical building designs to experience all aspects of the familiar and non-familiar built environments.
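To make the lighting set-up concrete, the following is a minimal Python sketch (an illustrative assumption, not the authors' Unity implementation) of how solar altitude and azimuth can be approximated from latitude, day of year and local solar hour using a textbook low-precision model; a full simulation would also account for the equation of time, time zone and atmospheric refraction:

```python
# Illustrative sketch only: approximate solar altitude and azimuth for a given
# latitude, day of year and local solar hour, using a textbook low-precision
# model (ignores the equation of time, time zone offset and refraction).
import math

def solar_position(latitude_deg, day_of_year, solar_hour):
    """Return (altitude_deg, azimuth_deg); azimuth is measured clockwise from north."""
    lat = math.radians(latitude_deg)
    # Approximate solar declination (Cooper's formula).
    decl = math.radians(-23.45 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10))))
    # Hour angle: 15 degrees per hour, zero at solar noon.
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))

    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    altitude = math.asin(sin_alt)

    cos_az = ((math.sin(decl) - math.sin(lat) * sin_alt)
              / (math.cos(lat) * math.cos(altitude)))
    azimuth = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if solar_hour > 12.0:              # afternoon: mirror to the western half
        azimuth = 360.0 - azimuth
    return math.degrees(altitude), azimuth

# Example: the study site (approx. 28.073 S) on 20 December 2016 (day 355) at 9 am.
print(solar_position(-28.073, 355, 9.0))
```

In a game engine, the returned altitude and azimuth would typically drive the rotation of a directional light, with the simulation clock scaled so learners can speed the day up or slow it down.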

Results

The testing of the interventions was conducted with 26 participants in a 3D modelling subject within a first-year undergraduate architecture design bachelor’s degree. All data was collected and analysed by a researcher who was not in a lead teaching capacity within the course, to reduce power-dependency bias, in line with the granted project ethics approval.

In the first case study (familiar design environment), learners were provided with five visualisations of a building on their university campus. For the second case study (unfamiliar design), learners were provided with four visualisations, as the actual physical building was not constructed. The description of these visualisations is provided in the previous section. The aim was to compare how the participants’ use of the comparative visualisations would change when they were presented with familiar and non-familiar designs.

The results of the quantitative survey with the learners are presented below, with each item ranked on a Likert scale between 1 and 5, where 1 = strongly disagree; 2 = somewhat disagree; 3 = neither agree nor disagree; 4 = somewhat agree; 5 = strongly agree. In all cases, all students (n = 26) completed the survey, with no incomplete responses.

Results from the first (familiar design) study survey were analysed using SPSS and are presented in Table 2, with results from the second (non-familiar design) study survey presented in Table 3; a sketch of the summary statistics reported in these tables follows. Qualitative comments were also collected when students reflected on their spatial and lighting experiences, with reflection results from the first (familiar design) study presented in Table 4 and reflection results from the second (non-familiar) study presented in Table 5.
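For clarity on how the descriptive statistics in Tables 2 and 3 are defined, the following is a minimal Python sketch (an assumption for illustration, not the authors' SPSS analysis) that computes the reported measures for a single survey item from raw 5-point Likert responses:

```python
# Illustrative sketch only: descriptive statistics for one Likert item
# (1 = strongly disagree ... 5 = strongly agree), matching the table rows.
import math
import statistics

def summarise_item(responses):
    n = len(responses)
    std = statistics.stdev(responses)                     # sample standard deviation (assumed)
    return {
        "Rng": max(responses) - min(responses),           # range of variable spread
        "Min": min(responses),
        "Max": max(responses),
        "Avg": statistics.mean(responses),
        "Med": statistics.median(responses),
        "StD": std,
        "Err": std / math.sqrt(n),                        # standard error of the mean
        "%AG": 100 * sum(r >= 4 for r in responses) / n,  # somewhat or strongly agree
        "%DG": 100 * sum(r <= 2 for r in responses) / n,  # somewhat or strongly disagree
    }

# Hypothetical example with n = 26 responses for one item (not real study data).
example = [4, 4, 3, 5, 4, 4, 3, 4, 4, 5, 4, 3, 4,
           4, 5, 4, 4, 3, 4, 4, 5, 4, 4, 3, 4, 4]
print(summarise_item(example))
```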


Table 2. Spatial and lighting design perceptions within a familiar built environment.
Q 1 2 3 4 5 6 7 8 9 10 11 12 13 14
(1) Traditional 2D orthographic plan drawings and photographic renders
Rng 2.00 2.00 1.00 3.00 3.00 3.00 2.00 2.00 2.00 4.00 3.00 4.00 4.00 4.00
Min 3.00 2.00 3.00 2.00 2.00 2.00 1.00 1.00 2.00 1.00 1.00 1.00 1.00 1.00
Max 5.00 4.00 4.00 5.00 5.00 5.00 3.00 3.00 4.00 5.00 4.00 5.00 5.00 5.00
Avg 3.85 3.12 3.42 3.23 3.00 3.15 1.88 2.04 2.96 2.27 2.54 2.23 2.50 2.65
Med 4.00 3.00 3.00 3.00 3.00 3.00 2.00 2.00 3.00 2.00 3.00 2.00 2.50 2.50
StD 0.54 0.65 0.50 0.59 0.63 0.92 0.59 0.60 0.60 1.12 0.95 1.14 1.07 1.26
Err 0.11 0.13 0.10 0.12 0.12 0.18 0.12 0.12 0.12 0.22 0.19 0.22 0.21 0.25
%AG 77% 27% 42% 23% 12% 27% 0% 0% 15% 15% 15% 15% 15% 23%
%DG 0% 15% 0% 4% 15% 23% 88% 81% 19% 65% 46% 65% 50% 50%
(2) Virtual reality visualisation using the HTC VIVE powered through PC
Rng 1.00 2.00 3.00 2.00 2.00 3.00 2.00 2.00 2.00 2.00 3.00 2.00 2.00 2.00
Min 3.00 3.00 2.00 3.00 3.00 2.00 3.00 3.00 3.00 3.00 2.00 3.00 3.00 3.00
Max 4.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00
Avg 3.54 4.15 4.04 4.15 4.19 3.69 4.35 4.23 4.31 4.35 4.19 4.19 4.35 4.46
Med 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.50
StD 0.51 0.46 0.82 0.61 0.57 0.74 0.69 0.65 0.68 0.63 0.85 0.63 0.63 0.58
Err 0.10 0.09 0.16 0.12 0.11 0.14 0.14 0.13 0.13 0.12 0.17 0.12 0.12 0.11
%AG 54% 96% 77% 88% 92% 62% 88% 88% 88% 92% 81% 88% 92% 96%
%DG 0% 0% 4% 0% 0% 4% 0% 0% 0% 0% 4% 0% 0% 0%
(3) Virtual reality visualisation using a Samsung S8 inserted into a Google Cardboard
Rng 2.00 2.00 3.00 3.00 2.00 2.00 2.00 2.00 2.00 3.00 2.00 3.00 2.00 2.00
Min 3.00 3.00 2.00 2.00 3.00 2.00 3.00 3.00 3.00 2.00 3.00 2.00 3.00 3.00
Max 5.00 5.00 5.00 5.00 5.00 4.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00
Avg 4.08 4.12 4.12 4.08 4.08 2.88 4.35 4.19 3.96 3.92 4.15 4.00 4.15 4.15
Med 4.00 4.00 4.00 4.00 4.00 3.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00
StD 0.63 0.82 0.86 0.84 0.56 0.71 0.69 0.49 0.45 0.74 0.61 0.94 0.67 0.78
Err 0.12 0.16 0.17 0.17 0.11 0.14 0.14 0.10 0.09 0.15 0.12 0.18 0.13 0.15
%AG 85% 73% 77% 77% 88% 19% 88% 96% 88% 85% 88% 65% 85% 77%
%DG 0% 0% 4% 4% 0% 31% 0% 0% 0% 8% 0% 4% 0% 0%
(4) Augmented reality visualisation using a Microsoft HoloLens
Rng 2.00 2.00 2.00 2.00 3.00 3.00 2.00 2.00 1.00 3.00 2.00 2.00 2.00 2.00
Min 2.00 3.00 3.00 3.00 2.00 2.00 3.00 3.00 3.00 2.00 3.00 3.00 3.00 3.00
Max 4.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 4.00 5.00 5.00 5.00 5.00 5.00
Avg 3.38 4.04 4.19 4.23 4.19 3.38 4.19 4.38 3.92 3.92 4.19 4.23 4.31 4.27
Med 3.00 4.00 4.00 4.00 4.00 3.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00
StD 0.64 0.66 0.69 0.65 0.80 0.75 0.69 0.57 0.27 0.80 0.63 0.71 0.74 0.78
Err 0.12 0.13 0.14 0.13 0.16 0.15 0.14 0.11 0.05 0.16 0.12 0.14 0.14 0.15
%AG 46% 81% 85% 88% 85% 46% 85% 96% 92% 73% 88% 85% 85% 81%
%DG 8% 0% 0% 0% 4% 12% 0% 0% 0% 4% 0% 0% 0% 0%
(5) Physical built environment
Rng 3.00 1.00 4.00 4.00 4.00 4.00 4.00 3.00 1.00 1.00 4.00 4.00 4.00 4.00
Min 2.00 4.00 1.00 1.00 1.00 1.00 1.00 2.00 4.00 4.00 1.00 1.00 1.00 1.00
Max 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00
Avg 3.62 4.62 3.50 4.08 3.85 3.31 3.27 3.92 4.46 4.88 4.15 3.23 4.04 3.96
Med 4.00 5.00 3.50 4.00 4.00 3.00 3.00 4.00 4.00 5.00 4.00 3.00 4.00 4.00
StD 0.70 0.50 0.91 0.93 0.97 1.01 1.25 1.06 0.51 0.33 1.08 1.07 0.92 1.04
Err 0.14 0.10 0.18 0.18 0.19 0.20 0.25 0.21 0.10 0.06 0.21 0.21 0.18 0.20
%AG 58% 100% 50% 81% 65% 35% 46% 65% 100% 100% 85% 46% 81% 85%
%DG 4% 0% 8% 4% 4% 15% 31% 12% 0% 0% 12% 23% 4% 8%
Rng, range of variable spread; Min, minimum variable value; Max, maximum variable value; Avg, average mean of variables; Med, median of variables; StD, standard deviation; Err, standard error; %AG, percentage of participants who somewhat or strongly agree; %DG, percentage of participants who somewhat or strongly disagree; HTC VIVE™.


Table 3. Spatial and lighting design perceptions within a non-familiar design.
Q 1 2 3 4 5 6 7 8 9 10 11 12 13 14
(1) Traditional 2D orthographic plan drawings and photographic renders
Rng 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 3.00 4.00 3.00 3.00 2.00
Min 3.00 2.00 2.00 2.00 2.00 2.00 1.00 1.00 2.00 1.00 1.00 1.00 1.00 2.00
Max 5.00 4.00 4.00 4.00 4.00 4.00 3.00 3.00 4.00 4.00 5.00 4.00 4.00 4.00
Avg 3.77 2.92 3.31 3.00 2.88 2.96 2.00 1.69 2.81 2.54 2.88 2.65 2.85 2.69
Med 4.00 3.00 3.00 3.00 3.00 3.00 2.00 2.00 3.00 3.00 3.00 3.00 3.00 3.00
StD 0.59 0.69 0.74 0.63 0.65 0.72 0.75 0.62 0.75 0.95 0.95 0.94 0.83 0.55
Err 0.12 0.13 0.14 0.12 0.13 0.14 0.15 0.12 0.15 0.19 0.19 0.18 0.16 0.11
%AG 69% 19% 46% 19% 15% 23% 0% 0% 19% 15% 19% 15% 15% 4%
%DG 0% 27% 15% 19% 27% 27% 73% 92% 38% 46% 35% 35% 19% 35%
(2) Virtual reality visualisation using the HTC VIVE powered through PC
Rng 2.00 2.00 2.00 2.00 3.00 3.00 2.00 1.00 2.00 2.00 2.00 2.00 2.00 2.00
Min 2.00 3.00 3.00 3.00 2.00 2.00 3.00 4.00 3.00 3.00 3.00 3.00 3.00 3.00
Max 4.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00
Avg 3.62 4.19 4.15 4.15 4.42 3.96 4.42 4.54 4.15 4.08 4.27 4.31 4.46 4.46
Med 4.00 4.00 4.00 4.00 5.00 4.00 4.00 5.00 4.00 4.00 4.00 4.00 5.00 4.50
StD 0.57 0.49 0.67 0.67 0.81 0.77 0.58 0.51 0.54 0.56 0.53 0.55 0.65 0.58
Err 0.11 0.10 0.13 0.13 0.16 0.15 0.11 0.10 0.11 0.11 0.10 0.11 0.13 0.11
%AG 65% 96% 85% 85% 88% 77% 96% 100% 92% 88% 96% 96% 92% 96%
%DG 4% 0% 0% 0% 4% 4% 0% 0% 0% 0% 0% 0% 0% 0%
(3) Virtual reality visualisation using a Samsung S8 inserted into a Google Cardboard
Rng 2.00 2.00 2.00 2.00 2.00 1.00 1.00 1.00 0.00 2.00 2.00 2.00 2.00 2.00
Min 3.00 3.00 3.00 3.00 3.00 3.00 4.00 4.00 4.00 3.00 3.00 3.00 3.00 3.00
Max 5.00 5.00 5.00 5.00 5.00 4.00 5.00 5.00 4.00 5.00 5.00 5.00 5.00 5.00
Avg 4.15 4.15 4.23 4.00 4.08 3.23 4.27 4.31 4.00 3.96 4.08 4.19 4.19 4.19
Med 4.00 4.00 4.00 4.00 4.00 3.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00
StD 0.46 0.61 0.71 0.63 0.56 0.43 0.45 0.47 0.00 0.34 0.48 0.75 0.63 0.69
Err 0.09 0.12 0.14 0.12 0.11 0.08 0.09 0.09 0.00 0.07 0.09 0.15 0.12 0.14
%AG 96% 88% 85% 81% 88% 23% 100% 100% 100% 92% 92% 81% 88% 85%
%DG 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0%
(4) Augmented reality visualisation using a Microsoft HoloLens
Rng 3.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00
Min 2.00 3.00 3.00 3.00 3.00 3.00 3.00 3.00 3.00 3.00 3.00 3.00 3.00 3.00
Max 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00
Avg 3.54 4.00 4.19 4.38 4.27 3.81 4.15 4.38 4.04 4.23 4.15 4.19 4.12 4.31
Med 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00 4.00
StD 0.65 0.57 0.63 0.57 0.67 0.49 0.61 0.64 0.34 0.59 0.46 0.49 0.52 0.62
Err 0.13 0.11 0.12 0.11 0.13 0.10 0.12 0.12 0.07 0.12 0.09 0.10 0.10 0.12
%AG 54% 85% 88% 96% 88% 77% 88% 92% 96% 92% 96% 96% 92% 92%
%DG 4% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0%
Rng, range of variable spread; Min, minimum variable value; Max, maximum variable value; Avg, average mean of variables; Med, median of variables; StD, standard deviation; Err, standard error; %AG, percentage of participants who somewhat or strongly agree; %DG, percentage of participants who somewhat or strongly disagree; HTC VIVE™.


Table 4. Learner reflections of spatial and lighting design perceptions within a familiar built environment design.
(1) Traditional 2D orthographic plan drawings and photographic renders
‘2d doesn’t capture the essence of the actual building without rotations and views. Hard to match real world without time, space and sound’, ‘difficult to understand scale’, ‘no indicators and details to understand internal design’, ‘I need a scale model difficult to translate’, ‘cannot associate drawing to walking through building’, ‘complex shapes difficult to translate. No feeling of space, light and flow’, ‘I feel confident in the drawing even given the complexity’, ‘detailed but not clear’, ‘sections confuse me especially with curves’, ‘Not sure what is happening in drawing’, ‘lacks visualisation of light changes and spatial context’, ‘difficult to translate to my understanding of the building’, ‘informative but difficult to translate; understanding curves is difficult and I don’t trust the lighting shown’, ‘hard to relate to the physical space’, ‘confusing’, ‘Unable to remember specific details of drawing’.
(2) Virtual reality using the HTC VIVE powered through PC
‘shows errors that need to be fixed as well as footings which are not shown directly in plans or physical build. Lets you navigate to any point in the space, great for communicating ideas. I really like the lighting especially the real-world example. The heat map lighting was interesting.’, ‘clear understanding of the space and navigation was fine when I got used to the teleport. The lighting was good and I could get a sense of the intensity from the shadows’, ‘memorable easy to understand the building and forms’, ‘get good idea of space and lighting within the design’, ‘accurate visual of the design and materials. The shadows helped with the understanding of the curves and space’, ‘interesting experience I really liked the real-time lighting and navigation’, ‘Took a few minutes to get used to the teleportation mechanics but the space was amazing, easy to understand and move around once you know what to do. The real-time lighting was great and I liked the intensity overlay – very predator’.
(3) Virtual reality using a Samsung S8 inserted into a Google Cardboard
‘improve details and speed of rotation. I liked the lighting but not as good as the Vive’, ‘dizzy but works on a phone WOW’, ‘fantastic; would be better with more interactivity and better lighting’, ‘there was problems with the frame rates but the fact you could navigate and change the lighting simulation on the mobile was impressive. I did like how you could take this with you to a meeting and show people without having to take the pc and vive’, ‘although I cannot share the experience at the same time with my fellow students the ease of using the mobile VR system and just passing it around the room was better than the VIVE’, ‘I felt like the VIVE was more immersive but also more clumsy [and] less portable for design meetings’, ‘I thought this experience was very worthwhile and I would recommend this to my peers. I did not get sick but I do know people who did’, ‘hard to control at first; dizzy spells and motion sickness. But the simulation and interactions were [sic] good’, ‘difficult to use when wearing glasses’, ‘this level of design detail on a phone is amazing’, ‘lets you see almost the same detail as the vive. The navigation is really easy no need to click or tap just stare and move – good visual feedback – although the frame rate was a problem’.
(4) Augmented reality using a Microsoft HoloLens
‘engaging for structural views in any section available as you are the camera and can control the cross section view. I liked that you could view the real world behind the model less sensory deprivation. Awesome!’, ‘Could use some improvement in the real-world connection including textures and better shadows.’, ‘Useful in presenting sections as it strips away the layers of a building at scale. Can view the whole space at once giving focus to the design and the lighting.’, ‘Highly mobile – you are the controller, no need to learn any interactions – lets you view everything – great cross sections of the curved walls and the light and shadowing examples’, ‘The quality of the image is not as clear as the VIVE but very easy to navigate and use. You are in control and can move anywhere with amazing accuracy just change your head position. The lighting was not as good as the VIVE.’, ‘Can visualise areas I've never seen before in the real building. Including cross sections and support footings. I liked how you could see the whole environment and watch the light and shadow in realtime’, ‘not as clear as VR’, ‘Good for sectional view and whole building understanding. I prefer this to VR less isolation.’
(5) Physical built environment
‘Physical building shows great detail if available. But the space changes all the time’, ‘Walking through a space helps consolidate the design compared to the drawing. Drawings cannot translate the spatial feel of the space’, ‘Physical is best for real world but changes and has errors in terms of the finished product – never exactly how you plan it’, ‘Unlike VR cannot see all the construction techniques or tools’, ‘Being inside the building helps gain an in depth understanding of the space and design. The understanding is memorable, but manipulability is poor’, ‘Many errors in the real building, cracking, changing furniture and objects. You cannot get a true sense of the whole design with all the noise. Maybe if the building had everything removed or setup [sic] exactly the same as the other visualisations’, ‘best way to present to client but its [sic] time consuming’, ‘Actual building was the best visual of the design’, ‘The building is amazing but you cannot change anything – the design is set in concrete. The biggest problem is with the inability to move around as you please and see everything as it was when it was first constructed. How can you revise the design? You cant its [sic] already there’, ‘Most detailed, but limited navigation, manipulation and updating’, ‘Errors in the construction of the physical building such as cracking. The furniture is not in the original plans, making it difficult to conceptualise the whole space. But it is the best visualisation in terms of real world and satisfaction of my overall learning’, ‘Most realistic, not the most memorable. There are limits to being restricted to a physical world perspective that are not limited in a drawing or virtual environment. Accessibility is also an issue as you have to be in the building to see it (not always possible)’, ‘Captures the design but does not allow for modification and also the overall full plan of the building is not viewable at once only from perspective views of current position’.


Table 5. Learner reflections of spatial and lighting design perceptions within a non-familiar design.
(1) Traditional 2D orthographic plan drawings and photographic renders
‘lacks interactive light changes and spatial context I cant [sic] navigate the space or get my perspective on the design’, ‘daunting without experience; they are more directed to the builder and I don’t know how they translate’, ‘Just a bunch of pictures’, ‘Conventional, Efficient, Not Interesting’, ‘Cannot change the viewpoint in the image. The scale seems strange’, ‘understandable for design students with experience’, ‘difficult to translate to 3d form’, ‘Not sure I understand the design. The scale seems off in the pictures. The renders are not natural’, ‘Difficult to understand without lecturers interpretation’, ‘2d is better for those that can understand drawing as its [sic] very cost effective. The problem is lack of interactivity’.
(2) Virtual reality using the HTC VIVE powered through PC
‘great walk through of virtual building. I actually understand what the pictures where [sic] trying to communicate’, ‘The VR really helps me in understanding my perspective in the environment. The realtime lighting helps me process how the design might look at different times of the day. I couldn’t do this with the 2D pictures – no interaction’, ‘From clients [sic] view it would satisfy the visual communication between stakeholders. I do challenge the quality of the 3d experience. We need more detail and quality especially from a sales and marketing point of view – not sure this would sell the design’, ‘best you can get without visiting the space. Very efficient way to communicate the idea of the space and place’, ‘I wanted to spend more time in the experience I wish this was constructed in the real world – great pavilion shame they didn’t build it’, ‘I understand why we need VR I just cant [sic] translate the pictures and get a good context to the human scale environment and space’, ‘awesome way to understand design. Good tool to convey design to a client and communicate among groups’, ‘gives user a chance to visualise without visiting physical building’, ‘Who needs a physical building when you have VR!’
(3) Virtual reality using a Samsung S8 inserted into a Google Cardboard
‘The ability to carry your design with you on a phone and present to your client is amazing [and] opens up so many opportunities’, ‘Not as good as the VIVE but the fact you can carry this with you on a phone is better for communicating with your clients’, ‘Mobile VR and BIM would be a future trend for the design and construction industry but more work required on frame rate’, ‘The mobile VR was not as good as the VIVE in teaching me about the pavilion but so much better than the pictures’, ‘The shadows on the mobile device were a bit blocky but the overall visualisation was informative. I just can’t [sic] believe you can do this on a phone’, ‘Although not as visually detailed as the PC VR I think this method is actually better. You can carry your whole design portfolio with you in your pocket. This opens up so many opportunities – imagine I am in a room and someone says can we see the design for this building. You can just present it right there at the table. The cardboard VR viewer can sit in your backpack’.
(4) Augmented reality using a Microsoft HoloLens
‘Expensive but great at giving the whole scale view. Could have improved visuals but very easy to navigate’, ‘I enjoyed it for the cross-section visualisation but not as engaging or immersive as the vr’, ‘Really good for presenting to all stakeholders in the design process. Can easily view and understand the whole design concept. The ability to move around yourself and see the world is a positive. I never felt sick or lost in the design’, ‘Good experience but could be improved with more interaction. I did like the anchoring in the real world [because] less sense of claustrophobia compared to VR’, ‘Overall a very interesting concept/experience, great way to be able to show the whole system and communicate the design. I like that I can see my fellow students and not just hear them like in VR’, ‘Although expensive I see the value in this system. This is very mobile [and] can place in your backpack great for a design meeting to showcase your product concept’.
Building Information Modelling (BIM); HTC VIVE™.

Discussion

Looking at the results in the context of the existing literature, several interesting themes can be extracted from both the quantitative SPSS data and the qualitative student comments, addressing RQ1: ‘How do learners perceive the multiple visualisation modes of presentation delivered through mobile MR pedagogy?’

Firstly, as might be expected, the mean accessibility score (in the familiar design case study) indicated by students and highlighted in Table 2 is higher for the (1) 2D orthographical plans (3.85 ± 0.54) and the (5) physical building (3.62 ± 0.70) than the score given for the (4) HoloLens (3.38 ± 0.64) and the (2) HTC VIVE (3.54 ± 0.51). This is not surprising given the cost and/or hardware requirements for these two devices, such that students would likely feel that they could only access these during the lesson. This is supported by student qualitative comments from Table 4 relating to cost for the HoloLens such as ‘expensive but great at giving the whole scale view’ and ‘although expensive I see the value in this system’; and for infrastructure for the VIVE with comments such as ‘I felt like the VIVE was more immersive but also more clumsy [and] less portable for design meetings’.

However, interestingly, the mean accessibility score for the (3) BYOD mobile solution (4.08 ± 0.63) was higher than the average of all the other visualisation methods, including the (1) 2D plans and the (5) physical building itself. This suggests that students saw the value in being able to use their own device and to be able to use it anywhere, and this is supported by qualitative comments in Table 4 such as, ‘although not as visually detailed as the PC VR I think this method is actually better. You can carry your whole design portfolio with you in your pocket. This opens up so many opportunities – imagine I am in a room and someone says, can we see the design for this building? You can just present it right there at the table. The cardboard VR viewer can sit in your backpack’. This finding very much supports work by Dalgarno and Lee (2010) that indicated that affordances of MR equipment were important, as well as the growing body of literature supporting mobile learning (Bannan, Cook and Pachler 2016; Cochrane 2016) and changing expectations of learners (Bernard et al. 2014; Englund, Olofsson, and Price 2017; Henderson, Selwyn, and Aston 2017; Kirkwood and Price 2014).

Apart from accessibility, however, the scores reported in Table 2 for the traditional methods (1 and 5) are generally lower than the comparable scores for the other visualisation options. The exception is the (5) physical environment, where the students gave higher scores for both the visibility (4.46 ± 0.51) and real-world (4.88 ± 0.33) attributes than in any other mode, including the (3) mobile solution, which scored visibility (3.96 ± 0.45) and real-world (3.92 ± 0.74). It would appear that even the best mobile MR system is no substitute for the real world in terms of actually recognising real-world components. It could be argued that this work therefore challenges the assertion (Becker et al. 2017; Englund, Olofsson, and Price 2017; Kirkwood and Price 2014) that students prefer technology-enhanced learning, but it’s more likely that students still recognise the value of the real world for some aspects of learning, such as visibility, despite a shift towards mobility and TEL.

What is telling regarding the traditional methods is that the efficiency (1: 3.42 ± 0.50; 5: 3.50 ± 0.91), navigability (1: 2.04 ± 0.60; 5: 3.92 ± 1.06) and manipulation (1: 1.88 ± 0.59; 5: 3.27 ± 1.25) scores from Table 2 are particularly low for both the (5) physical and (1) 2D plan-based visualisations. This suggests that the ability to see everything (visibility) is somewhat balanced by the need to navigate the distances in the physical world manually, a point supported in Table 4 by several student comments on the use of the VIVE suggesting that navigation was easier once they got used to the teleport mechanic. It’s also interesting to note that despite finding the system easy to navigate, students also found the MR systems isolating, with one student commenting, ‘although I cannot share the experience at the same time with my fellow students the ease of using the mobile VR system and just passing it around the room was better than the VIVE’.

Looking at the individual MR visualisation device scores (2, 3, 4) separately from the more traditional visualisation scores (1 and 5), it’s noteworthy that the scores for manipulability for the (2) HTC VIVE (4.35 ± 0.69) and the (3) mobile BYOD device (4.35 ± 0.57) are higher than the score for the (4) HoloLens (4.19 ± 0.57). We anticipate that this is a result of the control mechanisms for these devices, which allow a controller to be used on the VIVE and gaze control on the Samsung S8 in the Google Cardboard, whereas the HoloLens only allowed for physical body movement through the visualisation. However, as noted, this appears to have been balanced by the visibility and real-world aspects, which are similar across the MR devices, as they are for the traditional visualisations. This supports the assertion by Ocepek et al. (2013) that pedagogical learning design must be closely associated with the media visualisation and varies based on the visualisation used.

Looking particularly at the scores for the non-familiar environment in Table 3 (where the students did not have a real physical building to compare to), the (4) HoloLens appears to have achieved the highest score for satisfaction (4.38 ± 0.64), perhaps as a result of the real-world aspect students could not get elsewhere, which is also supported by HoloLens having the highest real-world score (4.23 ± 0.59). This is despite HoloLens scoring lowest for accessibility (3.54 ± 0.65).

Across all cases, the HTC VIVE appears to have generally higher scores for many of the measures, including navigability (4.54 ± 0.51), visibility (4.15 ± 0.56), creativity (4.31 ± 0.55) and engagement (4.46 ± 0.65). It could be that the students, in knowing there was no real building, evaluated the immersive VR environment as being the next best analogue for a real-world engaging experience. This is supported in Table 5 by quite a few of the student comments, such as, ‘best you can get without visiting the space. Very efficient way to communicate the idea of the space and place’, ‘I wanted to spend more time in the experience I wish this was constructed in the real world – great pavilion shame they didn’t build it’, ‘gives user a chance to visualise without visiting physical building’, and ‘who needs a physical building when you have VR!’ It also supports the assertion by Magana (2014) that there is value in constructing visualisations that allow students to easily scale the experience.

Finally, the qualitative student comments in Table 4 and Table 5 also provide substantial support for RQ2: ‘How does this comparative MR pedagogy support learners with complex spatial and lighting interpretations in design communication in both familiar and non-familiar environments?’ It should be noted that little quantitative data was collected on this dimension, but student comments relating the technologies to one another give insight into how they feel each contributed to their learning. For example, in commenting on one of the MR visualisations, one student said, ‘shows errors that need to be fixed as well as footings which are not shown directly in plans or physical build. Lets you navigate to any point in the space, great for communicating ideas. I really like the lighting especially the real-world example. The heat map lighting was interesting’.

Students also appreciated the ability of the MR views to remove components and interact at different layers, with comments such as ‘engaging for structural views in any section available as you are the camera and can control the cross-section view. I liked that you could view the real world behind the model less sensory deprivation. Awesome!’ and ‘useful in presenting sections as it strips away the layers of a building at scale. Can view the whole space at once giving focus to the design and the lighting’ and ‘can visualise areas I've never seen before in the real building. Including cross sections and support footings. I liked how you could see the whole environment and watch the light and shadow in real time’. This is in support of Bacca et al. (2014), Höffler (2010) and De Freitas and Neumann (2009), in suggesting a need for an environment that allows for collaborative learning.

Despite this, students still valued the physical environment, making comments such as ‘physical is best for real world but changes and has errors in terms of the finished product – never exactly how you plan it’ and ‘errors in the construction of the physical building such as cracking. The furniture is not in the original plans making it difficult to conceptualise the whole space. But it is the best visualisation in terms of real world and satisfaction of my overall learning’ and ‘captures the design but does not allow for modification and also the overall full plan of the building is not viewable at once only from perspective views of current position’.

Overall, the results demonstrate that students value the different methods for visualisation differently and can see benefits of each method. Specifically, students appreciate the portability of the mobile MR approaches but recognise that immersive VR on a non-mobile device can provide better visualisation to strip away layers. They also acknowledge value in the physical space but note that it can miss some design elements because it is non-changeable and finished.

In summary, this study would appear to support the need for a comparative approach to teaching spatial design using MR, mobility and traditional approaches. The study provides some insights for the construction of a more generalised comparative MR pedagogy and a framework for future work and validation of its significance.

Conclusions

Skills in spatial recognition are essential for beginning design students. Yet the traditional methods of using 2D plans and the physical building itself remain widely used, despite their shortcomings. Mixed reality can change this by introducing a new way to look at spatial design, as well as a mechanism for comparison between different types of visualisation that allows students to fully understand the spatial elements of the design.

This study used a set of MR and traditional visualisations to demonstrate both the value of MR in this space and how it can be used for comparison. It showed that students generally preferred the use of a mobile device for its accessibility and real-world aspects but sometimes preferred the level of detail and interface conventions of a dedicated non-mobile device. In terms of comparison, students strongly identified the ability of the MR visualisations to strip away layers and visualise complexity as a strength, whilst also acknowledging the value of a real physical model. In this way, they endorsed a comparative approach for the task of spatial design, incorporating MR, mobility and real-world components. Finally, the traditional 2D plan drawings scored lowest across almost all measured factors, bringing into question the sole use of this method in the classroom and highlighting the need to consider blending multiple comparative visualisation methods.

Future work should also look at how students can effectively collaborate in a blended physical and virtual MR space. Whilst collaboration is possible with the traditional models, the MR devices can feel isolating in some instances, because even when students are wearing the same type of device, they are often not participating in the same simulation, as highlighted in the qualitative comments in the discussion. A true blended multiuser MR system would allow students this benefit of collaboration and communication whilst still allowing for the identified benefits of the comparative MR system and physical environment.

Acknowledgements

The authors wish to thank Jonathan Nelson (Architect and Computational Design Researcher), Dr Dirk Hovorka (Associate Professor of Information Systems) and Patricia Manyuru (Architect and Game-Based Educational Researcher) for their past contributions to the research framework and simulation design.

References

Akçayır, M. & Akçayır, G. (2017) ‘Advantages and challenges associated with augmented reality for education: a systematic review of the literature’, Educational Research Review, vol. 20, pp. 1–11. doi:10.1016/j.edurev.2016.11.002

Anderson, T. & Shattuck, J. (2012) ‘Design-based research: a decade of progress in education research?’, Educational Researcher, vol. 41, no. 1, pp. 16–25. doi:10.3102/0013189X11428813.

Ayres, P. (2015) ‘State-of-the-art research into multimedia learning: a commentary on Mayer’s handbook of multimedia learning’, Applied Cognitive Psychology, vol. 29, no. 4, pp. 631–636. doi:10.1002/acp.3142.

Bacca, J., Baldiris, S., Fabregat, R. & Graf, S. (2014) ‘Augmented reality trends in education: a systematic review of research and applications’, Journal of Educational Technology and Society, vol. 17, no. 4, pp. 133–149.

Bannan, B., Cook, J. & Pachler, N. (2016), ‘Reconceptualizing design research in the age of mobile learning’, Interactive Learning Environments, vol. 24, no. 5, pp. 938–953. doi:10.1080/10494820.2015.1018911.

Bernard, R. M., et al., (2014), ‘A meta-analysis of blended learning and technology use in higher education: from the general to the applied’, Journal of Computing in Higher Education, vol. 26, no. 1, pp. 87–122. doi:10.1007/s12528-013-9077-3.

Becker, S. A., et al., (2017) NMC Horizon Report: 2017 Higher Education Edition, The New Media Consortium, Austin, TX.

Birt, J. & Hovorka, D. (2014), ‘Effect of mixed media visualization on learner perceptions and outcomes’, Australasian Conference on Information Systems, Auckland, New Zealand, pp. 1–10.

Birt, J., Hovorka, D. S. & Nelson, J. (2015) ‘Interdisciplinary translation of comparative visualization’, Australasian Conference on Information Systems, Adelaide, South Australia, pp. 1–10.

Birt, J., Manyuru, P. & Nelson, J. (2017), ‘Using virtual and augmented reality to study architectural lighting’, International Conference on Innovation, Practice and Research in the Use of Educational Technologies in Tertiary Education, Toowoomba, Queensland, pp. 17–21.

Birt, J., Moore, E. & Cowling, M. (2017) ‘Improving paramedic distance education through mobile mixed reality simulation’, Australasian Journal of Educational Technology, vol. 33, no. 6, pp. 69–83. doi:10.14742/ajet.3596.

Birt, J., et al., (2018) ‘Mobile mixed reality for experiential learning and simulation in medical and health sciences education’, Information, vol. 9, no. 2, p. 31. doi:10.3390/info9020031.

Bryson, C. (2016) ‘Engagement through partnership: students as partners in learning and teaching in higher education’, International Journal for Academic Development, vol. 21, no. 1, pp. 84–86. doi:10.1080/1360144X.2016.1124966.

Cochrane, T. (2016), ‘Mobile VR in education: from the fringe to the mainstream’, International Journal of Mobile and Blended Learning (IJMBL), vol. 8, no. 4, pp. 44–60. doi:10.4018/ijmbl.2016100104.

Cook-Sather, A., Bovill, C. & Felten, P. (2014), Engaging Students as Partners in Learning and Teaching: a Guide for Faculty, John Wiley & Sons.

Dalgarno, B. & Lee, M. J. (2010) ‘What are the learning affordances of 3-D virtual environments?’, British Journal of Educational Technology, vol. 41, no. 1, pp. 10–32. doi:10.1111/j.1467-8535.2009.01038.x.

De Freitas, S. & Neumann, T. (2009) ‘The use of ‘exploratory learning’ for supporting immersive learning in virtual environments’, Computers & Education, vol. 52, no. 2, pp. 343–352. doi:10.1016/j.compedu.2008.09.010.

Descottes, H. & Ramos, C. E. (2013) Architectural Lighting: designing with Light and Space, Princeton Architectural Press, New York, NY.

Dey, A., et al., (2016) ‘A systematic review of usability studies in augmented reality between 2005 and 2014’, 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Merida, pp. 49–50. doi:10.1109/ISMAR-Adjunct.2016.0036.

Englund, C., Olofsson, A. D. & Price, L. (2017) ‘Teaching with technology in higher education: understanding conceptual change and development in practice’, Higher Education Research & Development, vol. 36, no. 1, pp. 73–87. doi:10.1080/07294360.2016.1171300.

Henderson, M., Selwyn, N. & Aston, R. (2017) ‘What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning’, Studies in Higher Education, vol. 42, no. 8, pp. 1567–1579. doi:10.1080/03075079.2015.1007946.

Höffler, T. N. (2010) ‘Spatial ability: its influence on learning with visualizations—a meta-analytic review’, Educational Psychology Review, vol. 22, no. 3, pp. 245–269. doi:10.1007/s10648-010-9126-7.

Jones, C., et al., (2010), ‘Net generation or Digital Natives: is there a distinct new generation entering university?’, Computers & Education, vol. 54, no. 3, pp. 722–732. doi:10.1016/j.compedu.2009.09.022.

Kirkwood, A. & Price, L. (2014) ‘Technology-enhanced learning and teaching in higher education: what is ‘enhanced’ and how do we know? A critical literature review’, Learning, Media and Technology, vol. 39, no. 1, pp. 6–36. doi:10.1080/17439884.2013.770404.

Magana, A. J. (2014) ‘Learning strategies and multimedia techniques for scaffolding size and scale cognition’, Computers & Education, vol. 72, pp. 367–377. doi:10.1016/j.compedu.2013.11.012.

Mayer, R. E. (2017) ‘Using multimedia for e-learning’, Journal of Computer Assisted Learning, vol. 33, no. 5, pp. 403–423. doi:10.1111/jcal.12197.

Milgram, P. & Kishino, F. (1994) ‘A taxonomy of mixed reality visual displays’, IEICE Transactions on Information and Systems, vol. 77, no. 12, pp. 1321–1329.

Moreno, R. & Mayer, R. (2007), ‘Interactive multimodal learning environments’, Educational Psychology Review, vol. 19, no. 3, pp. 309–326. doi:10.1007/s10648-007-9047-2.

Ocepek, U., et al., (2013), ‘Exploring the relation between learning style models and preferred multimedia types’, Computers & Education, vol. 69, pp. 343–355. doi:10.1016/j.compedu.2013.07.029.

Reeves, T. (2006) ‘Design research from a technology perspective’, in Educational Design Research, Routledge, pp. 64–78.

Rodríguez-Triana, M. J., et al., (2017) ‘Monitoring, awareness and reflection in blended technology enhanced learning: a systematic review’, International Journal of Technology Enhanced Learning, vol. 9, no. 2–3, pp. 126–150. doi:10.1504/IJTEL.2017.084489.

Roy, M. & Chi, M. T. (2005) ‘The self-explanation principle in multimedia learning’, The Cambridge Handbook of Multimedia Learning, pp. 271–286.

Sankey, M., Birch, D. & Gardiner, M. (2010) ‘Engaging students through multimodal learning environments: the journey continues’, Proceedings ASCILITE 2010: 27th annual conference of the Australasian Society for Computers in Learning in Tertiary Education: curriculum, technology and transformation for an unknown future, University of Queensland, pp. 852–863.

Webb, A. R. (2006) ‘Considerations for lighting in the built environment: non-visual effects of light’, Energy and Buildings, vol. 38, no. 7, pp. 721–727. doi:10.1016/j.enbuild.2006.03.004.

Wu, C. -F. & Chiang, M. -C. (2013) ‘Effectiveness of applying 2D static depictions and 3D animations to orthographic views learning in graphical course’, Computers & Education, vol. 63, pp. 28–42. doi:10.1016/j.compedu.2012.11.012.

Young, S. & Nichols, H. (2017) ‘A reflexive evaluation of technology-enhanced learning’, Research in Learning Technology, vol. 25, pp. 1–13. doi:10.25304/rlt.v25.1998.