Exploring mixed reality based on self-efficacy and motivation of users

Kathy Essmiller, Tutaleni I. Asino, Ayodeji Ibukun*, Frances Alvarado-Albertorio, Sarinporn Chaivisit, Thanh Do and Younglong Kim

Educational Technology, Educational Leadership, & Emerging Technologies and Creativity Research Lab, College of Education and Human Sciences, Oklahoma State University, Stillwater, OK, USA

(Received: 31 August 2019; Revised: 13 November 2019; Accepted: 16 November 2019; Published: 21 February 2020)


This study addresses the question of how to facilitate instruction and practice with virtual reality to mitigate the detrimental impact of cognitive load associated with its use in simple procedural tasks. Data were collected from 63 college students aged 18 years and above at a university in the southern part of the USA. Each participant completed a questionnaire consisting of 22 questions on a seven-point Likert scale. The results show no significant differences between motivation and self-efficacy across the three selected activities: Roboraid, Tutorial and Freeplay. The opportunity for meaningful learning through the use of mixed reality is enticing; there is value in exploring facilitation of these learning opportunities through redistribution of cognitive load.

Keywords: mixed reality; virtual reality; augmented reality; Microsoft HoloLens; emerging technologies; self-efficacy; motivation.

This article is part of the special collection Mobile Mixed Reality Enhanced Learning edited by Thom Cochrane, James Birt, Helen Farley, Vickel Narayan and Fiona Smart.

*Corresponding author. Email: ayo.ibukun@okstate.edu

Research in Learning Technology 2020. © 2020 K. Essmiller et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2020, 28: 2331 - http://dx.doi.org/10.25304/rlt.v28.2331


The never-ending supply of technological innovations presents us with new and old opportunities, as well as challenges, in the educational realm. In teaching and learning spaces, new technologies that facilitate virtual reality (VR), augmented reality (AR) or mixed reality (MR) are presenting opportunities to address challenges such as how to make learning more immersive. At the same time, there is a need to address the old issue of not overwhelming the user, a concern broadly framed as cognitive load. Wearable headsets, such as the Microsoft HoloLens, are becoming more common and are an example of new technologies that are ripe for further investigation from an educational research perspective.

The Microsoft HoloLens is a wearable Windows 10 computer that allows interaction with MR. The head-mounted display (HMD) overlays virtual three-dimensional (3D) objects onto the real world, allowing users to interact with these holographic projections through voice commands and gestures (Bach et al. 2018; Furlan 2016; Gordon and Brayshaw 2017; van der Meulen, Kun, and Shaer 2017). Identified as one of the first broadly available AR headsets (Garon et al. 2016), the Microsoft HoloLens offers individual and group interactive immersive AR experiences (Gordon and Brayshaw 2017). Because activities experienced in virtual environments are designed to be similar to activities experienced in real-world environments (Bach et al. 2018; Coppens 2017), MR with the Microsoft HoloLens may provide beneficial learning and training applications. However, studies indicate that the increased cognitive load associated with the use of the Microsoft HoloLens in simple procedural tasks has a detrimental impact on performance (Baumeister et al. 2017). Bach et al. (2018) found that continued instruction and practice with the Microsoft HoloLens can lead to improvement.

Review of the literature

The literature on immersive technologies such as VR and AR is still developing and continues to change as the technologies develop. However, there is a vast body of work that speaks to their importance in education. In this section, we review the literature on VR and AR as it pertains to HoloLens display technology, applications for use and human–computer interaction (HCI) strengths and limitations.

Microsoft HoloLens

HoloLens display technology

Baumeister et al. (2017) describe three technologies for the display of AR: spatial AR (in which displays are fixed and users need not wear or hold a display device), video see-through and optical see-through. Effectively a portable computer, the Microsoft HoloLens is an optical see-through HMD, providing for the reflection of projected images without significant visual obstruction of the real world (Qian et al. 2017). HoloLens users experience a stable perception of the overlaid image and can either move the hologram or move themselves freely around the hologram, strengthening the users’ perception of the 3D environment (Bach et al. 2018; van der Meulen et al. 2017). The ability of the HoloLens to function independently of an external machine makes it suitable for use in many different applications (Garon et al. 2016). As a head-mounted computer, the HoloLens battery size and thermal efficiency have been adjusted to optimise HCI (Furlan 2016).

The HoloLens uses on-board cameras to map its environment through the construction of triangle meshes for accurate MR display of environmentally anchored virtual objects (Coppens 2017; Furlan 2016; Guan 2016). Projections of virtual 3D objects are adjusted based on the position of the HoloLens in the environment and the ‘user’s head rotations’ (van der Meulen et al. 2017, p. 399). Floating screens indicating open applications may be pinned in place or repositioned in the environment at the user’s discretion (Furlan 2016). A small circle representative of a virtual cursor follows the user’s physical position and gaze to aid in navigation (van der Meulen et al. 2017).

Reality–virtuality continuum

Despite differences, the terms ‘mixed reality’ and ‘augmented reality’ are sometimes used interchangeably. Milgram et al. (1995) created a reality–virtuality continuum classifying the types of reality based on the degree to which real-world and virtual world objects are presented in a single display (Coppens 2017). This continuum has VR, which provides total user immersion into and interaction with a virtual 3D world on one end, and actual reality on the other end (Coppens 2017; Milgram et al. 1995). Edging closer to reality on the continuum, AR is defined as the symbiotic blending of the virtual and real worlds, the augmentation of the real world through the use of virtual objects (Coppens 2017; Ishii and Ullmer 1997; Milgram et al. 1995). Computer-generated images are overlaid onto the real world and viewed through an HMD, a monitor or as a projection.

MR as created by the Microsoft HoloLens falls between VR and AR on Milgram et al.’s (1995) reality–virtuality continuum. Virtual objects in MR are intended to reflect actual placement in the real world. The virtual objects may be anchored to a real-world object, remaining in place regardless of the user’s presence. The goal of MR is the seamless merging of a virtual world with the user’s perceived real world (Guan 2016).

Gesture recognition

Navigation within Microsoft HoloLens applications is accomplished primarily through hand gestures, gaze and voice interaction (Furlan 2016). Muser (2015) identifies continuous and discrete gestures by function, classifying them as ergotic, epistemic and semiotic. Ergotic gestures are used to modify the environment, epistemic gestures are used to gain knowledge from the environment and semiotic gestures are used to convey information (Muser 2015). Microsoft HoloLens applications use specific mid-air gestures similar to those identified as ergotic or semiotic in real-world environments. The HoloLens can segment the hand and fingers from the surrounding environment (Guan 2016), recognise the gesture or action and execute a programmed response (Funk, Kritzler, and Michahelles 2017; Guan 2016).

Applications for use

Enhanced accessibility

The Microsoft HoloLens has been suggested as a resource with which accessibility to the physical world can be enhanced (Stearns et al. 2017). Unique opportunities for the magnification and flexible placement of visual information in HoloLens applications provide assistance for people with visual impairment (Stearns et al. 2017). Research into stroke rehabilitation suggests that VR games can provide motivation, with patients controlling game play through therapy-related exercises (Patil 2017). Despite privacy concerns, one preliminary study found that patients were open to improved patient care through doctors’ use of HMDs (Prochaska et al. 2016). Psychological issues such as phobias may be overcome through the use of VR exposure therapy (Coppens 2017), and the optical see-through display may lend itself to use in surgery (Qian et al. 2017).

Accessibility uses outside the field of healthcare have also been identified. MR experienced via the Microsoft HoloLens can overlay instructions onto objects or physical scenarios involving assembly tasks, providing options other than physically printed manuals. Holographic images can be used to augment perception of roads or buildings by representing pipes, wiring or other subsurface information (Coppens 2017). Collaborative AR, in which users interact by sharing the same AR view, can improve problem-solving that would otherwise depend on aural description (Coppens 2017).

Education and training

Instruction presented in both pictures and words can lead to more meaningful learning than instruction presented in words alone (Mayer 2002), suggesting the relevance of the Microsoft HoloLens for education and training. The 3D visualisation environment can assist in the understanding of data, particularly spatial data, as found in healthcare, science and engineering (Camba, Soler, and Contero 2017). Qian et al. (2017) also identified potential applications of AR technology in military training. The synchronisation of actions implemented with results seen in MR as displayed through the Microsoft HoloLens contributes to a strong sense of presence and creates novel training and learning opportunities (Gordon and Brayshaw 2017; Qian et al. 2017). In one study, medical students using AR applications to learn anatomy had higher academic achievement and reported lower cognitive load (Baumeister et al. 2017). Users can be immersed in realistic virtual environments, facilitating training and education in instances in which necessary real-world training is risky or expensive (Coppens 2017; Gordon and Brayshaw 2017). Learning content can be embedded through MR within the real world, enabling gateway features ‘to support personalized learning pathways, link assessment activities to virtual world activities’ (Gordon and Brayshaw 2017, p. 118) and facilitating flexibility and personalisation of learning experiences.

Use of mixed reality in research

Three-dimensional visualisation tools prompt a reshaping of the ways in which research is approached and accomplished (Camba et al. 2017; Patil 2017). Ishii and Ullmer (1997) suggested that researchers and developers explore views beyond the traditional graphical user interface (GUI); these views can explore the presentation and experience of ‘design information, concepts, and outcomes in an immersive manner’ (Camba et al. 2017, p. 6). Broadening research applications into the arts, Golan Levin, recipient of a 2015 Microsoft HoloLens Academic Research Grant, is exploring ways through which MR might be employed for personal expression.

Human–computer interaction in mixed reality

The Microsoft HoloLens is a good example of HCI in an MR device. Effective HCI is key to providing a lifelike experience for users in an immersive environment, requiring unobtrusive and intuitive user interaction (Garon et al. 2016; Guan 2016). The HoloLens, as currently developed, incorporates many aspects beneficial to HCI. The simultaneous view of both the real world and virtual content through the optical see-through display preserves visual experience of the real world, even if the device malfunctions (Qian et al. 2017). Mid-air gestures, gaze and vocal commands reflect a more natural user interaction with reality than manipulation of a mouse (Hasan and Yu 2017; Muser 2015; Song et al. 2016). Users’ sense of presence in MR is enhanced by the HoloLens’ real-time processing of input (Hasan and Yu 2017). The mobility of the Microsoft HoloLens itself, as a completely self-contained wearable computer, is an additional strength when considering MR uses and applications.

Elimination of HCI limitations will facilitate maximal engagement with material (Gordon and Brayshaw 2017). Research opportunities are challenged by the inability to access raw data as gathered by HoloLens sensors in real time (Garon et al. 2016). The HoloLens follows the user’s gaze as directed by head rotation, but is unable to discern and record eye-tracking data (van der Meulen et al. 2017). Developers find it difficult to replicate natural and interactive experiences using the limited and specific gestures available (Funk et al. 2017; Furlan 2016). Although a circular headband distributes the weight evenly, the HoloLens remains bulky, heavy and somewhat unattractive (Coppens 2017; Qian et al. 2017; Stearns et al. 2017). The lack of haptic input contributes to potential disruption of the immersive experience when the user tries to touch the hologram (Guan 2016). Moreover, the display presents a limited field of view, impacting gesture visibility and user interaction with MR environments (Baumeister et al. 2017; Funk et al. 2017; Stearns et al. 2017; van der Meulen et al. 2017). These challenges to HCI increase cognitive load and impair user efficacy when using the Microsoft HoloLens (Baumeister et al. 2017).

Theoretical framework

In this section, we address three theories relevant to the use of MR in educational settings: (1) cognitive load, (2) motivation and (3) self-efficacy.

Cognitive load

Cognitive load theory is concerned with the distribution of working memory while learning (Sweller 2010). Cognitive load can be categorised as intrinsic, extraneous or germane. Information complexity contributes to intrinsic cognitive load, delivery of instruction is considered extraneous cognitive load and knowledge acquisition constitutes germane cognitive load. Element interaction, an element being any concept, idea or skill that needs to be or has already been learnt (Sweller 2010), plays a significant role in cognitive load. Cognitive load has been identified as a limiting factor in the use of the Microsoft HoloLens for educational or training purposes (Baumeister et al. 2017).

Reduction of element interaction is a means through which cognitive load might be productively reapportioned (Sweller 2010). Elements contributing to cognitive load challenges include those described by dual processing theory, in which users experience pairing of verbal instructions and images (Baumeister et al. 2017; Clark and Paivio 1991), user mastery of predefined gestures, the novelty of immersion in MR and the limited field of view (Baumeister et al. 2017). Mitigation of the interaction among these elements presents a challenge, as each of them is either fixed or essential to the MR experience. Motivation may play a role in productive redistribution of cognitive load (Sweller 2010).

Motivation

The presence of achievable goals can facilitate motivation (Elliot and Church 1997; Schunk 2016). Motivation is important to engaging students in activities that facilitate their own learning (Schunk 2016). Learning goals help bring student attention to ‘processes and strategies that help them acquire capabilities and improve their skills’ (Schunk 2016, p. 374). Pursuit of learning goals can generate a growth mindset, resulting in students believing that they can, through their own effort, learn and improve in meaningful ways (Yeager and Dweck 2012). Competence influences intrinsic motivation (Ryan and Deci 2000), in which work on the task itself brings reward. Learning goals support acquisition of new skills and ‘development of problem-solving methods’ (Schunk 2016, p. 393), strategies necessary for users new to MR experiences and the Microsoft HoloLens.


Self-efficacy

Self-efficacy refers to a person’s perception of his or her ability to control his or her functioning in response to circumstances (Bandura 1977). Gilbert, Voelkel and Johnson (2018, p. 156) asserted that ‘immersive simulation provides authentic learning opportunities and support pedagogy, allowing skill development in a risk-free environment’. The purpose of MR for learning is to provide effective learning environments that enable learners to acquire the knowledge and skills necessary to perform tasks in real-world and educational settings. MR environments give learners the ability to visit and interact with locations where distance, occasion and safety can be barriers to learning (O’Neil and Perez 2006). A study of education using immersive simulations with AR technology shows that they can help students improve their self-efficacy (Gilbert et al. 2018). Learners with high levels of computer self-efficacy are less likely to express anxiety and frustration regarding the use of educational technologies (Digregorio and Liston 2018).

Problem statement

Because of the detrimental impact of cognitive load on performance when using Microsoft HoloLens MR applications for simple procedural tasks, more research is needed to discern how to incorporate HCI features of the Microsoft HoloLens and related applications into educational practice.

Research questions

This study addresses the question of how to facilitate instruction and practice with the Microsoft HoloLens to mitigate the detrimental impact of cognitive load associated with its use in simple procedural tasks. The researchers hypothesise that incorporating learning goals into self-directed learner exploration of the Microsoft HoloLens can reduce element interaction and will facilitate learner motivation to instigate and sustain behaviour, resulting in a reduction of extraneous cognitive load and a subsequent release of working memory to handle intrinsic and germane cognitive load. Specifically, the researchers sought to answer the following question: ‘Is there a difference between an individual’s motivation to interact with MR via the Microsoft HoloLens when provided a specific learning goal and an individual’s motivation to interact with MR via the Microsoft HoloLens when provided no specific learning goal?’


Data collection

This study collected data from 63 college students who were over the age of 18 years and enrolled at a high-research-activity university located in the southern part of the USA. Each participant was given extra credit as an incentive for completing the study. Students were invited to visit a campus room containing emerging technology tools such as the HoloLens. Upon their random arrival at the room, participants were asked to read and complete a consent form approved by the Institutional Review Board (IRB) prior to their MR experience. The activities were assigned in an alternating manner and in a fixed order (Activity A, Activity B, Activity C, Activity A, etc.) so that an approximately equal number of participants was expected to complete each activity.
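
The fixed-order, alternating assignment can be sketched as a simple rotation (a minimal illustration, not the authors' exact procedure; with 63 participants a strict rotation yields 21 per activity, close to the reported 21/22/20 split):

```python
# Sketch of the alternating activity assignment: participants are
# assigned A, B, C, A, B, C, ... in their order of arrival.
from itertools import cycle
from collections import Counter

rotation = cycle(["Activity A", "Activity B", "Activity C"])
assignments = [next(rotation) for _ in range(63)]  # 63 participants

# Under a strict rotation each activity receives exactly 21 participants.
print(Counter(assignments))
```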

This study consisted of three activities: Activity A was Roboraid, Activity B was a Tutorial and Activity C was Freeplay. Roboraid is a 3D AR game in which players defend their homes from a robotic invasion (Microsoft Store page for Roboraid). Activity B was Microsoft’s tutorial on how to use the HoloLens, which participants had to access on their own. Freeplay was the activity in which participants were free to explore any content on the HoloLens: they could play any game or simply use gestures or voice commands to navigate their surroundings while wearing the device.

Oral instructions were given describing the type of activity through which students were invited to engage in their MR experience with the HoloLens, which enables users to experience AR and MR (Kehe 2015). After the activity was selected, a short oral description of the gestures was given to the participants. The length of time spent in the MR experience was at each participant’s discretion, with an average of 15 min spent in each experience.


Each participant completed a questionnaire that was approved by the IRB prior to the commencement of the study. The questionnaire consisted of 22 questions using a seven-point Likert scale ranging from 1 (‘not at all true’) to 7 (‘very true’) to capture participants’ responses (see Appendix). Participants completed the survey immediately after engaging with the HoloLens. The questionnaire helped identify similarities and differences between the three selected MR activities as well as the overall user experience. Demographic information was not collected, as the purpose was to explore and observe participants’ engagement in the assigned activities.

Data analysis

The survey deployed by the researchers compared motivation with self-efficacy using 22 questions that generated a set of raw data consisting of values between 1 and 7 (see Appendix). The data were subsequently uploaded into Qualtrics, an online survey tool, allowing for their digitisation. To analyse the raw data, the mean values for each activity were computed in Qualtrics.

Descriptive statistics and comparison of means were carried out using one-way analysis of variance (ANOVA) in SPSS on the means of the responses to each of the 22 questions downloaded from Qualtrics. In SPSS, the means of the responses, grouped by activity, were sorted according to the two factors of the study: Factor 1, motivation, and Factor 2, self-efficacy. The 22 questions were divided between the two factors, leading into the findings and results.
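
The comparison can be sketched in Python with SciPy (an illustrative re-creation of the SPSS workflow described above, not the authors' actual analysis; the per-question means below are hypothetical values on the 1–7 Likert scale, not the study's data):

```python
# For one activity, compare the per-question mean ratings grouped into
# the two factors (motivation vs. self-efficacy) with Levene's test for
# homogeneity of variances (cf. Table 2) and a one-way ANOVA (cf. Table 3,
# where df1 = 1 and df2 = 20 because the groups hold 12 and 10 means).
from scipy import stats

# Hypothetical per-question mean ratings for one activity
motivation_means = [5.6, 5.9, 5.2, 6.1, 5.4, 5.8,
                    5.5, 6.0, 5.3, 5.7, 5.9, 5.6]   # 12 motivation items
self_efficacy_means = [5.0, 4.8, 5.2, 4.9, 5.3,
                       5.1, 4.7, 5.4, 5.0, 4.9]     # 10 self-efficacy items

# Homogeneity of variances between the two factor groups
levene_stat, levene_p = stats.levene(motivation_means, self_efficacy_means)

# One-way ANOVA comparing the two factor groups
f_stat, p_value = stats.f_oneway(motivation_means, self_efficacy_means)

print(f"Levene: W = {levene_stat:.3f}, p = {levene_p:.3f}")
print(f"ANOVA:  F = {f_stat:.3f}, p = {p_value:.3f}")
```

A significance level greater than 0.05, as reported in Tables 2 and 3, would indicate no significant difference between the factor groups.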


Findings and results

In total, 63 participants answered the questionnaire after finishing an activity: 21 were assigned to Activity A, 22 to Activity B and 20 to Activity C (Table 1). In the questionnaire, 12 questions related to motivation and 10 related to self-efficacy. The results show no significant differences between the two factors (i.e. motivation and self-efficacy) across the three activities – Roboraid, Tutorial and Freeplay – with significance levels greater than 0.05 (Tables 2 and 3).

Table 1. Descriptive statistics (Factor 1 = motivation, 12 items; Factor 2 = self-efficacy, 10 items).
              Factor  N   Mean    Std. Deviation  Std. Error  95% CI Lower  95% CI Upper  Minimum  Maximum
RoboraidMean  1       12  5.5925  .96090          .27739      4.9820        6.2030        4.10     6.62
              2       10  5.0010  .92530          .29261      4.3391        5.6629        2.90     6.29
              Total   22  5.3236  .97029          .20687      4.8934        5.7538        2.90     6.62
TutorialMean  1       12  5.4533  .81742          .23597      4.9340        5.9727        3.95     6.41
              2       10  5.5170  .66695          .21091      5.0399        5.9941        4.55     6.45
              Total   22  5.4823  .73599          .15691      5.1560        5.8086        3.95     6.45
FreeplayMean  1       12  5.7125  .63644          .18372      5.3081        6.1169        4.70     6.45
              2       10  5.0590  .89115          .28181      4.4215        5.6965        3.64     6.50
              Total   22  5.4155  .81452          .17366      5.0543        5.7766        3.64     6.50

Table 2. Test of homogeneity of variances.
              Levene Statistic  df1  df2  Sig.
RoboraidMean  .783              1    20   .387
TutorialMean  .439              1    20   .515
FreeplayMean  1.797             1    20   .195

Table 3. Analysis of variance (ANOVA) result.
                              Sum of Squares  df  Mean Square  F      Sig.
RoboraidMean  Between Groups  1.908           1   1.908        2.137  .159
              Within Groups   17.862          20  .893
              Total           19.771          21
TutorialMean  Between Groups  .022            1   .022         .039   .846
              Within Groups   11.353          20  .568
              Total           11.375          21
FreeplayMean  Between Groups  2.329           1   2.329        4.015  .059
              Within Groups   11.603          20  .580
              Total           13.932          21


The trend seen in the means plots suggests a difference between Activity B and Activities A and C with respect to both factors, Factor 1 (motivation) and Factor 2 (self-efficacy) (Figures 1, 2 and 3). The patterns of Activities A and C are the same, but the pattern of Activity B differs: Activities A and C appear to elicit higher motivation than self-efficacy in users, whereas Activity B appears to elicit higher self-efficacy than motivation.

Figure 1.  Means plot of Activity A: Roboraid against factors.

Figure 2.  Means plot of Activity B: Tutorial against factors.

Figure 3.  Means plot of Activity C: Freeplay against factors.
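
The means plots can be approximately re-created from the values in Table 1 (a sketch, not the authors' SPSS output; assigning the table's row groups to Roboraid, Tutorial and Freeplay follows the A/B/C activity order and is our assumption):

```python
# Re-create the means plots: one line per activity, factor means on the
# y-axis, the two factors on the x-axis.
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

factors = ["Factor 1 (motivation)", "Factor 2 (self-efficacy)"]
factor_means = {               # values from Table 1 (Mean column)
    "Roboraid": [5.5925, 5.0010],
    "Tutorial": [5.4533, 5.5170],
    "Freeplay": [5.7125, 5.0590],
}

for activity, means in factor_means.items():
    plt.plot(factors, means, marker="o", label=activity)

plt.ylabel("Mean rating (1-7 Likert scale)")
plt.legend()
plt.savefig("means_plots.png")
```

The crossing line for Tutorial against the parallel Roboraid and Freeplay lines makes the pattern difference described above visible at a glance.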

Limitations of the study

Limitations of the study include its small sample size, which reduces statistical power; however, the relatively normal distribution of the mean values across the data set mitigates this weakness. Although the study is demographically limited to male and female college students aged 18 years and above, the participants’ diverse backgrounds support the validity of the data used in the study.


Discussion

The results of this study suggest that there is relevance for the use of Microsoft HoloLens MR experiences in education and training, in agreement with other researchers in the field who state that AR learning systems can be potential learning tools if used in a systematic way (Yuan-Jen et al. 2011). Although other research has shown that challenges to HCI when using the Microsoft HoloLens increase cognitive load and impair user efficacy (Baumeister et al. 2017), cognitive load may be reapportioned as user competence with the Microsoft HoloLens improves.

Future research will explore learning goals with respect to AR, which may provide motivation for continued user interaction, skills acquisition and development of problem-solving skills when using the Microsoft HoloLens. The opportunity for meaningful learning through the use of Microsoft HoloLens MR is enticing; there is value in exploring facilitation of these learning opportunities through redistribution of cognitive load.


Acknowledgements

We are grateful to Dr. Penny Thompson for giving us great advice and guidance through one of her courses, Human–Computer Interactions, which served as the building blocks for the implementation of this project. We would like to thank our friends and volunteers for their cooperation and participation in the study. We would also like to thank the facilitators of the Emerging Technologies and Creativity Research Lab at Oklahoma State University for providing the devices and space necessary for the project’s completion.


References

Bach, B., et al., (2018) ‘The hologram in my hand: how effective is interactive exploration of 3D visualizations in immersive tangible augmented reality?’, IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, pp. 457–467. doi: 10.1109/TVCG.2017.2745941.

Bandura, A. (1977) ‘Self-efficacy: toward a unifying theory of behavioral change’, Psychological Review, vol. 84, no. 2, pp. 191–215. doi: 10.1037/0033-295X.84.2.191.

Baumeister, J., et al., (2017) ‘Cognitive cost of using augmented reality displays’, IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 11, pp. 2378–2388. doi: 10.1109/tvcg.2017.2735098.

Camba, J. D., Soler, J. L. & Contero, M. (2017) Immersive visualization technologies to facilitate multidisciplinary design education. In Learning and Collaboration Technologies. Novel Learning Ecosystems: 4th International Conference, LCT 2017, Held as Part of HCI International 2017, eds P. Zaphiris & A. Ioannou, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I, Springer International Publishing, Cham, pp. 3–11. doi: 10.1007/978-3-319-58509-3_1.

Clark, J. M. & Paivio, A. (1991) ‘Dual coding theory and education’, Educational Psychology Review, vol. 3, no. 3, pp. 149–210. doi: 10.1007/BF01320076.

Coppens, A. (2017) Merging Real and Virtual Worlds: An Analysis of the State of the Art and Practical Evaluation of Microsoft HoloLens. Master in Computer Science Masters, University of Mons. arXiv:1706.08096.

Digregorio, N. & Liston, D. D. (2018) ‘Experiencing technical difficulties: teacher self-efficacy and instructional technology’, pp. 103–117. doi: 10.1007/978-3-319-99858-9.

Elliot, A. J. & Church, M. A. (1997) ‘A hierarchical model of approach and avoidance achievement motivation’, Journal of Personality and Social Psychology, vol. 72, no. 1, p. 218. doi: 10.1037/0022-3514.72.1.218.

Funk, M., Kritzler, M. & Michahelles, F. (2017) HoloLens is more than Air Tap: Natural and Intuitive Interaction with Holograms. doi: 10.1145/3131542.3140267.

Furlan, R. (2016) ‘The future of augmented reality: HoloLens Microsoft’s AR headset shines despite rough edges’, IEEE Spectrum, vol. 53, no. 6, p. 21. doi: 10.1109/mspec.2016.7473143.

Garon, M., et al., (2016) Real-Time High Resolution 3D Data on the HoloLens. Paper presented at the 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), 19–23 September 2016. doi: 10.1109/ISMAR-Adjunct.2016.64.

Gilbert, K., Voelkel, R. & Johnson, C. (2018) ‘Increasing self-efficacy through immersive simulations: leading professional learning communities’, Journal of Leadership Education, vol. 17, no. 3, pp. 154–174. doi: 10.12806/V17/I4/R5.

Gordon, N. & Brayshaw, M. (2017) Flexible Virtual Environments: Gamifying Immersive Learning. Paper presented at the International Conference on Human-Computer Interaction, Springer, Cham. doi: 10.1007/978-3-319-58753-0_18.

Guan, L. (2016) Creating Life-like Experience in Immersive Environment-A Human Computer Perspective. Paper presented at the Center for Pattern Analysis and Machine Intelligence Seminar, Waterloo, ON.

Hasan, M. S. & Yu, H. (2017) ‘Innovative developments in HCI and future trends’, International Journal of Automation and Computing, vol. 14, no. 1, pp. 10–20. doi: 10.1007/s11633-016-1039-6.

Ishii, H. & Ullmer, B. (1997) Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. Paper presented at the Proceedings of the ACM SIGCHI Conference on Human factors in computing systems. doi: 10.1145/258549.258715.

Kehe, J. (2015) Microsoft HoloLens merges the physical world with virtual reality. Retrieved from https://www.wired.co.uk/article/project-HoloLens.

Mayer, R. E. (2002) ‘Multimedia learning’, Psychology of Learning and Motivation, vol. 41, pp. 85–139. Academic Press. doi: 10.1016/S0079-7421(02)80005-6.

Milgram, P., et al., (1995) Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum. Paper presented at the Telemanipulator and telepresence technologies Conference. doi: 10.1117/12.197321.

Muser, S. (2015) Gestures in Human-Computer-Interaction, Ludwig-Maximilians-University of Munich, Munich.

O’Neil, H. & Perez, R. (2006) Web-Based Learning: Theory, Research, and Practice, Lawrence Erlbaum Associates, Mahwah, NJ.

Patil, Y. (2017) A Multi-interface VR Platform for Rehabilitation Research. Paper presented at the Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. doi: 10.1145/3027063.3048421.

Prochaska, M. T., et al., (2016) ‘Patient perceptions of wearable face-mounted computing technology and the effect on the doctor-patient relationship’, Applied Clinical Informatics, vol. 7, no. 4, pp. 946–953. doi: 10.4338/aci-2016-06-le-0094.

Qian, L., et al., (2017) ‘Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display’, International Journal of Computer Assisted Radiology and Surgery, vol. 12, no. 6, pp. 901–910. doi: 10.1007/s11548-017-1564-y.

Ryan, R. & Deci, E. (2000) ‘Intrinsic and extrinsic motivations: classic definitions and new directions’, Contemporary Educational Psychology, vol. 25, pp. 54–67. doi: 10.1006/ceps.1999.1020.

Schunk, D. H. (2016) Learning Theories: An Educational Perspective, 7th edn., Pearson, Boston, MA.

Song, H., et al., (2016) Towards Robust Ego-Centric Hand Gesture Analysis for Robot Control. Paper presented at the Signal and Image Processing (ICSIP), IEEE International Conference on. doi: 10.1109/SIPROCESS.2016.7888345.

Stearns, L., et al., (2017) ‘Augmented Reality Magnification for Low Vision Users with the Microsoft HoloLens and a Finger-Worn Camera’, in Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 361–362.

Sweller, J. (2010) ‘Element interactivity and intrinsic, extraneous, and germane cognitive load’, Educational Psychology Review, vol. 22, no. 2, pp. 123–138. doi: 10.1007/s10648-010-9128-5.

van der Meulen, H., Kun, A. L. & Shaer, O. (2017) What Are We Missing?: Adding Eye-Tracking to the HoloLens to Improve Gaze Estimation Accuracy. Paper presented at the Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces. doi: 10.1145/3132272.3132278.

Yeager, D. S. & Dweck, C. S. (2012) ‘Mindsets that promote resilience: when students believe that personal characteristics can be developed’, Educational Psychologist, vol. 47, no. 4, pp. 302–314. doi: 10.1080/00461520.2012.722805.

Yuan-Jen, C., et al., (2011) ‘Investigating students’ perceived satisfaction, behavioral intention, and effectiveness of English learning using augmented reality’, IEEE International Conference on Multimedia and Expo, Barcelona, pp. 1–6. doi: 10.1109/ICME.2011.6012177.


Appendix

Task Evaluation Questionnaire
For each of the following statements, indicate how true it is for you, using the following scale:
1 2 3 4 5 6 7
not at all true   somewhat true   very true
1. While I was working on the task I was thinking about how much I enjoyed it.
2. I did not feel at all nervous about doing the task.
3. I felt that it was my choice to do the task.
4. I think I am pretty good at this task.
5. I found the task very interesting.
6. I felt tense while doing the task.
7. I think I did pretty well in this activity compared to other students.
8. Doing the task was fun.
9. I felt relaxed while doing the task.
10. I enjoyed doing the task very much.
11. I didn’t really have a choice about doing the task.
12. I am satisfied with my performance at this task.
13. I was anxious while doing the task.
14. I thought the task was very boring.
15. I felt like I was doing what I wanted to do while I was working on the task.
16. I felt pretty skilled at this task.
17. I thought the task was very interesting.
18. I felt pressured while doing the task.
19. I felt like I had to do the task.
20. I would describe the task as very enjoyable.
21. I did the task because I had no choice.
22. After working at this task for a while, I felt pretty competent.