This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This article reports on an action-research project designed to investigate the effect of a technological intervention on the complex interactions between student engagement, participation, attendance and preparation in a large lecture delivered as part of a compulsory first-year law course, a discipline in which such interventions have not previously been studied. The technology used was VotApedia, a form of mobile phone voting, and it was implemented in tandem with constructivist pedagogies such as explicit pre-reading and a prior context of interactive lecturing. Data were collected through observation, via mobile phone voting in class and by an online survey designed specifically to explore the relationship between attendance at VotApedia lectures and factors such as self-reported engagement and preparation. The findings indicated that student response systems (SRSs) are just as applicable to humanities-style disciplines which require divergent questioning, and that their use supported the complex interactions between engagement, attendance and preparation. Preliminary findings indicated that, although more work needs to be done, especially on the types of students who prefer to use these systems, there is clear potential to increase student engagement in large law lectures through the use of SRSs.
Education is not a spectator sport; it is a transforming encounter. It demands active engagement, not passive submission; personal participation, not listless attendance. (Rhodes)
Student engagement is an essential precondition for effective learning in higher education, especially in a context of increased transparency and quality assurance (Coates).
Our research builds on previous work exploring the effect of mobile phone voting on international student engagement (Habel).
Contrary to common assumption, the use of SRSs such as clickers is not new; they have been in use since at least the 1960s (Judson and Sawada).
One of the more thorough literature reviews, by Kay and LeSage, synthesises the benefits and challenges of SRS use reported across a wide range of studies.
Furthermore, while Kay and LeSage survey a broad sweep of this literature, the studies they review are concentrated in the hard sciences and in convergent, fact-based questioning; the use of SRSs with divergent questions in humanities-style disciplines remains largely unexplored.
Our action research design developed directly from the gaps in the literature identified above. To begin with, our work is in a different context from most work on SRSs: although they are increasingly used in the human sciences and business disciplines, there are, to our knowledge, no applications of their use in humanities or law-related fields. It is particularly important to explore the use of SRSs in the context of divergent questions, where the emphasis is on interpretation, discussion and the use of evidence to support claims. This responds to ‘the need for a concerted research effort, one that rigorously explores conditions of use across diverse settings and pedagogies’ (Fies and Marshall).
Furthermore, SRSs were used explicitly in conjunction with both attendance and preparation activities. It is not yet common for pedagogical strategies to explicitly address preparation for large lectures (for an exception see Michaelsen and Sweet).
The research questions we sought to address were as follows: Can the use of SRSs in a large first-year law lecture promote student engagement? What combination of SRS, constructivist pedagogy and peer-learning is most effective to achieve increased student preparation, participation and engagement in a large first-year law lecture?
We conducted a trial of mobile phone voting in three lectures of the undergraduate law course Principles of Public Law (enrolment 460, 2nd semester, 2011). VotApedia (a free mobile phone voting system) was employed in three lectures spread across the semester: this allowed an iterative and developmental approach to the implementation and avoided the selection bias and contingencies that would attend a single implementation. Restricting VotApedia to three lectures also moderated the perception of ‘dumbing down’, or of reduced content coverage, that is so common in the use of such systems, and ensured that student expectations regarding lecture format were still being met (D'Inverno, Davis, and White).
It is also very important to avoid a contrived research design in which the use of SRSs is simply compared with a traditional, passive lecture format. The lecturer regularly uses interactive formats, having groups of students work on small problems linked to the lecture material, so when students were asked to ‘compare the VotApedia lectures to Matthew's normal lectures’ they were reflecting on comparable examples. In this way, students had the opportunity to compare the ‘treatment’ lectures with ‘control’ experiences which were still interactive but lacked the element of voting after group work.
Use of a cyclical action research methodology allowed us to interrogate our second research question, which addresses the complex issue of which combination of techniques is most effective in achieving our desired goal of increased student engagement. This project built on two separate implementations, which created cycles of research with increasing levels of rigour. Habel reported on the earlier of these implementations, which explored the effect of mobile phone voting on international student engagement; the present project forms a subsequent, more rigorous cycle.
The implementation was inspired by the work of Eric Mazur, who pioneered peer-learning pedagogies through using SRSs (Lasry, Mazur, and Watkins).
In the lecture, we asked three substantive questions arising from the assigned pre-reading. For each question, students were first given a short lecture introduction to the topic (5–10 minutes), before being split into groups (of approximately three students) to discuss which answer was correct.
[Figure: Mobile phone voting in action.]
In addition to the substantive questions posed, we also asked some questions whose purpose was purely to assess the benefits of our use of mobile phone voting to engage students through explicitly requiring preparation. Thus, at the beginning of each lecture, students were asked whether they had completed the pre-reading, and how confident (on a 1–5 Likert scale) they were about their ability to correctly answer the questions posed. At the end of each lecture, students were asked if they found the pre-reading useful to their understanding of the issues addressed. The project implementation and research design are illustrated in the Appendix.
This process was conducted in three 2-hour lectures during the course (in weeks 8, 10 and 11 of the semester). In addition, we undertook observational analysis of the learning environment during our classes: the primary author was available solely for observation, while the second author was engaged in delivering the lecture and simultaneously observing. At the conclusion of the final session, students were asked whether (on a 1–5 Likert scale) the lectures with mobile phone voting were ‘more engaging than Matthew's regular lectures’ which, as explained above, regularly involved interactive question-based activities and group work.
After these lectures, we administered an optional online survey to elicit further information about students’ attitudes to, and perceptions of, the use of mobile phone voting in lectures. This follows the approach of valuing self-reports of engagement and satisfaction taken in the Australasian Survey of Student Engagement by the Australian Council for Educational Research (ACER) (Coates).
In the first week of the project, we asked students to answer each question first individually, then to discuss their answer in small groups and vote again. For each of our three questions, as reported in the table below, the percentage of correct responses increased after group discussion.
Impact of group discussion on student performance.
| Question | % Correct | Improvement in group | Responses | Estimated response rate (%) |
|---|---|---|---|---|
| 1 (Individual) | 74 | – | 172 | 78 |
| 1 (Group) | 84 | +10% | 153 | 70 |
| 2 (Individual) | 87 | – | 159 | 72 |
| 2 (Group) | 94 | +7% | 145 | 66 |
| 3 (Individual) | 81 | – | 159 | 72 |
| 3 (Group) | 88 | +7% | 150 | 68 |
These increases were statistically significant.
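To make the basis of this claim concrete, the improvements can be checked with a conventional two-proportion z-test. The sketch below (in Python) reconstructs approximate counts from the percentages and response numbers in the table above; it assumes independent samples, whereas the individual and group votes came from overlapping groups of students, so it illustrates the arithmetic rather than reproducing our exact analysis.

```python
# Sketch: two-tailed two-proportion z-test applied to the individual-vs-group
# results in the table above. Assumes independent samples, which the actual
# (overlapping) voting populations do not strictly satisfy.
import math

def two_prop_ztest(p1, n1, p2, n2):
    """Return z statistic and two-tailed p-value for the difference p2 - p1."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)           # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, math.erfc(abs(z) / math.sqrt(2))         # two-tailed p-value

for q, (p1, n1, p2, n2) in enumerate(
    [(0.74, 172, 0.84, 153),   # question 1: individual vs group
     (0.87, 159, 0.94, 145),   # question 2
     (0.81, 159, 0.88, 150)],  # question 3
    start=1,
):
    z, p = two_prop_ztest(p1, n1, p2, n2)
    print(f"Question {q}: z = {z:.2f}, p = {p:.3f}")
```

Under this independent-samples assumption, the first two improvements are significant at the 5% level (p ≈ 0.03 and p ≈ 0.04), while the third is marginal (p ≈ 0.09); a paired analysis of the kind appropriate to repeated voting by the same students would be more powerful.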
The results for student preparation are presented in the table below.
Student preparation and engagement.
| | % Yes (or % broad agreement) | Responses* | Standard error** |
|---|---|---|---|
| Week 1 – Did the pre-reading | 64 | 94 | 0.036 |
| Week 1 – Pre-reading useful | 70 | 114 | 0.028 |
| Week 2 – Did the pre-reading | 74 | 119 | 0.026 |
| Week 2 – Pre-reading useful | 80 | 82 | 0.034 |
| Week 3 – Did the pre-reading | 42 | 108 | 0.032 |
| Week 3 – Pre-reading useful | 74 | 81 | 0.038 |
*Based on a total population (lecture size) estimated at 200.
**Based on a confidence interval of 5%.
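The reported standard errors are consistent with the standard error of a sample proportion adjusted by a finite population correction, using the estimated population of 200 given in the footnote. The following sketch illustrates the arithmetic (a minimal reconstruction, assuming this correction was applied) and reproduces the week 1 values:

```python
# Sketch: standard error of a proportion with a finite population correction,
# using the estimated lecture population N = 200 from the table footnote.
import math

def se_proportion(p, n, N=200):
    """SE of a sample proportion when sampling n of N without replacement."""
    se = math.sqrt(p * (1 - p) / n)       # simple-random-sampling SE
    fpc = math.sqrt((N - n) / (N - 1))    # finite population correction
    return se * fpc

print(round(se_proportion(0.64, 94), 3))   # 0.036  (week 1, did the pre-reading)
print(round(se_proportion(0.70, 114), 3))  # 0.028  (week 1, pre-reading useful)
```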
This positive response in week 2 might generate an expectation of increased preparation in week 3. However, the extent of preparation in week 3 dropped to only 42%. In week 3, our lecture clashed with a major assessment in another course (in which most of our students were enrolled). This clash with assessment was a stronger (negative) influence than the (positive) influence of increased engagement.
In all cases, the standard error of the proportion was within acceptable limits, suggesting that students’ self-reports of completion and usefulness are reliable. Given the informal and anonymous nature of the responses, there is no reason to suspect that students misrepresented their views.
The impact of the preparation of assessment items on student attendance has been previously noted (Corbin, Burns and Chrzanowski).
[Figure: Group discussion and voting.]
A number of different data sources suggest that mobile phone voting led to increased student engagement in our course. Observational analysis from both authors indicated much more energy and liveliness in the room during mobile phone voting, with particularly sharp spikes when ‘correct’ answers were revealed. In addition, at the end of week 3, students were asked whether they found lectures with mobile phone voting ‘more engaging than Matthew's standard lectures’. There was 84% broad agreement, as shown in the table below.
Student assessment of increased engagement.
| Strongly agree | Somewhat agree | Neither agree nor disagree | Somewhat disagree | Strongly disagree | % Broad agreement | Responses* | Standard error** |
|---|---|---|---|---|---|---|---|
| 47% | 37% | 5% | 7% | 4% | 84 | 99 | 0.026 |
*Based on a total population (lecture size) estimated at 200.
**Based on a confidence interval of 5%.
These data are supported by the evidence of student perception gained from responses to Student Experience of Learning and Teaching (SELT) surveys at the end of the course. Responses to the prompt ‘This person encourages student participation’ (on a 1–7 Likert scale) are reported in the table below.
Student perception of encouragement of participation.
| | 2010 | 2011 |
|---|---|---|
| Mean | 5.8 | 6.4 |
| Median | 6 | 7 |
| % Broad agreement | 90 | 98 |
| Responses | 202 | 146 |
Comparing the 2 years, there is a marked increase (8 percentage points, from 90% to 98%) in broad agreement that the lecturer encourages student participation.
At the end of the semester, an online survey was administered to elicit students’ own perceptions of their engagement through the use of mobile phone voting. This survey went well beyond a simple evaluation of the teaching and learning experience: it was explicitly designed to explore relationships between attendance at lectures where VotApedia was used and subjective reports of engagement, using both quantitative and qualitative data. This relationship between attendance and engagement is an under-researched area. Students were asked how many VotApedia lectures they attended; since there were only three, the margin of error is small and it can be assumed that their responses to this question were accurate. Based on their responses, students were then asked how engaging they found Matthew's ‘normal’ lectures and how engaging they found the VotApedia lectures. Results are reported in the table below.
Comparison of engagement experiences.
| Attendance at VotApedia lectures | Number of responses | ‘Normal’ lectures engaging (% broad agreement) | ‘VotApedia’ lectures engaging (% broad agreement) | p* |
|---|---|---|---|---|
| 0 | 28 (14.5%) | NA | NA | NA |
| 1 | 18 (9.3%) | 47.1 | 76.4 | 0.26 |
| 2 | 53 (27.5%) | 50 | 84.6 | 0.027 |
| 3 | 94 (48.7%) | 62.3 | 90.3 | 0.026 |
*Based on a two-tailed probability.
These results indicate significantly higher self-reports of engagement based on the use of VotApedia (p < 0.05 for students who attended two or three VotApedia lectures).
In addition to Likert-style questions, students were asked open-ended questions which provided more nuanced information. These comments were also grouped according to attendance at VotApedia lectures to further explore relationships between attendance and engagement, and were subjected to a thematic content analysis based on an initial interpretation of the comments. The themes that emerged from the data included positive, negative and ambivalent assessments of VotApedia; comments that VotApedia made the lecture interactive, engaging or interesting; suggestions that it should be used more in lectures; and comments that it used too much time that would be better spent otherwise. To ensure complete representation of the responses, some responses were double coded: for example, if a statement had a generally positive comment followed by a comment that it took too much time, it was included in both categories (see the table below).
Attendance and open-ended responses.
| VotApedia attendance | Responses | General positive | Interactive/engaging/interesting | Took too much time | General negative | Ambivalent |
|---|---|---|---|---|---|---|
| 1 | 15 | 5 (33%) | 11 (73%) | 1 (7%) | 0 (0%) | 5 (33%) |
| 2 | 43 | 18 (42%) | 24 (56%) | 7 (16%) | 3 (7%) | 10 (23%) |
| 3 | 83 | 43 (52%) | 62 (75%) | 19 (23%) | 3 (4%) | 10 (12%) |
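Because of the double coding described above, a single comment can contribute to more than one column, so row percentages may sum to more than 100%. The sketch below illustrates the tally logic with hypothetical comment codings (the theme labels follow the table; the example data are invented):

```python
# Sketch of the double-coding tally: each comment may carry several theme
# codes, so one comment can count towards more than one column and row
# percentages may exceed 100%. The coded comments here are hypothetical.
from collections import Counter

THEMES = ["positive", "interactive", "too_much_time", "negative", "ambivalent"]

# Hypothetical codings for comments from students who attended one lecture.
coded_comments = [
    {"positive", "too_much_time"},   # double-coded: praise plus a time concern
    {"interactive"},
    {"ambivalent"},
]

counts = Counter(theme for codes in coded_comments for theme in codes)
n = len(coded_comments)
for theme in THEMES:
    print(f"{theme}: {counts[theme]} ({100 * counts[theme] / n:.0f}%)")
```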
Clearly, the overwhelming qualitative response was that students felt that the use of VotApedia was engaging. For example, ‘I'm a student who often finds myself in ‘la la land’ in lectures, no matter how interesting the topic/lecturer, so for me it really helped to make me focus and actually learn something!’ This suggests that SRSs may be particularly useful for students who are usually disengaged. Many respondents used adjectives such as engaging, interesting, or interactive, while others referred to group discussion or had more sophisticated ways of expressing a deeper relationship with the material or learning context. Many students had generally positive comments about the use of VotApedia, and some even suggested that the technology helped them retain information for quizzes and the exam, indicating that it may also serve the traditional function of aiding memory retention. For example, ‘I have hence noticed a significant benefit in the use of VotApedia in regards to my exam revision—I have retained more information from these lectures in comparison to the ‘normal’ ones’. Several of these positive comments revolved around how VotApedia motivated preparation; for example, one respondent said they only found it useful ‘after doing the pre-readings—I found out the hard way!’ This again reinforces the links between preparation, attendance and engagement, as facilitated by the use of SRSs.
Despite these overall positive responses, there was also notable ambivalence: several respondents combined positive comments (even about engagement) with criticisms of the time VotApedia took out of lectures. Others felt put upon (one student found it ‘Distracting and quite frankly, oppressive’), or that the material was being ‘dumbed down’, although often while simultaneously noting the positive aspects of VotApedia. A good example was the observation that ‘Although the voting was “engaging” and interesting, I felt that it took far too much time that would have been better spent simply giving students more information’. This sort of concern points to a deeper student ambivalence about the purpose of lectures, an ambivalence that is becoming more widespread as the use of technology in teaching increases. More particularly, this might be the reverse of a previous finding: that students who actively dislike SRSs are those who traditionally succeed in a didactic lecturing environment.
Indeed, this comment about inappropriate use of time was quite common. A total of 27 students (19% of survey respondents) commented that VotApedia took too much time and recommended fewer instances of it, or lamented the reduction of content or depth in the lectures. Nonetheless, our analysis demonstrates clearly that the vast majority of students (often including those who were dissatisfied with the use of time) found our use of mobile phone voting to be very appealing. Moreover, as we have already observed, part of the time taken was in asking questions necessary for our research (including whether the pre-reading had been done and whether it was useful). We are confident that some of the concerns about the time involved in obtaining and discussing the audience responses would be addressed in a 2-hour lecture involving only the three substantive questions, without the additional research questions.
Substantial prior research has demonstrated that SRSs, delivered via constructivist pedagogy, can markedly improve student engagement in learning activities. This research goes beyond the established claims for this impact in the hard sciences (Judson and Sawada) by demonstrating that SRSs can also promote engagement in an interpretivist discipline such as law, where divergent questioning is required.
Our trial also confirmed the value of peer-learning techniques in this context (Crouch and Mazur): in each case, group discussion measurably increased the proportion of correct answers.
Our research also confirmed that the use of SRSs is not universally endorsed by students, some of whom felt that the time could have been better spent delivering a larger amount of content – a sentiment which is not new (Van Dijk, Van den Berg and Van Keulen).
While research around the use of SRSs in pedagogically sound ways is continuing to develop in line with educational practice, this is only the beginning. More explicit exploration of the relevance of the pedagogy and technology for divergent-type questions and interpretivist disciplines is necessary, and a deeper understanding of the relationships between the various strategies and outcomes of the approach is also needed. In particular, we need a better understanding of how SRSs can help particular students – be they international, ‘non-traditional’, preparatory, or under-engaged – so that our combined use of technology and pedagogy can meet specific student needs.
The authors acknowledge the support of the Faculty of the Professions, University of Adelaide.
1. Clearly this is an action research design rather than an experimental one, but we have borrowed from the language of experimental research to describe the rigour built into the project.
2. In the first week, students were given an individual VotApedia question to compare it with their group response, but this was quickly discarded due to redundancy and time constraints. This meant that the peer-learning element of the research design was de-emphasised in the findings.
3. The research complied with institutional requirements for ethics clearance and was exempt from Human Research Ethics Committee review as it was ‘negligible risk’ research involving only non-identifiable data (in accordance with Australia's National Statement on Ethical Conduct in Human Research).
4. The lesser significance of the final result may reflect survey fatigue on the part of the participants, or some regression to the mean, or some other unobserved factor, but our dataset does not permit further exploration of this issue.
5. ‘Broad agreement’ is a standard measure of analysis in student evaluation surveys at the University of Adelaide and incorporates responses which either strongly or somewhat agree. Since this unit of analysis is an institutional standard and simplifies the analysis of Likert-style responses, we have used it in this paper.
6. It is possible that there may be additional unobserved factors that contributed to the differences in student perceptions that we report, for which we have not controlled. However, our intervention was the only intentional change to course methodology, and our observations do not indicate to us the influence of any other factors.