ORIGINAL RESEARCH ARTICLE

Engaging the control-value theory: a new era of student response systems and formative assessment to improve student achievement

Mary W. Paula*, Colleen Torgersonb, Susan Traczb, Kimberly Coyb, and Juliet Wahleithnerb

aDepartment of English, California State University, Fresno, CA, USA; bKremen School of Education and Human Development, California State University, Fresno, CA, USA

(Received: 8 May 2020; Revised: 27 July 2020; Accepted: 9 September 2020; Published: 10 December 2020)

Abstract

The use of student response systems (SRS) in the form of polling and quizzing via multiple-choice questions has been well documented in the literature (Caldwell 2007). This study addressed a gap in that literature by considering content-generating SRS, such as Socrative and Google Slides, during formative assessment activities in college composition courses. Content-generating SRS display student responses to formative assessment questions, and instructors are able to evaluate and adjust course material and feedback in real time. Quantitative data were collected measuring student perception, using Likert-scale surveys, and student achievement, using essay scores. The statistically significant differences between the treatment and control groups in essay scores are objective measurements of student achievement and have implications for how to support both students and faculty in innovative curriculum design. Content-generating SRS allow for a more robust illustration of student understanding and can be adopted for larger lecture classes.

Keywords: educational technology; student response systems; formative assessment; student achievement; mobile technology

*Corresponding author. Email: MaryPaulEdD@gmail.com

Research in Learning Technology 2020. © 2020 M. W. Paul et al. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Citation: Research in Learning Technology 2020, 28: 2454 - http://dx.doi.org/10.25304/rlt.v28.2454

Introduction

As a growing number of higher education institutions attempt to meet the demands of a new generation of digital learners, faculty adoption of teaching with technology becomes both a priority and a challenge (Myers et al. 2004). Higher education institutions have been funding teaching-with-technology initiatives for years, yet adoption of technology in the college curriculum has not met expectations (Weimer 2013); in fact, this conversation has persisted for multiple decades (Spotts 1999). A potential catalyst for faculty adoption is to view the learning environment through the student lens. Encouraging and supporting student responses during class lectures enhances the classroom experience.

The use of student response systems (SRS) in the form of polling and quizzing via multiple-choice questions has been well documented in the literature (Caldwell 2007; Hoekstra 2008). Multiple names exist for this form of student engagement: SRS, classroom response systems and personal response systems. The most widely recognised form of SRS is the clicker (Boyle and Nicol 2003; Caldwell 2007; Noel, Stover, and McNutt 2015). When using clickers, faculty engage students with polling questions at strategically planned moments during a lecture to measure student understanding (Kaleta and Joosten 2007; Kulasegaram and Rangachari 2018). The traditional clicker device has buttons numbered 0 to 9, which students press to answer a question. The results are then displayed, or projected, onto the classroom screen to indicate the answers selected and the frequency of each selection (Kaleta and Joosten 2007). The assumption is that students are able to determine their level of comprehension; once the correct answer is displayed on the screen, each student knows whether he or she selected the correct answer. Hoekstra (2008) conducted an ethnographic study of clicker use in chemistry courses over a 3-year period and found, through interviews and observations, that the clickers supported a more active learning environment.

While the literature is adequate regarding the use of SRS such as clickers and mobile applications used to poll student responses (Boyle and Nicol 2003; Caldwell 2007; Noel, Stover, and McNutt 2015), what is missing is a discussion of SRS that ask students to generate content: offering text and images rather than selecting multiple-choice answers. Recent advances in educational technologies have transformed SRS into software applications that expand how students respond; these applications move responses beyond polling in that students can generate textual content as a response. Applications such as Socrative and Google Slides allow students to offer content that may be more representative of their learning and understanding and allow for a more individualised assessment. For example, if students are beginning a research project, an instructor might use Socrative to have students register their individual research questions for assessment and feedback. Or, if students are working to solve a specific math equation, an image of their work can be posted to a Google Slide for instructor feedback. SRS applications that allow students to generate content beyond what an instructor suggests have the potential for a deeper learning experience.

The aim of this study was to better understand the use of content-generating SRS as formative assessment in relation to student achievement. This study builds upon the findings and suggested future research offered by Buil, Catalan and Martinez (2016) in their analysis of students’ achievement emotions while engaged with technology. While Buil, Catalan and Martinez (2016) considered student perception of achievement emotions based upon the control-value theory, this study added student perception of anonymity and actual student achievement based upon essay scores. The use of student perception of achievement is well documented in the literature; this study, however, made the connection between technology’s use in the learning environment and improved student achievement.

Content-generating SRS and the teaching and learning environments

Applications such as Socrative and Google Slides alter the traditional SRS model of polling student responses with clickers by allowing students to add their own content. Content-generating SRS allow students to add text and images in response to instructor inquiries; these applications also allow students to remain anonymous in their responses. While studies using clickers as SRS have shown improvements in student engagement, clickers do not allow students to create content. The ability to answer an instructor’s content question with text or images may create a platform for students to think critically about constructing the response, potentially providing a deeper learning environment. The ability to remain anonymous may be a catalyst for students who would not normally speak up in class to engage via a digital discourse.

Google Slides offers a more robust platform for student responses beyond the capabilities of polling tools such as clickers. Once an instructor creates a slide deck and shares the link to that deck with students, students can add a slide and populate it with text, equations, drawings, figures and images. It offers a multi-modal platform for students to display their understanding of the instructor’s formative assessment question. Unless a student chooses to place a name on a slide, the responses are anonymous. Once students have finished placing information on their slides, the instructor enters display mode and can offer feedback on each slide. An added benefit for students is that the slide deck can be accessed later and reviewed or edited for further assessment.

Gauging student understanding using digital technology can address misconceptions instantaneously. Student content-understanding can be displayed on the screen when an instructor pauses in the middle of a lecture to poll the class on the level of understanding of the lecture material. If an instructor spends 20 min explaining that x equals y, and polling results show that students understand x to equal z, the instructor can immediately address the misconception. However, traditional clicker SRS ask students to guess from a list of pre-set items, whereas content-generating SRS ask students to produce individual responses. Faculty also find fault with students’ attention spans in the presence of new technology; students are more inclined to multitask during a lecture than to focus on the lecture material for an extended period of time (Boyle and Nicol 2003). The use of technology in a purposeful manner may actually keep students on-task: content-generating student response applications offer the ‘multi’ portion of the learning task, and students can immediately engage with the lecture content.

Understanding that students continuously measure their ability to navigate the learning environment creates the context in which to help students continue towards achievement goals. Formative assessment and instructor feedback affect the learning process at critical intervals. The use of SRS illustrates what students are understanding and when.

The role of anonymity in the learning environment

SRS offer a unique form of assessment and feedback when responses are anonymous; content-generating SRS have this option. Students can offer their content knowledge for display without the pressure of being singled out. The privacy of anonymity allows students to experience personal accountability without fear of a public display of their knowledge. The sense of anonymity also gives rise to voices that are not often heard, as more dominating voices tend to drive class discussions (Boyle and Nicol 2003; Davis 2003; Fies and Marshall 2006). SRS can extend the voice of students who do not usually speak up in class (Laxman 2011). Often when an instructor asks a question, the same handful of students offer a response while the voices of many other students remain silent. There is a tendency for students to avoid the risk of speaking up in class for fear of embarrassment and being judged by peers (Caldwell 2007). Allowing students to register their responses anonymously removes these constraints (Laxman 2011).

The potential for positive feedback on students’ responses also encourages student self-efficacy; students can receive positive feedback anonymously. Technology in the form of anonymous SRS allows faculty to hear from every voice in the class, registered via a digital device; students who may not be comfortable contributing to the learning environment, or who fear being singled out in it, are able to contribute to class discussions (Caldwell 2007). This collaboration and contribution from many students can create a less threatening classroom experience that encourages risk-taking. A sense of anonymity can level the academic playing field. The anonymous responses projected during student-response sessions offer a perception of protection from being judged and singled out.

Theoretical framework and research questions

The theoretical framework for this study looks at student perception of the teaching and learning environment. The research questions address both student perception and actual student achievement, beyond perception.

Control-value theory

Control-value theory offers educators an opportunity to analyse the antecedents and effects of emotions experienced within academic contexts (Pekrun 2006) and provided the framework for this study. With a better understanding of student perception of control over learning activities, the value placed on those activities and reciprocating outcomes and the emotions driving performance motivation, educators may better understand how to improve student learning and achievement. Pekrun (2006) posited that achievement emotions have both a causal and reciprocating effect on student achievement. Control-value theory looks at the level of perceived control a student has over the learning actions and the learning outcomes and how these perceptions relate to achievement emotions. The achievement emotions are determined by different appraisals of antecedents and different appraisals of retrospective outcomes. External factors such as social and cultural antecedents also affect the appraisals of academic control and value and performance motivation (Pekrun 2006). Students are constantly measuring the classroom environment and self-appraising their potential for achievement within that environment.

The control-value theory has many implications for educational practice. The theory emphasises fostering positive influence over student emotions in regard to control over academic activities (self-efficacy) and shaping the way students perceive and anticipate activities and outcomes (value and motivation) (Pekrun 2006). Injecting positive feedback during key learning moments can influence perception of achievement (National Research Council 2000). Improving the clarity, structure and presentation of instruction may increase students’ sense of control and agency over their learning, and positive values of academic engagement should be fostered (Pekrun 2006). Real-time feedback and an opportunity for immediate formative assessment, such as pausing a lecture to inquire about students’ understanding of the lecture content, are one way to foster such positive engagement.

Pekrun (2006) suggests that the control-value theory reinforces the need for authentic learning activities and a learning environment that engages all students; the learning environment should meet the social needs of students as well as their academic needs to offer value. As students continue to self-assess achievement emotions, they are also assessing how they may value the content and the learning experience. The motivation and energy to perform is enacted when students find value in the learning goals (Wiggins 1993).

The control-value theory allows for a better understanding of the emotions that students experience during achievement in the teaching and learning environment. These emotions can be analysed according to their antecedents and their subsequent effects (Buil, Catalan, and Martinez 2016; Pekrun 2000; Pekrun, Elliot, and Maier 2006). What does an instructor do to affect achievement emotions, and what are the effects of those emotions for the student? Achievement emotions are the product of achievement activities or achievement outcomes (Pekrun 2006; Pekrun et al. 2007). The control-value theory also treats control and value as the key appraisals underlying achievement emotions: control refers to the level of control students perceive they have over their learning, and value refers to the perceived importance of the learning activities and the corresponding outcomes (Buil, Catalan, and Martinez 2016; Pekrun et al. 2011).

Purpose and research questions

The purpose of this study was to expand current research by considering the effects of content-generating responses on student achievement and engagement, in hopes of informing current pedagogical practices. The study addressed the gap in the literature between SRS such as clickers, which poll students’ understanding by offering a list of answer options, and content-generating SRS applications, which allow students to offer their own content in response to formative assessment. The research questions for this study were:

R1: Within the framework of the control-value theory, do the content-generating treatment groups have higher mean perception of achievement emotions?

R2: Is there a difference in student achievement between the treatment groups using content-generating SRS and the control groups?

Examining the use of SRS beyond clickers in large lecture classes, and instead considering content-generating SRS, may provide the means to promote positive pedagogical changes in higher education. This study is important for informing the use of technology as it promotes student self-efficacy and inclusion. The results can support faculty adoption of high-impact practices in the classroom.

Methodology

Data collection was quantitative, capturing both student perception via surveys and actual student achievement via essay scores.

Data collection and participants

The study’s groupings comprised eight English composition courses taught by four instructors, each teaching two sections: one section using content-generating SRS and one using traditional student responses such as hand-raising. Quantitative data on the overall learning environment were collected pre and post semester, and data on student engagement were collected three times during class activities. Scores for three writing assignments were also collected.

A non-probability convenience sample of both students and writing instructors was used. Students were selected because they were enrolled in the selected writing classes and were not considered representative of the larger student-body population. The student sample consisted of 124 students (N = 124). Demographics were collected from the institution’s Office of Institutional Effectiveness (OIE) (see Table 1). It was expected that most of the student enrolment would be first-year freshmen; however, inclusion of other grade levels would not affect the study.

Table 1. Frequencies and percentages of demographics of students participating in study.
Demographic Treatment (n) Treatment (%) Control (n) Control (%) Total (n) Total (%)
Group total 73 58.9 51 41.1 124 100.0
Gender
Female 42 57.5 32 62.7 74 59.7
Male 31 42.5 19 37.3 50 40.3
Ethnicity
Asian 13 10.5 3 2.4 16 12.9
Black 1 0.8 0 0 1 0.8
Hispanic 39 31.5 39 31.5 78 62.9
Pacific Islander 1 0.8 0 0 1 0.8
Two or more 5 4 1 0.8 6 4.8
Unknown 0 0 1 0.8 1 0.8
White 14 11.3 7 5.7 21 16.9
Class level
Freshman 64 51.6 46 37.1 110 88.7
Sophomore 7 5.6 5 4 12 9.7
Senior 2 1.6 0 0 2 1.6

Instructors were recruited by the researcher from the pool of English faculty approved to teach these classes, based upon their willingness to participate in the study. All of the faculty selected for the study were lecturers in the Department of English, and each had been teaching in the first-year writing program for over 5 years. The instructors were not representative of the larger faculty population.

The faculty had been trained in a professional development program for teaching with mobile technology and were experienced teaching with technology. Participant faculty were familiar with the use of SRS such as Socrative and Google Slides, thus additional professional training for the study was not necessary.

Procedure

Each instructor taught two sections of the same writing course: one section received the treatment instruction and the other the control instruction. In the treatment classes, Google Slides collaborative activities asked students to work in groups to populate an individual group slide that answered the formative assessment question. Google Slides allows students to place multi-modal digital elements within a slide, such as pictures, images, videos, drawings and text. Unless students chose to place their names on a slide, the authorship of the slides remained anonymous. Once all students had completed their group slide, the instructor presented the entire slide deck through the classroom projector for all students to see all of the class slides. The slide decks were saved for students to access on the course learning management system.

The individual Socrative assessment activities posed a formative assessment question in Socrative’s short-answer or quiz format. Students were able to offer an individual, anonymous text response. An inventory of all student responses to the formative assessment question was displayed on the classroom screen for student and instructor review.

The instrument used in this study included all 11 subscales from Buil, Catalan and Martinez’s study plus an additional subscale of anonymity (Chua and Jiang 2006; Yoon and Rolland 2012). The 11 subscales used by Buil, Catalan and Martinez all have standardised factor loadings greater than 0.7, suggesting sufficient reliability for each subscale (Carmines and Zeller 1979); their composite reliabilities were also greater than 0.7 (Nunnally and Bernstein 1994). The anonymity subscale (Chua and Jiang 2006; Yoon and Rolland 2012) likewise has a composite reliability greater than 0.7 (Nunnally and Bernstein 1994). The Learning/Classroom Environment survey (see Table 2) was administered as a pre- and post-survey at the beginning and end of the semester via a link posted to the course Learning Management System (LMS).

Table 2. Learning/Classroom Environment survey.
Construct: Subscales
Perceived academic control (Jackson and Marsh 1996; Perry et al. 2001): Feedback (Jackson and Marsh 1996); Anonymity (Chua and Jiang 2006; Yoon and Rolland 2012); Intrinsic motivation (Guay, Vallerand, and Blanchard 2000)
Perceived self-efficacy (Pintrich et al. 1991): Enjoyment (Jackson and Marsh 1996); Pride (Pekrun, Gotz, and Perry 2005); Boredom (Pekrun, Gotz, and Perry 2005)
Perceived value (Pintrich et al. 1991): Extrinsic motivation (Pintrich et al. 1991); Perceived learning (Hamari et al. 2016); Satisfaction (Kettanurak, Ramamurthy, and Haseman 2001)
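As a rough illustration of the reliability benchmark cited above (composite reliability greater than 0.7), the sketch below shows how composite reliability can be computed from standardised factor loadings. The loadings used are hypothetical placeholders, not values from this study or from Buil, Catalan and Martinez (2016).

```python
# Minimal sketch: composite reliability (CR) from standardised factor loadings.
# The loadings below are hypothetical, for illustration only.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 for standardised loadings."""
    total = sum(loadings)
    error_variance = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error_variance)

# Example: a three-item subscale whose items all load above 0.7
print(round(composite_reliability([0.78, 0.81, 0.74]), 2))  # ~0.82, above the 0.7 benchmark
```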

Once the final draft of a writing assignment was submitted by the students, the Assessment/Engagement survey was offered to students during the following class session; this occurred three times in total. The constructs of the control-value theory comprised the Assessment/Engagement survey (see Table 3). A link to the survey was posted as an announcement on the course LMS for students to access.

Table 3. Assessment/Engagement survey.
Construct Scale item
Control1 (Jackson and Marsh 1996; Perry et al. 2001) The more effort I put into my courses, the better I do in them.
Control2 I consider myself responsible for my performance in my courses.
Control3 I have a great deal of control over my academic performance in my courses.
Feedback1 (Jackson and Marsh 1996) It is really clear to me that I am doing well.
Feedback2 I am aware of how well I am performing in my responses.
Feedback3 I know how well I am doing.
Anonymity1 (Chua and Jiang 2006; Yoon and Rolland 2012) During assessment activities, other members are able to identify me.
Anonymity2 When others see or hear the comments that I offer during assessment activities, they recognise me.
Anonymity3 When I participate in assessment activities, I feel exposed.
Intrinsic Motivation1 (Guay, Vallerand, and Blanchard 2000) I find the assessment activities funny.
Intrinsic Motivation2 I find the assessment activities interesting.
Intrinsic Motivation3 I find the assessment activities pleasant.
Self-efficacy1 (Pintrich et al. 1991) I expect to do well in this course.
Self-efficacy2 I expect to receive an excellent grade.
Self-efficacy3 I am confident I can learn interesting concepts.
Enjoyment1 (Jackson and Marsh 1996) I really enjoy the assessment activities.
Enjoyment2 I feel good during the assessment activities.
Enjoyment3 I found the assessment activities extremely rewarding.
Pride1 (Pekrun, Gotz, and Perry 2005) I feel proud if my/group responses are better than others.
Pride2 I am proud of the contribution I have made in my class/group.
Pride3 When I contribute to the assessment activities individually or as a group, I get more motivated.
Boredom1 (Pekrun, Gotz, and Perry 2005) I find the assessment activities fairly dull.
Boredom2 During assessment activities, I cannot wait for the class to end because I feel bored.
Boredom3 I think about what else I might be doing rather than engaging in the assessment activities.
Value1 (Pintrich et al. 1991) I think that what I learn in this class is useful for me to know.
Value2 I think I will be able to use what I learn in this class in other classes.
Value3 Understanding the subject is important to me.
Extrinsic motivation1 (Pintrich et al. 1991) Getting the correct response during the assessment activities is the most satisfying thing for me right now.
Extrinsic motivation2 I would like to have better responses than the other individuals or groups.
Extrinsic motivation3 I want to do well during the assessment activities because it is important to show my ability to my classmates and teachers.
Perceived learning1 (Hamari et al. 2016) The assessment activities were useful for my learning.
Perceived learning2 The assessment activities helped me to understand the material.
Perceived learning3 The assessment activities helped me to learn.
Satisfaction1 (Kettanurak, Ramamurthy, and Haseman 2001) I found the assessment activities valuable.
Satisfaction2 I was very satisfied with the assessment activities.
Satisfaction3 I had a very positive learning experience during the assessment activities.
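The survey items in Table 3 are grouped into three-item subscales. The article does not detail how responses were aggregated; a common approach, sketched below with hypothetical data and column names, is to average each respondent’s three Likert ratings per subscale.

```python
import pandas as pd

# Hypothetical wide-format responses: one row per student, one column per Likert item (1-5).
# Item names follow Table 3; the values are invented for illustration.
responses = pd.DataFrame({
    "Enjoyment1": [4, 5, 3], "Enjoyment2": [4, 4, 3], "Enjoyment3": [5, 4, 2],
    "Boredom1":   [2, 1, 4], "Boredom2":   [1, 2, 3], "Boredom3":   [2, 1, 4],
})

subscales = {
    "Enjoyment": ["Enjoyment1", "Enjoyment2", "Enjoyment3"],
    "Boredom":   ["Boredom1", "Boredom2", "Boredom3"],
}

# Average the three items of each subscale for every respondent.
scores = pd.DataFrame({name: responses[items].mean(axis=1)
                       for name, items in subscales.items()})
print(scores)
```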

A 5-point rubric was used to score the essays. In an attempt to measure student achievement directly, scores for each of the three writing assignments were collected to determine whether a significant mean difference in scores existed between the treatment and control groups. Because it is possible that any one class may have had a majority of students with especially high or low SAT and/or GPA scores, high school Grade Point Average (GPA) and Scholastic Aptitude Test (SAT) data were supplied by the OIE, which checked the data to ensure they were accurate (Tables 4 and 5). Two independent-samples t-tests were run to determine the mean difference in high school GPA and SAT scores between the control and treatment groups. While SAT and GPA are only broad indicators of prior knowledge, the university uses high school GPA and SAT scores as predictive of enrolment into first-year composition courses.

Table 4. Descriptive data for student SAT scores.
Source N Mean Std. deviation
Control 50 1020.00 110.82
Treatment 70 1042.00 143.29


Table 5. Descriptive data for student high school GPA.
Source N Mean Std. deviation
Control 51 3.56 0.43
Treatment 73 3.56 0.42
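As an illustration of the group-equivalence check described above, the SAT comparison can be reproduced from the summary statistics in Table 4 with an independent-samples t-test. This is a sketch only, assuming equal variances; it is not the authors’ analysis code.

```python
from scipy import stats

# Independent-samples t-test on SAT scores, reconstructed from the summary
# statistics reported in Table 4 (control vs. treatment).
t, p = stats.ttest_ind_from_stats(
    mean1=1020.00, std1=110.82, nobs1=50,   # control group
    mean2=1042.00, std2=143.29, nobs2=70,   # treatment group
)
print(f"t = {t:.2f}, p = {p:.3f}")  # a non-significant result would support group equivalence
```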

Analyses and results

The analysis consisted of repeated-measures analysis of variance (ANOVA) for student perception data and student essay scores.

Learning/classroom environment

The analysis was a 2 × 2 repeated-measures ANOVA. The independent variables were treatment and time (2×: pre and post). The dependent variables were the means of academic control, self-efficacy and value.

Assessment/engagement

The analysis was a 2 × 3 repeated-measures ANOVA with treatment and time (3×) as the independent variables. The dependent variables were the measures of student perception of instructor feedback, enjoyment, pride, boredom, intrinsic motivation, extrinsic motivation, perceived learning, satisfaction and anonymity.

Student achievement

The three assignment scores were analysed using a 2 × 3 repeated-measures ANOVA with treatment and time (3×) as the independent variables. The dependent variable was the assignment score.
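A design like those described above, with a between-subjects treatment factor crossed with a repeated time factor, can be run in Python as a mixed-design ANOVA. The sketch below uses the pingouin library and hypothetical column and file names; it illustrates the type of analysis rather than the authors’ actual procedure.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per student per assignment, with columns
# for the student identifier, group (treatment/control), time point and score.
df = pd.read_csv("essay_scores_long.csv")  # assumed file, not part of the study

# 2 (treatment) x 3 (time) mixed-design ANOVA: treatment is between-subjects,
# time is the repeated (within-subjects) factor.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="student_id", between="treatment")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta squared
```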

Summary of results

R1: Within the framework of the control-value theory, do the content-generating treatment groups have higher mean perception of achievement emotions?

There were 11 constructs from the control-value theory, plus the added anonymity subscale. Five analyses produced findings that were significant or approaching significance; these are displayed in Table 6.

Table 6. Summary of significant and non-significant (NS) perception variables.
Perception variable Treatment Time linear Time quadratic Interaction linear Interaction quadratic
Academic control NS NS NS
 Instructor feedback NS NS NS NS NS
 Anonymity NS NS NS NS 0.03
 Intrinsic motivation NS NS NS NS NS
Self-efficacy NS 0.052 NS
 Enjoyment NS <0.001 NS NS NS
 Pride NS NS NS NS NS
 Boredom NS NS NS NS NS
Value NS 0.051 NS
 Extrinsic motivation NS NS NS NS NS
 Learning NS NS NS NS NS
 Satisfaction NS NS NS 0.05 NS

R2: Is there a difference in student achievement between the treatment groups using content-generating SRS and the control groups?

A two-way repeated-measures ANOVA was used to test the differences in student essay scores over time between students in the treatment group and the students in the control group (see Table 7).

Table 7. Repeated-measures analysis of variance for student writing scores.
Source Sum of squares df Mean square F Sig. Partial eta squared
Treatment 299.30 1 299.30 11.53 0.001 0.14
 Error 1843.21 71 25.96
Time
 Linear 1035.95 1 1035.95 85.73 <0.001 0.55
 Quadratic 89.27 1 89.27 5.90 0.02 0.08
Time by treatment
 Linear 0.58 1 0.58 0.05 0.83 0.001
 Quadratic 47.65 1 47.65 3.15 0.08 0.04
Error (time)
 Linear 857.93 71 12.08
 Quadratic 1075.01 71 15.14
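As a quick arithmetic check on Table 7, the partial eta squared values can be recovered from the reported sums of squares (partial eta squared = SS_effect / (SS_effect + SS_error)); the snippet below reproduces the treatment and time effects.

```python
# Partial eta squared = SS_effect / (SS_effect + SS_error), using the values in Table 7.
effects = {
    "treatment":      (299.30, 1843.21),   # (SS_effect, SS_error)
    "time linear":    (1035.95, 857.93),
    "time quadratic": (89.27, 1075.01),
}
for name, (ss_effect, ss_error) in effects.items():
    print(f"{name}: partial eta^2 = {ss_effect / (ss_effect + ss_error):.2f}")
# treatment ~0.14, time linear ~0.55, time quadratic ~0.08, matching Table 7
```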

The statistically significant results between the treatment and control groups for essay scores are objective measurements of student achievement and have implications for how to support both students and faculty in innovative curriculum design. The use of formative assessment is a high-impact practice that asks faculty to better understand student learning and understanding. Students are constantly self-assessing their ability to achieve and persist towards a learning goal (Bandura 1977). The use of content-generating SRS to project student responses during the treatment classes allowed faculty to adjust the lecture and allowed students to view content from everyone in the classroom. The control courses provided student feedback only from the select group of students who chose to raise their hands and offer information. Instructors in the treatment courses had a better perception of what students understood about completing an essay and were able to offer feedback specifically addressing misconceptions or confusion about a writing assignment.

Discussion

The main purpose of this study was to determine if the use of content-generating SRS as formative assessment in freshman writing courses had any effect on students’ perception of their learning, and on students’ achievement scores on writing projects. This study considered content-generating SRS applications that allowed students to add text and content of their own in response to formative assessment. Content-generating SRS allow students to think critically and self-select the content of a response (Boyle and Nicol 2003).

To improve student achievement is to look beyond the instructor and student and consider the transactional environment of content delivery and content knowledge (Buil, Catalan, and Martinez 2016; Pekrun 2006; Pekrun et al. 2011; Ryan and Deci 2000). Instructor feedback is an important element of formative assessment and promotes student learning, but the student response is critical to instructor learning. The use of content-generating SRS offers a platform with which faculty can assess how their teaching is being interpreted in the classroom. This study’s aim was to measure actual student achievement using student essay scores from control and treatment groups over time in a pre-experimental design model.

There is a tendency for students to avoid the risk of speaking up in class for fear of embarrassment and being judged by peers (Caldwell 2007). The culture of anonymity in the treatment courses added a richer element to class discussions as more students participated in the learning community. Students who would not normally engage in the classroom conversation had an outlet in which to engage without being identified, thus creating a more inclusive learning experience. Students had control over whether to self-identify their responses. The ability to remain anonymous was a catalyst for the treatment students to voice their understanding using a digital device; thus, they were able to gain feedback from the instructor and self-assess their learning. Anonymity gives voice to students who would not normally speak up in class.

The engagement with and manipulation of content using a digital device allow students to express a level of learning and understanding that may never be explored using traditional pedagogical practices. While traditional polling software may indicate what percentage of the class understood a particular topic, or an overall level of understanding across segments of the class, content-generating SRS provide a more specific and individualised picture of understanding. Students in the treatment courses responded to formative assessment questions using content-generating applications, which allowed them to provide text, images and drawings to communicate their content understanding. The learning platform was multi-modal, and as the instructors responded to one individual’s response, all students in the class benefited from the feedback.

Pekrun (2006) explained that feedback informs students of the probability of future success, which affects a student’s appraisal of academic control and self-efficacy. The ability to influence the process as the assessment is occurring provides a powerful tool to improve student achievement and, as such, requires instructors to acknowledge the process and the content of material available for students to self-assess. The findings from this study indicate that the use of technology and formative assessment activities offered students more information with which to assess their understanding and their learning, which positively affected their educational achievement. The findings do not pinpoint the exact source of improved self-efficacy, but it is important to consider how students self-assess their confidence in their ability to learn during formative assessment activities. There is a need for faculty in higher education to become knowledge facilitators rather than content deliverers (Laxman 2011). Peer-to-peer collaboration is an aspect of formative assessment; allowing students to work in groups provides another source of self-assessment that may foster improved self-efficacy.

Instructors know how to focus on teaching content knowledge, but learning knowledge is often overlooked (Conley and French 2014). Students are constantly measuring their ability to navigate the learning environment. Adopting a student lens means allowing students a safe and inclusive environment in which to self-assess. Collaborative interaction with both the instructor and peers creates a comfort zone where ideas can be offered, discussed and tested (Ndoye 2017). This is where technology plays a pivotal role. While students are collaborating and self-assessing among peers, using technology to display every student’s understanding of the learning experience provides a broader field against which to self-assess. Each student in the treatment courses had an opportunity to receive anonymous instructor feedback on their individual understanding. In the control courses, only the students who chose to raise their hand and offer a verbal explanation of their understanding received feedback, which was shared amongst the class. Students in the treatment courses therefore had more information available with which to self-assess their learning and understanding of specific assignments. The findings suggest that this expanded content information improved student writing scores.

Understanding that students continuously measure their ability to navigate the learning environment creates the context in which to help students continue towards achievement goals. Formative assessment and instructor feedback affect the learning process at critical intervals. The faculty in this study scaffolded formative assessment activities and corresponding feedback throughout the semester in both the control and treatment courses. The significant findings of improved essay scores over time for the treatment groups speak to improved student achievement with the use of content-generating SRS, but the limited significant findings of the student perception surveys suggest that the students did not recognise their improved writing ability; this aspect of the study’s findings points to an area for faculty professional development.

Educators want students to be successful, but they also want students to know they are successful. If educators offer students more information with the use of technology, then instructors need to be responsible for promoting a positive sense of academic control, self-efficacy and value of the learning environment. Technology affords instructors the ability to hear every student’s voice. Each of those voices deserves a response that encourages both academic success and self-efficacy realisation. Instructor feedback during formative assessment activities should consider how the students are self-assessing and adjust feedback accordingly.

The primary finding of this research is that the use of content-generating SRS improved student achievement. Implementing this technology with formative assessment activities in university instruction can transform the teaching and learning environment. Adding anonymity and continuous formative assessment feedback, provided through faculty intervention, instruction or student collaboration, can lead to improved achievement, adding a high-impact practice for student success at universities.

Limitations

The faculty for this study were not specifically trained for the assessment activities. The instructors were familiar with the use of Google Slides and Socrative, but a formal training and review for the study were not included. Instructors will have different teaching styles that will affect how students perceive the learning environment and any activities offered from the instructor. It would be useful to design a future research study where the researcher actually observed the formative assessment activities and the student responses. A rubric measuring student engagement and instructor feedback could be designed to record the observation data.

The faculty in this study accepted a challenge to alter the way they teach by using technology and formative assessment activities. As each faculty member taught one control course in the traditional lecture format and one treatment course using technology, it would be useful to understand the student perception of each instructor’s teaching style. Future research might have students evaluate instructors; these evaluative data would offer insight as to whether instructor teaching style was having an effect on student perception of the learning environment as well as their achievement.

This study looked at student perception and student achievement. A qualitative study on the instructor experience would also inform university teaching. As this study’s aim was to inform current practice, understanding the instructor lens would benefit innovative curriculum design.

Educational technology is very dynamic; as such, new SRS platforms are constantly evolving. This study considered only two content-generating SRS. Future studies comparing the use of polling SRS to content-generating SRS may offer a better understanding of advantages and limitations to such technology.

Acknowledgements

This article is the dissertation study of the author Mary W. Paul. The additional authors listed were the amazing committee members whose support and wisdom were unrivaled.

References


Bandura, A. (1977) ‘Self-efficacy: toward a unifying theory of behavioral change’, Psychological Review, vol. 84, no. 2, pp. 191–215. doi: 10.1037/0033-295x.84.2.191

Boyle, J. T. & Nicol, D. J. (2003) ‘Using classroom communication systems to support interaction and discussion in large class settings’, Research in Learning Technology, vol. 11, no. 3, pp. 43–57. doi: 10.3402/rlt.v11i3.11284

Buil, I., Catalan, S. & Martínez, E. (2016) ‘Do clickers enhance learning? A control-value theory approach’, Computers & Education, vol. 103, pp. 170–182. doi: 10.1016/j.compedu.2016.10.009

Caldwell, J. E. (2007) ‘Clickers in the large classroom: current research and best-practice tips’, CBE-Life Sciences Education, vol. 6, no. 1, pp. 9–20. doi: 10.1187/cbe.06-12-0205

Carmines, E. & Zeller, R. (1979) Reliability and Validity Assessment (Quantitative Applications in the Social Sciences, no. 07-017), Sage, Newbury Park, CA. doi: 10.4135/9781412985642

Chua, Z. & Jiang, Z. (2006) ‘Effects of anonymity, media richness, and chat-room activeness on online chatting’, Proceedings of 14th European Conference on Information Systems (ECIS), Sweden, vol. 153, pp. 2336–2348, [online] Available at: https://pdfs.semanticscholar.org/5524/b4e0ba77491091af62fd8c47900bafb6402a.pdf

Conley, D. T. & French, E. M. (2014) ‘Student ownership of learning as a key component of college readiness’, American Behavioral Scientist, vol. 58, no. 8, pp. 1018–1034. doi: 10.1177/0002764213515232

Davis, S. (2003) ‘Observations in classrooms using a network of handheld devices’, Journal of Computer Assisted Learning, vol. 19, no. 3, pp. 298–307. doi: 10.1046/j.0266-4909.2003.00031.x

Fies, C. & Marshall, J. (2006) ‘Classroom response systems: a review of the literature’, Journal of Science Education and Technology, vol. 15, no. 1, pp. 101–109. doi: 10.1007/s10956-006-0360-1

Guay, F., Vallerand, R. J. & Blanchard, C. (2000) ‘On the assessment of situational intrinsic and extrinsic motivation: the situational motivation scale (SIMS)’, Motivation and Emotion, vol. 24, no. 3, pp. 175–213. doi: 10.1023/a:1005614228250

Hamari, J., et al., (2016) ‘Challenging games help students learn: an empirical study on engagement, flow and immersion in game-based learning’, Computers in Human Behavior, vol. 54, pp. 170–179. doi: 10.1016/j.chb.2015.07.045

Hoekstra, A. (2008) ‘Vibrant student voices: exploring effects of the use of clickers in large college courses’, Learning, Media and Technology, vol. 33, no. 4, pp. 329–341. doi: 10.1080/17439880802497081

Jackson, S. A. & Marsh, H. W. (1996) ‘Development and validation of a scale to measure optimal experience: the flow state scale’, Journal of Sport and Exercise Psychology, vol. 18, no. 1, pp. 17–35. doi: 10.1123/jsep.18.1.17

Kaleta, R. & Joosten, T. (2007) ‘Student response systems’, Research Bulletin, vol. 10, no. 1, pp. 1–12. doi: 10.4135/9781483346397.n281

Kettanurak, V. N., Ramamurthy, K. & Haseman, W. D. (2001) ‘User attitude as a mediator of learning performance improvement in an interactive multimedia environment: an empirical investigation of the degree of interactivity and learning styles’, International Journal of Human-Computer Studies, vol. 54, no. 4, pp. 541–583. doi: 10.1006/ijhc.2001.0457

Kulasegaram, K. & Rangachari, P. K. (2018) ‘Beyond “formative”: assessments to enrich student learning’, Advances in Physiology Education, vol. 42, no. 1, pp. 5–14. doi: 10.1152/advan.00122.2017

Laxman, K. (2011) ‘A study on the adoption of clickers in higher education’, Australasian Journal of Educational Technology, vol. 27, no. 8, pp. 1291–1303. doi: 10.14742/ajet.894

Myers, C. B., et al., (2004) ‘Emerging online learning environments and student learning: an analysis of faculty perceptions’, Journal of Educational Technology & Society, vol. 7, no. 1, pp. 78–81, [online] Available at: https://www.jstor.org/stable/pdf/jeductechsoci.7.1.78.pdf?casa_token=Bk1fzvlHmrgAAAAA:7sq6ovuEaXKUHhOPgwXCvlGXsgzlUuwm6zbTpYEzfJKU96Ly7zY4vXLVV6mR9Dr7Fr9BPuIR_x1KprYKvGAWMYtLIjrenx0bDVWZ0KgzWxoqiL5PGw

National Research Council. (2000) Inquiry and the National Science Education Standards: A Guide for Teaching and Learning, National Academies Press, Washington, DC.

Ndoye, A. (2017) ‘Peer/self-assessment and student learning’, International Journal of Teaching and Learning in Higher Education, vol. 29, no. 2, pp. 255–269, [online] Available at: https://files.eric.ed.gov/fulltext/EJ1146193.pdf

Noel, D., Stover, S. & McNutt, M. (2015) ‘Student perceptions of engagement using mobile-based polling as an audience response system: implications for leadership studies’, Journal of Leadership Education, vol. 14, no. 3, pp. 53–70. doi: 10.12806/v14/i3/r4

Nunnally, J. C. & Bernstein, I. H. (1994) Psychometric Theory. Series in Psychology, vol. 3, McGraw-Hill, New York, NY.

Pekrun, R. (2000) ‘A social-cognitive, control-value theory of achievement emotions’, in Motivational Psychology of Human Development, ed J. Heckhausen, Elsevier, Oxford, England.

Pekrun, R. (2006) ‘The control-value theory of achievement emotions: assumptions, corollaries, and implications for educational research and practice’, Educational Psychology Review, vol. 18, no. 4, pp. 315–341. doi: 10.1007/s10648-006-9029-9

Pekrun, R., Elliot, A. J. & Maier, M. A. (2006) ‘Achievement goals and discrete achievement emotions: a theoretical model and prospective test’, Journal of Educational Psychology, vol. 98, no. 3, p. 583. doi: 10.1037/0022-0663.98.3.583

Pekrun, R., et al., (2007) ‘The control-value theory of achievement emotions: an integrative approach to emotions in education’, in Emotion in Education, Amsterdam: Academic Press, pp. 13–36. doi: 10.1016/b978-012372545-5/50003-4

Pekrun, R., et al., (2011) ‘Measuring emotions in students’ learning and performance: the Achievement Emotions Questionnaire (AEQ)’, Contemporary Educational Psychology, vol. 36, no. 1, pp. 36–48. doi: 10.1016/j.cedpsych.2010.10.002

Pekrun, R., Götz, T. & Perry, R. P. (2005) Achievement Emotions Questionnaire (AEQ), User’s manual, Unpublished Manuscript, University of Munich, Munich. doi: 10.1037/t21196-000

Perry, R., et al., (2001) ‘Academic control and action control in college students: a longitudinal study of self-regulation’, Journal of Educational Psychology, vol. 93, pp. 776–789. doi: 10.1037/0022-0663.93.4.776

Pintrich, P., et al., (1991) A Manual for the Use of Motivated Strategies for Learning Questionnaire (MSLQ), The University of Michigan, Ann Arbor, MI, [online] Available at: https://files.eric.ed.gov/fulltext/ED338122.pdf

Ryan, R. M. & Deci, E. L. (2000) ‘Intrinsic and extrinsic motivations: classic definitions and new directions’, Contemporary Educational Psychology, vol. 25, no. 1, pp. 54–67. doi: 10.1006/ceps.1999.1020

Spotts, T. H. (1999) ‘Discriminating factors in faculty use of instructional technology in higher education’, Educational Technology & Society, vol. 2, no. 4, pp. 92–99, [online] Available at: https://www.jstor.org/stable/pdf/jeductechsoci.2.4.92.pdf?casa_token=KiQWblZnwv4AAAAA:gGiXxOe25zkKk9QSJyj_b1ftzmXhP2fwmd4UvOlSP-mlnW6OsZkSuAWp9Gc63-uZE1ZvtlMGTeCQQP4nE676ihIa58QrTohXgkr6v4qIY7cDo-y9pg

Weimer, M. (2013) Defining Teaching Effectiveness, [online] Available at: https://www.teachingprofessor.com/topics/for-those-who-teach/defining-teaching-effectiveness/

Wiggins, G. P. (1993) Assessing Student Performance: Exploring the Purpose and Limits of Testing, Jossey-Bass, San Francisco, CA.

Yoon, C. & Rolland, E. (2012) ‘Knowledge-sharing in virtual communities: familiarity, anonymity and self-determination theory’, Behavior & Information Technology, vol. 31, no. 11, pp. 1133–1143. doi: 10.1080/0144929x.2012.702355