In a conventional classroom setting, instructors rely on perceived engagement to adapt their teaching strategies. With some experience, they become adept at “reading the room,” but what each instructor understands by “engagement” will vary. When teaching online, engagement is assessed in less obvious ways, and it becomes important to be more precise about what is being assessed. In what follows, I will discuss which engagement parameters are measurable and give examples from three studies.
What is measurable?
The nature of engagement differs depending on the learning activity. Recent literature offers the following categories, which can help narrow down what can be measured:
- Affective engagement: emotional attitude; for example, being interested in a topic and enjoying learning about it
- Academic engagement: getting along with teachers, participating in class, spending time on tasks, and not skipping classes
- Behavioral engagement: participation in a broader sense, including extra-curricular activities, staying focused, submitting assigned tasks, and following the instructor’s directions
- Cognitive engagement: the thoughtfulness and willingness to exert the effort necessary to understand complex ideas and master difficult skills; it involves attention, focus, memory, and creative thinking
- Emotional engagement: positive and negative reactions to teachers, classmates, and content
- Psychological engagement: the sense of belonging and relationships with teachers and peers
Of these, affective, behavioral, and cognitive engagement are the most frequently investigated. Engagement measures can also be divided into (1) those internal to the subject, typically assessed through self-reporting and interviews, and (2) those that are external and observable, such as performance metrics based on interaction with technology, perceptible facial features, postures, gestures, speech, actions, and time spent on task.
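To make the external, observable kind of measure concrete, here is a minimal sketch of how two common metrics, login counts and time on task, might be derived from an LMS event log. The event names and data are invented for illustration; real LMS exports vary widely in format.

```python
from datetime import datetime

# Hypothetical LMS event log: (student_id, event_type, ISO timestamp).
events = [
    ("s1", "login", "2024-03-01T09:00:00"),
    ("s1", "task_start", "2024-03-01T09:05:00"),
    ("s1", "task_end", "2024-03-01T09:25:00"),
    ("s2", "login", "2024-03-01T10:00:00"),
    ("s2", "login", "2024-03-02T10:00:00"),
]

def engagement_summary(events):
    """Count logins and sum time-on-task (in minutes) per student."""
    logins, task_minutes, open_task = {}, {}, {}
    for student, kind, ts in events:
        t = datetime.fromisoformat(ts)
        if kind == "login":
            logins[student] = logins.get(student, 0) + 1
        elif kind == "task_start":
            open_task[student] = t  # remember when the task began
        elif kind == "task_end" and student in open_task:
            delta = (t - open_task.pop(student)).total_seconds() / 60
            task_minutes[student] = task_minutes.get(student, 0) + delta
    return logins, task_minutes

logins, minutes = engagement_summary(events)
print(logins)   # {'s1': 1, 's2': 2}
print(minutes)  # {'s1': 20.0}
```

Note that such metrics capture behavior, not comprehension: twenty minutes “on task” may mean deep focus or an open browser tab, which is why the studies below pair these numbers with questionnaires and qualitative methods.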
Let’s look at three studies and examine how they’ve measured engagement.
In a paper entitled “Beyond engagement: Learning from Students as Partners (SaP) in curriculum and assessment,” Love and Crough of Griffith University in Australia present a study in which students were involved as partners in curriculum design. The SaP approach promotes active student engagement by involving students in decision-making processes and inviting them to contribute their perspectives and ideas. What was measured? The measurement criterion was “partnership,” or student participation in the creation of content, which is believed to offer certain advantages: empowerment and agency, improved collaboration, and greater focus on a subject in which students are now active stakeholders rather than mere observers. There are also problems with this approach, the major one being an inherent power imbalance between students and instructors, who are simply not equal partners on a number of important levels. How was engagement measured? Qualitatively. The results of the study are mixed, and it appears that the authors learned more about how students self-organize in online environments than the students did about the subjects in which they were partners.
Our next example comes from four US-based researchers, in a study entitled “Asynchronous video and the development of instructor social presence and student engagement.” The problem Collins et al. (2019) confront is the rapidly growing enrollment in online learning (circa 2019), accompanied by equally high attrition rates. For them, the question is how to keep students engaged so that they achieve better outcomes while reducing the feelings of isolation that frequently accompany online learning. What was measured? Here, engagement takes the form of instructor presence (instructor social presence). For these researchers, instructor presence and social interaction between learners and instructors are the most important factors in student engagement. More specifically, they looked at two kinds of instructor communication: text-based posts and asynchronous videos, each posted twice weekly. How was it measured? This was a quantitative study of online video engagement. The researchers measured the length of discussion forum posts as an indicator of engagement and used a questionnaire to evaluate the feeling of social presence. Since they were trying to combat attrition, or alienation, they reasoned that time spent on the discussion task prompted by video or by text would indicate the potential of each modality to improve engagement. They found that students responded significantly more to the text-based communication than to the video-based one, though both methods elicited some form of response. This is to be expected given the modality of the communications. Their findings do not necessarily indicate engagement as a measure of cognitive involvement, but rather as a measure of time spent on task.
The last example is from a study of online learning analytics. In “Beyond engagement analytics: Which online mixed-data factors predict student learning outcomes?,” the author attempts to determine whether academic success can be correlated with student identities. What was measured? More specifically, the author looked at age, gender, and culture (by which the author means ethnic background), and the degree to which these were predictors of students’ success in online learning. Successful engagement was defined by how actively students participated in forum postings, how well they scored in assigned activities, and their overall interactions with the LMS. How was it measured? This mixed-methods study, combining qualitative and quantitative methods, sought to identify, through rigorous statistical analysis of a fairly large sample of students, whether the chosen engagement analytics could serve as predictors of online learning outcomes. The author found that these analytic factors were not significant predictors, with the exception of the number of logins. However, other secondary factors emerged as important indicators, and those were of interest to me as an instructional designer. In particular, students who were comfortable with the interface and could navigate the course with ease did much better. This points to the value of an intuitive, well-tested design that minimizes distractions, along with a clean graphical interface. The takeaway is that comfort, predictability, uniformity, and, in general, the aesthetics of design are highly valued by students. Another finding is that engaging students in online quizzing, understood as low-stakes knowledge checks rather than as quiz scores, is an excellent predictor of success. Students who did well overall did not necessarily achieve the highest quiz scores, but they learned much in the process of engaging with the knowledge checks.
Lastly, the study suggests that the continuous stimulation of recall comes from the presence of the instructor, whose role is to encourage students to return online more frequently and to complete more knowledge checks and quizzes. For this reason, the number of logins correlates with success.
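To illustrate the kind of calculation behind a claim like “the number of logins correlates with success,” here is a minimal sketch computing a Pearson correlation between login counts and final grades. The data are invented for illustration and do not come from Strang’s study, which used a larger sample and more rigorous statistical methods.

```python
import math

# Hypothetical data: LMS logins and final grade for eight students.
logins = [5, 12, 8, 20, 3, 15, 9, 18]
grades = [60, 70, 58, 88, 50, 72, 68, 80]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(logins, grades)
print(round(r, 2))  # 0.96: a strong positive correlation for this invented sample
```

Of course, correlation is not causation: as the study itself notes, frequent logins may reflect instructor encouragement and well-designed knowledge checks rather than logins producing success on their own.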
These are some of the current methods for measuring engagement. As for the future of engagement measurement, automated methods and AI will play an increasingly important role in informing the creation, design, and facilitation of course content.
References
Collins, K., Groff, S., Mathena, C., & Kupczynski, L. (2019). Asynchronous video and the development of instructor social presence and student engagement. Turkish Online Journal of Distance Education, 20(1), 53-70.
Love, C., & Crough, J. (2019). Beyond engagement: Learning from Students as Partners in curriculum and assessment. In Proceedings of the 3rd EuroSoTL Conference (pp. 296-303).
Strang, K. D. (2017). Beyond engagement analytics: Which online mixed-data factors predict student learning outcomes? Education and Information Technologies, 22(3), 917-937.
Image credit: Anna Shvets via Pexels
Theo Stojanov, PhD, is a culture analyst, media producer, and researcher based in Montreal. His doctoral thesis examines the sociology of cultural production. His research involves a critical examination of the creative industries, media production practices, policies, and people.