McGill Teaching and Academic Programs

What is automatic engagement detection for online learning?

Have you ever set out to assess your students' engagement online, only to find that you were unsure how to go about it? Perhaps you want to evaluate the success of a course you have created, or perhaps you have felt a disconnect with your students and want to put your finger on the problem by looking at class analytics. A peek at LMS stats might be a good starting place, but will it provide enough information?

In what follows, I will go over some of the methods used to detect engagement in online learning and focus on recent developments.

How is engagement measured?

The detection and measurement of engagement can be classified into three principal methodological approaches. Manual methods are the earliest, and consist of observation, self-reporting, questionnaires, and focus groups, mostly qualitative in nature. These are also the least common when it comes to educational technology. Semi-automatic methods involve the analysis of student performance based on length of posts, the time it takes to complete tasks, and generally, activity logged by the LMS. The automatic method is the most recent, and relies on big data collection and the building of complex statistical models that could combine log data from the LMS with various sensors, such as wearables, microphones, and cameras. This last method brings us into the world of AI-mediated engagement analysis, and we can expect more of that in the near future.
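To make the semi-automatic approach concrete, here is a minimal sketch of the kind of aggregation it involves. Everything in it is illustrative: the event names, field layout, and numbers are invented for the example and do not come from any real LMS export.

```python
# Hypothetical LMS activity log. Each record is (student, event, detail),
# where `detail` is post length in characters for forum posts, or minutes
# on task for completed activities. All values are made up for illustration.
log = [
    ("alice", "forum_post", 240),
    ("alice", "quiz_done", 12),
    ("bob", "forum_post", 35),
]


def engagement_summary(records):
    """Aggregate per-student event counts and average forum-post length."""
    summary = {}
    for student, event, detail in records:
        s = summary.setdefault(student, {"events": 0, "post_chars": []})
        s["events"] += 1
        if event == "forum_post":
            s["post_chars"].append(detail)
    return {
        name: {
            "events": s["events"],
            "avg_post_len": (
                sum(s["post_chars"]) / len(s["post_chars"])
                if s["post_chars"]
                else 0
            ),
        }
        for name, s in summary.items()
    }
```

Even a toy summary like this illustrates the limits discussed below: it captures activity volume, but says nothing about affect or attention.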

A diagram showing the three methodological approaches to measuring class engagement: manual, semi-automatic, and automatic.

There may be several reasons to measure engagement, for instance, to account for challenges in learning outcomes, to assess interest in a topic, to estimate participation and enthusiasm, or to get a sense of the rapport between a student and their instructors or peers. The most frequent reason given to measure engagement, however, is to improve the retention of students in online environments.

Automated engagement detection

Recent studies (e.g., Dewan et al., 2019) chart a course towards an increased use of sensors, specifically cameras, which they hope will one day help to develop an AI that can detect “perceived engagement” in a similar way that a teacher can perceive the engagement of students in a classroom. The implications are that, just as a teacher readjusts their instruction methods based on perceived engagement, so will a system. Consequently, based on what it perceives, a system may adjust its instructional strategies to offer personalized instructional design, ensuring optimal learner experience.

A composite image of a man's face overlaid with nodes and lines to suggest the man's features are being examined by a program. Arranged around the man are words describing his potential emotional state, such as bored, interested, or engaged.

One of the ways proposed to detect engagement in an automated way is through computer vision-based methods. Cameras record actions, which are then analyzed and labelled by humans, who watch the video feed and code the facial expressions as “bored,” “interested,” “neutral,” “confused,” “engaged,” and “frustrated,” and in this way train an AI to recognize similar actions in the future. There are a number of potential advantages to this, namely the potential for personalized instruction. This would take the idea of universal design to a completely new level by opening up the possibility for generating inclusive spaces and experiences that accommodate diverse individuals of all abilities. There are also disadvantages. For instance, it is difficult to interpret facial expressions consistently from one person to the next. It is also difficult to establish a correlation between what this method perceives as “engaged” and academic performance, because subjects who may appear disengaged may in fact do rather well in a class. Furthermore, there is a cultural aspect: engagement looks different across cultures, age groups, and all sorts of demographic variables. Frequently, this kind of research insists on describing itself as non-intrusive, but it leaves out the definition of what that means, which can be problematic from a privacy perspective.
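The final step of that pipeline can be sketched very roughly: once humans have coded samples with labels such as “engaged” or “bored,” a simple classifier can assign the nearest human-coded label to a new sample. Everything below is a deliberately toy illustration; the two numeric “features” are invented stand-ins, whereas real systems extract facial landmarks and expression features from video frames.

```python
import math

# Toy human-labelled samples. Each is ((feature_1, feature_2), label),
# where the two features are made-up stand-ins for quantities a real
# vision system might extract (e.g. gaze direction, head movement).
labeled = [
    ((0.9, 0.1), "engaged"),
    ((0.8, 0.2), "engaged"),
    ((0.2, 0.7), "bored"),
    ((0.3, 0.8), "bored"),
]


def predict(features):
    """1-nearest-neighbour: return the label of the closest human-coded sample."""
    return min(
        labeled,
        key=lambda sample: math.dist(sample[0], features),
    )[1]
```

The sketch also hints at why the disadvantages above matter: the classifier can only reproduce the human coders' judgments, with all their individual and cultural variability baked in.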

Data ownership and digital sovereignty

Course creators, instructional designers, and instructors frequently find themselves underequipped to deal with data analytics. The tools currently at their disposal rarely permit a high-resolution impression of student engagement. Accessing this information, especially with third-party tools, complicates things even further. The main reason for this is copyright, since data and analytics are in most cases proprietary information. Likewise, while an LMS provides some basic information for each student, or for a particular video, it is much more difficult to see the big picture, to view how all components work together across time and for different groups of students. In these cases, we have found that the tried and tested methods of focus groups and questionnaires provide a much more immediate and fairly accurate impression of the kinds of engagement students are experiencing. Although qualitative studies offer a much more low-resolution approach, their simplicity and efficacy cannot be overstated when well designed.

In a more general sense, the lack of transparency and access to data generated by the students should prompt questions about who this data actually belongs to – the university that pays for the technologies, the technology providers, or the students themselves. The conversation about data sovereignty and privacy is critical. Researchers who pursue automatic engagement detection methods are onto something when they suggest that large-scale multi-dimensional analysis of engagement metrics could lead to more meaningful interactions with technology. Yet, I invite you to decide whether you agree that these methods can have a positive impact on the work of education or whether they just happen to be used in an educational environment. In the context of Ed Tech, the majority of studies related to engagement have been quantitative. They can be highly informative in showing whether a certain instructional strategy or a tool is effective or not, but there remain many other student engagement indicators that are more difficult to measure this way. There is much room to study the affective and cognitive domains of engagement, and while it is arguably more difficult to do so, measurements can take the form of well-designed surveys and ethnographic observations, and other forms of qualitative data can help capture the less observable aspects of engagement, such as affect, emotion, and behaviour.

With online learning, we are at the nexus of technology and student engagement, yet as researchers show, there is no guarantee of student engagement as a result of using technology (Bond et al., 2020). Tech can amplify but not replace teaching and teacher engagement. Teacher engagement is a topic in its own right. We frequently speak of attrition rates and student alienation, but the conversation is not complete until it factors in instructor engagement and alienation as well.

In some ways, describing what makes good engagement is akin to asking what makes a good movie or a good book. Engagement is the ingredient added to the exchange of information that brings it into the realm of unique experiences. We may think of engagement in terms of economies of scale, and apply statistics and large-scale analytical models, but for students, it is an individual and very personal experience, just like watching a show or reading a novel.

References:

Bond, M., Buntins, K., Bedenlier, S., Zawacki-Richter, O., & Kerres, M. (2020). Mapping research in student engagement and educational technology in higher education: A systematic evidence map. International Journal of Educational Technology in Higher Education, 17(2), 1-30.

Dewan, M. A. A., Murshed, M., & Lin, F. (2019). Engagement detection in online learning: A review. Smart Learning Environments, 6(1), 1-20.

Image credits:

Main image by peoplecreations via Freepik
In-text images by the author

Theo Stojanov, PhD, is a culture analyst, media producer, and researcher based in Montreal. His doctoral thesis examines the sociology of cultural production. His research involves a critical examination of the creative industries, media production practices, policies, and people.
