Many executive courses and capacity-building programs don’t employ tests, exams or formal assignments, which makes it harder to monitor progress and success during the course. We’ve learnt several lessons about equipping students with what they need and instilling confidence in their output. Here are five key learnings from our experience of building a rigorous measurement process.
- You need to know what you’re measuring.
While this sounds painfully obvious, it’s easy to get wrong. For example, what does engagement mean to you, and how do you quantify it in both a live and a digital format?
You need to ask: what is important to you? Are high pass and completion rates more critical than the volume of students who complete the course? High barriers to entry can increase your completion rates, but the total number of students trained may then be lower.
A crucial trap to avoid: you end up measuring students’ enjoyment and perceptions of learning rather than their performance. It feels rewarding if students say they learned and grew during the course, but we need to test this against a real-world yardstick.
You’ll also need to take care not to conflate behaviours with measures. A student who’s an introvert may never appear to be engaged in the course if your standard for engagement is how often someone comments, talks or asks questions.
- Don’t fall into the trap of believing what your students tell you.
While questionnaires and post-course surveys are an integral part of online education, you need to keep in mind the different biases that can play a role: confirmation bias, sampling bias and interviewer bias.
As an example, students on our course identify weekends as their busiest time. The data tells a very different story: weekends are in fact their least busy time. However, an hour on a Saturday feels like a bigger sacrifice than three hours during the week, which is why they remember the hour spent on a Saturday afternoon.
Another example is feedback from Zoom calls, where facilitators or subject-matter experts report on the sessions they perceived as interactive. What they fail to acknowledge is that they’re deeply embedded in that call, busy answering questions, speaking and listening. The issue here is that they can mistakenly project their interaction with a limited number of students onto the entire student base.
Surveys can also offer biased results, as they usually fail to gain responses from disengaged students or those who have dropped off.
While it’s important to gauge student sentiment as it tells you what they feel, it has to be validated against objective data.
- Factor in the effect of external influences.
Unfortunately, you have no real control over these elements, as they cover the students’ work–life balance, their inherent personalities, their academic strength, and their motivation.
All of these external factors can affect their drive and the effort they place into their learning. When analysing surveys or data, it’s essential to consider the influence of these external factors.
- Focus on the data.
The guideline here: consider what students do far more than what students say. The advantage of digital learning is that the underlying data gives insight into the actual behaviours of students.
The potential measures are endless; here are just a few: the length of time spent consuming content, the number of attempts on a quiz, the days and times of access, performance, and access patterns over the length of the course. These provide tremendous insight into what students are actually doing.
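As a minimal sketch of what working with this kind of behavioural data can look like, the snippet below aggregates a hypothetical platform event log into a few of the measures listed above (time on content, quiz attempts, active days). The event format and field names are illustrative assumptions, not any particular platform’s export.

```python
# Illustrative only: the event log structure and field names below are
# hypothetical; adapt them to whatever your learning platform exports.
from collections import defaultdict
from datetime import datetime

events = [
    # (student_id, event_type, ISO timestamp, detail)
    ("s1", "content_view", "2024-03-02T10:00", {"minutes": 25}),
    ("s1", "quiz_attempt", "2024-03-02T10:30", {"quiz": "q1", "score": 60}),
    ("s1", "quiz_attempt", "2024-03-03T09:00", {"quiz": "q1", "score": 85}),
    ("s2", "content_view", "2024-03-06T20:00", {"minutes": 5}),
]

def behaviour_summary(events):
    """Aggregate raw events into per-student behavioural measures."""
    raw = defaultdict(lambda: {"minutes": 0,
                               "quiz_attempts": defaultdict(int),
                               "active_days": set()})
    for student, etype, ts, detail in events:
        record = raw[student]
        # Each distinct calendar day with any activity counts as an active day.
        record["active_days"].add(datetime.fromisoformat(ts).date())
        if etype == "content_view":
            record["minutes"] += detail["minutes"]
        elif etype == "quiz_attempt":
            record["quiz_attempts"][detail["quiz"]] += 1
    return {student: {"minutes": r["minutes"],
                      "attempts_per_quiz": dict(r["quiz_attempts"]),
                      "active_days": len(r["active_days"])}
            for student, r in raw.items()}

print(behaviour_summary(events))
```

A summary like this makes it easy to spot, for example, a student who reports high engagement in a survey but logs only a few minutes of content time, which is exactly the kind of check against objective data the previous points argue for.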
- Invest in valuable, in-depth independent research.
This recommendation is linked to the previous point, as you need access to quality information about your course and the value you’re providing your students.
Objective qualitative research can give excellent insights into the student experience and the performance of the course. This research can include observing students engaging with the content, in-depth interviews conducted by independent specialists, and observers on Zoom calls.
This research will need to be done by specialists outside of your company so that there’s no bias in the results.
Following up with students at regular intervals after the course can give excellent insights into the long-term impact of the learning.
The bottom line: you can’t rely on feedback from students to accurately test the success of your digital course; students simply don’t know what’s best for their own learning. Nor is it enough to look only at completion or pass rates.
Start by being clear about what is essential and how to measure it. Become a data-driven organisation that identifies patterns, issues and opportunities, and combine this with independent research. The biggest takeaway: measurement should be an iterative process, continually refined and adapted to the needs of the students and the learning environment.
Gateway forms part of the Digital Frontiers Group, whose primary professional development activities include training and facilitating associations of professionals. Our training activities consist of the design, creation, maintenance and delivery of online executive education courses and seminars through a built-for-purpose digital campus.
Gateway helps organisations overcome the challenge of building capacity in low- and middle-income countries, using a blended approach of online and face-to-face learning experiences. Gateway can help your organisation design, build and deliver a course, and offers access to our marketplace of courses.