Measuring training effectiveness can intimidate everyday practitioners. Our expertise is in designing and delivering learning experiences, not pivot tables! Plus, data is slippery, and some commonly cited training metrics lead teams astray.
Don’t worry! Training metrics don’t have to be complicated, overly mathematical, or reliant on fancy systems. Once you understand the basics and collect a few months’ worth of data, measurement will feel second nature.
That said, it is important not to take data at face value. This article will cover:
- What training metrics are
- How to choose them
- Common training evaluation metrics
- Common pitfalls about each metric and ways to avoid them
What are training metrics?
Training metrics are data points collected over time to help teams measure the success of their training. They help trainers evaluate programming, make changes, and prove value.
Metric Categories
Metrics fall into endless categories depending on how they are tracked, why, and what they tell us. However, these five broad categories can help you determine whether a measurement is worth monitoring.
- Health metrics establish a baseline and raise alarms when they veer outside anticipated ranges.
- Actionable metrics inform decisions and identify problems.
- Goal metrics (KPIs / OKRs) are numbers we hold ourselves accountable to achieve.
- Informational metrics provide interesting but non-essential data that may distract us from more meaningful measurements.
- Vanity metrics make us feel good to report but don’t actually speak to success or help us improve.
Meaningful training metrics fall into the first three categories. They allow trainers to target improvements, calculate return on investment (ROI), and report value to leadership and learners.
Leading and Lagging Indicators
You can also categorize metrics as leading or lagging indicators.
- Leading indicators help you make predictions about future outcomes.
- Lagging indicators measure outcomes that have already occurred.
For example, let’s say you build a how-to course. Your goal is to increase product usage. Since you hypothesize that learners who complete the course will use the product more, you assume that higher course completion will lead to higher product use. You track course completion as a leading indicator.
A few weeks later, you measure product usage for those learners. This is a lagging indicator, helping you determine whether the course had the impact you wanted.
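To make the pairing concrete, a leading/lagging comparison can be as simple as checking whether course completers (leading indicator) show higher product usage (lagging indicator) than non-completers. The records and field names below are invented for illustration, not pulled from any specific platform:

```python
# Hypothetical learner records: course completion is the leading indicator,
# weekly product sessions (measured weeks later) is the lagging indicator.
learners = [
    {"completed_course": True,  "weekly_sessions": 9},
    {"completed_course": True,  "weekly_sessions": 7},
    {"completed_course": False, "weekly_sessions": 3},
    {"completed_course": False, "weekly_sessions": 5},
]

def avg_sessions(records, completed):
    """Average weekly product sessions for completers or non-completers."""
    group = [r["weekly_sessions"] for r in records if r["completed_course"] == completed]
    return sum(group) / len(group)

print("Completers:", avg_sessions(learners, True))        # 8.0
print("Non-completers:", avg_sessions(learners, False))   # 4.0
```

A gap like this doesn't prove the course caused the extra usage, but it tells you whether your hypothesis is worth investigating further.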
How do I choose metrics?
1. First, identify what you are able to measure. Sometimes, information is unavailable or inaccessible. Determine what data you can safely, legally, and quickly collect.
2. Then, consider what you want to measure within those constraints. Do you want to test hypotheses? Are there different training formats or audiences you need to consider?
3. Then, gather a mix of leading and lagging indicators. Doing so will help you determine:
- Whether your programming is reaching and engaging the right audiences.
- Whether your programming has the intended impact.
4. Additionally, prioritize Health, Actionable, and Goal metrics. To separate these from the rest, ask yourself: Does this information help me:
- Make a decision?
- Prove value?
- Assess health?
5. Finally, think critically about your metrics. Consider all of the factors that might impact those numbers. This will prevent you from making decisions based on misleading data.
5 common training success metrics (and things to watch out for!)
Training metric 1: Enrollment/Access
Keep an eye on the number of learners who enroll in courses or access training content. This can be a great health metric to help you track your program’s reach. It can also provide a baseline to compare against other indicators (like abandonment or completion rates).
Caveat
We’re wired to think bigger is better, so we want to say, “Wow! Enrollment is going up! That’s great!” But that may not be true. Perhaps enrollment is mandatory, which frustrates learners. Or maybe your course is reaching a large audience, but it isn’t the audience you want.
Measure better
A great way to combat this is to track the percentage of intended learners enrolling in your course or accessing content. Quantify the audience for whom the training was built, then track the percentage of that audience that enrolls.
Doing so will help you determine whether your intended audience:
- Knows about available training
- Finds it compelling enough to enroll
For example, imagine you create a course to support your Customer Service team. You track enrollment at 400 learners. That number looks impressive, but you dig in and discover that only 2% of your Customer Service team is enrolled! Almost every other enrollee hails from the Sales department.
Now you have actionable information: you know your course is not reaching your audience but is compelling to a different role. You can begin to explore the discrepancy.
You could also set a goal metric for course completion. Lay out the percentage of intended learners you want to complete your course by a set date. Then, use percent enrollment to help you achieve that goal.
For instance, if you set a goal for 75% of Sales team members to complete a course within one month and enrollment hasn’t reached half of the team by week two, you can take early action to understand the reluctance. Perhaps the training is poorly marketed, or managers need to make more space for learning.
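The calculation behind percent-of-intended-audience enrollment is simple division; here is a minimal sketch with made-up headcounts (a 400-person intended team of which 8 have enrolled, matching a 2% figure):

```python
def pct_of_intended_enrolled(enrolled_from_audience, intended_audience_size):
    """Share of the intended audience that has enrolled, as a percentage."""
    if intended_audience_size == 0:
        return 0.0
    return 100 * enrolled_from_audience / intended_audience_size

# Hypothetical: only 8 of a 400-person Customer Service team enrolled,
# even though total enrollment (from all departments) looks impressive.
print(pct_of_intended_enrolled(8, 400))   # 2.0
```

Tracking this percentage instead of raw enrollment keeps a flood of out-of-audience sign-ups from masking a reach problem.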
Training metric 2: Completion
How many learners who enrolled in courses, joined classes, or accessed content participated until the end? You might track this as a percentage of enrollees (recommended) or as an overall number.
Caveat
Completion rates matter. But they don’t necessarily mean what you think they mean. For example, if you track completion rates over time, you may see that rates are much lower at busy times of the year. Or perhaps low completion rates coincide with a new course that needs improvement.
Measure better
Measure completion as a percentage of enrollment (so that enrollment dips are accounted for). Then, treat completion rate as a health metric: establish a baseline range, and as long as rates stay within it, simply watch them. If and when they shift beyond the expected range, run some comparisons to understand why.
Look at:
- Learner feedback – What are learners saying in their assessments? Was the training irrelevant or too complex? Was the content challenging to access?
- Organizational context – Did your organization restructure or make a change impacting learners’ time and energy? Are you using a new platform that may be tough for learners to navigate?
- Timing – Is it a major holiday? End of the quarter? If the drop-off is a one-time event rather than a trend, consider that low completion rates may have nothing to do with your programming.
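Treating completion rate as a health metric can be sketched in a few lines: compute the rate against enrollment and flag it only when it leaves an expected range. The 60–85% baseline range below is invented for illustration; yours should come from your own historical data:

```python
def completion_rate(completed, enrolled):
    """Completions as a percentage of enrollments."""
    return 100 * completed / enrolled if enrolled else 0.0

def health_check(rate, low=60.0, high=85.0):
    """Flag rates outside an illustrative baseline range."""
    if rate < low:
        return "below range: check feedback, org context, and timing"
    if rate > high:
        return "above range: investigate what changed"
    return "within baseline"

rate = completion_rate(210, 300)   # 70.0
print(rate, "->", health_check(rate))
```

Because the rate is computed against enrollment, a seasonal dip in sign-ups won’t read as a completion problem.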
Training metric 3: Time spent on training
The amount of time learners spend participating in courses or interacting with content is a valuable number to watch.
Caveat
Many trainers use time-based metrics as a proxy for learner engagement. This makes intuitive sense: people spend time on things they care about.
However, time spent doesn’t necessarily correlate to engagement when it comes to training. Perhaps learners spend more time with a piece of content because the information is difficult to find or understand. Perhaps they repeat courses because they fail irrelevant knowledge checks.
Furthermore, when pressed, most trainers want learners to get out there and conquer their jobs. They want to deliver efficient, impactful training, not time-consuming training.
Measure better
Don’t make assumptions about what changes in time spent on training indicate. It may mean that learners love your courses and interact with them! Or it may mean something less sunny.
Either way, seek qualitative feedback to help you understand the drivers behind time-based metrics.
Which brings us to our next recommended metric…
Training metric 4: Learner satisfaction
Whether you ask learners to rank training on a five-point scale, write comments, or mark a simple thumbs up/down at the end of a course, gauging learner satisfaction is critical. It’s also actionable. If a pattern emerges in which learners tell you about a problem or a need, you have data-backed decision-making power. If you make a feedback-driven change, further feedback can help you determine whether the change was effective.
Caveat
Subjective data can be hard to quantify. It can be even harder to consider objectively.
Measure better
Wherever possible, attempt to turn feedback into quantitative data.
- Ask learners to rank training.
- Break comments into sentiment patterns and topic categories
But don’t let the need for objectivity overshadow valuable insights. Qualitative data can help you understand variability in other metrics (course enrollment, completion, etc.). And even imperfect data can provide guidance. If dealing with surveys and polls is too much, reach out to a few learners each quarter to get a sense of how your training is landing.
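A lightweight way to turn comments into counts is simple keyword bucketing. This is a rough sketch, not a sentiment-analysis tool: the topic keywords below are invented, and a real taxonomy should come from reading your own feedback first:

```python
from collections import Counter

# Illustrative topic keywords; replace with terms drawn from real comments.
TOPICS = {
    "relevance": ["irrelevant", "relevant", "useful"],
    "difficulty": ["hard", "confusing", "complex"],
    "access": ["login", "load", "broken", "access"],
}

def bucket_comments(comments):
    """Count how many comments mention each topic (a comment can hit several)."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for topic, keywords in TOPICS.items():
            if any(keyword in text for keyword in keywords):
                counts[topic] += 1
    return counts

comments = [
    "The videos were confusing and hard to follow",
    "Couldn't access module 2, the page was broken",
    "Really useful examples!",
]
print(bucket_comments(comments))
```

Even crude counts like these make it possible to say “a third of comments mention access problems” instead of “some people complained.”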
Training metric 5: Assessment pass/fail rates
Do you use tests or knowledge checks as part of your training program? If so, consider tracking learners’ performance on those tests.
Caveat
Pass rates show how many learners know specific topics. They don’t capture whether audiences learned that knowledge from the training. Perhaps a team’s pass rates were high because learners already knew the information in the knowledge check. If that’s the case, the training may have felt like a waste of time, leading to future training disengagement. Or perhaps pass rates were low because the assessment was filled with “gotchas” about irrelevant details in the training, another trigger for future disengagement.
Measure better
If you want to track pass rates as a training metric, assess learners before they complete the training in question. As a time-conscious trainer, you may hesitate to ask learners to complete an additional assessment. However, pre-training knowledge checks serve your learners by:
- Establishing a baseline that allows you to determine whether training truly impacts their knowledge
- Highlighting knowledge gaps that help you build targeted, future training
- Allowing them to test out of learning programs when they already have a solid grasp of the content
Pre-training assessments ensure you don’t waste learners’ time on unneeded training. They also – when combined with post-training assessments – help measure training effectiveness.
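With pre- and post-assessment scores in hand, the knowledge gain becomes directly measurable. A minimal sketch, using invented percent-correct scores for four learners:

```python
def average_gain(pre_scores, post_scores):
    """Mean per-learner score improvement from pre- to post-assessment."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical percent-correct scores for the same four learners.
pre = [55, 60, 80, 40]
post = [85, 80, 85, 70]
print(average_gain(pre, post))   # 21.25
```

A high pre-training score (like the 80 above) is also a signal that a learner may be a candidate to test out entirely.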
Think critically about training metrics
Each of the training metrics above is valuable. They can also backfire. No matter which measures you choose to track, think critically about how those numbers are derived and whether they truly tell the story you think they tell. Remember to ask yourself what each metric helps you decide, prove, or assess, and consider letting go of vanity metrics and distractions.
At the end of the day, always tie your training metrics back to your Why: Why are you creating training content? What outcomes do you want to show?
If you’re looking for more ways to measure impact meaningfully, Learn To Win’s ROI Guide and ROI Calculator break down various metrics to track training effectiveness and prove value.