Have you ever wondered if conducting Level 1 evaluations is worth the effort? Or if you should stop using them altogether? If you've had these thoughts, you're not alone. According to a 2019 Association for Talent Development (ATD) research study, 83% of organizations evaluate some learning programs at Level 1. Yet only 35% view the data they collect as having high or very high value.
So, is there something you can do to start getting more valuable results from your Level 1 evaluations? The answer is a resounding "Yes!" Start by including predictive questions in your Level 1s. Predictive questions forecast the results a learning program is likely to achieve. They also begin to answer the question business executives and L&D professionals both want answered: "Is this program delivering value?" These predictions aren't proof that specific program outcomes are inevitable but rather a forecast that certain results are likely.
In this highly informative, thought-provoking session, participants will learn how to create three predictive measures: a Level 2 learning gain score predictive metric, a Level 3 training transfer predictive metric, and a Level 4 business results predictive metric.
Ken Phillips is the founder and CEO of Phillips Associates and the creator and chief architect of the Predictive Learning Analytics™ (PLA) learning evaluation methodology. He has over 30 years of experience designing learning instruments and assessments. He also has had more than a dozen of them published.
He regularly speaks to Association for Talent Development (ATD) groups, university classes, and corporate learning and development groups. Since 2008, he has spoken at the ATD International Conference on the measurement and evaluation of learning topics. Since 2013, he has also presented at the annual Training Conference and Expo on similar issues.
Before pursuing a Ph.D. in organizational behavior and educational administration at Northwestern University, Ken held management positions with two colleges and two national corporations. In addition, he has written articles for TD magazine, Training Industry Magazine, and Training Today magazine. He also is a contributing author to five books in the L&D field and the author of the recently published ATD TD at Work publication titled Evaluating Learning with Predictive Learning Analytics.
As a pilot pioneer, Ken earned the Certified Professional in Learning and Performance (CPLP®) (now CPTD®) credential from ATD in 2006 and has recertified five times, most recently in 2021.
This webinar is sponsored by HRDQstore.com and is based upon research from our published training tools. For more than 40 years HRDQ has been a provider of research-based training resources for classroom, virtual, and online soft-skills training. We offer learning resources to help retain employees and clients, make better decisions, improve performance, and much more.
Learn more at HRDQstore.com >>
0:03
Hi, everyone, and welcome to today’s webinar, Add Muscle to Your Level One Evaluations With Predictive Questions, hosted by HRDQ-U and presented by Ken Phillips.
0:14
My name is Sarah, and I will moderate today's webinar. The webinar will last around one hour. If you have any questions or comments, please type them into the question box on your GoToWebinar control panel. We'll have markers throughout today's session where we'll open it up to answer your questions, so please do type those in as you think of them, as they come your way.
0:37
And make sure that you download the handouts for today’s session, as well. You can find those under the “Handouts” drop-down on your control panel. You’ll find a copy of the slides, as well as an article.
0:49
that you can use to read along with today's session.
0:55
And today's webinar is sponsored by HRDQstore, and it's based upon research from our published training tools. For more than 40 years, HRDQ has been a provider of research-based training resources for classroom, virtual, and online soft-skills training. We offer learning resources to help you retain employees and clients, make better decisions, improve performance, and much more. You can learn more at HRDQstore.com.
1:21
I'd like to welcome today's presenter, Ken Phillips. Ken delivers all programs and workshops in his signature style: professional, engaging, and approachable. Ken is the founder and CEO of Phillips Associates and the creator and chief architect of the Predictive Learning Analytics learning evaluation methodology. He has more than 30 years of experience designing learning instruments and assessments and has authored more than a dozen published learning instruments.
1:47
Ken also regularly speaks to Association for Talent Development groups, university classes, and corporate L&D groups. Since 2008, he has presented at the ATD International Conference, and since 2013, at the annual Training Conference and Expo, on topics related to measurement and evaluation of learning. Thank you so much for joining us today, Ken.
2:11
Thank you, Sarah, and thank you for that exhaustive introduction.
2:19
Probably more stuff than what people wanted to know, but I appreciate it anyway.
2:25
So, welcome, everybody, to our session this afternoon on Add Muscle to Your Level One Evaluations with Predictive Questions.
2:35
And I want to just reiterate something that Sarah said before we get started. Here are the handouts that Sarah mentioned that you can download. There are two of them.
2:47
One of the handouts is an article that I wrote that was published in ATD's TD magazine, and that article really serves as the content for this presentation.
3:06
Because we're gonna go through, and I'm going to show you, not only the survey questions but also how to collect the data and analyze the data. So there's going to be some math involved, although it's fourth-grade-level math.
3:22
So it's not complicated math, but what I want to do is assure you that you don't have to take extensive notes and try to capture all the information, because it will be in that article. Everything I'm going to talk about will be covered in there.
3:38
And the other handout is a PDF of these slides, so if there are particular slides that you want to go back and look at or refer to in reviewing the material, you'll be able to do that.
3:53
So, just sit back, put your feet up, enjoy the webinar, and, you know, don’t worry about taking extensive notes.
4:04
So, I’m gonna start with this question.
4:07
And then you can just type in the question box, your answer either yes or no.
4:13
So, have you ever wondered if conducting post-training Level one evaluations is worth the effort, and if you should even just stop using them altogether?
4:23
So, OK, so we’ve got some responses coming in, So yes, yes, OK.
4:33
Let me see what else?
4:35
Yeah. It looks like, overwhelmingly, 100% of people are saying yes, yes, yes, yes. OK, well, you're all in the right spot then, so this is good. This is good.
4:47
And.
4:50
There we go. No, went the wrong way.
4:53
Come on.
4:55
Hold on.
5:00
There we go. So let me just run through the agenda and what we’re going to cover in our session today. So three things that you’ll be able to do after the session is over.
5:11
One is, I'm going to begin by sharing some facts with you from a recent ATD research study around level one evaluations.
5:23
And if you're interested in benchmark information to either share in your own organization or, if you're a consultant, to share with clients, this research study that ATD has done, I'll say more about it when we get to it.
5:41
But it is the most comprehensive research study that I'm aware of that's out there around the whole use of measurement and evaluation of training, and I'll say more about that when we get to it.
5:54
And I'm just cherry-picking a few things from the Level one information that they collected.
6:01
So, but I do want to just run through that.
6:04
Second thing is, I'm going to share with you, and provide you with, predictive questions that you can actually include in your Level one evaluations that will enable you to forecast these three things.
6:20
You'll be able to forecast Level two participant learning.
6:25
You'll be able to forecast Level three training transfer, and be able to forecast the likelihood of Level four improved business results. So you might ask additional questions in your Level one evaluation beyond these.
6:42
But these are the predictive questions that you can use to start to forecast information that is not only of interest to business executives but likely would be of interest to you, and to be able to make these forecasts right at the end of the training program. And I'm gonna say more about what predictive questions are in a second here.
7:05
The last thing we’re going to focus on, then, is, I’m going to show you how to calculate three predictive metrics.
7:13
And the first one is a learning gain score. And that’s what I call it.
7:18
The second one is a training transfer likelihood score.
7:22
And the last one is an improved business results likelihood score.
7:26
So we’ll give you the questions, and then show you how to do the math.
7:30
And as I said, you don’t need to take extensive notes on either of these because it’ll be available in the handouts.
7:37
So let me just take a minute here.
7:41
You’re probably, maybe all of you are familiar with the five level evaluation model, but in case somebody isn’t.
7:49
I'm just going to do a 90-second overview here of the model.
7:54
So this five level evaluation model is the most popular evaluation model in the world when it comes to evaluating training programs and training events.
8:07
And so, the first four levels of this model, level one reaction, level two learning, level three behavior, and level four results, that model was actually developed back in the late 1940s.
8:23
By a guy by the name of Raymond Katzell; you'll see his name there to the right of the model. Raymond was a training and development guy back in the mid-to-late 1940s.
8:36
He developed this model, published it in some journal, and it kinda just lingered there, and nobody even knew anything about it until along came Don Kirkpatrick. You probably have heard this referred to as the Kirkpatrick Four-Level Evaluation Model.
8:57
And Don gets all the credit in the world for popularizing it because, in 1954, he was doing his PhD dissertation around evaluating supervisory training programs.
9:10
And so he ran across this model and used this model in his dissertation.
9:16
He used it so he could collect data around evaluating the supervisory training programs.
9:23
And so, Don completed his dissertation and got his PhD in 1954, and then in 1959, along came ATD, or, back then, it was known as ASTD.
9:37
And they contacted Don, and they said, hey, Don, we heard about your dissertation and this four-level evaluation model, and would you be willing to write an article for our monthly trade magazine?
9:53
And I think back then it was called
9:54
the Training and Development Journal. And so Don, being the marketing guy that he was, said, I'll do you one better than writing one article; I'll write four articles, one on each one of the levels of evaluation.
10:10
So in November and December of 1959, and then January and February of 1960, Don had these four articles published.
10:22
He wrote an article on level one reaction, level two learning, level three behavior, and level four results. So it was really those articles that popularized this training evaluation model. And as I said, it's now, and has continued to be for years and years,
10:40
the most popular training evaluation model in the world. So Don gets all the credit for actually popularizing it.
10:51
About 25 years after Don wrote his articles,
10:55
along came a guy by the name of Jack Phillips, same last name, but no relation as far as either of us has ever figured out. And he said, the four-level evaluation model is spot on, it's systematic, it's really useful. But, he said, it's missing one thing.
11:16
And that’s level five, or ROI, or return on investment.
11:21
So, in the early 1980s, Don,
11:26
or Jack, I'm sorry, came along and added this fifth level of evaluation.
11:31
So this is now known as the five-level evaluation model, and where we're focusing our efforts and our time today will be on Level one reaction.
11:42
So we're basically trying to capture data to determine whether participants found the training favorable, engaging, and relevant to their jobs. And those are maybe some of the other questions you might be asking in addition to these predictive questions.
11:57
Because now I would add in there that we're also trying to capture data in these level ones so that we can forecast Level two learning, Level three training transfer, and Level four business results.
12:13
So let me share some facts from this ATD research study that I alluded to earlier. ATD has actually done three of these research studies; the first one was in 2009.
12:25
The second one was in 2015, and the most recent one was in 2019.
12:31
What's interesting and useful about these three research studies is they basically asked the same questions over that 10-year period.
12:43
So you're able to go in and see what was happening out there in the world of measurement and evaluation of training in 2009, see what happened in 2015 and whether there were any differences, and then the same thing with 2019.
12:59
So it provides you with a really comprehensive look at what's going on out there.
13:05
And let me just say a word about the research studies.
13:09
If you're a member of the national ATD, as part of your member benefits, depending on what you choose, you can get these research studies for free.
13:21
If you're not a member of the national ATD organization, I can tell you that it's cheaper to join national ATD than it is to buy the research studies, because they charge a hefty price for them.
13:37
But the other thing I want to mention about them, what I like about them, is there was a concerted effort to collect data from organizations of different sizes.
13:48
So you would find, if you looked at all the background information in these research studies, that they collected information from organizations as small as 150 employees all the way up to organizations that had tens of thousands of employees. So it was really a broad cross-section of organizations of different sizes.
14:09
And the other thing they purposely did was to make sure that they had a variety of organizations from different industries, so the results and the data they collected aren't dominated by, say, financial services organizations or manufacturing organizations. They got organizations in all different industries.
14:32
Those are the other two things that make this data really valuable.
14:37
It’s because they really, you know, made a concerted effort to make sure that it wasn’t dominated by any particular group.
14:46
But let me just share with you some of the facts here that they found. These are just level one facts, and it's not all the level one facts that are in the research study; I just cherry-picked some here that I thought might be interesting.
14:58
So, there were about 100 organizations that participated in this research, and that was the other thing I wanted to mention. Of those 100 or so organizations that participated in the research study, 83% of them, or a little more than four out of five,
15:20
evaluate some programs at Level one.
15:23
So virtually all the organizations that participated in the research evaluate some programs at Level one, which is probably not surprising.
15:32
The second thing they found is, of those organizations that were evaluating some programs at level one,
15:39
on average, they were evaluating about half, 54% to be exact, so a little more than half,
15:47
of all the programs they offered at level one. So four out of five organizations are collecting level one evaluation data for some programs, and if they are collecting level one evaluation data, on average they evaluate about half of all the programs they offer.
16:03
And the other interesting fact, and this ties back to the initial question that I started with at the beginning, is only 35% of the organizations, or about one out of three, felt like the data that they collected with their level one evaluations had high or very high value.
16:25
So that ties back to our question about whether you've ever wondered about doing level one evaluations, and whether or not you should just keep doing them, or maybe even stop altogether.
16:34
So you're not alone; basically, most of the organizations that participated in the research here felt the same way about their Level one evaluation data.
16:47
So, let's talk about why the disconnect. Why are so many organizations doing level ones, and doing them with about half their training programs,
16:55
and yet only a third of them feel like they collect any data that has any value? Because you might wonder, if you're not getting any value out of this, why do you keep doing it?
17:05
Which was part of my question. So there are four reasons here
17:09
for the disconnect.
17:10
One is, oftentimes, level one evaluation data is not viewed as valuable.
17:18
Especially when you look at it from the perspective of the business executives that we might be supporting
17:27
with a training program that they wanted designed, developed, and implemented. Asking questions around whether people liked the donuts or whether a particular training activity was relevant or valuable, or whatever, that's not data that most business executives care anything about. Now, some of that stuff is probably information that we would like to know as learning and development professionals. So that's why I said these predictive questions are not the end-all
18:02
when it comes to your level one evaluations. There might be other things you want to ask about that are perfectly appropriate, because it would be information that would be useful for us as L&D people,
18:14
probably just not stuff you'd necessarily want to share with a business executive.
18:19
So that’s one reason for the disconnect.
18:21
Second reason is, I find, at least in a lot of organizations that I've worked with around measurement and evaluation, that Level one evaluation data is rarely, rarely systematically analyzed,
18:38
looking for trends or patterns.
18:41
You know, where you would have maybe an ongoing training program that is going to be offered multiple times, and you're looking at the data that you've collected and systematically analyzing it: are there any major differences between one offering of that program and another offering of that program? And so, looking for trends and patterns.
19:02
It can then also be used for program comparisons, where you can look at the data that you've collected for one particular program, which might have been offered multiple times, and compare that to a different program. But that rarely ever happens. That's not to say that people don't glance through the data and look at it and say, oh yeah, that's kinda interesting, but it typically isn't systematically analyzed.
19:28
The third reason is about L&D leaders.
19:33
Again, it's been my experience that they know they should do level one evaluations, but they don't necessarily have any specific objective in mind when collecting Level one evaluation data.
19:44
And so that’s another reason that a lot of the data that gets collected isn’t seen as being very valuable.
19:51
And the last reason is many L&D professionals really just lack the knowledge and skills needed to create valid survey items.
20:02
I taught for several years in a master's program, the Master's in Training and Development program, at a school here in Chicago, where I'm from, called Roosevelt University. And there, when you took the master's program, you had all these different courses that you took.
20:18
But the vast majority of the courses were all around designing, developing, and facilitating, and technology, and so on. I had the only course that dealt exclusively with measurement and evaluation.
20:34
Now, that's not to say that the other professors didn't talk about measurement and evaluation in their classes, but it wasn't the main focus.
20:44
So, oftentimes people will come out of educational programs and have some understanding about measurement and evaluation, but they really haven't developed and refined the skills needed to create valid survey items and then to know how to analyze that data and so forth.
21:11
So what's the solution? That's the reason we're here today, and that's to include predictive questions in your level one evaluations. So that's what we're going to spend the rest of our time on.
21:21
So let's talk briefly about what predictive questions are; I said I would define that for you.
21:26
So they forecast the results a learning program is likely to achieve.
21:33
So, we’re not suggesting that this is scientific proof.
21:37
We're using these questions to forecast what we think the level two learning, the level three training transfer, and the level four business results are going to be; what those results are likely to be.
21:53
And as I said, they're not scientific proof that specific outcomes are inevitable, but rather a forecast. And the analogy I like to use is a weather forecast. When you're watching a weather forecast, you'll see the weather person talk about, well, we looked at these different computer models that we have, the European model and the NAM model and this model and that, and I took the data from all those models, and then, also based on my own experience, I think what's going to happen is tomorrow it's going to be like this.
22:26
Or, you know, two days from now, it'll be like this.
22:30
And that's what we're doing here: we're using data to build a forecast.
22:37
So it's not just personal opinion, and it's not anecdotal; it's based on data that we've collected, and we're making these forecasts.
22:51
And the data begins to answer the question that the business executives,
22:56
and those of us as L&D professionals, want answered: is this program delivering value?
23:03
That's what the business executives want to know, and we also want to know that, so that's where these questions come into play.
23:13
And that's the kind of data that the business executives would be eager to see and have you share with them, and not the stuff around donuts and particular training activities and so forth.
23:28
So, three types of predictive measures.
23:31
I mentioned in the agenda, we're gonna do a level two learning gain score. I'm gonna give you the questions, show you how to capture the data, and show you the math, how to calculate that gain score.
23:40
The second one will be the level three training transfer likelihood score: give you the questions, show you the math. And then the last one will be the Level four improved business results likelihood score: again, give you the questions, show you the math. And, again, all this is in the handout, so you don't have to take copious notes here.
24:00
So, let's start with predictive metric number one, and we'll pause after each one of these predictive metrics, so if there are questions you have, we can answer those before moving on to the next metric. So, this is predictive metric number one, calculating a level two learning gain score.
24:17
It involves asking two parallel learning-based survey questions.
24:23
And these are the two questions: how much did you know about the material taught in this program before attending?
24:31
And then right after that, the second question will be, how much do you know about the material taught in this program after attending?
24:40
And you can customize these survey questions if you want to and make them more specific. Instead of saying, how much did you know about the material taught in this program, you could say, taught in the conflict resolution program, or taught in the selling skills program, or taught in whatever; you can even make it more specific. But you want to make sure you're focusing on what did you know before, and then the second question is, how much do you know now, after?
25:10
So those are the two questions, and here's how you calculate a learning gain score. I'm going to just walk you through the process, then show you the math.
25:20
So you're going to compute an average before score and an average after score for those two survey items.
25:29
And then you're going to subtract the before score from the after score, and the result, or the difference, is going to be a learning gain score.
25:40
So that’s just the process. Now let me walk you through the math.
25:44
So here in our sample, we've got 10 participants, and they're in that left column there.
25:54
And so then the second column here is question one, how much did you know about the material taught in this program before attending. And these responses here reflect what you saw with the survey items; they were seven-point response scales, and so this is what you see here.
26:13
So, participant 01 checked the five for that first question, for what they knew before, and participant 02 checked five as well, and so on down the line. So you can see how they responded to that first question, and you can also see the total at the bottom.
26:35
So we've totaled up those responses, and then we go to the second question, how much do you know after, and you can see the responses there.
26:46
Then down at the bottom, you’ll also see the Total.
26:49
So we've now got our two totals after we've aggregated all the responses, and now we're going to put them into these calculations. So calculation one is how much did you know before: we had a total of 44 and 10 participants that we collected the data from, and so our average pre-program knowledge level is 4.4.
27:13
In calculation two, this is the knowledge-after-the-program number.
27:17
It was 59 when we totaled all the responses, divided by the 10 participants,
27:22
and so it came out to an average post-program knowledge level score of 5.9.
27:28
So now, we’re going to put it into this last calculation here.
27:31
So here's the average post, 5.9, minus the pre, 4.4, and so the learning gain was 1.5.
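(For those following along who prefer to see the arithmetic written out, here is a minimal Python sketch of that calculation, using the example's totals of 44 before and 59 after with 10 participants; the function and variable names are illustrative assumptions, not from the webinar or the article.)

```python
# A minimal sketch of the Level 2 learning gain calculation described above,
# using the example's totals (44 before, 59 after) and 10 participants.
# Names are illustrative assumptions, not from the webinar or article.

def learning_gain_score(before_total: float, after_total: float, n_participants: int) -> float:
    """Average 'after' rating minus average 'before' rating on the 7-point scale."""
    avg_before = before_total / n_participants   # 44 / 10 = 4.4
    avg_after = after_total / n_participants     # 59 / 10 = 5.9
    return round(avg_after - avg_before, 1)      # 5.9 - 4.4 = 1.5

print(learning_gain_score(44, 59, 10))  # -> 1.5
```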
27:42
Now, let me say the first time you do this with a training program.
27:46
This 1.5 is kind of interesting, but it doesn’t have a great deal of meaning.
27:54
because the business executive might then say to you, so is 1.5 good?
28:00
Or is it bad?
28:02
And about the only thing you can say at this point is that 1.5 is better than 1.4, but it's not as good as 1.6.
28:15
So, what will make this more informative and insightful is if you do this process with a training program that you know you're going to offer multiple times, and you collect this data after each session.
28:32
Then what you're able to do is create a program norm, or standard: you add all the learning gain scores together and average them.
28:41
And so now, when you do this again, you'll have a basis for comparison. Say the learning gain score for this particular offering of this program was 1.5.
28:53
And that compares to the norm that we developed for this training program, where we've offered this same training program, say, 12 times, and the average learning gain score was 1.3. So we can see now that the learning gain with this particular group was better than the average.
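(A small sketch of that comparison, assuming you have saved the learning gain score from each prior offering; only the roughly 1.3 norm and the 1.5 current score come from the example above, and the individual prior gains below are hypothetical.)

```python
# A minimal sketch of comparing one offering's learning gain against a program norm,
# as described above. The individual prior-offering gains are hypothetical; only the
# ~1.3 average and the 1.5 current gain come from the webinar's example.

prior_gains = [1.2, 1.4, 1.3, 1.2, 1.4]              # hypothetical gains from earlier offerings
program_norm = sum(prior_gains) / len(prior_gains)   # running average, here 1.3

current_gain = 1.5
print(f"Current offering: {current_gain}, program norm: {program_norm:.1f}")
print("Above the norm" if current_gain > program_norm else "At or below the norm")
```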
29:15
And so that’s where you can use the comparison.
29:18
So it'll be much more insightful when you're able to collect data with a program being offered multiple times, and if you want to make comparisons with other programs, you can do the same thing.
29:32
Or you can collect this data, create a norm, and then look at the norms for the different training programs, different types of training program topics, and see if there are any major differences.
29:45
So there’s lots of stuff you can do with the data once you collect it.
29:50
So any questions about the learning gain score?
29:55
We have had a couple of questions come through.
30:00
The first question that we had is, what do you think about rating, How well the objectives are met?
30:09
Oh, that’s OK. That would be another question you could ask.
30:13
These predictive learning questions aren't the end-all for your Level one evaluation.
30:20
So there might be other questions that you'll want to include on your level one evaluation, and nothing wrong with that.
30:27
My only caution would be you don't want to create a survey that's going to get too long, or what you'll run into is people not completing the survey because they run out of energy.
30:41
So, there is something called survey fatigue, and you want to be cautious with that, but sure, you could go ahead and ask that.
30:51
And this next question is from Sonya, asking: how do we adjust for overestimation of knowledge prior to taking a course? I hear from participants that they didn't know what they didn't know.
31:06
Ask me again, Sarah, what was the question?
31:10
How do we adjust for overestimation of knowledge prior to taking the course?
31:22
Well, that’s, that’s an interesting question.
31:29
I guess I'm not sure how to answer.
31:32
If your question is how do we know that we're getting honest responses from people in terms of how they answer what they knew before, how much they knew before, and how much they knew after:
31:46
well, I think there's a way to mitigate that. You don't necessarily eliminate that from happening, but you can mitigate it by spending some time, prior to when people start to fill out the survey, explaining why you really want honest responses.
32:08
Because you're going to use this information in refining the course or the program, you really do want honest responses to all your survey questions.
32:25
And then the other thing would be assuring people that no one person's individual responses are going to be pulled out. In other words, the only data that you're going to share with business executives will be summarized data, so that no one person's responses will be known.
32:51
And then, one more question here, from Andrea, asking: what's the benefit of a learning gain score over a percent change? Over, what was the last word, Sarah?
33:01
Over percent change, percent change. Mm-hmm.
33:10
The issue has to do with mixing percentages with real numbers. Because the scales were 1, 2, 3, 4, 5, 6, 7, we're using real numbers there, and now we're trying to convert that to percentages.
33:29
And there are some issues with that, from a statistics standpoint.
33:34
And that's why, in the example, I stick with real numbers, as opposed to trying to change it to a percentage.
33:52
Anything else?
33:54
We actually do have a couple more coming in here.
33:58
Sherrilyn asks, I use a comparative questionnaire in my post-survey that asks them to answer questions pre, then post. Can I still do the gain score math with this method?
34:12
Yup, yeah, I think so.
34:16
I mean, from what you said with the question, yeah. If any of you have any
34:25
questions about that, we can take it offline. I'm gonna give you my contact information here at the end, so reach out to me.
34:37
We can spend more time around it.
34:41
Great. And then this other question here is coming from Nikki, and Nikki asks, is there ever a situation where the difference is negative?
34:53
God, you hope not, because that would say that they are coming out of your training program dumber than when they started.
35:02
I’ve never seen that happen, I would guess that it’s not likely to happen.
35:09
And if it did happen, boy, that would be like a big red flag around it.
35:14
So, what went on in that training program to cause that to happen?
35:21
Not likely.
35:23
Great, and that concludes all the questions, OK, good.
35:27
Well, we're gonna push on here, and we will have more time at the end, too, if we keep moving. So we're gonna go to predictive metric number two.
35:36
So this is calculating a level three training transfer likelihood score.
35:43
So we’re going to forecast whether or not people are likely to take what they’ve learned, assuming they learned something new.
35:50
And we saw in our example that there was an improvement from the 4.4 to 5.9, looking at the group as a whole.
35:59
So now what we're going to focus on is, now that we know they've learned something, how likely are they to apply it back on the job?
36:07
So we're going to ask five training transfer survey questions here to collect the data that we need to make our forecast.
36:17
And so, let me run through the questions.
36:20
So here's question one out of the five, and it's a relevancy question: how relevant was this program to you and the tasks and requirements of your work?
36:32
Again, a seven point scale.
36:34
Question number two has to do with confidence.
36:37
So how confident are you in your ability to apply the new information you learned in this program back on the job? Again, not at all confident, extremely confident.
36:48
Question three is around an opportunity to apply.
36:52
So how likely are you to have an immediate opportunity to apply the new information you learned in this program, Back on the job?
37:02
And just to comment on this one item: we all know that the longer the time lag between someone attending a training program and then having an opportunity to apply it, the less likely they are to do it.
37:15
So, this is an important question to collect information around. I used to do a lot of performance management training, and part of that would be around conducting performance appraisal discussions.
37:29
And I would invariably run into participants who maybe aren't going to conduct a performance appraisal discussion for months after the training. We know just-in-time training works, and the longer the time lag, the less likely they are to apply what they learned back on the job. So that's an important question.
37:54
Question four, out of the five is manager support.
37:58
How likely is your manager to actively engage you in a discussion regarding your use of the new information you learned in this program?
38:06
And we know managers play a critical role in whether or not the people they send to training are going to apply what they've learned. And part of what managers need to do is to actively provide support for the training.
38:21
And if they don’t provide that, then it reduces the likelihood that the participant is going to apply it. So, let me just say a word about these first four questions.
38:30
There are multiple research studies behind each one of these questions, and I have found that all these questions are positively correlated with training transfer.
38:41
So, these four questions, I didn't just dream up one night while having a beer; there are multiple research studies behind all of them, and they were selected for that reason.
38:53
Because they do very much enable us to forecast this training transfer,
39:04
because all these questions have a positive correlation with training transfer.
39:10
So the fifth question, and you will see how we use this in the math.
39:14
Would be, what obstacles, if any, might keep you from applying what you learned in this program back on the job.
39:20
And this one is just an open-ended question, and we’ll show you how we deal with that.
39:25
So, let’s run through the process for doing the math.
39:28
You’re going to compute a total score for each of the first four training transfer, predictive questions.
39:36
Then you're going to sum the four total scores together, divide that result by the number of program participants, and then divide that number by four, representing the four questions, and the result is a training transfer likelihood score. So let me just run through the math.
40:00
So here we’ve got our same 10 participants.
40:03
So here is question one, relevancy; question two, confidence;
40:08
question three, opportunity to apply; and question four, manager support. You see the responses from each one of our participants to those questions.
40:18
And then down at the bottom, you see the totals. So we have a total of 49 for relevancy, 53 for confidence, 44 for opportunity to apply, and 44 for manager support. Now we're gonna put these into the formulas.
40:35
So we've got the relevancy total, 49, plus the confidence total, 53, plus opportunity to apply, 44, plus manager support, 44; the total is 190. Now we're going to take that 190 and divide by the number of participants,
40:50
that's 10, and then divide that number by four, the number of survey items, and so the training transfer likelihood score is 4.8.
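(A minimal Python sketch of that arithmetic, assuming the four question totals have already been tallied as in the example; the function and variable names are illustrative assumptions, not from the webinar.)

```python
# A minimal sketch of the Level 3 training transfer likelihood calculation above,
# using the column totals from the webinar's example (49, 53, 44, 44) and 10
# participants. Names are illustrative assumptions.

def transfer_likelihood_score(question_totals: list[float], n_participants: int) -> float:
    """Sum the question totals, divide by participants, then by the number of questions."""
    grand_total = sum(question_totals)                           # 49 + 53 + 44 + 44 = 190
    return grand_total / n_participants / len(question_totals)   # 190 / 10 / 4 = 4.75

score = transfer_likelihood_score([49, 53, 44, 44], 10)
print(round(score, 1))  # -> 4.8
```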
41:04
Now, in terms of whether that's good or bad, you can use these guidelines.
41:09
So a score of six or greater indicates that program training transfer is likely to be high, a score between three and six indicates that training transfer is at risk, and a score of less than two indicates that training transfer is likely to be low.
41:32
So now, let's talk about that fifth question. What if your training transfer likelihood score is below six?
41:41
Well, that's why we asked that fifth question about obstacles. We'll analyze the obstacles identified in question five to identify where targeted corrective actions can be taken to increase training transfer.
41:59
So recognize that identifying obstacles to training transfer is only half your job, and making sense out of them is the other half, because now we're dealing with qualitative data, not quantitative.
42:11
And it's hard to wrap our heads around lots of qualitative data and make sense out of it. So I'm going to give you a process that you can use to take qualitative data and quantify it.
42:26
So here’s how you can take your training transfer obstacles and make them actionable. And I would say, anytime you’re collecting qualitative data, you can use this same methodology to take all that qualitative data and make it actionable by quantifying it.
42:43
So this process will work not just with obstacles, but any kind of qualitative data you’re collecting.
42:50
So where you want to start is, you want to take all your obstacles and begin to analyze them, looking through them for themes and patterns: what kinds of things are people saying? Now, they may not be using the exact same words, but basically they start to identify the same obstacle, and so you begin to see themes and patterns emerge from the data that you've collected.
43:17
And then what you want to do is consolidate all the like-minded obstacles into clusters; another analogy would be to create some buckets.
43:28
And you're gonna put all these like-minded obstacles into these buckets, so we've got different buckets of like-minded obstacles, and I'll show you some real live data and what this looks like. Then you're going to count the number of obstacles in each cluster, or in each bucket.
43:45
And then lastly, we’ll place the clusters or buckets into numeric order from highest to lowest.
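(Here is a small sketch of that counting-and-ordering step, assuming each free-text obstacle has already been tagged with a bucket name by a human reviewer; the bucket assignment itself is the judgment step described above, and the sample tags below are hypothetical.)

```python
# A minimal sketch of the count-and-rank step described above. Grouping free-text
# obstacles into buckets is a human judgment step; this only tallies and orders the
# buckets once each obstacle has been tagged. The tags below are hypothetical.

from collections import Counter

tagged_obstacles = [
    "management", "lack of time and resources", "management",
    "policies and procedures", "communication", "management",
    "policies and procedures", "personal",
]

counts = Counter(tagged_obstacles)        # number of obstacles per bucket
for bucket, n in counts.most_common():    # highest to lowest
    print(f"{bucket}: {n}")
```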
43:52
And so here’s some real live data that was collected around obstacle data.
43:58
So you'll see here the buckets, or clusters, that we identified.
44:05
So we had management, policies and procedures, communication, personal, lack of time and resources and so on. So, those are the buckets we created.
44:14
Now, there's no magic formula for naming these things, but come up with a name that you think best represents what the obstacles are that you've clustered together in that area.
44:31
And what I've included here, you'll see, are some examples, because if you're gonna share this data with an executive, they're not gonna know what you mean by these buckets or these cluster names.
44:43
And so you'll wanna share some examples of what you've put under each one of these.
44:51
But then the next thing we did was we counted the number of obstacles that were in each of our buckets.
44:58
And so, that’s what these numbers represent here.
45:00
So you can see that there were 11 obstacles, 11 different obstacles that fell into the management bucket, 10 into the policies and procedures bucket.
45:11
And so on down the line, and so now we put these in numeric order from highest to lowest.
45:17
And so now, what we can do to make these obstacles actionable is say, OK, and you decide where you want to draw the line here.
45:28
We could have included the personal bucket as well, but I just selected the three highest. And say we're dealing with a business executive, talking with a business executive; what we're going to say is, look, if we want to increase training transfer with this particular program, here are the three most common obstacles that were identified by the participants, the things getting in the way that would likely inhibit or prevent them from applying what was learned in this training program.
46:04
And the other important thing to point out is, two of these in particular, management and policies and procedures, are not in your control. But you can say you're willing to partner with the executive to help come up with targeted corrective actions to mitigate or eliminate these.
46:22
But they're not in your control, so the business executive can't say, well, you go fix these, because that's not in your area of responsibility or authority. Now, communication, yeah, we might be able to take a more active role there.
46:36
Because we could design a communications training program, or do some kind of organizational development intervention around communication within a department or something like that.
46:49
So there’s some things that we could get involved in there, much more so than the first two, but that’s the way we want to take these obstacles and be able to make them actionable.
47:01
So, now, we’re able to do that.
47:04
So, questions around the training transfer likelihood score?
47:09
So, we had one question come through, and it's going back to the first calculation: what do you think about using a pre-quiz so that participants can gauge their prior knowledge more realistically?
47:24
Oh, you're going back to where we were doing the learning gain score.
47:31
I believe so, yes. Yeah. You can just type Yeah.
47:36
Yeah, there's nothing wrong with that. If you want to collect more scientifically sound data, then the way to do that, it's not the easiest way, but the way to do that would be to create some kind of a knowledge test around the content of your training program and administer it to the participants before they attend the training. And then, after the training is over, you can take that same knowledge test.
48:06
I would re-order all the questions and things, and I would also wait for a couple of weeks after the training before administering the knowledge test.
48:16
But you can then administer the knowledge test, the same test questions, just re-ordered and randomized differently. And then you have a way to compare the knowledge level before the training versus the knowledge level after the training. That's much more scientifically sound than what we were doing here, like how much did you know before and how much do you know after. It also involves a lot more work, requires a lot more time, and also requires a lot more time on the part of the participants.
48:46
So, while you get more scientifically sound data, you may get pushback from business executives around taking all this extra time. But if you can get support for that, hey, go for it, because it'll provide better data
49:06
than what you're going to get with the predictive questions: more credible, more scientifically sound data.
49:11
That's what I mean by better; I think the forecasts are OK, but they're not as good as doing that pre- and post-knowledge test.
49:23
Then, one more question here from Jan: do the top three or so obstacles to training transfer translate to potential additional training programs that may need to be developed and delivered?
49:37
Could be, Could be, yep. Depends on what comes up.
49:41
But you're spot on. That would be like the communication one, the third one that we had on the list.
49:48
There might be other things that the business executive needs to do with the supervisors in the department or whatever to increase or improve communications with the employees.
50:02
But, it might also include some training. You might offer that as well.
50:07
So, sure, that's kind of the ideal solution,
50:13
because then you can take a more active role in helping the business executive mitigate or eliminate those obstacles.
50:21
Whereas with the management stuff and the policies and procedures, you're probably going to have little involvement.
50:31
But, yeah.
50:33
Great. And that answers all the questions. OK, well, let us push through the last one; there's not that much stuff on that one, so we can still get out on time and have time for Q&A. So, predictive
50:47
metric number three: you're going to calculate a level four improved business results likelihood score.
50:55
And here there are going to be two parallel questions that you're going to ask to collect this data.
51:03
And here are the two questions.
51:07
The first one is, how likely are any of your department's crucial business metrics to improve because of you applying the information you learned in this program?
51:15
And the second question is, how confident are you in your response to the previous question, where zero equals no confidence and 100% equals high confidence?
51:28
And let me also say, with this first question, if you were asked to design and develop this training program
51:38
because there was a specific business metric that was being tracked that had gone off the rails and needed improvement, then instead of just saying a crucial business metric, I would list it there.
51:56
If the training program does have some strong connection with this business metric, make that clear.
52:06
So you're asking about these business metrics and this training program, and the connection between the two.
52:15
So I would add that. And then the second comment would be, if the training program you're looking at here has no connection whatsoever to any business results, then don't ask these two questions.
52:31
Because it just frustrates people; they'll say, well, I don't know what business metrics this thing relates to. And so if there is no connection, like maybe you taught people how to create spreadsheets in Excel,
52:47
I guess maybe there might be a business connection there, but probably, it would be real obtuse.
52:52
And so I would say not to ask these questions if there isn't any connection at all between the content of the training program and Level four business results.
53:06
So, here’s the process.
53:09
So, you’re going to multiply each participant’s response to question one by their confidence percentage, from question two.
53:19
And then divide that total by 100.
53:24
And so, you’ll see that in the math.
53:27
Then you're going to add the adjusted responses and divide the total by the number of participants; we've done that before, so that won't be different.
53:37
And the result is an improved business results likelihood score.
53:40
So here are our same 10 participants, and these are their responses to question one.
53:50
And here are their responses to question two, the confidence level. So we're going to take those two,
53:56
and what we're going to do to get to the 1.2 is multiply the 60, that's the confidence level, times the two.
54:08
So they're only 60% confident, so we know there's some potential error in their response to question one.
54:15
And so, we multiply this 60 times 2, that’s 120.
54:20
Divide by 100, because we want to convert the percentage into a real number.
54:25
And so, our adjusted response, because the person was only 60% confident, is now 1.2.
54:34
And we do that for the second one.
54:36
So, this person responded with a seven to question one; they're 90% confident, but there's still a little bit of potential error there.
54:49
So, what we do is multiply the 90 times 7, that's 630, divided by 100.
54:55
That’s 6.3.
54:58
Now, the only place where you don't need to do any division is here, because they are 100% confident. So you just move their response to question one over to that last column there.
55:16
So, now, what we want to do is add all these together.
55:19
So, these are the adjusted responses.
55:24
Excuse me.
55:26
These are the adjusted responses, and the other thing we're trying to do with these is take the most conservative estimates here.
55:36
So that is why, when we multiplied the 60 times the two, we wanted to make sure that we were doing the most conservative estimates. We don't want one of the business executives to say that
55:50
we were trying to inflate the data here.
55:55
So, now we take that 34.6 divided by the 10.
55:59
And now we have an improved business results likelihood score, rounded here, of 3.5.
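(A minimal Python sketch of that confidence adjustment; only the (2, 60) and (7, 90) pairs match the webinar's example, the third pair is hypothetical, and the names are illustrative assumptions.)

```python
# A minimal sketch of the Level 4 confidence-adjusted calculation shown above.
# Each pair is (response to question 1 on the 7-point scale, confidence percent).
# Only the (2, 60) and (7, 90) pairs come from the webinar's example; the third
# pair and all names are illustrative assumptions. With the webinar's full 10
# participants, the adjusted total was 34.6 and the score rounded to 3.5.

responses = [(2, 60), (7, 90), (5, 100)]

adjusted = [rating * confidence / 100 for rating, confidence in responses]
# e.g. 2 * 60 / 100 = 1.2 and 7 * 90 / 100 = 6.3, the adjusted responses above

business_results_score = sum(adjusted) / len(responses)
print(round(business_results_score, 1))
```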
56:05
And we can use the same guidelines here that we had with the training transfer likelihood score.
56:15
So six or greater indicates improved business results are likely, between three and six indicates they're at risk, and less than two indicates that they're unlikely.
56:31
Let me just wrap this up, and then we will go to the questions, Sarah, OK, instead of just doing this one.
56:38
Sounds good.
56:38
OK, so let's launch a poll here, because I want to get your reaction to this.
56:51
So, what do you think of these predictive questions on your Level one evaluation? Do you see it as nifty, or interesting, or thought-provoking, or are you not sure?
57:05
The poll has launched, and votes are streaming in there.
57:10
You can take a moment or so to submit your answer. Yeah.
57:15
We had a comment come through from Brian that said, thought-provoking is not enough, I love this. Well, then you could have responded with nifty. Let's get those results up on the screen now. Do you see those on your side, Ken?
57:33
Yep, I can see them. Yep.
57:35
So 834, OK. Alright, so yeah.
57:41
So those of you in the not-sure group, if there are specific questions you have that you want to ask about and get clarified, then reach out to me and we'll talk about it.
57:56
Let me just do the last few slides here, and then we'll open it up to other questions. I'm willing to stay around. I know we're getting close on time here, but if you have the time and you want to stay around, I'm happy to stay around and answer questions.
58:12
But I like this; this is a good way to summarize this whole level one predictive question topic. This is from Jim Barksdale, who's the former CEO of Netscape, and I really like it; I use it in a lot of my measurement and evaluation presentations: if we have data, let's look at the data. If all we have are opinions, let's go with mine.
58:38
A lot of business executives may not say this to you, but you can pretty well bet that if you're coming in with opinions and you don't have data to back them up, more than likely this is what they're thinking. And some of them probably will even say it.
58:54
So that's the reason I like this, and I like to include it in here. It's all about the data: let's look at the data. If you've got data, I'm willing to look at it and talk about it.
59:04
If you’ve just got your opinions or anecdotal stories, then, yeah, let’s not waste a lot of time on it.
59:13
And let’s get through the last ones here.
59:19
There are some other resources available to you that are all free. These are all articles that I have on my website, and you can see the link down there at the bottom, so if you download the slides, you can just click on that link.
59:32
I've made it an interactive link. These are three articles I've written on level one evaluations. The middle one there, Predictions
59:43
and Probabilities, is the one that was published in TD magazine just last year, and it's the one that served as the content for this presentation.
59:54
So that's there, and there are other articles as well, and then I've got articles on Level two, Level three, and Level four.
1:00:03
So there are a bunch of different articles, plus two e-books that are on my website. This first one, The Sad State of Measurement and Evaluation, ties to the ATD research studies I mentioned; it's a summary of those three research studies that ATD did in 2009, 2015, and 2019. It's not everything in the research studies; I just did a quick summary of it.
1:00:30
And the title gives away what you will find: when you look at what's happened over the decade, from 2009 to 2019, nothing has changed.
1:00:41
The last article, I’m sorry, the last e-book, is all around predictive learning analytics. And it’s a new methodology that I’ve been working on for the last couple of years.
1:00:54
It applies predictive analytics to learning and goes even beyond what we've talked about here with Level one.
1:01:02
So you might be interested in that.
1:01:06
I am doing an online certificate course for Training Live + Online.
1:01:15
It's starting on June sixth, and it's a series of four online sessions, and we'll deal with each one of the levels of evaluation.
1:01:24
So one session is on level one, one session on level two, then level three and level four. It gets into all the nitty-gritty stuff: writing questions, how do you analyze the data, all that stuff.
1:01:36
I think there's still early-bird registration, so you can sign up and get a discount if you want to.
1:01:44
And then I'm also going to be offering a predictive learning analytics certification program that I've developed; this is the one I mentioned earlier with that e-book. I'm going to offer this program in the fall, and the link is down there at the bottom if you want to find out more about it.
1:02:04
It goes into the whole process of applying predictive analytics to learning, in four online sessions,
1:02:14
the same way as the other one with Training Live + Online.
1:02:20
And there's my contact information with all the links, so you can contact me. I can run, but I can't hide.
1:02:28
So go ahead and reach out.
1:02:31
And Sarah is going to close up here, and then we'll open it up for questions if you wanna stay around.
1:02:41
Today's webinar is sponsored by HRDQstore, and you can always learn more at HRDQstore.com. And we can open it up for questions here; we had a couple come through that we'll answer before we wrap up today's webinar. The first question is coming from Sonya, who says, do you suggest we use a 1 to 7 scale instead of a 1 to 5 scale?
1:03:03
Oh, Sonya. I wondered when somebody was going to ask that question.
1:03:08
Yes. Here's the reason why. There's nothing wrong with five-point measurement scales, Likert scales,
1:03:16
if you've done nothing, and this is important, if you've done nothing to influence the way people are going to respond to your survey items.
1:03:26
So, for example, if you're going to do a customer satisfaction survey, and you've done nothing to influence the way customers are going to respond other than the normal day-to-day stuff,
1:03:42
or you're going to do an employee satisfaction survey,
1:03:47
and you've done nothing to influence the way people are going to respond. In those instances, where nothing has been done to influence the responses, five-point scales are perfectly appropriate. That's not our world in training and development when we're talking about evaluating training programs, because what we've done is everything possible to design, develop, and implement the very best training program possible.
1:04:16
And so, assuming we were at all successful, what happens with a five-point scale is you get no ones and twos, maybe a smattering of threes,
1:04:26
And basically all your responses are fours and fives.
1:04:30
So in essence, what happens is your five-point scale becomes a two- or three-point scale at best.
1:04:39
And a lot of people struggle giving what they feel are accurate responses to two- and three-point scales. They feel like it doesn't give them enough response options to provide an accurate response.
1:04:54
So, if you go to a seven point scale, we add one more response option at the high end of the scale.
1:05:02
And so we spread the scores out a little bit more, plus we provide people with more response options, so they feel better about providing, or feel like they provided, an accurate response.
1:05:16
And you'll see, in some of those Level one articles that I mentioned earlier, I even recommend going to possibly even a nine-point scale, or, if you really think you want to, you can go to an 11-point scale.
1:05:30
Because then, again, you start to spread out all those responses that are clustered together. You get more granular data, plus you give people more response options, and they feel better about the responses that they're providing.
1:05:46
Great, and then this final question here from Jan is asking, How many companies measure level four results?
1:05:54
Um, I don’t remember off the top of my head, those research studies that ATD’s done have that information in there.
1:06:05
But it's small.
1:06:08
I mean, when you look at the number of organizations that use level ones versus level twos versus level threes, the trend is steadily down.
1:06:17
You can just look at it; I don't remember what the exact numbers are, but it's going to be pretty small, and it gets even smaller when you get to level five, ROI.
1:06:29
Great. And then, yeah, if you would like to refer back to any of the content that you listened to today, you can head over to HRDQU.com and join our free membership. You'll be able to watch the replay of the recording; we did see a couple of questions about that. And with that, that brings us to the end of our webinar today. Thank you so much for your time, and for this really informative session today.
1:06:52
You're welcome, Sarah. And thank you, everybody, for attending, and as I said, feel free to reach out.
1:06:57
I'm happy to work one-on-one with you and answer your questions, and look at surveys you've created and stuff like that.
1:07:05
So, just think of me as a resource.
1:07:10
Great, And, yes, thank you all for participating in today’s webinar, Happy Training.
1:07:16
Bye, everybody.