Knowing whether participants apply what they learned in a training program on the job is a critical issue for both L&D and the executives L&D supports. Demonstrating how that learning is applied speaks directly to L&D’s ability to be viewed as a credible business partner. However, according to an ROI Institute research study, only 11% of CEOs reported receiving this type of information.
So, why don’t more L&D departments conduct Level 3 evaluations? A frequent answer is that it’s too complicated. A second reason is that many L&D professionals lack the knowledge and skills needed to collect and analyze Level 3 data.
In this informative, highly engaging session, participants will learn how to implement a revolutionary new, easy-to-use Level 3 evaluation methodology that produces credible, actionable data.
This is a collection of downloadable and customizable courses for virtual and classroom instructor-led training, plus self-study learning. Each title includes digital document files for facilitator and participant materials provided in unlocked Microsoft Office format. You can download, customize, and deliver training today!
Get 10% off the entire RTL Collection!
Use Code KENP10
Ken Phillips is the founder and CEO of Phillips Associates and the creator of the Predictive Learning Analytics™ evaluation methodology. He is also a measurement and evaluation master, having spoken to rave reviews at the ATD International Conference on measuring and evaluating learning every year since 2008. He has also presented on similar topics at the annual Training Conference and Expo every year since 2013.
Ken has pooled his measurement and evaluation knowledge and experience into a series of presentations designed explicitly for L&D professionals. The presentations are highly engaging, practical, and filled with relevant content most L&D professionals haven’t heard before. In short, they are not a rehash of traditional measurement and evaluation theory but fresh ideas and solutions.
Email Ken with any questions at ken@phillipsassociates.com
This event is sponsored by HRDQ. For 45 years, HRDQ has provided research-based, off-the-shelf soft-skills training resources for classroom, virtual, and online training. From assessments and workshops to experiential, hands-on games, HRDQ helps organizations improve performance, increase job satisfaction, and more.
Learn more at HRDQstore.com
Coming soon.
Discover more HRDQ-U performance management webinars
Access more valuable HRDQ-U content by Ken
Explore HRDQ Store’s performance management training resources
Level 3 Evaluations Made Simple
Level 2 Knowledge Tests: Can you spot a valid test question?
Creating Level 2 Quizzes and Tests That Actually Measure Something
Sign up for more as a member of HRDQ-U
HRDQ-U offers much of its learning content free to visitors, including live and select webinars, blog posts, and more, with new events and posts shared every week. However, there is much more learning available. You can access our complete library of on-demand webinars and other training events and content by simply signing up.
© 2023 HRDQ – All rights reserved.
All content on our site is copyrighted and may not be reproduced or shared in any form without the express written permission of HRDQ-U.
HRDQU.com contains affiliate links. If you purchase via these links, HRDQ may receive a commission at no cost to you.
2 Responses
Question: It looks like we have time for one more question. Becky asked, what if what they learned cannot be applied within the 30 days before responding to the evaluation?
Answer: Yeah, that’s a great question. This is a little bit risky, but you can try it: wait until they’ve had an opportunity to apply what they’ve learned. Otherwise, your training transfer calculation will be affected by the fact that people haven’t yet had a chance to apply it. The magic in the 30 days comes from the Ebbinghaus forgetting curve, but you can wait longer. If you need to wait 45 or 60 days, that’s okay. But again, remember that at the beginning of the focus groups, if you’re using that methodology, you need to review the content of the training and what was covered in order to get credible answers to those three questions.
Question: The first question is coming from Kelsey. Kelsey said, my organization has 600 people, and courses often have only 20 to 30 participants. Is there a percentage we should aim for to scale that data collection?
Answer: Not really. If the people attending the training represent all or most of the different departments, you could get by with collecting data from just one of the cohorts, because a group of 20 is already close to the 25 or 30 you’re aiming for, provided those attendees represent the population of people who would attend that training program. Otherwise, I would wait until you’ve run two or three programs and then randomly select 25 or 30 from that pool of 60 or whatever number you have. You could wait until you’ve got 100 if you want, but you don’t really need to wait that long, as long as the people who’ve attended those programs look like what the larger group would look like.
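As a rough illustration of the pooling approach Ken describes, here is a minimal Python sketch, assuming hypothetical cohort rosters and a target sample of 30, of randomly selecting an evaluation group from the combined pool of past participants:

```python
# Minimal sketch (illustrative only): pool participants from two or three
# program cohorts, then randomly select 25-30 of them for Level 3 data
# collection. The roster names and sample size here are hypothetical.
import random

cohort_1 = [f"participant_{i:03d}" for i in range(1, 26)]    # first session, 25 attendees
cohort_2 = [f"participant_{i:03d}" for i in range(26, 56)]   # second session, 30 attendees
pool = cohort_1 + cohort_2                                    # 55 past participants in total

sample_size = 30                                              # target evaluation group
evaluation_group = random.sample(pool, sample_size)           # simple random selection
print(sorted(evaluation_group))
```

The key point from the answer is not the sampling mechanics but that the combined pool should resemble the larger population of people who would typically attend the program.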