
Level 3 Evaluations Made Simple, Credible, and Actionable


Overview

Knowing whether participants apply what they learned in a training program on the job is a critical issue for both L&D and the executives L&D supports. Demonstrating how that learning is applied speaks directly to L&D’s credibility as a business partner. However, according to an ROI Institute research study, only 11% of CEOs reported receiving this type of information.

So, why don’t more L&D departments conduct Level 3 evaluations? A frequent answer is that it’s too complicated. A second is that many L&D professionals lack the knowledge and skills needed to collect and analyze Level 3 data.

In this informative, highly engaging session, participants will learn how to implement a revolutionary new, easy-to-use Level 3 evaluation methodology that produces credible, actionable data.

Attendees will learn

  • How to use findings from a recent ATD research study to benchmark the use of Level 3 evaluations in an organization.
  • How to conduct focus groups using three key questions to collect the data needed to conduct a Level 3 evaluation.
  • How to calculate best-case, worst-case, and most-likely-case training transfer percentages using the estimation technique.
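The case calculations in the last bullet can be sketched in code. This is a minimal illustration, not the estimation technique taught in the session: it simply assumes each focus-group participant self-reports the percentage of the training they have applied on the job, and it derives the worst, best, and most-likely cases from the minimum, maximum, and mean of those estimates.

```python
from statistics import mean

def transfer_cases(estimates):
    """Summarize focus-group estimates of training transfer.

    `estimates` holds each participant's self-reported percentage
    (0-100) of the training content they have applied on the job.
    """
    return {
        "worst_case": min(estimates),
        "best_case": max(estimates),
        "most_likely_case": round(mean(estimates), 1),
    }

# Example: five focus-group participants' estimates
print(transfer_cases([40, 55, 60, 70, 85]))
# → {'worst_case': 40, 'best_case': 85, 'most_likely_case': 62.0}
```

The actual methodology presented in the session may weight or adjust these estimates differently; the sketch only shows the shape of the arithmetic.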

Special offers from our sponsor

The RTL Collection is a set of downloadable and customizable courses for virtual and classroom instructor-led training, plus self-study learning. Each title includes digital document files for facilitator and participant materials, provided in unlocked Microsoft Office format, so you can download, customize, and deliver training today.

Get 10% off the entire RTL Collection!
Use Code KENP10

Presenter

Ken Phillips is the founder and CEO of Phillips Associates and the creator of the Predictive Learning Analytics™ evaluation methodology. A measurement and evaluation master, he has spoken, to rave reviews, at the ATD International Conference on measuring and evaluating learning every year since 2008. He has also presented on similar topics at the annual Training Conference and Expo every year since 2013.

Ken has pooled his measurement and evaluation knowledge and experience into a series of presentations designed explicitly for L&D professionals. The presentations are highly engaging, practical, and filled with relevant content most L&D professionals haven’t heard before. In short, they are not a rehash of traditional measurement and evaluation theory, but fresh ideas and solutions.

Email Ken with any questions at ken@phillipsassociates.com

Available on Amazon

  • Four Levels of Training Evaluation
  • L&D in Practice

Sponsor

HRDQ
Training Tools for Developing Great People Skills

This event is sponsored by HRDQ. For 45 years, HRDQ has provided research-based, off-the-shelf soft-skills training resources for classroom, virtual, and online training. From assessments and workshops to experiential, hands-on games, HRDQ helps organizations improve performance, increase job satisfaction, and more.

Learn more at HRDQstore.com

Related HRDQstore training resources

More topics from HRDQ-U
  • Career Development
  • Decision Making
  • Diversity & Inclusion
  • Coaching Skills
  • Customer Service
  • Creativity & Innovation

2 Responses

  1. Question: It looks like we have time for one more question. Becky asked: what if what they learned cannot be applied within the 30 days before responding to the evaluation?

    Answer: That’s a great question. I would say wait until they’ve had an opportunity to apply what they’ve learned, because otherwise the training transfer percentage you calculate will be skewed by the fact that people haven’t yet had a chance to apply it. The magic in the 30 days comes from the Ebbinghaus forgetting curve, but you can wait longer; if you need to wait 45 or 60 days, that’s okay. Just remember that at the beginning of the focus groups, if you’re using that methodology, you need to review the content covered in the training in order to get credible answers to those three questions.

  2. Question: The first question is coming from Kelsey. Kelsey asked: my organization has 600 people, and courses will often have only 20 to 30 participants. Is there a percentage we should aim for when scaling that data collection?

    Answer: Not exactly. If the people attending the training represent all, or most, of the different departments, you could get by with collecting data from just one cohort, because 20 is close to the 25 or 30 you’d want, provided they represent the population of people who would attend that training program. Otherwise, wait until you’ve run two or three programs, then randomly select 25 or 30 from that pool of 60 or however many you have. You could even wait until you’ve got 100 if you want, but you don’t need to wait that long, as long as the people who’ve attended those programs resemble what the larger group would look like.
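The pooling-and-sampling approach described in this answer can be sketched in a few lines. The cohort sizes and participant labels below are made up for illustration; the only point is pooling several course runs and then drawing a random sample of 25 for Level 3 data collection.

```python
import random

# Hypothetical attendee lists from three runs of the same course
cohorts = [
    [f"run1_p{i}" for i in range(22)],
    [f"run2_p{i}" for i in range(28)],
    [f"run3_p{i}" for i in range(25)],
]

# Pool the cohorts, then randomly select 25 participants
# (random.sample draws without replacement)
pool = [p for cohort in cohorts for p in cohort]
sample = random.sample(pool, 25)

print(len(pool), len(sample))  # → 75 25
```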
