Level 3 Evaluations Made Simple, Credible, and Actionable

60 minutes

Knowing whether participants apply on the job what they learned in a training program is a critical issue for both L&D and the executives L&D supports. Demonstrating how that learning is applied speaks directly to L&D’s credibility as a business partner. Yet according to an ROI Institute research study, only 11% of CEOs reported receiving this type of information. So why don’t more L&D departments conduct Level 3 evaluations? A frequent answer is that it’s too complicated. A second is that many L&D professionals lack the knowledge and skills needed to collect and analyze Level 3 data. In this informative, highly engaging session, participants will learn how to implement a revolutionary, easy-to-use Level 3 evaluation methodology that produces credible, actionable data.

Attendees will learn

  • How to use findings from a recent ATD research study to benchmark the use of Level 3 evaluations in an organization.
  • How to conduct focus groups using three key questions to collect the data needed for a Level 3 evaluation.
  • How to calculate best-case, worst-case, and most-likely-case training transfer percentages using the estimation technique (a rough sketch of the arithmetic appears after this list).
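
For readers who want a concrete feel for that last point, here is a minimal Python sketch of how best-, worst-, and most-likely-case transfer percentages might be derived from focus-group estimates. The inputs and the confidence adjustment shown are illustrative assumptions, not the specific formula taught in the session.

```python
# Illustrative sketch only: derives best-, worst-, and most-likely-case
# training transfer percentages from focus-group estimates. The inputs
# and the adjustment below are assumptions, not the session's formula.

def transfer_estimates(responses):
    """responses: list of (applied_pct, confidence_pct) tuples, e.g.
    (60, 80) = "I apply about 60% of what I learned; I'm 80% confident"."""
    raw = [applied for applied, _ in responses]
    # Discount each estimate by the participant's confidence in it
    adjusted = [applied * confidence / 100 for applied, confidence in responses]

    best_case = sum(raw) / len(raw)             # estimates taken at face value
    worst_case = sum(adjusted) / len(adjusted)  # fully confidence-discounted
    most_likely = (best_case + worst_case) / 2  # simple midpoint compromise
    return worst_case, most_likely, best_case

focus_group = [(60, 80), (75, 90), (40, 50), (85, 70)]
worst, likely, best = transfer_estimates(focus_group)
print(f"worst case {worst:.0f}%, most likely {likely:.0f}%, best case {best:.0f}%")
```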

Who should attend

  • Managers and supervisors
  • L&D professionals
  • Training and HR professionals

Presenter

Ken Phillips - Chief Executive Officer, Phillips Associates

Ken Phillips is the founder and CEO of Phillips Associates and the creator and chief architect of the Predictive Learning Analytics™ (PLA) learning evaluation methodology. He has over 30 years of experience designing learning instruments and assessments, more than a dozen of which have been published.

He regularly speaks to Association for Talent Development (ATD) groups, university classes, and corporate learning and development groups. Since 2008, he has spoken at the ATD International Conference on learning measurement and evaluation topics, and since 2013 he has also presented on similar topics at the annual Training Conference and Expo.

Before pursuing a Ph.D. in the combined fields of organizational behavior and educational administration at Northwestern University, Ken held management positions with two colleges and two national corporations. In addition, he has written articles for TD magazine, Training Industry Magazine, and Training Today magazine. He is also a contributing author to five books in the L&D field and the author of the recently published ATD TD at Work issue Evaluating Learning with Predictive Learning Analytics.

Connect with Ken on LinkedIn and at www.phillipsassociates.com.

Sponsor

Coaching Skills Inventory

The Coaching Skills Inventory is designed to assess a leader's ability to use the skills needed for conducting effective coaching meetings. After all, the purpose of a coaching meeting is not to reprimand an employee or threaten dismissal if their performance does not improve. Rather, the goal of coaching is to help redirect an employee’s behavior to improve future performance while continuing to build a relationship of mutual trust.

Learn more at
https://hrdqstore.com/products/coaching-skills-inventory.

Watch the video

Coming soon.

2 Responses

  1. Question: It looks like we have time for one more question. Becky asked, what if what they learned cannot be applied within the 30 days they have to respond to the evaluation?

    Answer: That’s a great question. I would say wait until they’ve had an opportunity to apply what they’ve learned, because otherwise the training transfer percentage you calculate will be thrown off by people who haven’t yet had a chance to apply it. The magic in the 30 days comes from the Ebbinghaus forgetting curve, but you can wait longer; if you need 45 or 60 days, that’s okay. Just remember that at the beginning of the focus groups, if you’re using that methodology, you need to review the content of the training and what was covered in order to get credible answers to those three questions.

  2. Question: The first question is coming from Kelsey. Kelsey said, my organization has 600 people, and courses often have only 20 to 30 participants. Is there a percentage we should aim for to scale that data collection?

    Answer: There isn’t a specific percentage. If the people attending the training represent all or most of the different departments, you could get by with collecting data from just one cohort, since 20 is close to the 25 or 30 you need and the group reflects the population of people who would attend that program. Otherwise, wait until you’ve run two or three programs and then randomly select 25 or 30 participants from that pool of 60 or however many you have; a rough sketch of that kind of random sampling appears after these responses. You could wait until you’ve got 100 if you want, but you don’t need to wait that long, as long as the people who’ve attended those programs represent what the larger group looks like.
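
As a rough illustration of the sampling approach described in the second answer, the sketch below pools participants from several hypothetical course runs and randomly selects up to 30 of them. The cohort names and sizes are invented for the example.

```python
import random

# Illustrative sketch: pool participants from several course runs, then
# randomly select up to 30 for Level 3 data collection. Cohort names and
# sizes here are hypothetical.

cohorts = {
    "run_1": [f"run1_p{i}" for i in range(22)],
    "run_2": [f"run2_p{i}" for i in range(28)],
    "run_3": [f"run3_p{i}" for i in range(25)],
}

pool = [p for participants in cohorts.values() for p in participants]
sample_size = min(30, len(pool))
selected = random.sample(pool, sample_size)  # simple random sample, no repeats
print(f"Invite {len(selected)} of {len(pool)} participants to focus groups")
```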
