Level 2 Knowledge Tests: Can You Spot a Valid Test Question?


Creating effective multiple-choice test questions may seem simple, considering how many of them we encounter in our lives. In reality, it’s a skill that involves more than meets the eye and eludes many learning and development professionals. Crafting Level 2 knowledge tests demands both art and science, but it’s a skill that can be acquired and refined. In this blog post, we’ll delve into the art of designing valid, scientifically sound Level 2 multiple-choice test questions, addressing common pitfalls that L&D professionals stumble upon: questions that inadvertently provide clues to the correct answers, questions that are overly difficult or tricky, and questions that merely test the recall of facts. By understanding these challenges and their solutions, you’ll be better equipped to construct tests that accurately gauge knowledge and contribute to effective learning assessments.


Don’t miss this intriguing webinar from HRDQ-U

Creating Level 2 Quizzes and Tests That Actually Measure Something

The Complexity of Knowledge Test Creation

We’ve all taken lots and lots of multiple-choice tests, right? And that makes us pretty good writers of test questions, right? Nope. There’s more to creating valid, scientifically sound Level 2 knowledge tests than meets the eye. Writing test questions requires both art and science, a combination many L&D professionals haven’t yet developed. Fortunately, it is a learnable skill.

Common Mistakes in Test Creation

Here are three common mistakes L&D professionals make:

  1. Creating test questions that contain clues as to the correct answer.
  2. Creating questions that are overly difficult or tricky.
  3. Creating questions that measure the mere recall of facts and information.


In each case, the result is a test that doesn’t measure what was intended. Worse, it can produce misleading results: it may appear that learning occurred when it didn’t, or that learning didn’t occur when it did.

Test Yourself

The following three multiple-choice test questions each contain a commonly made test-creation error. See if you can identify what’s wrong with each one. Answers are provided afterward.

Question #1

Effective listening is defined as:
A. Attempting to understand the person speaking from their perspective, not your own.
B. Encouraging the other person to talk.
C. Consolidating the conversation.
D. Creating a win/win conversation.

Question #2

The ADDIE model is used primarily as an:
A. Instructional design tool
B. Measurement and evaluation tool
C. Change management tool
D. Process improvement tool

Question #3

Which communication technique consolidates a discussion and moves the focus to another topic?
A. Arguing
B. Interrupting
C. Summarizing
D. Paraphrasing

Answers to the Three Questions

Do you know where the test writer went wrong in each example? Here is an explanation of the problem with each test question above and a tip on avoiding the error.

Answer to Question #1

Savvy test takers know that if one of the response options contains many more words than the other choices or sounds like a definition, it’s likely the correct answer.

Solution: Make sure all the response alternatives are roughly the same length and written in a similar style.
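This length check can even be automated. Below is a minimal sketch (not from the original post); the 1.5×-median threshold is an arbitrary assumption, not a standard:

```python
def flag_long_options(options, ratio=1.5):
    """Flag answer options noticeably longer than the median option.

    `ratio` is an arbitrary threshold: an option whose word count exceeds
    ratio x the median word count is flagged as a possible giveaway.
    """
    counts = sorted(len(opt.split()) for opt in options)
    median = counts[len(counts) // 2]
    return [opt for opt in options if len(opt.split()) > ratio * median]

# Question #1's options: the definition-style answer stands out.
options = [
    "Attempting to understand the person speaking from their perspective, not your own.",
    "Encouraging the other person to talk.",
    "Consolidating the conversation.",
    "Creating a win/win conversation.",
]
print(flag_long_options(options))  # flags only the long first option
```

Running this on Question #1 flags only option A, the twelve-word definition, exactly the clue a savvy test taker would spot.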

Answer to Question #2

Savvy test takers know that the correct answer to this question must be “A” because it’s the only response alternative that begins with a vowel and therefore agrees grammatically with the “an” at the end of the stem (the test item).

Solution: If the correct answer begins with a vowel, end the stem with “a(n).” Placing the “n” in parentheses allows any of the response choices to be grammatically correct.
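This grammatical giveaway can likewise be checked mechanically. A minimal sketch (my own illustration; the first-letter vowel heuristic is an assumption and ignores exceptions such as “hour” or “user”):

```python
def article_clue(stem, options):
    """Return the options consistent with a stem ending in 'a' or 'an'.

    If the stem ends in a bare article, only options whose first letter
    agrees with it read grammatically, which leaks the answer. A stem
    ending in 'a(n)' is compatible with every option.
    """
    last = stem.rstrip(":").split()[-1].lower()
    if last == "a(n)":
        return options  # no clue: every option fits
    if last == "an":
        return [o for o in options if o[0].lower() in "aeiou"]
    if last == "a":
        return [o for o in options if o[0].lower() not in "aeiou"]
    return options  # stem doesn't end in an article

stem = "The ADDIE model is used primarily as an"
options = [
    "Instructional design tool",
    "Measurement and evaluation tool",
    "Change management tool",
    "Process improvement tool",
]
print(article_clue(stem, options))  # only "Instructional design tool" fits
```

On Question #2, only one option survives the grammar check, confirming the leak; rewriting the stem to end in “a(n)” makes all four options survive.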

Answer to Question #3

Savvy test takers know that response choices A and B are incorrect because it’s common knowledge that arguing and interrupting are not effective communication techniques. Consequently, the probability of guessing the correct answer, even without having mastered the program material, increases from 25% to 50%, thus reducing question validity.

Solution: Develop only plausible response alternatives.
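The guessing arithmetic above is simply one over the number of options a test taker can’t eliminate, as this one-function sketch shows:

```python
def guess_probability(num_options, implausible=0):
    """Chance of guessing correctly after eliminating implausible options."""
    plausible = num_options - implausible
    return 1 / plausible

print(guess_probability(4))                 # 0.25 -- all four options plausible
print(guess_probability(4, implausible=2))  # 0.5  -- two giveaways eliminated
```

With two implausible distractors, a blind guess succeeds half the time, which is why every distractor must be believable.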

The above three test creation errors are just a sample of the mistakes that are possible. Knowing the complete list, and the solution for each, will help you write multiple-choice test questions that measure what you intend and produce valid, scientifically sound results.

Elevating Test Question Quality

Test creation is an intricate process that demands meticulous attention to detail. Common pitfalls, such as inadvertently tipping off the correct answer or creating unnecessarily tricky questions, can hinder the effectiveness of your assessments. By recognizing these issues and implementing practical solutions, you can elevate the quality of your Level 2 multiple-choice test questions. Crafting tests that genuinely measure what’s intended enhances the accuracy of your assessments and fosters meaningful learning outcomes. Avoiding these pitfalls and becoming proficient in test creation is a valuable skill that empowers L&D professionals to effectively contribute to learners’ growth and development. So, the next time you’re tasked with crafting a knowledge test, remember these insights to ensure your questions are valid and scientifically sound.

Ken Phillips

Ken Phillips is the founder and CEO of Phillips Associates and the creator of the Predictive Learning Analytics™️ evaluation methodology. He is a measurement and evaluation master, having spoken to rave reviews at the ATD International Conference on measuring and evaluating learning every year since 2008. He has also presented on similar topics at the annual Training Conference and Expo every year since 2013.

Ken has pooled his measurement and evaluation knowledge and experience into a series of presentations designed explicitly for L&D professionals. The presentations are highly engaging, practical, and filled with relevant content most L&D professionals haven’t heard before. In short, they are not a rehash of traditional measurement and evaluation theory but fresh ideas and solutions.

Connect with Ken on LinkedIn and by email at ken@phillipsassociates.com


Recommended Webinar
Creating Level 2 Quizzes and Tests That Actually Measure Something

Uncover the impact of your training programs with Level 2 evaluation. Learn how well-designed test questions can reveal participants’ mastery of learning.
