Turning Negative Results into Positive Change


Chief learning officers often must evaluate their key learning programs, collecting several types of data – reaction, learning, application, impact, intangibles, and maybe even return on investment (ROI). What if the evaluation produces disappointing results? Suppose the application and impact were less than desired, and the ROI calculation is negative. This prospect causes some learning executives to steer clear of this level of accountability altogether.

For some CLOs, negative results are the ultimate fear. Immediately, they begin to think, “Will this reflect unfavorably on me? On the program? On the function? Will budgets disappear? Will support diminish?” These are all appropriate questions, but most of these fears are unfounded. In fact, negative results reveal the potential to improve programs. Here are 11 ways to address negative results and use them to facilitate positive transformations.


Recognize the Power of a Negative Study

When the study results are negative, there is always an abundance of data indicating what went wrong. Was it an adverse reaction? Was there a lack of learning? Was there a failure to implement or apply what was learned? Did major barriers prevent success? Or was there a misalignment in the beginning? These are legitimate questions about lack of success, and the answers are always obtained in a comprehensive evaluation study.

Look for Red Flags

Indications of problems often appear in the first stages of initiation – after reaction and learning data have been collected. Many signals can provide insight into the program’s success or lack of success, such as participants perceiving that the program is not relevant to their job. Perhaps they would not recommend it to others or do not intend to use it on the job. These responses can indicate a lack of utilization, which usually translates into negative results. Connecting this information requires analyzing data beyond overall satisfaction with the program, the instructor, and the learning environment. While important, these types of ratings may not reveal the value of the content and its potential use. Also, if an evaluation study is conducted on a program as it is being implemented, low ratings for reaction and learning may signal the need for adjustments before any additional evaluation is conducted.

Lower Outcome Expectations

When there is a signal that the study may be negative, or it appears that there could be a danger of less-than-desired success, the expectations of the outcome should be lowered. The “under-promise and over-deliver” approach is best applied here. Containing your enthusiasm for the results early in the process is important. This is not to suggest that a gloom-and-doom approach throughout the study is appropriate, but that expectations should be managed and kept on the low side.

Look for Data Everywhere

Evaluators are challenged to uncover all the data connected to the program – both positive and negative. To that end, it is critical to look everywhere for data that shows value (or the lack of it). This thorough approach will ensure that nothing is left undiscovered – the fear harbored by many individuals when facing negative results.

Never Alter the Standards

When the results are less than desired, it is tempting to lower the standards – to change the assumptions about collecting, processing, analyzing, and reporting the data. This is not a time to change the standards. Changing the standards to make the data more positive renders the study virtually worthless. Without standards, there is no credibility.

Remain Objective Throughout

Ideally, the evaluator should be completely objective or independent of the program. This objectivity provides an arm's-length evaluation of its success. It is important not only to enter the project from an objective standpoint but also to remain objective throughout the process. Never become an advocate for or against it. This helps alleviate the concern that the results may be biased.

Prepare the Team for the Bad News

As red flags arise and expectations are lowered, it may become apparent that a less-than-desired outcome will be realized. It is best to prepare the team for this bad news early in the process. Part of the preparation is to make sure that they don't reveal or discuss the outcome of the program with others. Even when early results are positive, it is best to keep the data confidential until all the data are collected. Also, when it appears that the results are going to be negative, an early meeting will help develop a strategy to deal with the outcome. This preparation may address how the data will be communicated, the actions needed to improve the program, and, of course, explanations as to what caused the lack of success.

Consider Different Scenarios

Standards connected with the ROI methodology are conservative for a reason: The conservative approach adds credibility. Consequently, there is buy-in for the data and the results. However, sometimes it may be helpful to examine what the result might be if the conservative standards were not used. Other scenarios may actually show positive results. In this case, the standards have not changed, but the presentation shows how different the data would be if other assumptions were made. This approach allows the audience to see how conservative the standards are.

For example, on the cost side, including all costs sometimes drives the project to a negative ROI. If other assumptions could be made about the costs, the value could be changed, and a different ROI calculation might be made. On the benefit side, a lack of data from a particular group sometimes drives a study into negative territory because of the "no data, no improvement" standard. However, another assumption could be made about the missing data to calculate an alternative ROI. It is important for these other scenarios to be offered to educate the audience about the value of what is obtained and to underscore the conservative approach. It should be clear that the standards are not changed and that the comparisons with other studies would be based on the standards in the original calculation.
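The arithmetic behind these scenarios can be made concrete. The sketch below uses the standard ROI formula – ROI (%) = (net program benefits ÷ fully loaded costs) × 100 – and entirely hypothetical numbers to show how the "no data, no improvement" standard can push a study negative, while an alternative assumption about missing data (presented alongside, not in place of, the conservative result) turns it positive.

```python
def roi_percent(benefits: float, costs: float) -> float:
    """ROI (%) = (net program benefits / fully loaded costs) x 100."""
    return (benefits - costs) / costs * 100

# Hypothetical monetary benefits reported by five participants.
# 0.0 means the participant supplied no data; under the conservative
# "no data, no improvement" standard, it is counted as zero benefit.
responses = [1200.0, 0.0, 950.0, 0.0, 800.0]
costs = 4000.0  # fully loaded program costs (hypothetical)

conservative = roi_percent(sum(responses), costs)

# Alternative scenario, for presentation only (standards unchanged):
# assume non-respondents improved at the average of respondents.
reported = [r for r in responses if r > 0]
average = sum(reported) / len(reported)
adjusted = [r if r > 0 else average for r in responses]
alternative = roi_percent(sum(adjusted), costs)

print(f"Conservative ROI: {conservative:.0f}%")  # negative
print(f"Alternative ROI:  {alternative:.0f}%")   # positive
```

As the article notes, only the conservative figure would be compared with other studies; the alternative calculation simply illustrates how conservative the standards are.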

Find Out What Went Wrong

With disappointing results, the first question usually asked is, “What went wrong?” It is important to uncover the reasons for the lack of success. As the process unfolds, there is often an abundance of data to indicate what went wrong. The follow-up evaluation will contain specific questions about impediments and inhibitors. In addition, asking for suggestions for improvements often underscores how things could be changed to make a difference. Even when collecting enablers and enhancers, there may be clues as to what could be changed to make it much better. In most situations, there is little doubt as to what went wrong and what can be changed. In worst-case scenarios, if the program cannot be modified or enhanced to add value, it may mean that it should be discontinued.

Adjust the Storyline

When communicating data, negative results indicate that the storyline needs to change. Instead of saying, “Let’s celebrate – we’ve got great results for this program,” the story reads, “Now we have data that show how to make this program more successful.” The audience must understand that the lack of success may have existed previously, but no data were available to know what needed to be changed. Now, the data exist. In an odd sort of way, this becomes a positive spin on less-than-positive data.

Drive Improvement

Evaluation data are virtually useless unless used to improve processes. In a negative study, there are usually many items that could be changed to make it more successful. It is important that a commitment is secured to make needed adjustments so that the program will be successful in the future. Until those actions are approved and implemented, the work is not complete. In worst-case scenarios, if the program cannot be changed to add value, it should be terminated, and the important lessons should be communicated to others. This last step underscores that the comprehensive evaluation is used for process improvement and not for performance evaluation of the staff.

Negative study results do not have to be bad news. Negative results contain data that can be used not only to explain what happened but also to adapt and improve in the future. It is important to consider the potential of a negative study and adjust expectations and strategies throughout the process to keep the negative results from being a surprise. The worst-case situation is for negative data to surprise the key sponsor at the time of presentation.

Author
Patti P. Phillips, Ph.D.

Patti Phillips, Ph.D., CEO of ROI Institute, Inc., is a renowned leader in measurement and evaluation. She helps organizations implement the ROI Methodology® in more than 70 countries around the world.

Since 1997, Patti has been a driving force in the global adoption of the ROI Methodology and the use of measurement and evaluation to drive organizational change. Her work as an educator, researcher, consultant, and coach supports practitioners as they develop their own expertise in an effort to help organizations and communities thrive. Her work spans private sector, public sector, nonprofit, and nongovernmental organizations.

Patti serves as a member of the Board of Trustees of the United Nations Institute for Training and Research (UNITAR). She serves as chair of the Institute for Corporate Productivity (i4cp) People Analytics Board; Principal Research Fellow for The Conference Board; board chair of the Center for Talent Reporting (CTR); and is an Association for Talent Development (ATD) Certification Institute Fellow. She also serves on the faculty of the UN System Staff College in Turin, Italy.

Patti has authored or edited more than 75 books on the subject of measurement, evaluation, analytics, and ROI. Her work has been featured on CNBC, Euronews, and in more than a dozen business journals.

Connect with Patti on LinkedIn and at roiinstitute.net.

Recommended Webinar
What Caused It? Connecting Programs to Results

Senior executives seek to understand the causes behind significant changes in business metrics. Learn how to isolate the program results.
