
Level 1 - The Hidden Treasures You Probably Still Haven't Discovered




The Learning community has always been, and always will be, dedicated to substantiating the effectiveness of interventions like training and development programs. This pursuit goes beyond merely justifying our efforts: it is essential for demonstrating a substantial return on investment and earning a prominent role in the strategic decisions of senior leadership. Nevertheless, the complexity of organizational structures, the diversity of topics addressed, and the difficulty of accounting for various environmental factors often make it hard to forge a definitive link between educational initiatives and measurable outcomes. This remains a persistent challenge for professionals in our field.


Spoiler alert: we haven't fully cracked the code yet, either. But what if your Level 1 evaluations (Kirkpatrick model) could offer more insight than you currently realize? What if this additional information could fundamentally transform the way you develop, organize, and deploy your programs? We believe this kind of data is not merely nice-to-have knowledge, but a treasure. In this article, we examine two case studies that underscore the importance of digging a little deeper into Level 1s to uncover critical, often overlooked information that can reshape your approach to the learning intervention and beyond.

Case Study 1: Company A 

In a typical evaluation report, you will most likely find overall satisfaction averages and/or a Net Promoter Score (NPS). But this only skims the surface. For Company A, we delivered a three-day Executive Development program. Our evaluation form was our magnifying glass, seeking answers to three key questions: how participants rated their overall experience (measured as NPS), how they experienced the different parts of the program (the modules), and how these modules compared to one another. (Image 1 - NPS and bar chart)
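For readers who want to reproduce the basic arithmetic, here is a minimal sketch of how an NPS can be computed from 0-10 likelihood-to-recommend responses; the scores below are hypothetical placeholders, not Company A's data.

```python
# Minimal sketch: computing a Net Promoter Score (NPS) from 0-10
# likelihood-to-recommend responses. The responses are hypothetical
# placeholders, not Company A's data.

def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100 to 100 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

responses = [10, 9, 8, 7, 9, 10, 6, 5, 9, 10]  # illustrative participant answers
print(f"NPS: {net_promoter_score(responses):.0f}")  # -> NPS: 40
```

The same calculation is just a couple of COUNTIF formulas in Excel or Google Sheets if you prefer to stay in a spreadsheet.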


Now, if the goal of Level 1 is to identify the most valuable parts of a program, you might think about discarding the less popular modules. But wait – what if those average scores are just the tip of the iceberg? Often, we stop there, but there's a goldmine of critical hidden information waiting to be discovered. Let's "double click" on this and examine the data using Standard Deviations and Correlations, both of which are easily applied in Excel or Google Sheets (if you're new to them, don't fret – there are plenty of YouTube tutorials that can walk you through the process step by step. Rest assured, it's easier than it sounds!).


Standard Deviation - The standard deviation tells L&D practitioners about the range of participant experiences in a program or a module. For this client, we noted the three modules with the highest standard deviations (and highlighted the most significant one). Interestingly, module 4, which received the lowest ratings, was also the one with the highest standard deviation. With an average rating of 3.13 and such a wide spread, some participants rated the module as high as 5 while others rated it as low as 1. In other words, the average tells us very little; moreover, it masks significant disparities in participant opinions. Through focus groups and interviews, together with the client, we learned that the module's content was too technically advanced for a significant portion of the participants. To address this, the client decided to offer a short pre-program booster to those who identified a need to strengthen their understanding of the topic before attending. This approach effectively narrowed the gap between the two groups. (Image 1 - standard deviations column)
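As an illustration of why the spread matters as much as the average, here is a minimal sketch that computes per-module means and standard deviations; the module names and 1-5 ratings are invented for the example and are not the client's data.

```python
# Minimal sketch: per-module mean and standard deviation from 1-5 ratings.
# Module names and scores are invented for illustration only.
from statistics import mean, stdev

module_ratings = {
    "Module 3": [4, 4, 5, 4, 4, 5, 4],   # consistently positive experience
    "Module 4": [5, 1, 5, 2, 4, 1, 5],   # polarized: some 5s, some 1s
}

for module, scores in module_ratings.items():
    print(f"{module}: mean = {mean(scores):.2f}, sd = {stdev(scores):.2f}")
```

Two modules can sit close together on the bar chart while telling very different stories once you look at the spread.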


Correlations - We also highlighted the three modules that most closely correlated with the program's NPS. Why, you ask? Well, if producing highly rated programs is one of our goals, it's important to identify what might best predict those ratings. We're not claiming a causal relationship (it's possible that they both measure the same thing or are influenced by an unmeasured third variable), but we also can't dismiss the possibility. At Hu-X, we treat these modules as potential defining moments with a disproportionate influence on the overall experience. Think of them as levers: improving them could, in theory, lift the overall NPS. Accordingly, in partnership with the client, we enhanced module 10, which led to phenomenal NPS results. (Image 1 - correlations column)
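If you prefer to script this rather than use a spreadsheet, the sketch below shows one way to check which module ratings move most closely with participants' likelihood-to-recommend scores (the raw input to NPS). All numbers are hypothetical, and the per-participant alignment of the lists is an assumption about how the data is organized.

```python
# Minimal sketch: which module ratings track most closely with each participant's
# 0-10 likelihood-to-recommend score (the raw input to NPS)? All numbers are
# hypothetical; the lists are assumed to be aligned per participant.
from statistics import correlation  # Pearson's r, available in Python 3.10+

recommend = [9, 10, 6, 8, 10, 5, 9, 7]          # likelihood-to-recommend answers
modules = {
    "Module 2":  [4, 5, 3, 4, 5, 3, 4, 4],      # 1-5 module ratings per participant
    "Module 10": [5, 5, 2, 4, 5, 1, 5, 3],
}

for name, ratings in modules.items():
    print(f"{name}: r = {correlation(ratings, recommend):.2f}")
```

A module with a high r is a candidate lever; as noted above, correlation alone doesn't prove the module drives the score.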


(Image 1)



Case Study 2: Company B 

Company B's Leadership Development program, which had been garnering just "OK" scores for the past five years, presented us with a unique challenge. We partnered with them to spearhead a comprehensive redesign, development, and delivery of the program. This involved extensive stakeholder interviews, focus groups, and surveys of both past and prospective participants, culminating in a design we felt was exceptionally strong. The inaugural delivery of the revamped program in June 2023 yielded a significant uptick in the NPS, a moment of triumph and celebration for us. However, in a surprising twist, the program delivered in September 2023, which we perceived to be even stronger, saw a decline in scores compared to the post-redesign high.





This prompted us to question whether the program design alone influenced these scores. We conducted a thorough analysis of every conceivable variable, from the session's location to the seniority of guest speakers and the timing of the sessions (e.g., end of quarter, the announcement of a reorg, or stock market performance). Interestingly, the most significant correlation we found was a negative one with the proportion of relatively new participants (those with up to two years in the organization) in the cohort. Initially, we speculated that newer organization members were rating the program lower, but deeper investigation revealed a different narrative: the more "new" participants a cohort contained, the lower the NPS was among those who had been with the organization for over a decade.
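The same kind of check can be scripted at the cohort level. The sketch below pairs each delivered cohort's share of newer participants with the NPS given by its long-tenured participants; all figures are invented to illustrate the negative relationship described above, not Company B's actual results.

```python
# Minimal sketch: does the share of newer participants in a cohort move with the
# NPS given by long-tenured participants? All figures are invented placeholders,
# not Company B's data.
from statistics import correlation  # Pearson's r, available in Python 3.10+

# One entry per delivered cohort.
share_new_participants = [0.10, 0.25, 0.40, 0.15, 0.35, 0.05]  # <= 2 years tenure
nps_long_tenured       = [72,   55,   38,   68,   45,   80]    # NPS from 10+ year participants

r = correlation(share_new_participants, nps_long_tenured)
print(f"Correlation: r = {r:.2f}")  # a strongly negative r mirrors the pattern above
```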


(Image 3)


This revelation not only saved the Leadership Development team—and us—a significant amount of time that might have been spent revisiting the needs analysis and engaging in further redesign, but it also illuminated the profound diversity of perspectives within the organization, with implications reaching well beyond the scope of the Leadership Development program.


On the one hand, the newer employees brought enthusiasm and were keen to introduce bold, transformative ideas. Their fresh perspectives and eagerness to instigate change represented a vital energy for organizational growth. On the other hand, the more seasoned employees, armed with over a decade of experience, offered invaluable institutional knowledge and a nuanced understanding of the organization's dynamics. Their caution and hesitancy towards rapid change stemmed from a deep awareness of the organization's history and the complexities of implementing substantial shifts.

With this insight in hand, we conducted an in-depth review of the program's design, content, and activities, aiming to ensure that an individual's tenure in the organization wouldn't affect the session's NPS. This approach paid off, with our NPS soaring to a high of 99.0 in the subsequent session. However, the dichotomy revealed at Level 1 of the program pointed to a more extensive cultural phenomenon. This realization underscored the need for a more focused effort to establish an environment where diverse perspectives could seamlessly contribute to enhancing both the program experience and the overall evolution of the organization.


In conclusion, the true value of Level 1 evaluations lies not just in the numbers they present, but in the insights those numbers conceal. By diving deeper and examining the data with a discerning eye, we can uncover information that not only informs our program designs but may also carry broader implications, and sometimes even treasures.




