The Perfect Training Program That Fit the Role but Missed the Moment
When Adam, who led leadership development for a large organization, first rolled out the new training, the most consistent feedback he heard was that it felt relevant. Content had been tailored to different levels of leadership. Scenarios reflected real business challenges. Managers said it fit their roles, and in some cases, surprisingly so. It wasn’t generic. It wasn’t theoretical. They could imagine using it.
So when the feedback started to shift and leaders said they weren’t seeing changes in how managers were handling performance conversations or making decisions, Adam wondered what had been missed. Maybe it needed to be more targeted. More specific. More personalized. But the more he looked, the harder it was to point to a gap in what people were seeing.
Over time, it became clear the issue wasn’t that the wrong content had been delivered. It was that the right content wasn’t showing up when it needed to. And that distinction is easy to miss.
Personalization answers a very specific question: what should this person see? And increasingly, organizations are getting good at that. With better data, better systems, and more sophisticated tools, it’s possible to match content to role, level, function, even individual behavior patterns. It creates a sense of precision and makes the experience feel relevant.
But relevance is only part of the equation.
Because performance isn’t shaped by what someone sees in a learning experience. It’s shaped by what they face in the moment they have to act. And those two things don’t always line up.
Adam started to see this more clearly when he replayed the feedback he’d been getting. The managers who struggled weren’t confused about the model. They weren’t lacking clarity on what good looked like. They were facing situations where applying it felt difficult, risky, or misaligned with what the environment seemed to reward. Like when a manager sits across from a high performer who has become increasingly difficult to manage. Or when a leader tries to push back on a decision that has already been informally made. Or when a team has to navigate shifting priorities where something has to give, and no option feels good. These weren’t edge cases. They were the work.
And they were the moments the program hadn’t been explicitly designed for.
That’s where personalization runs into its limits. It can make content easier to access and easier to connect to, but it doesn’t account for the conditions under which that content has to be used. It doesn’t resolve competing priorities or reduce the perceived risk of acting differently. It doesn’t change what happens when time is short and the stakes are real.
Those are context problems.
And context is harder to design for.
It requires getting closer to the work and understanding not just what people need to know, but also where they are most likely to hesitate, default, or avoid. It means asking different questions earlier in the process, before the content is built and before success is assumed. Where will this actually show up in the next 30 to 60 days? What will make that moment difficult? What happens if it goes poorly?
Adam hadn’t asked those questions explicitly. Not because he didn’t care about application, but because the design had focused on delivering something that felt relevant and well-constructed. The assumption was that once people understood the model, they would find their way to using it.
Some did.
Many didn’t.
At Unboxed, this is where the design process shifts. Before personalizing the learning experience, we run what we call a Context Check. First, identify the moment: where will this skill actually be tested in the next 30 to 60 days? Then define the conditions around that moment. What pressures will be present? Time, incentives, risk, power dynamics, competing priorities. Finally, stress test the content against those conditions: would the model still help someone act differently in that moment, or would it remain something they understood but couldn’t use?
If the answer is unclear, personalization won’t fix it. It may help the learning feel more relevant, but it won’t make it more usable. That’s the difference Adam missed, and he’s hardly alone. Personalization can improve what people see. Context determines whether anything actually changes.