This post was originally written for Team Copilot and is republished here on my personal website.
Most AI-adoption programs are focused on the wrong thing. We keep trying to drive adoption through users, while the real bottleneck sits elsewhere.
There is a chart in the new Work Trend Index that I think every adoption lead, change manager, and people leader should see. It sorts the factors that drive AI’s real impact at work into three buckets: organizational, individual, and demographic. The first bucket is more than twice the size of the others combined, and that single finding reframes a lot of what we have been calling “adoption”.
Something I see in almost every adoption program is what the report calls the Transformation Paradox: the gap between organizations and individuals, where workers are ready and their organizations are not. About one in five AI users sit in what Microsoft labels the “Frontier” zone, where individual capability and organizational readiness reinforce each other. About one in ten are blocked: skilled people in companies that haven’t caught up. Half are still in what the report calls the emergent or messy middle, where both sides are finding their feet. It’s the familiar moment in a kickoff where you realize the organization, not the user, will be the slower variable.
What I find genuinely useful about the report is how concrete it gets about what “organization” actually means in practice. It is not just a vibe. It is whether a manager openly uses AI in front of their team, whether they set quality standards for AI work, create real room to experiment, and ask people to redesign work rather than just speed it up. The numbers behind those behaviors are striking. When managers actively model AI use, employees report a 17-point lift in perceived AI value and a 22-point lift in critical thinking about their own AI use. When managers create psychological safety around experimentation, employees are 1.4 times more likely to be high-frequency users of agentic AI. The manager-as-multiplier story is not new, but the size of the effect surprised me.
There is one group in the data that stood out to me: the Frontier Professionals, roughly 16% of AI users, who use agents for multi-step workflows and routinely rethink how work gets done. What makes this group worth watching is not the agents themselves. It is how deliberately they protect their own judgment. They are more likely than other AI users to say they intentionally do some work without AI to keep their skills sharp (43% vs. 30%), and to pause before starting work to decide what should be done by AI versus a human (53% vs. 33%). 86% of AI users in the wider survey say they treat AI output as a starting point and “stay responsible for the thinking.” The Frontier group simply does this more consciously.
That is the part I keep coming back to in adoption conversations. The most advanced users are not the ones who delegate the most. They are the ones who know which mode the work is in. Sometimes the answer is “let the agent run with this,” and sometimes it is “this is a thinking job, I’ll do it myself, with Copilot as a sounding board.” The skill is the choice, not the speed: knowing when to rely on AI, and when not to, is what sets advanced users apart.
For change managers and adoption leads, two things in this report are worth pointing out.
The first is that “drive adoption” is the wrong frame. The report is essentially saying that you cannot push people through a system that is built to reward the old way of working. Only 13% of AI users say they are rewarded for reinventing work with AI when results are not yet there. As long as that number stays low, the messy middle will keep absorbing every ambitious pilot. Adoption work, in this view, is less about teaching prompts and more about helping leaders rebuild the operating model around the prompts: the incentives, the performance conversations, the definitions of “good work,” and what actually gets measured.
The second is that the human story is not getting smaller as agents take on more, it is getting more important. The report frames it as “the new agency equation”: as agents handle more of the execution, humans gain more room to direct the work, make the calls, and own the outcomes. That only works if people are equipped, trusted, and rewarded for using that room well. Otherwise the agency just sits there, unclaimed.
In every Copilot adoption program I have been part of this year, the hardest part has not been the technology. It has been getting the system around the user (their manager, their KPIs, their team norms, the meeting culture) to make space for a different way of working. The 2026 WTI gives us both the language and the data to talk about that more clearly. It is no longer a soft argument that culture matters. It is a measured one.
If you’re leading a Copilot rollout, read this report before your next steering committee. If you read one section, read the one on the Transformation Paradox. If you read two, add the section on what Frontier Professionals do differently. Both will change how you frame the next adoption conversation with leadership. Because at this point, the question is no longer whether users will adopt AI. It’s whether the organization will catch up.



