
Your Team Bought AI Licenses Three Months Ago. Why Aren't They Using Them?

April 10, 2026 · 10 min read · Mitchel Lairscey

Three training sessions in, and the adoption dashboard hadn't moved.

I was working with an enterprise engineering organization that had rolled out Claude licenses months earlier. Leadership did everything the playbook says: scheduled workshops, built a prompt template library, pinned a getting-started video in Slack. Sixty days later, usage clustered around the same 15-20% of engineers who would have figured it out on their own.

The director's next move? More training. A structured certification program this time.

That instinct is almost universal. Enterprise Copilot deployments average 34% daily active users at the 90-day mark. The rest of those licensed seats sit idle. And across industries, the default response is always the same: schedule another workshop.

The problem isn't that your team needs more training. It's that nobody redesigned the work.

The Adoption Plateau Is Everywhere

The data on stalled AI adoption is consistent across sources, industries, and tools.

Gartner tracked finance AI adoption from 2023 to 2025. It surged from 37% to 58% in a single year, then flatlined at 59%. Among 183 CFOs and senior finance leaders polled, 91% initially reported only "low or moderate impact" from their AI initiatives. Budgets keep climbing anyway. The pattern: spend more, get the same.

This isn't a finance-specific problem. Across industries, 42% of companies abandoned most of their AI initiatives in 2025, up from 17% the year before. That isn't a gradual cooling. It's a cliff.

By the numbers:
59%: the AI adoption plateau (Gartner, finance orgs, 2025)
42%: companies that abandoned most AI initiatives in 2025, up from 17% in 2024
34%: Copilot daily active users at 90 days, enterprise average

What's striking isn't that adoption stalls. It's the consistency of where it stalls. Organizations clear the first hurdle (procurement, pilot, initial rollout) and then hit a wall. The tool works. People can use it. But team-wide productive use never materializes.

The question is why. And the answer most organizations land on is wrong.

The Training Reflex

When adoption stalls, the first response is almost always training. Run workshops. Build certification tracks. Create prompt libraries. The logic feels airtight: people aren't using the tool because they don't know how to use the tool.

BCG's AI at Work survey (10,635 employees, 11 countries) shows that training does have an effect. Employees who received five or more hours of AI training became regular users at a rate of 79%, compared to 67% for those who received less. That 12-percentage-point gap is meaningful. Training matters.

But here's the number that should follow every training statistic: according to a separate BCG analysis, only 5% of companies achieve AI value at scale. Not 5% of employees. Five percent of entire companies.

Training can move individual usage rates from 67% to 79%. It can't move organizational value capture from single digits to meaningful impact. Those are different problems. The first is a skills gap. The second is a process architecture gap. And 59% of enterprise leaders report their organization still has an AI skills gap even after investing in training programs. The training isn't failing because it's bad training. It's failing because it's solving the wrong problem.

Think of it this way. If your team's quarterly reporting process involves seven manual handoffs, three spreadsheet exports, and a two-day review cycle, teaching everyone advanced spreadsheet features won't fix the process. The process itself is the bottleneck. AI adoption works the same way.

The Mapping Problem Nobody Is Solving

In March 2026, researchers from INSEAD and Harvard Business School published results from a field experiment that cuts directly to the mechanism behind stalled AI adoption.

They worked with 515 high-growth startups. Every firm received identical technical AI training: same tools, same capabilities overview, same access. But a randomly selected treatment group received something extra: workflow mapping case studies showing how AI-native companies had reorganized their production processes around AI.

The results were not subtle.

Firms that received workflow mapping discovered 44% more AI use cases than the control group. They generated 1.9x higher revenue. They required 39% less capital. Same training. Same tools. Different outcomes, because one group learned where and how to integrate AI into their work.

Training only vs. training + workflow mapping (training-only group as baseline):
Use cases discovered: +44%
Revenue generated: 1.9x
Capital required: -39%

The researchers named the core friction: the "mapping problem." Organizations don't struggle with using AI tools. They struggle with discovering where AI creates value within their existing production process. Training teaches the tool. Mapping teaches the integration.

One more finding worth sitting with. The revenue and investment gains were largest at the 90th percentile, not the median. AI doesn't modestly improve average performance. It expands the ceiling for teams that find the right integration points. The teams that mapped their workflows didn't do a little better. They did dramatically better.

This matches what I've seen firsthand. At an enterprise organization where I helped build and train teams on agentic development workflows, the difference between teams that saw order-of-magnitude acceleration and teams that got frustrated came down to whether AI was embedded in the workflow or bolted on top of it. The teams using Claude as a chat window (asking questions, getting suggestions, copy-pasting answers) improved maybe 10-15%. The teams that rebuilt their development workflow around Claude Code saw 16x delivery acceleration measured across PI-level initiatives.

Same tool. Same training. Different process architecture.

What High Performers Do Differently

McKinsey's State of AI survey (2025) identified what separates the roughly 6% of organizations that generate measurable EBIT impact from AI from the other 94%. The answer isn't budget, tool selection, or training hours.

55% of AI high performers fundamentally redesign their workflows when deploying AI. Among all other organizations, that number drops to roughly 20%.

That 35-percentage-point gap is the single best predictor of AI value creation in the survey. Not data quality (though that matters). Not executive sponsorship (though that helps). Whether the organization treated AI deployment as a chance to rebuild processes, or just bolted a new tool onto the existing way of working.

So what does "redesign the workflow" look like in practice? It's not a strategy deck. Four steps.

1. Map: document every step and handoff.
2. Identify: find AI integration points per step.
3. Redesign: rebuild the sequence around AI capabilities.
4. Embed: make AI the default path, then measure.

Map the current process. Pick one team workflow. Document every step, handoff, tool, and decision point. Most teams skip this step because they think they already know how their process works. (They're usually wrong about two or three steps.)

Identify AI integration points. For each step, ask: can AI replace this step entirely, augment the person doing it, or accelerate the throughput? Not every step benefits from AI. Some are better left alone. The INSEAD mapping research shows that finding the right integration points is the highest-leverage activity, and the one most teams skip when they jump straight to training.

Redesign the sequence. This is where the work happens. Don't insert AI into the existing sequence. Rebuild the sequence around what AI makes possible. The "bolt-on" approach adds AI to step 4 of a 7-step process. The redesign approach asks: if AI can handle steps 4, 5, and 6 simultaneously, do we still need the handoffs between them?
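
Here's what the map/identify/redesign audit can look like in practice. This is a minimal sketch in Python; the step names, owners, and classifications are hypothetical placeholders for your own workflow. The shape is the point: every step gets an explicit answer to the replace/augment/accelerate question, and adjacent steps that AI can own outright get merged so the handoffs between them disappear.

```python
from dataclasses import dataclass
from enum import Enum

class AIFit(Enum):
    REPLACE = "replace"        # AI can own this step end to end
    AUGMENT = "augment"        # AI drafts, a person stays in the loop
    ACCELERATE = "accelerate"  # same owner, AI speeds up throughput
    KEEP = "keep"              # better left alone

@dataclass
class Step:
    name: str
    owner: str   # role that owns the step today
    fit: AIFit   # answer to the replace/augment/accelerate question

# 1. Map: write the current process down, step by step.
# 2. Identify: classify each step. (All names here are hypothetical.)
workflow = [
    Step("write code", "engineer", AIFit.AUGMENT),
    Step("draft unit tests", "engineer", AIFit.REPLACE),
    Step("update docs", "engineer", AIFit.REPLACE),
    Step("first-pass review", "senior engineer", AIFit.REPLACE),
    Step("final review + merge", "senior engineer", AIFit.KEEP),
]

# 3. Redesign: adjacent steps AI can own outright no longer need a
# handoff between them, so merge them into a single AI-handled stage.
def collapse(steps: list[Step]) -> list[list[Step]]:
    stages: list[list[Step]] = []
    for step in steps:
        if (stages and step.fit is AIFit.REPLACE
                and stages[-1][-1].fit is AIFit.REPLACE):
            stages[-1].append(step)  # same stage: handoff eliminated
        else:
            stages.append([step])
    return stages

stages = collapse(workflow)
print(f"handoffs before: {len(workflow) - 1}, after: {len(stages) - 1}")
for stage in stages:
    print("  " + " + ".join(s.name for s in stage))
```

On this toy workflow the handoff count drops from four to two, which is exactly the question the redesign step asks: once AI owns the tests, the docs, and the first-pass review, the handoffs between them no longer earn their keep.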

Embed and measure. Make the AI-integrated workflow the default path, not an optional shortcut. Encoding your team's standards directly into the AI tooling means following the new process becomes the path of least resistance. Then measure what matters: cycle time, throughput, error rate. Not "how many people logged in."
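
What does encoding standards into the tooling look like concretely? With Claude Code, one mechanism is a CLAUDE.md file at the repository root, which the tool loads as project context. A sketch with illustrative standards; yours will differ, and the file paths here are hypothetical:

```markdown
# CLAUDE.md (repository root)

## Team standards (illustrative)
- All new modules require unit tests before a PR is opened.
- Follow the error-handling pattern in src/errors.ts; never swallow exceptions.
- PR descriptions must link the tracking ticket and list manual test steps.
- Database migrations go through scripts/migrate; never edit schema by hand.
```

Because the assistant applies these rules on every session by default, following the team's process becomes the path of least resistance instead of something a reviewer has to police after the fact.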

At the enterprise organization where I led AI workflow development, the teams that followed this pattern reached 1,600 lines of production code per engineer per day. Not because the engineers typed faster. Because the redesigned workflow eliminated manual handoffs, redundant reviews, and context switches that had nothing to do with AI capability and everything to do with process design.
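
The measurement side can be equally lightweight. A minimal sketch, assuming you can export one record per merged change with opened/merged timestamps and a reverted flag (the field names and data here are hypothetical):

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical export: one record per merged change.
records = [
    {"opened": datetime(2026, 3, 2, 9),  "merged": datetime(2026, 3, 2, 15), "reverted": False},
    {"opened": datetime(2026, 3, 2, 10), "merged": datetime(2026, 3, 4, 11), "reverted": True},
    {"opened": datetime(2026, 3, 3, 14), "merged": datetime(2026, 3, 3, 18), "reverted": False},
]

# Cycle time: open -> merge, the number the redesign should shrink.
cycle_hours = [(r["merged"] - r["opened"]) / timedelta(hours=1) for r in records]
print(f"median cycle time: {median(cycle_hours):.1f}h")

# Throughput: merged changes per week.
span_days = (max(r["merged"] for r in records) - min(r["opened"] for r in records)).days or 1
print(f"throughput: {len(records) / span_days * 7:.1f} merges/week")

# Error rate: share of merges that get reverted.
print(f"revert rate: {sum(r['reverted'] for r in records) / len(records):.0%}")
```

Track these before and after the redesign. Login counts tell you whether people opened the tool; cycle time, throughput, and revert rate tell you whether the process actually got faster.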

Training Still Matters. It's Just Not the First Step.

I'm not arguing that training is useless. The BCG data is clear: five-plus hours of structured training moves individual usage from 67% to 79%. That 12-percentage-point gap is meaningful, and you shouldn't ignore it.

But look at what happens when training is contextualized. BCG found that bespoke, persona-based learning journeys deliver adoption rates 20x higher than broad-based approaches. Not 20% higher. Twenty times higher.

That gap isn't about training quality. It's about training context. A persona-based journey teaches an engineer how to use Claude Code for their specific pull request workflow. A broad-based workshop teaches the same engineer what Claude can do in general. The first is process enablement. The second is a product demo.

The INSEAD experiment proves the same point from the other direction. Both groups received identical technical AI training. The control group learned what the tools could do. The treatment group also learned where those tools fit into their production process. Same training inputs. Radically different outputs. The training was not the variable. The workflow mapping was.

So the sequence matters. Redesign the workflow first. Identify where AI fits. Then train people on the new workflow, not on the tool in the abstract. When teams using four or more AI tools report declining productivity, the problem isn't that they need more training on each tool. The problem is that nobody designed the process those tools are supposed to serve.

Skip training entirely and your team cannot use the tools. Skip workflow redesign and your trained team uses the tools for the wrong things, in the wrong places, or not at all.


If your AI adoption has stalled, resist the training reflex. Before you schedule the next workshop, audit one workflow. Map the steps. Find the integration points. Rebuild the process so AI is the default path, not an optional add-on.

Then train your team on the new workflow. Not the tool.

The organizations that get this sequence right don't just see higher adoption numbers. They see the kind of outcomes that justify the investment: faster delivery, lower costs, and teams that use AI because the process makes it natural, not because a training deck told them to.

Once your workflows are redesigned and adoption is moving, the next challenge is consistency at scale. The Engineering Manager's Guide to Governing Agentic Development walks through a three-tier framework for standardizing AI development workflows across your team without micromanaging how individual engineers work.

The AI Readiness Assessment surfaces the workflow gaps most teams overlook. Five minutes of diagnostic questions, and you'll know whether your adoption problem is a training gap or a process architecture gap. Book a call if you want to walk through the results together.


Want to talk about how this applies to your team?

Book a Free Intro Call

Not ready for a call? Take the free AI Readiness Assessment instead.
