ai-strategy · leadership

Stop Asking Which AI Tool to Buy. Ask This Instead.

April 16, 2026 · 4 min read · Mitchel Lairscey

What is the first question your leadership team asks when it's time to adopt AI?

If the answer is "which tool should we buy?" I can tell you how the story ends. MIT's 2025 NANDA report found that 95% of enterprise GenAI pilots deliver no measurable P&L impact. The common failure mode across that data isn't the model. It's the framing.

"Which tool should we buy?" is a procurement question. AI adoption is a workflow redesign problem. When you lead with procurement, you pick against a feature matrix you'll never use, deploy seats people never open, and end up in the 95%.

The shelfware that follows the buying question

Enterprises that commit to full-tenant rollouts at renewal without first validating a workflow leave nearly half of their licenses unused in the first year. One widely reported deployment bought 4,000 Copilot seats. Forty-seven were opened. Twelve got used more than once. Same software, same price, different question asked at the start.

The sprawl data is worse. Zapier's 2026 survey found 28% of enterprises now run ten or more AI apps, 66% plan to add more in the next year, and 76% have already hit a negative outcome from disconnected AI. Tool sprawl is the compound interest on the wrong first question. Every one of those negative outcomes started life as "should we also add..." That's procurement thinking applied to what is really a process architecture decision.

If you don't like the idea that buying more tools makes things worse, the next part is harder.

What to ask before you evaluate a single tool

Replace "which tool should we buy?" with this: which workflow step is costing us the most, and why?

Start there and the tool conversation takes care of itself. The feature matrix gets graded against real friction instead of against marketing claims. McKinsey's 2025 State of AI survey found workflow redesign is the single biggest differentiator between high-performing AI adopters and the rest. High performers, the roughly 6% of firms attributing more than 5% of EBIT to GenAI, are about 2.75x more likely to redesign workflows when deploying AI. Most teams don't. Most teams deploy, train, wait, blame the tool.

A WalkMe global study of 3,750 enterprise workers, analyzed by Futurum Group, puts a number on that cost. Employees lose about 51 workdays per person per year to technology and digital friction. Nearly a quarter of a knowledge worker's year, spent in the seams between tools. You cannot buy your way out of that. The feature matrix does not have a column labeled "reduces the friction caused by the last three tools we bought."

Wrong question: "Which AI tool should we buy?" Procurement comes first, the feature matrix does the grading, and shelfware is the risk.

Right question: "Which workflow step is costing us most?" Friction comes first, workflow redesign does the grading, and tools end up fitting the process.

At an enterprise engineering organization where I led the AI workflow program, we hit 1,600 lines of production code per engineer per day. Not because I picked a better model. Because workflow redesign came first, and the tools slotted into a process that was already built to use them.

But compliance makes us buy first

Here's the honest objection. In regulated industries, the procurement gate precedes any workflow redesign. You cannot pipe patient records or trading data into an AI platform while waiting for the process mapping to finish. Gartner's 2026 outlook reflects this reality: global AI spending will hit $2.5 trillion this year, with much of it concentrating on incumbent vendors precisely because security, data governance, and integration requirements make vendor selection the gating decision.

Fair. The workflow question still comes first. Draft it anyway. Hand your compliance team a document that reads "here is the friction this tool needs to remove, measured in hours and handoffs." Now they're evaluating against a process you care about, not a feature matrix written by a vendor's sales engineer.


Next time the question comes up, push back before the answer matters. Ask which workflow step is breaking, for whom, and what it costs per week. Write that down. Then go shopping.

Five minutes now saves six months of misaligned investment. The AI Readiness Assessment scores workflow maturity across five dimensions and tells you whether your next AI problem is a tool gap or a process gap. If you're still mid-evaluation, the three-question filter for new tools is the version of this argument downstream of the buying decision.


Want to talk about how this applies to your team? Book a free intro call.
