The Three-Question Filter: A Framework for AI Tool Decisions
One hundred forty-nine foundation models were released in 2023. By 2025, major labs were shipping new versions every one to three months. And that pace is only accelerating: as of early 2026, training compute for frontier models doubles roughly every five months.
Every one of those releases triggers the same cycle. A launch post hits the front page. Your Slack lights up. Someone on your team sends a link with "should we look at this?" And you face a choice that feels urgent but rarely is: evaluate, switch, or stay the course.
Most leaders default to evaluate. That instinct is the problem.
I believe the biggest AI productivity killer isn't falling behind on new tools. It's the constant evaluation cycle itself. Most teams get this wrong by treating every model release as a reason to re-evaluate their entire stack. The research backs this up, and the fix is simpler than you think.
The Real Cost of Chasing Every New AI Tool
BCG surveyed 1,488 full-time U.S. workers in early 2026 and found a clear inflection point: teams using three or fewer AI tools reported increased productivity. Teams using four or more reported the opposite. Productivity didn't plateau. It declined. And 34% of workers experiencing what BCG calls "AI brain fry" were actively looking to leave their company.
This isn't just a morale issue. It's a retention crisis hiding inside your AI strategy.
Meanwhile, McKinsey's 2025 State of AI survey found that 88% of organizations use AI in at least one business function, but only about 6% achieve significant financial impact (more than 5% of EBIT attributable to AI). The difference? High performers didn't adopt the most tools. They redesigned workflows around a focused set of use cases. As I found in my analysis of the 2026 AI development data: tool selection matters less than organizational readiness.
The executive-employee disconnect isn't new, either. A mid-2024 Upwork Research Institute survey found that 96% of C-suite leaders expected AI to boost productivity, while 77% of employees said it had added to their workload. Nearly half didn't know how to achieve the gains their leaders expected. Two years later, that gap hasn't closed.
The pattern is consistent: more tools, more switching, more evaluation cycles. Less depth, less workflow integration, less actual output.
How to Use the Three-Question Filter
When the next model drops or a new tool shows up in your feed, resist the impulse to spin up an evaluation. Instead, run it through three questions. If it doesn't clear all three, file it under "watch" and move on.
Question 1: Does this solve a problem we are currently working around?
Not a theoretical problem. Not a problem you might have someday. A problem your team is solving with duct tape right now. If nobody on your team has complained about the gap this tool fills, it's not solving a real problem for you. It's solving a marketing problem for the vendor.
Question 2: Is this a step-function improvement, or an incremental one?
A new model scoring 3% higher on a benchmark doesn't change your workflow. A tool that turns a two-day manual process into a 10-minute automated one does. Step-function means your team would work differently, not just slightly faster. The one tool, one month approach I recommend for solopreneurs applies at every scale: going deep on one tool outperforms spreading thin across five.
Question 3: What's the real switching cost?
Include the hidden costs: team retraining, integration rewiring, loss of institutional knowledge baked into your current setup, and the productivity dip during transition. The standard advice to start with one workflow and prove it works before expanding exists because switching costs compound in ways that aren't obvious on a feature comparison sheet.
If the answer to all three is yes, move fast. That's the distinction every "slow down" article misses: sometimes speed is exactly the right call. The filter isn't about being cautious. It's about being intentional.
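If you like to encode decision rules so they survive the heat of a launch-day Slack thread, the filter reduces to a three-way AND. Here's a minimal, illustrative Python sketch; the `ToolCandidate` fields and names are my own shorthand for the three questions, not part of the framework itself:

```python
from dataclasses import dataclass

@dataclass
class ToolCandidate:
    """A tool or model under consideration (illustrative structure)."""
    name: str
    solves_current_workaround: bool   # Q1: replaces duct tape we use today?
    step_function_gain: bool          # Q2: changes how we work, not just a benchmark bump?
    switching_cost_acceptable: bool   # Q3: retraining + integration + transition dip is worth it?

def three_question_filter(tool: ToolCandidate) -> str:
    """Adopt only if all three questions clear; otherwise file under 'watch'."""
    if (tool.solves_current_workaround
            and tool.step_function_gain
            and tool.switching_cost_acceptable):
        return "adopt"
    return "watch"

# A model with a benchmark bump but no real gap filled gets filed, not evaluated.
hype = ToolCandidate("ShinyModel-5", False, False, True)
print(three_question_filter(hype))  # → watch
```

The point of the all-or-nothing AND is deliberate: two out of three is still "watch," which is what keeps incremental releases from triggering a full evaluation cycle.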
Falling Behind vs. Feeling FOMO
Not every pang of anxiety is a signal. Here's how to tell the difference.
You might be falling behind if:
- A competitor deployed an AI capability in a customer-facing product and your customers noticed
- You can't hire because candidates see your stack as outdated
- Your current tools lack a capability you need, and no workaround exists
You're probably feeling FOMO if:
- You read a launch post and felt anxious before you identified a specific use case
- Your team is productive, but someone saw a demo and got excited
- A benchmark score went up a few points on a task you don't perform
The distinction matters because they demand opposite responses. Falling behind requires action. FOMO requires discipline. And the three-question filter works for both: real gaps will pass all three questions easily. Anxiety won't.
Run the three-question filter as a team, not solo. When evaluation decisions are made by one person reacting to a headline, they trend toward FOMO. When a team runs the filter together, they trend toward signal.
Build the Organization, Not the Tool Collection
Gartner places generative AI in the "Trough of Disillusionment" as of 2025. That sounds negative. It's not. The Trough is where hype-driven adopters stall and disciplined organizations pull ahead. The companies that win in 2026 won't be the ones that adopted the most tools. They'll be the ones that went deep on the right ones.
That means picking your tools, committing for a meaningful period (I recommend at least 90 days before re-evaluating), redesigning workflows instead of just layering AI on top, and measuring results against the actual problems you set out to solve.
The next model release is coming. Probably this month. Your Slack will light up. Someone will ask if you should switch.
Now you have a filter for that.
If you want help identifying which AI use cases deserve your focus (and which ones are noise), take the AI Readiness Assessment. It takes five minutes and gives you a specific answer, not a slide deck full of buzzwords.
Want to talk about how this applies to your team?
Book a Discovery Call. Not ready for a call? Grab the Claude Adoption Checklist instead.