AI in Business

99% Quality at 1.4% of the Price: What's Wrong with the AI Model Market

8 min read

Most managers pick an AI model the same way: grab the most expensive one available. The logic makes sense – pricier means better. That’s how enterprise software worked for the last twenty years.

The AI model market in 2026 works differently. The cost per query ranges from $0.0001 to $0.17 – three orders of magnitude. And the actual quality difference between the top ten models? 0.24 points on a five-point scale. Meanwhile, Wharton / GBK Collective reports that a third of corporate AI projects never get past the pilot stage. And Epoch AI shows that only 5.6% of users apply AI in any genuinely deep way.
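The dollar figures below come straight from the teaser above; the computation itself is just an illustrative sanity check of the "three orders of magnitude" claim, not part of the original benchmark:

```python
import math

# Per-query cost range quoted in the text (USD).
cheap, pricey = 0.0001, 0.17

# Spread between the cheapest and priciest model, per query.
ratio = pricey / cheap            # roughly a 1700x spread
magnitude = math.log10(ratio)     # ~3.2, i.e. "three orders of magnitude"

# The headline's "1.4% of the price", applied to the top-end cost.
budget_per_query = 0.014 * pricey  # ~$0.0024 per query
```

So even the rounded-down phrasing "three orders of magnitude" slightly understates the actual 1700x spread.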

Maybe the question isn’t which model is best, but whether paying a premium delivers proportionally better results for typical management tasks.

We tested it. The answer was harsher than we expected.

The Agent Instead of Chat: Data Analysis Without Copy-Paste
11 min read

You have three data files: an activation funnel, A/B test results, and support tickets. The task – figure out why onboarding is underperforming. You open ChatGPT, upload the first file, ask your question. You get an answer. You upload the second file. ChatGPT asks: “Can you remind me of the context?” You upload the third. The context of the first file has already been pushed out.

Forty minutes later you have three separate conversations, none of which answers the original question. Because there was one question, and the data lived in three places.

This isn’t a ChatGPT problem. It’s a problem of approach.

AI Doesn't Make You Dumber. It's About How You Use It
9 min read

A year and a half ago, I wrote a note on my personal blog about something I was noticing in my colleagues’ work and in my own: the more you trust AI, the less often you ask yourself “is this actually right?” I was drawing on a Microsoft study at the time – it found that trust in AI suppresses critical evaluation of the AI’s answers. The argument felt strong to me, but it had an obvious flaw: correlation, not causation.

In February 2026, Anthropic researchers Judy Shen and Alex Tamkin published an experiment that closed that gap. Randomized control. Concrete data. And a conclusion that, I think, most people who’ve read about it have misunderstood.

Because this isn’t a story about AI making us dumber. It’s a story about how exactly we use it.

9 Questions for Yourself: Are You Using AI – or Is AI Using You?
11 min read

Not long ago I was putting together a proposal for a new client. The amount was unusual, and so were the terms. My gut said: go with X, you know this market. But I decided to “check” with Claude. The model produced a well-reasoned answer with a different number – 15% below my estimate. It sounded convincing. I changed the number.

A week later the client signed without negotiation. And instead of satisfaction, I felt annoyed: would my original number have gone through too? I’ll never know – because at the moment of decision I suppressed my own judgment in favor of the algorithm’s “statistically grounded” answer.

This is the very pattern that Anthropic’s researchers call Disempowerment – loss of control. Not dramatic, not obvious. Just a quiet swap of “I decided” for “AI suggested.”

The Transparency Dilemma: Should You Tell Clients the Text Was Written by AI?
15 min read

You’ve written the perfect client email. The tone is spot-on, the arguments flow, there’s even a well-placed joke. One problem: you didn’t write it. Claude did. Or ChatGPT. Or Gemini – doesn’t matter.

Now the question: do you tell the client?

Instinct says: “Of course not. Who cares how it was written if it’s written well?” Corporate ethics whispers: “You should be transparent.” And the science says something unexpected: both options erode trust – but in different ways and with different consequences.