Statistics

9 Questions for Yourself: Are You Using AI – or Is AI Using You?

11 min read

Not long ago I was putting together a proposal for a new client. The amount was unusual, the terms – likewise. My gut said: go with X, you know this market. But I decided to “check” with Claude. The model produced a well-reasoned answer with a different number – 15% below my estimate. It sounded convincing. I changed the number.

A week later the client signed without negotiation. And instead of satisfaction, I felt annoyed: what if my original number would have been accepted too? I’ll never know – because at the moment of decision I suppressed my own judgment in favor of the algorithm’s “statistically grounded” answer.

This is the very pattern that Anthropic’s researchers call Disempowerment – loss of control. Not dramatic, not obvious. Just a quiet swap of “I decided” for “AI suggested.”

The Transparency Dilemma: Should You Tell Clients the Text Was Written by AI?
15 min

You’ve written the perfect client email. The tone is spot-on, the arguments flow, there’s even a well-placed joke. One problem: you didn’t write it. Claude did. Or ChatGPT. Or Gemini – doesn’t matter.

Now the question: do you tell the client?

Instinct says: “Of course not. Who cares how it was written if it’s written well?” Corporate ethics whispers: “You should be transparent.” And the science says something unexpected: both options erode trust – but in different ways and with different consequences.

AI Doesn't Save Time – It Compresses It: 8 Months of Observations
11 min

Companies are worried about getting employees to use AI. The promise is seductive: AI will handle the drudgery – drafting documents, summarizing information, debugging code – freeing up time for higher-value work.

But are companies ready for what happens if they actually succeed?

Researchers at Stanford conducted an 8-month observational study of roughly 200 employees at an American tech company that had rolled out generative AI. The company didn’t mandate AI use – it simply provided corporate subscriptions to commercial tools. Employees decided for themselves whether to adopt them.

The result was paradoxical. AI didn’t reduce work. It intensified it. Workers moved faster, took on more tasks, spread their work across more hours in the day – often without any explicit external pressure. AI made “doing more” possible, accessible, and in many cases internally rewarding.

Strikingly, the same pattern shows up in other research. Microsoft found that 62% of product managers use generative AI daily; 81% say AI saves them time, yet 56% say their effort hasn’t decreased. A paradox? No – a pattern.

86% of Students Use AI, But Are Getting Worse. One Experiment Changed Everything
16 min

Traditional approaches to education are breaking down. AI writes essays and papers in minutes – and that has permanently changed the purpose of creative assignments in schools. Banning neural networks doesn’t work, and isolating students from technology is a dead end. The question is not whether to use AI. The question is how to use it so the technology develops students’ skills rather than replacing their thinking.

45% of Americans Use AI Annually: Gallup 2025 Data and What Changed in a Year
12 min

Gallup – one of the oldest polling organizations in the US – has released fresh data on how Americans used artificial intelligence in 2025. We’ve already covered Stanford (37% personal use), Brookings (57% use AI, but only 19% see results), and Wharton (82% of executives use it weekly). Now we have Gallup’s numbers – and they point to an important trend: usage is growing, but more slowly than the hype suggests.