Research

When AI Hurts Learning – and When It Doubles Results
10 min

In March 2025 at SXSW EDU, strategic foresight advisor Sinead Bovell delivered a talk on AI and the future of education. No hype, no panic – just two studies that should change how you think about AI’s role in learning.

First: a group of students who used ChatGPT without restrictions scored 17% worse than the control group working from a textbook. Second: a different group, where AI was deployed within a fully redesigned instructional system, outperformed a traditional lecture by a factor of two.

Same tool. Opposite outcomes. The difference is in the approach.

AI Saves Teachers 6 Hours a Week. But 97% Don't Notice
12 min

A Gallup and Walton Family Foundation survey (2024–2025, representative sample of US teachers) produced an impressive number: teachers who regularly use AI save an average of 5.9 hours per week – the equivalent of six full work weeks per school year. Sounds like a solved problem.

But a parallel Royal Society of Chemistry survey (2024, UK) paints a different picture: 44% of teachers tried AI, yet only 3% reported a real reduction in workload. A maths teacher from Ireland explained the gap more precisely than any statistic: “AI generates worksheets quickly, but they need thorough checking – and the time savings turn out smaller than expected.”

Who is right? We previously examined the AI crisis in education from the student side – 86% of students use AI, yet critical thinking is declining. Now – the instructor side. Over the past two years, enough experimental data has accumulated to answer this question with numbers, not opinions.

AI Doesn't Make You Dumber. It's About How You Use It
9 min

A year and a half ago, I wrote a note on my personal blog about something I was noticing in my colleagues’ work and in my own: the more you trust AI, the less often you ask yourself “is this actually right?” I was drawing on a Microsoft study at the time – it showed that trust in AI suppresses critical evaluation of the answers it produces. The argument felt strong to me, but it had an obvious flaw: correlation, not causation.

In February 2026, Anthropic researchers Judy Shen and Alex Tamkin published an experiment that closed that gap. A randomized controlled design. Concrete data. And a conclusion that, I think, most people who’ve read about it have misunderstood.

Because this isn’t a story about AI making us dumber. It’s a story about how exactly we use it.

KazLLM and Sovereign AI: A Guide for Kazakhstan's Civil Servants
13 min

On 11 February 2026, at a government meeting, President Tokayev publicly criticised KazLLM. The model, launched with great fanfare in December 2024, has just 600,000 users – 3% of the country’s population. For comparison: 2.6 million people in Kazakhstan use ChatGPT. The president was blunt: KazLLM “cannot compete with ChatGPT.”

This statement cuts to the heart of the matter. Why does Kazakhstan need its own language model if global solutions work better? And if sovereign AI is necessary – why is it losing?

The answer is more complicated than it seems. Because KazLLM is not “Kazakhstan’s ChatGPT.” It’s a fundamentally different tool with a different mission. Comparing them is like comparing a national power plant with an imported household appliance.

AI Doesn't Save Time – It Compresses It: 8 Months of Observations
11 min

Companies are preoccupied with getting employees to use AI. The promise is seductive: AI will handle the drudgery – drafting documents, summarising information, debugging code – freeing up time for higher-value work.

But are companies ready for what happens if they actually succeed?

Researchers at Stanford conducted an 8-month observational study of roughly 200 employees at an American tech company that had rolled out generative AI. The company didn’t mandate AI use – it simply provided corporate subscriptions to commercial tools. Employees decided for themselves whether to adopt them.

The result was paradoxical. AI didn’t reduce work. It intensified it. Workers moved faster, took on more tasks, spread their work across more hours in the day – often without any explicit external pressure. AI made “doing more” possible, accessible, and in many cases internally rewarding.

Strikingly, the same pattern shows up in other research. Microsoft found that 62% of product managers use generative AI daily; 81% say AI saves them time, yet 56% report that their effort hasn’t decreased. A paradox? No – a pattern.