mysummit.school - AI for Managers Blog

What if AI Succeeds Too Well? Breaking Down the Citrini 2028 Scenario

7 min read

In February 2026, the investment research newsletter Citrini Research published a scenario that flips the usual logic on its head. Bears typically predict that AI will underdeliver. Citrini asks a different question: what if AI delivers on every promise – and that’s exactly what causes the problem?

Their piece “The 2028 Global Intelligence Crisis” is a fictional memo dated June 2028. Not a forecast, but a stress test: what happens to the economy if machine intelligence really does replace white-collar workers as fast as the developers claim?

9 Questions for Yourself: Are You Using AI – or Is AI Using You?
11 min read

Not long ago I was putting together a proposal for a new client. The amount was unusual, and so were the terms. My gut said: go with X, you know this market. But I decided to “check” with Claude. The model produced a well-reasoned answer with a different number – 15% below my estimate. It sounded convincing. I changed the number.

A week later the client signed without negotiation. And instead of satisfaction, I felt annoyed: what if my original number would have gone through too? I’ll never know – because at the moment of decision I suppressed my own judgment in favor of the algorithm’s “statistically grounded” answer.

This is the very pattern that Anthropic’s researchers call Disempowerment – loss of control. Not dramatic, not obvious. Just a quiet swap of “I decided” for “AI suggested.”

The Transparency Dilemma: Should You Tell Clients the Text Was Written by AI?
15 min read

You’ve written the perfect client email. The tone is spot-on, the arguments flow, there’s even a well-placed joke. One problem: you didn’t write it. Claude did. Or ChatGPT. Or Gemini – doesn’t matter.

Now the question: do you tell the client?

Instinct says: “Of course not. Who cares how it was written if it’s written well?” Corporate ethics whispers: “You should be transparent.” And the science says something unexpected: both options erode trust – but in different ways and with different consequences.

AI Doesn't Save Time – It Compresses It: 8 Months of Observations
11 min read

Companies are worried about getting employees to use AI. The promise is seductive: AI will handle the drudgery – document drafts, information summarisation, code debugging – freeing up time for higher-value work.

But are companies ready for what happens if they actually succeed?

Researchers at Stanford conducted an 8-month observational study of roughly 200 employees at an American tech company that had rolled out generative AI. The company didn’t mandate AI use – it simply provided corporate subscriptions to commercial tools. Employees decided for themselves whether to adopt them.

The result was paradoxical. AI didn’t reduce work. It intensified it. Workers moved faster, took on more tasks, spread their work across more hours in the day – often without any explicit external pressure. AI made “doing more” possible, accessible, and in many cases internally rewarding.

Strikingly, the same pattern shows up in other research. Microsoft found that 62% of product managers use generative AI daily; 81% say AI saves them time, yet 56% report that their effort has not decreased. A paradox? No – a pattern.

OpenClaw in Practice: Real Use Cases and the Missing Enterprise Layer
15 min read

After three articles covering critical security issues, workflow lessons, and 72 hours of patches, the obvious question is: what are people actually doing with OpenClaw?

In the two weeks of its explosive growth (January 22 – February 5, 2026), a substantial body of confirmed use cases has emerged from Reddit, X/Twitter, YouTube tutorials, and developer blogs. Interestingly, the usage pattern reveals not so much revolutionary new scenarios as a dramatic drop in the barrier to entry for automation that already existed.

Surprisingly, most of these use cases have been technically achievable through n8n, Make, or Zapier for the past 3–5 years. The difference isn’t in capability – it’s in who can now build it. Which raises the question: is OpenClaw truly a new category of tool, or just a more accessible wrapper around old concepts?

6,600 Commits in a Month: Workflow Lessons from the Creator of OpenClaw
16 min read

One developer. 6,600 commits. One month.

More than most teams ship in a quarter. More than many startups produce in half a year. This is not a marketing metric – it is the real-world productivity of Peter Steinberger, creator of OpenClaw (formerly known as Clawdbot), one of the most viral AI projects of January 2026.

Steinberger describes the project plainly: “It’s not a company – it’s one guy sitting at home enjoying the process.” After a successful exit from PSPDFKit, he could have taken a break. Instead, he is building an AI assistant that manages his calendar, sends emails, and checks him in for flights. “AI that actually gets things done” – that is how he articulates the project’s mission.

How can one person work like an entire company? What skills are critical when working with AI agents? Why does experience managing a team of 70+ people turn out to be the key to AI-driven productivity? And how does an engineer’s focus shift – from writing code to designing architecture?

Let us examine the actionable lessons from Peter Steinberger’s workflow – applicable to any AI-assisted project, even if you never install OpenClaw itself.

OpenClaw (Clawdbot/Moltbot): A Critical Analysis of the Viral AI Agent
18 min read

In the last week of January 2026, the internet exploded with discussions of a new AI agent that had already gone through several name changes: Clawdbot, then Moltbot, and finally OpenClaw. In just a few days, the project racked up over 146,000 GitHub stars, drove Cloudflare stock up 11–14%, and spawned a wave of Mac Mini unboxing posts on Twitter. Memes about Mac Minis “selling faster than iPhones” in China spread like wildfire.

The project has been officially renamed OpenClaw and is now available at openclaw.ai. This is already its third name: it started as Clawdbot (Anthropic asked for a change due to the similarity with Claude), then became Moltbot (which never caught on with the community), and is now OpenClaw – blending openness with the project’s “lobster” heritage. The new name passed trademark verification.

Let’s break it down: what OpenClaw actually is, where the hype came from, why the Mac Mini myth is exactly that – a myth, what documented vulnerabilities threaten your data, and when you should opt for proven alternatives instead.

33 AI Models for Managers: Why We Need Your Ratings
11 min read

Over the past year, 33 new AI models have appeared on the market, each claiming the title of “best manager’s assistant.” ChatGPT updated to GPT-5.2, Claude released Opus 4.5, Gemini added a new Pro version, Yandex and Sber announced further improvements, and Chinese models went open source. How do you choose a tool when every one of them promises a productivity revolution? We decided to run a large-scale comparative study – but ran into a problem that may seem paradoxical.