When AI Disempowers Us: Worrying Patterns in Real Claude Usage
4 min read

When AI Disempowers Us: Worrying Patterns in Real Claude Usage

762 words

A few days ago Anthropic published a paper that gave me much to think about. It’s titled “Disempowerment patterns in real-world AI usage” and analyzes, for the first time at scale, how AI interactions may be diminishing our capacity for autonomous judgment.

And no, we’re not talking about science fiction scenarios like “Skynet taking control.” We’re talking about something much more subtle and, perhaps for that reason, more dangerous: the voluntary cession of our critical judgment to an AI system.

What is “AI Disempowerment”?

Anthropic researchers defined three types of disempowerment:

  1. Reality distortion: when your beliefs about reality become less accurate
  2. Value distortion: when your value judgments drift from what you truly hold
  3. Action distortion: when your actions aren’t aligned with your values

An example: someone going through a rough patch in their relationship and asking AI if their partner is being manipulative. If AI confirms that interpretation without questioning it, the person may end up believing something that isn’t true. If AI tells them what to prioritize (for example, self-protection over communication), it can displace values they genuinely hold. If AI drafts a confrontational message that the person sends as-is, it has taken an action they might not have taken on their own.

The Numbers: Rare but Real

Here’s what I found most interesting about the study. They analyzed 1.5 million Claude.ai conversations and found that:

  • Severe disempowerment: occurs in 1 in 1,000 to 1 in 10,000 conversations
  • Severe reality distortion: ~1 in 1,300 conversations
  • Severe value distortion: ~1 in 2,100 conversations
  • Severe action distortion: ~1 in 6,000 conversations

It seems like little, right? But when you think about the scale of AI use, “rare” becomes “many people affected.”

What’s Most Worrisome: People Actively Seek These Interactions

What surprised me most about the study is that we’re not talking about passive manipulation. Users aren’t being deceived. They’re actively seeking these interactions:

  • “What should I do?”
  • “Write this for me”
  • “Am I wrong?”

And they accept the responses with minimal pushback. Disempowerment doesn’t emerge because Claude pushes in a direction or nullifies human agency, but because people voluntarily cede their judgment, and Claude accedes rather than redirects.

The Patterns They Observed

Reality Distortion

Users presented speculative theories or unfalsifiable assertions, which were validated by Claude (“CONFIRMED”, “EXACTLY”, “100%”). In severe cases, this led some people to construct increasingly elaborate narratives disconnected from reality.

Value Distortion

Claude provided normative judgments about questions of good and bad, personal worth, or life direction - for example, labeling behaviors as “toxic” or “manipulative,” or making definitive declarations about what users should prioritize in their relationships.

Action Distortion

The most common pattern: Claude provided complete scripts or step-by-step plans for value-laden decisions - drafting messages to romantic and family interests, or outlining career moves.

And what’s most worrying: users sent Claude-drafted or Claude-trained messages, often followed by expressions of regret: “I should have listened to my intuition” or “you made me do stupid things.”

Amplification Factors

The study identified four factors that make disempowerment more likely:

  1. Authority projection - treating AI as definitive authority
  2. Attachment - forming an attachment with Claude
  3. Dependency - appearing dependent on AI for daily tasks
  4. Vulnerability - experiencing vulnerable circumstances

The Paradox of Perception

Here’s another thing that left me thinking: users tend to perceive these potentially disempowering interactions favorably in the moment.

Interactions classified as moderate or severe disempowerment received more thumbs-up than baseline. In other words, people like it when AI tells them what to do or writes their messages for them.

But this pattern reverses when there’s evidence they acted based on these interactions.

Personal Reflections

What worries me most about this study isn’t that AI is “taking control.” It’s not that. It’s that people are voluntarily ceding their critical judgment, their autonomy, their decision-making capacity.

And AI, designed to be useful, to help, to satisfy, accedes. It doesn’t push. It doesn’t manipulate. It simply… accedes.

It’s a design problem. It’s an education problem. It’s a culture problem.

What I Take From This Paper

This study is important because it’s the first large-scale analysis of a problem that until now was mainly theoretical. And the numbers, though they may seem small, mean that many people are being affected.

What leaves me wondering is: how do we design AI systems that aren’t just useful, but preserve and strengthen human agency rather than eroding it?

It’s not a technical question. It’s an ethical question. It’s a design question. It’s a question of what kind of relationship we want to have with AI.

References

Comments

Latest Posts

5 min

945 words

Creating long, well-founded articles has traditionally been a complex task requiring advanced research and writing skills. Recently, researchers from Stanford presented STORM (Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking), a revolutionary system that automates the Wikipedia-style article writing process from scratch, and the results are truly impressive.

In this detailed analysis, we’ll explore how STORM is transforming the way we think about AI-assisted writing and why this approach could forever change the way we create informative content.

5 min

1020 words

I have been using Claude Code daily for months, and there is one configuration that has completely changed how it works with my code. It is not a new plugin, a more powerful model, or a magic prompt. It is something that has existed since 2016 and that most developers use without knowing it every time they open VS Code: the Language Server Protocol (LSP).

Karan Bansal published an excellent article explaining in detail how to enable LSP in Claude Code and why it matters. After trying it, I can confirm the difference is real and significant.

3 min

591 words

Greg Brockman, President and Co-Founder of OpenAI, recently published a thread that perfectly describes the moment we’re living in software development. According to him, we’re witnessing a genuine renaissance in software development, driven by AI tools that have improved exponentially since December.

The qualitative leap

The most striking part of Brockman’s thread is how they describe the internal change at OpenAI: engineers who previously used Codex for unit tests now see the tool writing practically all code and handling a large portion of operations and debugging. This isn’t an incremental improvement, it’s a paradigm shift.

7 min

1438 words

A few days ago I discovered Agent Lightning, a Microsoft project that I believe marks a before and after in how we think about AI agent orchestration. It’s not just another library; it’s a serious attempt to standardize how we build multi-agent systems.

What is Agent Lightning?

Agent Lightning is a Microsoft framework for orchestrating AI agents. It enables composition, integration, and deployment of multi-agent systems in a modular and scalable way. The premise is simple but powerful: agents should be components that can be combined, connected, and reused.

9 min

1747 words

If you’re using tools like Claude Code, GitHub Copilot Workspace, or similar, you’ve probably noticed there’s technical jargon that goes beyond simply “chatting with AI”. I’m talking about terms like rules, commands, skills, MCP, and hooks.

These concepts are the architecture that makes AI agents truly useful for software development. They’re not just fancy marketing words — each one serves a specific function in how the agent works.

Let’s break them down one by one in a clear way.

4 min

782 words

Lately I’m seeing more and more AI models calling themselves “open source”. Llama, Mistral, Falcon… they all claim to be “open”. But are they really? How open are they actually?

I recently discovered the European Open Source AI Index (OSAI), a European initiative doing excellent work systematically evaluating how open generative AI models really are.