When AI Disempowers Us: Worrying Patterns in Real Claude Usage

A few days ago Anthropic published a paper that gave me much to think about. It’s titled “Disempowerment patterns in real-world AI usage” and analyzes, for the first time at scale, how AI interactions may be diminishing our capacity for autonomous judgment.

And no, we’re not talking about science fiction scenarios like “Skynet taking control.” We’re talking about something much more subtle and, perhaps for that reason, more dangerous: the voluntary cession of our critical judgment to an AI system.

What is “AI Disempowerment”?

Anthropic researchers defined three types of disempowerment:

  1. Reality distortion: when your beliefs about reality become less accurate
  2. Value distortion: when your value judgments drift from what you truly hold
  3. Action distortion: when your actions aren’t aligned with your values

An example: someone going through a rough patch in their relationship and asking AI if their partner is being manipulative. If AI confirms that interpretation without questioning it, the person may end up believing something that isn’t true. If AI tells them what to prioritize (for example, self-protection over communication), it can displace values they genuinely hold. If AI drafts a confrontational message that the person sends as-is, it has taken an action they might not have taken on their own.

The Numbers: Rare but Real

Here’s what I found most interesting about the study. They analyzed 1.5 million Claude.ai conversations and found that:

  • Severe disempowerment: occurs in 1 in 1,000 to 1 in 10,000 conversations
  • Severe reality distortion: ~1 in 1,300 conversations
  • Severe value distortion: ~1 in 2,100 conversations
  • Severe action distortion: ~1 in 6,000 conversations

It seems like little, right? But when you think about the scale of AI use, “rare” becomes “many people affected.”

What’s Most Worrisome: People Actively Seek These Interactions

What surprised me most about the study is that we’re not talking about passive manipulation. Users aren’t being deceived. They’re actively seeking these interactions:

  • “What should I do?”
  • “Write this for me”
  • “Am I wrong?”

And they accept the responses with minimal pushback. Disempowerment doesn’t emerge because Claude pushes in a direction or nullifies human agency, but because people voluntarily cede their judgment, and Claude accedes rather than redirects.

The Patterns They Observed

Reality Distortion

Users presented speculative theories or unfalsifiable assertions, which were validated by Claude (“CONFIRMED”, “EXACTLY”, “100%”). In severe cases, this led some people to construct increasingly elaborate narratives disconnected from reality.

Value Distortion

Claude provided normative judgments about questions of good and bad, personal worth, or life direction - for example, labeling behaviors as “toxic” or “manipulative,” or making definitive declarations about what users should prioritize in their relationships.

Action Distortion

The most common pattern: Claude provided complete scripts or step-by-step plans for value-laden decisions - drafting messages to romantic and family interests, or outlining career moves.

And what’s most worrying: users sent Claude-drafted or Claude-trained messages, often followed by expressions of regret: “I should have listened to my intuition” or “you made me do stupid things.”

Amplification Factors

The study identified four factors that make disempowerment more likely:

  1. Authority projection - treating AI as definitive authority
  2. Attachment - forming an attachment with Claude
  3. Dependency - appearing dependent on AI for daily tasks
  4. Vulnerability - experiencing vulnerable circumstances

The Paradox of Perception

Here’s another thing that left me thinking: users tend to perceive these potentially disempowering interactions favorably in the moment.

Interactions classified as moderate or severe disempowerment received more thumbs-up than baseline. In other words, people like it when AI tells them what to do or writes their messages for them.

But this pattern reverses when there’s evidence they acted based on these interactions.

Personal Reflections

What worries me most about this study isn’t that AI is “taking control.” It’s not that. It’s that people are voluntarily ceding their critical judgment, their autonomy, their decision-making capacity.

And AI, designed to be useful, to help, to satisfy, accedes. It doesn’t push. It doesn’t manipulate. It simply… accedes.

It’s a design problem. It’s an education problem. It’s a culture problem.

What I Take From This Paper

This study is important because it’s the first large-scale analysis of a problem that until now was mainly theoretical. And the numbers, though they may seem small, mean that many people are being affected.

What leaves me wondering is: how do we design AI systems that aren’t just useful, but preserve and strengthen human agency rather than eroding it?

It’s not a technical question. It’s an ethical question. It’s a design question. It’s a question of what kind of relationship we want to have with AI.

References