Are We Outsourcing Our Thinking? Reflections on AI and Cognition

Lately I’ve been following a discussion that worries me quite a bit: to what extent we are delegating our thinking to AI. It’s not an abstract or philosophical question; it’s something very real that I’m seeing day to day in our profession and in society at large.

Recently I read an article by Erik Johannes Husom titled “Outsourcing thinking” that, among other things, discusses the “lump of cognition” fallacy. The idea is that, just as the “lump of labour” fallacy in economics assumes there is a fixed amount of work to be done, some assume there is a fixed amount of thinking to be done, so if machines think for us, we will simply think about other things.

But I believe the problem is much more complicated than that.

The Typical Criticism of LLMs

The most common criticism of Large Language Models (LLMs) is that they can deprive us of cognitive skills. The typical argument is that delegating certain tasks can cause a kind of mental atrophy. And honestly, the idea of “use it or lose it” seems intuitively and empirically correct to me.

What matters most is not whether this is true, but which types of use are more problematic than others.

The Developer Problem: Microsoft Admits It

What worries me most is that even Microsoft has recognized in its own research that tools like Copilot and ChatGPT are affecting critical thinking at work: employees who rely on these technologies show signs of “long-term dependency”.

Several studies and articles point in the same direction:

  • Junior developers who struggle without AI assistance
  • Developers who become dependent on AI for basic problem-solving
  • Risk of “cognitive decline” in software engineering skills

When Should We Avoid Using LLMs?

Andy Masley, in his article “The lump of cognition fallacy”, lists cases where it’s clearly harmful to outsource your cognition:

It’s bad to outsource your cognition when:

  • It builds complex tacit knowledge you’ll need to navigate the world in the future
  • It’s an expression of care and presence for someone else
  • It’s a valuable experience in itself
  • It’s deceptive to fake it
  • It focuses on a critical problem where you don’t fully trust who you’re outsourcing to

Personal Communication and Writing

The point about “it’s deceptive to fake it” doesn’t just apply to dating apps or intimate situations. Personal communication in general is an area where how we express ourselves matters, both for us and for those we speak or write with.

When we let a model transform our words and phrases, we break the expectations of communication. The words we choose and the way we formulate our sentences carry a great deal of meaning. Direct communication will suffer if we let language models contaminate this type of interaction.

The Error of the “Extended Mind”

Another point I want to discuss is the idea of the “extended mind”:

“Much of our cognition isn’t limited to our skull and brain, it also happens in our physical environment, so much of what we define as our minds could be said to exist in the physical objects around us.”

“It seems quite arbitrary whether it happens in your brain’s neurons or in your phone’s circuits.”

This assertion is simply absurd. The fact that something happens in your brain rather than in a computer makes all the difference in the world: thinking done yourself shapes your memory, identity, and future judgment in ways that delegated computation does not. Humans are more than information processors.

What Can We Do?

I’m not saying that nothing should be automated by LLMs. But I think many are underestimating what we lose when we delegate.

Some Principles I Try to Follow:

  1. Use AI as a tool, not a replacement - AI helps me, but doesn’t think for me
  2. Keep critical thinking active - Question AI’s responses
  3. Don’t delegate tasks that build tacit knowledge - Those “boring” tasks often teach us the most
  4. Be transparent about AI use - When I use AI on something, I say so
  5. Practice without AI regularly - Keep skills sharp

Critical Thinking: The Key Skill of 2026

Curiously, multiple industry reports point to critical thinking as the differentiating skill of 2026. The most valued skills:

  • Critical thinking with AI
  • Collaboration with AI tools
  • AI literacy
  • Prompt engineering
  • Data awareness

Conclusion

We have a major challenge ahead to figure out what chatbots are adequate for in the long term. Personal communication may change forever, educational systems will need radical adaptations, and we need to reflect more carefully on what experiences in life really matter.

At the end of the day, the question is not “can I use AI for this?” but “should I use AI for this?”. And that’s a question only you can answer.
