Five principles for using AI professionally (without going crazy)

A few days ago I read an article by Dominiek about the five principles for using AI professionally and found myself constantly nodding. After years of watching technologies arrive and evolve, AI gives me the same feelings I had with other “revolutions”: enthusiasm mixed with a necessary dose of skepticism.

Dominiek’s article especially resonated with me because it perfectly describes what we’re experiencing: a world where AI is getting into everything, but not always in the most useful or sensible way.

The underlying problem

As Dominiek says, we’re at a moment where people send AI summaries based on dictation that are simply bad. I would add: we’re creating more noise than signal, and that’s a serious problem.

As a developer, I’ve seen this before. Every new technology promises to revolutionize everything, and at first everyone uses it poorly. Remember XML? Remember microservices? AI is going down the same path if we’re not careful.

The five principles (with my own reflections)

1. Value human thinking

This seems fundamental to me. LLMs are, essentially, very sophisticated autocomplete engines. As the eternal principle says: garbage in, garbage out.

I’ve worked with enough programming languages, databases, and frameworks to know that no tool does the thinking for you. And the best solutions have always come from understanding the problem first, not from applying the trendy tool.

AI can refine ideas and expand research, but for original thinking you need… to think. Without distractions, without screens, with colored pens and paper if necessary (my favorite method).

2. Respect human attention

Did you really read all the output the AI generated? Did you critically evaluate it?

This point touches me directly because I’ve seen how email, then Slack, then a thousand other tools have been fragmenting our attention. Now AI threatens to do the same but on an industrial scale.

Other people’s time is valuable. If you haven’t dedicated your time to reviewing what you’re going to send, don’t expect the other person to dedicate theirs to reading it.

3. Be transparent

In my current team, we pay for AI tools and encourage their use. But always with transparency. As I tell my children: don’t use ChatGPT to do your homework for you, use it to improve what you’ve already done.

I really like Dominiek’s idea of adding disclaimers:

  • “AI Note: This is a proposal generated from a couple of points”
  • “AI Note: Used Claude to expand the core idea”
  • “AI Note: GPT corrected grammar and structure”

It’s honest and helps the reader contextualize what they’re seeing.

4. Take care of your clients

If you handle client data, you already have processes to safeguard it. AI tools are no different from any other SaaS in this respect, but they deserve extra care precisely because their purpose is to ingest and transform the information you feed them.

In my experience working at different companies (Arrakis, OpenSistemas, and now Carto), we’ve always been clear that client data is sacred. AI doesn’t change that, but it does add new risk surfaces that need to be considered.

5. They’re only tools

Do you really need an AI notetaker in that intimate video call? Does anyone actually read your AI summaries?

This reminds me of when everyone wanted to convert everything to microservices, use MongoDB for everything, or shove Docker into anything. The tool doesn’t define the problem, the problem defines the tool.

My personal conclusion

After working on this long enough, I’ve learned that every new technology goes through the same phases: excessive hype, disappointment, and finally sensible adoption where it really adds value.

AI is right in the middle of the hype phase. But Dominiek’s principles give us a roadmap to navigate this era without losing sanity or effectiveness.

In the end, in a world full of AI, the premium will be on human thinking, attention, integrity, empathy, and ingenuity. Exactly the same things that have been valuable throughout my career.

Technology changes, tools evolve, but the fundamental principles for doing good work remain surprisingly stable.


This article was written by a human, with some specific queries to Claude to verify data and structure ideas. As principle #3 says: transparency above all.
