Five principles for using AI professionally (without going crazy)

A few days ago I read an article by Dominiek about the 5 principles for using AI professionally and found myself constantly nodding. After years of watching technologies arrive and evolve, AI gives me the same feelings I had with other “revolutions”: enthusiasm mixed with a necessary dose of skepticism.

Dominiek’s article especially resonated with me because it perfectly describes what we’re experiencing: a world where AI is getting into everything, but not always in the most useful or sensible way.

The underlying problem

As Dominiek says, we’re at a moment where people send dictated, AI-generated summaries that are simply bad. I would add: we’re creating more noise than signal, and that’s a serious problem.

As a developer, I’ve seen this before. Every new technology promises to revolutionize everything, and at first everyone uses it poorly. Remember XML? Remember microservices? AI is going down the same path if we’re not careful.

The five principles (with my own reflections)

1. Value human thinking

This seems fundamental to me. LLMs are, essentially, very sophisticated autocomplete engines. As the eternal principle says: garbage in, garbage out.

I’ve worked with enough programming languages, databases, and frameworks to know that no tool does the thinking for you. And the best solutions have always come from understanding the problem first, not from applying the trendy tool.

AI can refine ideas and expand research, but for original thinking you need… to think. Without distractions, without screens, with colored pens and paper if necessary (my favorite method).

2. Respect human attention

Did you really read all the output the AI generated? Did you critically evaluate it?

This point hits close to home because I’ve watched email, then Slack, then a thousand other tools fragment our attention. Now AI threatens to do the same, but on an industrial scale.

Other people’s time is valuable. If you haven’t dedicated your time to reviewing what you’re going to send, don’t expect the other person to dedicate theirs to reading it.

3. Be transparent

In my current team, we pay for AI tools and encourage their use, but always with transparency. As I tell my children: don’t use ChatGPT to do your homework for you, use it to improve your own work.

I really like Dominiek’s idea of adding disclaimers:

  • “AI Note: This is a proposal generated from a couple of points”
  • “AI Note: Used Claude to expand the core idea”
  • “AI Note: GPT corrected grammar and structure”

It’s honest and helps the reader contextualize what they’re seeing.

4. Take care of your clients

If you handle client data, you already have processes to safeguard it. AI tools are no different from any other SaaS, but they deserve extra care precisely because manipulating information is what they do.

In my experience working at different companies (Arrakis, OpenSistemas, and now Carto), we’ve always been clear that client data is sacred. AI doesn’t change that, but it does add new risk surfaces that need to be considered.

5. They’re only tools

Do you really need an AI notetaker in that private video call? Does anyone actually read your AI summaries?

This reminds me of when everyone wanted to convert everything to microservices, use MongoDB for everything, or shove Docker into everything. The tool doesn’t define the problem; the problem defines the tool.

My personal conclusion

After enough years doing this, I’ve learned that every new technology goes through the same phases: excessive hype, disappointment, and finally sensible adoption where it really adds value.

AI is right in the middle of the hype phase. But Dominiek’s principles give us a roadmap for navigating this era without losing our sanity or our effectiveness.

In the end, in a world full of AI, the premium will be on human thinking, attention, integrity, empathy, and ingenuity. Exactly the same things that have been valuable throughout my career.

Technology changes, tools evolve, but the fundamental principles for doing good work remain surprisingly stable.


This article was written by a human, with some specific queries to Claude to verify data and structure ideas. As principle #3 says: transparency above all.