A necessary reflection on the “AI-Native Engineer”
I read Addyo’s article about the “AI-Native Software Engineer” and, as a Principal Backend Engineer who has watched technological promises come and go for years, I have some candid opinions about it. Not all of them are comfortable to hear.
I’ve seen enough “revolutions” to separate the wheat from the chaff. And there’s a lot of both here.
What’s really working (honestly)
1. AI as copilot, not as pilot
The article’s metaphor about treating AI as a “junior programmer available 24/7” is accurate. In my experience working with teams, I’ve seen developers use GitHub Copilot and Claude effectively to:
- Boilerplate and repetitive code: Writing tests, generating configs, creating basic CRUDs
- Documentation: Generating docstrings, READMEs, code comments
- Simple refactoring: Renaming variables, changing basic patterns
```go
// This works well with AI
func (h *UserHandler) CreateUser(w http.ResponseWriter, r *http.Request) {
	// AI can generate basic boilerplate
	// But business logic is still yours
}
```
2. Acceleration of the learning curve
Where AI really shines is helping developers understand new ecosystems. A dev coming from Java can use Claude to understand Go idioms faster. This is gold for teams with diverse technologies.
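For example, the first idiom a Java developer usually needs explained is error handling: in Go, errors are ordinary return values rather than thrown exceptions. A minimal sketch (the `parsePort` function is invented for illustration):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort shows the idiom a Java developer has to internalize:
// errors are returned and checked explicitly, not caught.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, errors.New("port out of range")
	}
	return p, nil
}
```

Asking an assistant "why does Go do it this way?" after seeing code like this is exactly the kind of question it answers well.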
3. Debugging as a talking rubber duck
The ability to explain an error to Claude and get hypotheses is genuinely useful. It doesn’t always get it right, but it forces you to articulate the problem, which is already valuable in itself.
Where the hype crashes into reality
1. “10x productivity” is marketing, not reality
The article suggests that AI can give you “2x, 5x or maybe 10x” in productivity. As an engineer who measures these things, this is wishful thinking.
The reality I see:
- 20-30% improvement in mechanical tasks (tests, configs)
- Practically zero in architecture, complex debugging, or product decisions
- Time wasted when AI leads you down wrong paths
2. “Every engineer is a manager now” is problematic
The idea that engineers now “orchestrate work instead of executing it” seems to me fundamentally wrong. As an engineer who works with teams, I know that managing requires:
- Business context that AI doesn’t have
- Interpersonal decisions that go beyond code
- Responsibility for outcomes, not just outputs
Developers don’t become managers by using AI. They’re still developers with better tools.
3. The illusion of “AI-first workflow”
The suggestion to “give every task to AI first” is operationally unsustainable. In real teams:
- Context matters more than code
- Legacy constraints aren’t in AI’s training data
- Requirements change faster than you can prompt-engineer
Tools: The good, the bad, and the unnecessary
GitHub Copilot: The de facto standard
- ✅ Works: Integrates well, doesn’t get in the way
- ✅ Predictable: You know what to expect from it
- ❌ Limited: Doesn’t go beyond intelligent autocomplete
Cursor: Promising but expensive
- ✅ Powerful: Really understands project context
- ❌ Pricing: $20/month per dev adds up quickly
- ❌ Vendor lock-in: Changing editors creates friction
Claude/ChatGPT: Useful but overrated
- ✅ Good for research and documentation
- ❌ Inconsistent for production code
- ❌ Privacy concerns for proprietary code
Bolt, v0, Replit: Marketing > Reality
The “one-prompt full-stack generators” are impressive demos but questionable products:
- They generate code that looks functional
- Never production-ready
- Faster to write from scratch than to debug their output
```javascript
// What Bolt generates
const handleSubmit = async (e) => {
  e.preventDefault();
  // TODO: Add validation
  // TODO: Handle errors
  // TODO: Add loading state
  console.log('Form submitted');
};

// What you actually need
const handleSubmit = async (e) => {
  e.preventDefault();
  setLoading(true);
  try {
    const validatedData = validateForm(formData);
    await submitToAPI(validatedData);
    showSuccessNotification();
    resetForm();
  } catch (error) {
    handleError(error);
    logToSentry(error);
  } finally {
    setLoading(false);
  }
};
```
The reality in production teams
What works in practice:
AI for specific and bounded tasks
- Generate tests for existing functions
- Explain legacy code
- Create basic documentation
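"Generate tests for existing functions" is the clearest win of the three. Given a small pure function (here `Slugify`, invented for illustration), AI reliably produces the standard Go table-driven shape, sketched as a checker function:

```go
package main

import (
	"fmt"
	"strings"
)

// Slugify stands in for an existing function you'd hand to AI.
func Slugify(s string) string {
	s = strings.ToLower(strings.TrimSpace(s))
	return strings.ReplaceAll(s, " ", "-")
}

// checkSlugify mirrors the table-driven tests AI generates well:
// enumerate inputs and expected outputs, then loop.
func checkSlugify() error {
	cases := []struct{ in, want string }{
		{"Hello World", "hello-world"},
		{"  Trimmed  ", "trimmed"},
		{"already-slugged", "already-slugged"},
	}
	for _, c := range cases {
		if got := Slugify(c.in); got != c.want {
			return fmt.Errorf("Slugify(%q) = %q, want %q", c.in, got, c.want)
		}
	}
	return nil
}
```

This works precisely because the task is bounded: the function's contract is visible, and there's no hidden business context for the AI to miss.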
As a learning tool
- Understand new patterns
- Explore unknown APIs
- Translate between languages/frameworks
Automation of the obvious
- Commit messages
- PR descriptions
- Boilerplate code
What does NOT work:
Autonomous agents in production
- Too unpredictable
- Require more supervision than doing the work yourself
- Debugging the AI’s debugging is Kafkaesque
AI-first for architecture
- AI doesn’t understand business trade-offs
- Doesn’t know specific technical constraints
- Architectural decisions are expensive to reverse
Delegating responsibility to AI
- Code is still your responsibility
- Bugs are still your problem
- “The AI generated it” is not a valid excuse
My pragmatic recommendations
For Individual Contributors:
- Use Copilot for boilerplate, nothing more
- Claude/ChatGPT for research and explanations
- Maintain skepticism about autonomous agents
- Never commit AI-generated code without reviewing it
For Tech Leads:
- Establish clear guidelines on AI usage
- Monitor code quality of AI-assisted code
- Don’t adjust estimates until you see real results
- Educate the team about AI limitations
For Managers:
- Don’t buy the hype of “10x productivity”
- Invest in proven tools (Copilot) before experiments
- Maintain rigorous QA processes
- Budget for learning curve, not just for licenses
The true value of being “AI-native”
Being “AI-native” doesn’t mean using AI for everything. It means knowing when to use it and when not to.
The best developers I know use AI as:
- Accelerator for mechanical tasks
- Tutor for learning new concepts
- Rubber duck that can respond
NOT as:
- Architect of systems
- Decision maker for product
- Replacement for thinking
Conclusion: Evolution, not revolution
The original article paints a future where engineers “orchestrate” instead of “execute”. My experience says that this is a romantic interpretation of what’s happening.
The reality is more mundane but also more sustainable: AI is making some tasks easier, just like IDEs, frameworks, or Stack Overflow did.
Being “AI-native” is not an existential transformation. It’s adopting useful tools while maintaining critical judgment.
As I said when my team started using Docker: “It’s a powerful tool, but you still need to understand what you’re containerizing”.
With AI it’s the same: it’s a powerful tool, but you still need to understand what you’re building.
What do you think? Are you seeing the “10x improvements” the article promises? Or is your experience more like mine?
I’d love to hear real stories, not marketing demos. Reality is usually more interesting than hype.
PS: If your company is considering “autonomous AI agents” for production code, please talk to me first. I want to save you some mistakes I’ve seen.