The Software Development Renaissance with AI Agents

Greg Brockman, President and Co-Founder of OpenAI, recently published a thread that perfectly describes the moment we’re living through in software development. According to him, we’re witnessing a genuine renaissance, driven by AI tools that have improved exponentially since December.

The qualitative leap

The most striking part of Brockman’s thread is how he describes the internal change at OpenAI: engineers who previously used Codex for unit tests now see the tool writing practically all code and handling a large portion of operations and debugging. This isn’t an incremental improvement; it’s a paradigm shift.

This type of transformation reminds us of other technological revolutions like cloud computing or the Internet itself. Each of these technologies required a profound adaptation in how we work, and agentic AI will be no different.

OpenAI’s vision: agentic development by March 31st

Most interestingly, OpenAI isn’t just developing these tools; they’re adopting them aggressively internally. They have two clear goals for March 31st:

  1. Agents as first resort: For any technical task, the default tool should be interacting with an agent rather than directly using an editor or terminal.
  2. Safety and productivity: Agent usage must be safe but productive enough that most workflows don’t need additional permissions.

Strategies for the transition

Brockman’s thread details six key recommendations for adopting this approach:

1. Try the tools (for real)

It’s not enough to hear about them. OpenAI recommends:

  • Designate an “agents captain” per team
  • Share experiences in internal channels
  • Organize hackathons to experiment

2. Create skills and AGENTS.md

This is a practice I find particularly brilliant:

  • Maintain an AGENTS.md file per project that updates when the agent makes mistakes
  • Write skills for everything Codex does correctly
  • Save these skills in a shared repository

It’s a very pragmatic approach: learn from the agent’s failures and document them so that the agent progressively improves.
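As an illustration, a minimal AGENTS.md might look like the sketch below. The entries are hypothetical; each team’s file would reflect its own stack and the mistakes its agents actually make:

```markdown
# AGENTS.md

## Project conventions
- Run `make test` before proposing a change; all tests must pass.
- Use the repository formatter (`make fmt`); do not hand-format code.

## Known agent pitfalls (append an entry whenever the agent makes a mistake)
- Do not edit generated files under `gen/`; change the templates instead.
- Migrations live in `migrations/`; never modify one that has already been applied.
```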

3. Internal tools inventory

Make all internal tools accessible to agents, whether via CLI or MCP servers. If agents can’t access your tools, they can’t truly help.
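A sketch of what “making a tool accessible” can mean in practice: wrapping an internal function in a small CLI that emits JSON, so an agent can invoke it from a terminal and parse the result. The service registry and its contents here are entirely hypothetical:

```python
import argparse
import json


def lookup_service_owner(service: str) -> dict:
    """Hypothetical internal lookup: which team owns a given service."""
    registry = {"billing": "payments-team", "search": "discovery-team"}
    return {"service": service, "owner": registry.get(service, "unknown")}


def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Look up the owning team for an internal service."
    )
    parser.add_argument("service", help="service name, e.g. 'billing'")
    args = parser.parse_args(argv)
    # Emit JSON so the output is trivial for an agent to parse.
    print(json.dumps(lookup_service_owner(args.service)))


if __name__ == "__main__":
    main()
```

The same function could equally be exposed as an MCP server tool; the point is that the capability exists behind an interface an agent can reach.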

4. Agent-first code structures

This is still unexplored territory, but OpenAI suggests:

  • Quick-to-run tests
  • High-quality interfaces between components
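“Quick-to-run” can be taken literally: tests that exercise one component through a narrow interface, with no network or database, so an agent can run them in a tight loop after every change. A hypothetical example:

```python
# A small, pure function behind a clear interface...
def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price, rounded down to whole cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100


# ...paired with tests that run in milliseconds, with no setup required.
def test_apply_discount():
    assert apply_discount(1000, 10) == 900
    assert apply_discount(999, 0) == 999
    assert apply_discount(999, 100) == 0


test_apply_discount()
```

The faster and more hermetic the test suite, the cheaper it is for an agent to verify its own work before a human ever reviews it.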

5. Say NO to sloppy code

Brockman is very clear on this: we must maintain at least the same quality standards as with human code. Someone must be responsible for every PR, and reviewers must keep the bar high.

6. Basic infrastructure

There’s plenty of room to build infrastructure around these tools: observability, agent trajectory tracking, centralized management of accessible tools.

Cultural change, not just technical

What I like most about OpenAI’s approach is that they explicitly recognize this isn’t just a technical change, but a deep cultural shift. It requires rethinking how we work, how we evaluate code, how we structure teams.

Brockman poses a key question at the end: how to prevent “functionally-correct but hard-to-maintain” code from creeping into our codebases. This is a question that every company adopting AI agents will have to answer.

Personal reflections

From my perspective as a developer who has seen these tools evolve over the last few years, I believe we’re at an inflection point similar to what we experienced with the mass adoption of GitHub: it didn’t eliminate programmers, but it completely transformed how we work.

In my opinion, the key is finding the balance between delegating repetitive tasks to agents while maintaining human judgment for architectural and design decisions. The future isn’t programmers vs AI, but programmers empowered by AI agents. Those who adapt their workflows and maintain high quality standards will benefit the most from this revolution.


Source: Greg Brockman’s thread on Twitter/X
