Greg Brockman, President and Co-Founder of OpenAI, recently published a thread that perfectly describes the moment we’re living through in software development. According to him, we’re witnessing a genuine renaissance in software development, driven by AI tools that have improved exponentially since December.
The qualitative leap
The most striking part of Brockman’s thread is how he describes the internal change at OpenAI: engineers who previously used Codex for unit tests now see the tool writing practically all code and handling a large portion of operations and debugging. This isn’t an incremental improvement; it’s a paradigm shift.
This type of transformation reminds us of other technological revolutions like cloud computing or the Internet itself. Each of these technologies required a profound adaptation in how we work, and agentic AI will be no different.
OpenAI’s vision: agentic development by March 31st
Most interestingly, OpenAI isn’t just developing these tools; it’s also adopting them aggressively internally. They have two clear goals for March 31st:
- Agents as first resort: For any technical task, the default tool should be interacting with an agent rather than directly using an editor or terminal.
- Safety and productivity: Agent usage must be safe but productive enough that most workflows don’t need additional permissions.
Strategies for the transition
Brockman’s thread details six key recommendations for adopting this approach:
1. Try the tools (for real)
It’s not enough to hear about them. OpenAI recommends:
- Designate an “agents captain” per team
- Share experiences in internal channels
- Organize hackathons to experiment
2. Create skills and AGENTS.md
This is a practice I find particularly brilliant:
- Maintain an AGENTS.md file per project that updates when the agent makes mistakes
- Write skills for everything Codex does correctly
- Save these skills in a shared repository
It’s a very pragmatic approach: learning from agent failures and documenting the fixes so that the agents progressively improve.
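To make this concrete, here’s a hedged sketch of what a per-project AGENTS.md might look like. The sections and entries are invented for illustration; neither Brockman’s thread nor OpenAI prescribes a specific format:

```markdown
# AGENTS.md (illustrative example)

## Project conventions
- Run `make test` before proposing any change; tests must pass locally.
- Database migrations live in `migrations/`; never edit applied ones.

## Known agent pitfalls (append when the agent gets something wrong)
- 2025-01-12: Agent assumed the config loader reads YAML; it reads TOML.
- 2025-01-20: Agent regenerated the lockfile unnecessarily; don't touch it
  unless a dependency actually changed.
```

The value is the feedback loop: every mistake becomes a documented rule the agent reads on its next run, instead of a correction that lives only in one reviewer’s head.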
3. Internal tools inventory
Make all internal tools accessible to agents, whether via CLI or MCP servers. If agents can’t access your tools, they can’t truly help.
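One low-effort way to start is wrapping an internal tool in a small CLI that emits machine-readable output. The sketch below is a hypothetical example (the service registry, names, and statuses are invented); the point is the shape: one command, JSON on stdout, so any agent with shell access can call and parse it.

```python
# Hypothetical sketch: exposing an internal "service status" tool as a
# CLI an agent can invoke. A real version would query an internal API
# instead of this hard-coded dict.
import argparse
import json

# Stand-in for an internal service registry (assumption for the example).
SERVICE_STATUS = {
    "billing": "healthy",
    "search": "degraded",
}


def get_status(service: str) -> dict:
    """Return a machine-readable status record for one service."""
    return {"service": service,
            "status": SERVICE_STATUS.get(service, "unknown")}


def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Query internal service status (JSON output)")
    parser.add_argument("service", help="name of the internal service")
    args = parser.parse_args(argv)
    # JSON on stdout is trivial for an agent to parse reliably.
    print(json.dumps(get_status(args.service)))


if __name__ == "__main__":
    main()
```

The same function could later be registered behind an MCP server; starting with a plain CLI keeps the barrier to entry low.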
4. Agent-first code structures
This is still unexplored territory, but OpenAI suggests:
- Quick-to-run tests
- High-quality interfaces between components
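Both suggestions can be illustrated together. In this sketch (all names are made up), business logic sits behind a small typed interface, so the test needs no network or database and runs in milliseconds, exactly the fast feedback loop an agent benefits from:

```python
# Sketch of an "agent-friendly" component boundary: a narrow, typed
# interface with pure logic behind it, so tests are quick to run.
from typing import Protocol


class PriceSource(Protocol):
    """High-quality interface between components: one clear method."""
    def price(self, sku: str) -> float: ...


def total(cart: list[str], source: PriceSource) -> float:
    """Pure function over the interface: trivial to test in isolation."""
    return sum(source.price(sku) for sku in cart)


class FakePrices:
    """In-memory fake keeps the test free of external setup."""
    def price(self, sku: str) -> float:
        return {"apple": 1.0, "pear": 2.5}.get(sku, 0.0)


def test_total():
    assert total(["apple", "pear"], FakePrices()) == 3.5
```

A suite built from tests like this gives an agent a near-instant pass/fail signal after every change it proposes.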
5. Say NO to sloppy code
Brockman is very clear on this: we must maintain at least the same quality standards as with human code. Someone must be responsible for every PR, and reviewers must keep the bar high.
6. Basic infrastructure
There’s plenty of room to build infrastructure around these tools: observability, agent trajectory tracking, centralized management of accessible tools.
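As one example of what "agent trajectory tracking" could mean in practice, here’s a minimal sketch that records each tool call an agent makes. The event structure is an assumption for illustration, not any specific product’s format:

```python
# Minimal sketch of agent-trajectory observability: record each tool
# call with its arguments, result, and timestamp so runs can be
# audited or replayed later.
import json
import time


class TrajectoryLog:
    def __init__(self):
        self.events = []

    def record(self, tool: str, args: dict, result: str) -> None:
        """Append one tool-call event to the trajectory."""
        self.events.append({
            "ts": time.time(),
            "tool": tool,
            "args": args,
            "result": result,
        })

    def dump(self) -> str:
        # JSON Lines output: easy to ship to any existing log pipeline.
        return "\n".join(json.dumps(e) for e in self.events)


log = TrajectoryLog()
log.record("run_tests", {"path": "tests/"}, "passed")
```

Even a log this simple answers the basic audit questions: which tools the agent touched, in what order, and with what outcome.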
Cultural change, not just technical
What I like most about OpenAI’s approach is that they explicitly recognize this isn’t just a technical change, but a deep cultural shift. It requires rethinking how we work, how we evaluate code, how we structure teams.
Brockman poses a key question at the end: how to prevent “functionally-correct but hard-to-maintain” code from creeping into our codebases. This is a question that every company adopting AI agents will have to answer.
Personal reflections
From my perspective as a developer who has seen these tools evolve over the last few years, I believe we’re at an inflection point similar to what we experienced with the mass adoption of GitHub: it didn’t eliminate programmers, but it completely transformed how we work.
In my opinion, the key is finding the balance between delegating repetitive tasks to agents while maintaining human judgment for architectural and design decisions. The future isn’t programmers vs AI, but programmers empowered by AI agents. Those who adapt their workflows and maintain high quality standards will benefit the most from this revolution.