Some time has passed since I wrote an article on my thoughts about AI in the everyday work of a programmer. I have new thoughts about where we are all heading and what the old/new role of a software engineer is turning into.
What Has Improved
Since last time, AI tools have improved a lot. Where previously I would have recommended AI only for small autocompletions, writing tests, or finding entry points, now I can entrust architectural decisions and even the generation of whole solutions to AI agents, which is labeled vibe coding nowadays. They have become much better at understanding and explaining code and architecture. The most incredible thing is that modern AI systems can even reason about legacy solutions, explaining what the intent was when they were implemented years ago.

I have given up on GitHub Copilot, as it seems less precise than its competitors. I tried Cursor but didn't like it: compared to JetBrains apps its IDE feels less professional, and its context understanding was lacking. It might be that I simply never learned how to set up those tools properly, but I gave up. Right now I am using the latest versions of OpenAI Codex and Anthropic Claude. Both work really well, especially when you fine-tune their "long memory" Markdown files, which let the models absorb as much as possible about your project's code practices and architecture. Anthropic leads in my personal rating; I trust it with most tasks, but in some peculiar cases Codex is much better, so for really hard tasks I usually use both tools and ask them to critique each other's solutions.
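To give an idea of what those memory files contain: Claude reads a CLAUDE.md from the repository root, and Codex looks for AGENTS.md. The sketch below is only an illustration with an invented project, but the structure is roughly what I keep in mine:

```markdown
# Notes for AI agents (CLAUDE.md / AGENTS.md) — example project, details invented

## Architecture
- Backend services live under services/, each with its own database.
- Services talk to each other only through their public APIs, never through each other's databases.

## Code practices
- Constructor injection only; no static singletons.
- Every new endpoint gets an integration test mirroring the package of the class under test.

## What to avoid
- Do not touch generated code or build output directories.
- Keep the existing formatting; do not reformat files you did not change.
```

The point is to keep it short and concrete: long, vague guidelines get skipped over by models just as readily as by humans.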
What Is to Be Done
However, there is still a lot of room for improvement. In distributed, convoluted architectures, AI loses context quickly and generates poor code. It can miss important points in a business requirements document (BRD) and mix up code practices, taking some snippets from modern conventions and others from legacy codebases; it struggles to keep to a single code style. The most dangerous failure is a mistake in a small but important detail that is covered by erroneous tests generated in lockstep with the code. Some solutions (especially algorithmic code, UI, and tests) may even be unreadable to experienced programmers while appearing to work.
What Is the Role of a Human Developer in the Age of Agentic AI?
As things stand today, I definitely wouldn't say that the profession of software developer will become obsolete in a couple of years. But neither would I say that AI is stuck and no longer developing. On the contrary, being a developer now means learning a lot, understanding broader contexts, keeping stricter self-discipline, and building solid soft skills.

AI brings the fun back to our profession. It takes the routine tasks off our hands. Writing monotonous, repetitive tests, copying and pasting enum or ORM entity templates, remembering or Googling endless language patterns and library functions? No more. Instead, you concentrate on architecture, extensibility, and maintainability; you write the most important parts and brainstorm database schemas and inter-service communication; you think about user experience and about preventing burnout among your colleagues.

AI is just a tool, but the best one: it is like several eager junior programmers waiting around the clock for your instructions and critiques, and at the same time a guru who knows every pattern and technology on Earth and understands how your huge codebase works as a whole. Yes, both at once: a junior and a guru. But you hold the rudder. You are the captain. I think life is hard for junior developers nowadays, as AI takes their jobs. Mid-level and senior developers, meanwhile, have become team leads of teams of one, directing their plethora of AI helpers and reviewing their code. The future is not vibe coding, nor is it going back to manual coding. The future is using the right tool for the task, as it always has been.