The Recent Experience of Coding with AI


Last July, I wrote an article about how the field of software engineering might be affected by the increasing integration of LLM-based code assistant tools. Unfortunately for me, I was writing that article right after the first major, functionally advanced release of Claude Code. While Claude Code technically existed as of February 2025, it wasn’t until May 2025 that it was expanded to offer the kind of sophistication in code assisting that it and some of the other code assistant tools now possess. Because of this, my thoughts in that article didn’t really take into account some of the changes we’ve seen since then.

Now I’m going to take a fresh look at the state of affairs in the use of LLM-based code tools and see where we’re at. In particular, I want to think about the implications of this technology for how we do our jobs, both now and in the future.

1. Functionality

What’s the sophistication I’m talking about? Well, I’ve used a few different code assistant solutions (GitHub Copilot, Claude Code) in my own work, and I’ve consulted software engineers who have tried out others (Cursor, Replit, etc.) as well. They have varying levels of capability, but some of the key elements include:

  • being able to access all of the files in your project, search through them, and analyze their contents together
  • being able to write significant chunks of code or whole files into your project
  • using “reasoning” LLMs that break tasks down into chunks and process them individually, while narrating the processing of those chunks to the user
  • agent tools, where the models can independently call on other software to complete tasks that the LLM cannot do well (including searching the web)

None of this requires a change to how we understand the LLM as an entity and its structure; rather, we’re adding things on top of the basic LLM that expand some of its capabilities. The “reasoning” LLMs really just involve different strategies for prompting, and for enabling multiple threads of LLM work to be done and combined together. While the LLM is still the same building block, we’re combining them in different ways and enabling different practical applications, so now they’re more useful and effective at the specific task of writing code.
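To make the “agent” idea a bit more concrete, here’s a minimal sketch of that loop in plain Python. Everything in it (the call_llm and run_tool stubs, the message format, the tool names) is an illustrative assumption on my part, not how Claude Code or Copilot is actually implemented: the model proposes a step, the harness executes any tool it asks for, and the result is fed back in until the task is done.

```python
# Minimal, illustrative sketch of an agent-style assistant loop.
# The names call_llm and run_tool are hypothetical placeholders, not any real API.

def call_llm(messages):
    # Stand-in for a chat-completion call: send `messages` to a model and
    # return its reply, which may include requests to invoke tools.
    raise NotImplementedError("plug in your model API here")

def run_tool(name, args):
    # Stand-in for tool execution: reading files, searching the project,
    # running tests, searching the web, and so on.
    raise NotImplementedError("plug in your tools here")

def agent_loop(task, max_steps=10):
    """Repeatedly ask the model what to do next, executing any tools it requests."""
    messages = [
        {"role": "system",
         "content": "Break the task into steps and request tools when needed."},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = call_llm(messages)                   # one unit of LLM "reasoning"
        messages.append({"role": "assistant", "content": reply.get("content", "")})
        tool_calls = reply.get("tool_calls") or []
        if not tool_calls:                           # no tool requested: task is done
            return reply.get("content", "")
        for call in tool_calls:                      # e.g. read_file, write_file, run_tests
            result = run_tool(call["name"], call["args"])
            messages.append({"role": "tool", "content": result})
    return "Stopped after max_steps without finishing."
```

The point of the sketch is just that the “sophistication” lives in this orchestration around the model, not in any change to the model itself.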

This isn’t meant to diminish the downsides of these tools, or of LLMs in general. I’ve talked about quite a few ways in which LLM technology has serious negative externalities. But I don’t think we can say, within the narrow space of software engineering, that this technology doesn’t work. It’s not perfect, clearly. I still get very frustrated when I’m writing code, ask a code assistant a question, and it bungles the whole thing. But the technology we have today is capable of serving a useful function.

2. How People Respond

As I talk with friends in the machine learning and software engineering space about this state of affairs, I hear a number of different perspectives. Some people are enthusiastically adopting AI code assistants in every way they can. They’ll give the tool a prompt and let it write the code, and come back later to review, or have the tool do the review itself. They’ll spin up multiple LLMs to collaborate on issues, reviewing one another’s work and producing voluminous amounts of code while the humans sleep. This is a form of what readers may be familiar with as “vibe coding”. For these people, being freed from writing code themselves is an unalloyed good, and they’re thrilled by the productivity increases they can achieve. Writing code, for them, was always mainly a means to an end, and they don’t mind dispensing with that labor. They’re producing new software at speeds never before anticipated, and by and large, it’s meeting their needs.

On the other hand, there are those I think of as “craftspeople”. These are developers and engineers who have a love for the work of thinking about code and writing code, and who enjoy the journey as much as the destination, if not more. For these people, the advent of AI code assistants is deeply troubling. When you enjoy your work because it requires thoughtfulness, creativity, and resilience, and you take pleasure in the labor, it’s alarming to be faced with a new paradigm suggesting that none of those skills on your part are necessary or desirable. Some of the most talented and skilled software engineers I know have talked about wanting to quit the whole career rather than be pushed into a vibe-coding paradigm in their daily work, where prompting and reading code reviews constitute the bulk of their responsibilities.

Vicki Boykis’s latest piece addresses this thoughtfully. Her advice for those of us feeling depressed about the direction of our field is to redouble our efforts to find ways to scratch the itch of wanting to be creative and make meaning in our work. I appreciate the value she places on these skills and feelings, but it does suggest that even she doesn’t see the actual job retaining the core character we’ve become accustomed to.

This is, of course, a spectrum, populated with people who may enjoy coding a bit but are all right with handing off most of that work, or people who really want to code but recognize that business pressures require them to adapt their processes to include more AI. Wherever you land, many if not most of us are concerned about how this shift is going to affect our careers and job prospects, as well as the state of the software engineering field as a whole.

The Seduction

But what is it we’re really experiencing? What is it like sitting down in front of your keyboard and spinning up your IDE in this new era? There’s something strangely seductive about having a little tool on the side of your screen that can just handle a task for you.

You know that the assistant can probably write the next function you need to add to your code. Even if you haven’t used it yourself, you’ve heard your peers rave about its abilities. And what’s the downside, anyway? Why not just go for the code assistant and have it do a little task?

You might have concerns about job security: are you going to become obsolete as tools like this increase their capability, or as we find more effective ways to use them? Will you lose the skills you’ve earned over the course of your career as you stop using them every day in favor of letting the AI do tasks? Nobody can tell you whether these are real concerns, because we just don’t know for sure yet how the workplace for software engineers is going to evolve over the long term.

You may also be aware of the broader implications of generative AI. You’re implicitly saying, “this work that I want done is worth the negative costs of this technology.” By choosing to click that code assistant chat button, you are deciding that your use case is worth the electricity. That it is worth the water usage. That it is worth supporting and boosting an industry and a technology that is, in other areas, responsible for significant social, political, and cultural negative impacts. You’re saying, “I think that’s all worth it for me to get a tool to write the code I need to finish this project.”

But even when you do have these tradeoffs brought to your attention, it’s still hard. You’re sitting there looking at your code, and part of you says, “I could just do this. I could write this component of this code. I know how to write this function.” But you’ve got this little bug, this little itch, in the form of a chat window on the side of the screen or a terminal command just waiting. “It’ll take me 3 hours to write this class and get it working and write the tests. But man, I could just push that button. That button’s right there. Push that button, and this could be done in a few minutes, and then I can move on to the next thing. It might even work better than what I’d write. My boss will be happy. I could be making progress and moving forward, so why not just have the AI tool do the work?”

There are a lot of “reasons why not” bouncing around in your head, because you know about the costs of using this technology, but that seductiveness is still there. Rationalization kicks in: you may ask yourself, “well, does my single usage of this really make any difference? I’m only one user, after all.” That’s a reasonable question to ask, of course. How much difference can one prompt make? Your one prompt really isn’t that resource intensive, and others around the world are using this technology far more, for far less worthy endeavors.

On the other hand, one prompt is probably never just one. What if you’re heading down a slippery slope where this becomes a routine part of your work? If your skills atrophy, will that make you more dependent on the tool?

Is this even really up to you anymore? Does it feel like you could continue working in software engineering and never pick up these tools? It’s very plausible that maintaining productivity and relevance at work requires you to keep using the code assistant tools. Is it your personal responsibility to hold back the tide of AI code tools, in the face of crowds who eagerly adopt this technology for every possible use case? In a tradeoff between principled avoidance of a technology that has negative social effects, and continuing to be able to feed your family, what is a person supposed to do? For most of us, material survival has to win out.

3. What Now?

This mental space is a hard place to operate from. We’re witnessing a significant change in how our work is done, and each of us is deciding how to adapt to it. For many, it’s emotionally taxing to see the field changing so dramatically, and to face the uncertainty about what this means for us and the world around us.

What did our forebears in the earliest days of computer programming think this field was going to look like in the future? In, say, the 1960s, when people were operating mainframes as big as a room and writing code with punch cards, could they have envisioned the Python open source ecosystem? That is roughly how I think about the scale of change that’s potentially possible for us now, and it could happen at a rapid pace.

The AI code assistants appear to be here to stay, in some form or another. The broader economic future of the big players in LLMs may be precarious, for reasons I have written about before, but that doesn’t necessarily prevent us from accessing some kinds of code assistant tooling, through open source LLMs and tools like https://ampcode.com/, https://opencode.ai/, or https://www.tabbyml.com/. If the models never get any better than they are today, they are still going to be functionally useful.

Our jobs are going to change, because these new tools are available, and we have to figure out how we’ll evolve. I don’t believe our jobs are going to disappear; they are just going to change. We’re going to become accustomed to using AI assistants in our coding, and it remains to be seen what the daily work looks like as a result. Will institutional inertia limit the amount of change we see in our workplaces? Will there still be any place for creativity and craftsmanship in software development and coding? In some workplaces, people are already being given performance reviews based on whether they use AI enough to please management, so we don’t have much time to think about it.

On a personal level, how are we going to come to grips with the ethical implications of our participation in this industry, and the ways they’re changing? Nobody can answer that for you, of course. Some people may very well quit and change careers, while others will find a way to live with the new paradigm.

We’re in a particular bind between what the economy and material conditions expect or demand from us, and the ethical implications of those demands. The vast majority of us need to support our families and aren’t in a position to refuse to comply. I think a lot of us are going to have to deal with a cognitive dissonance between these two sides.

Awareness and consciousness of the costs of our system are essential, even if they cause us discomfort. Pretending the problems with generative AI don’t exist isn’t a solution. As social scientists know, honestly interrogating the dynamics, flaws, and power structures of the system we find ourselves in is a prerequisite for improving that system, however incrementally. We can’t put the generative AI genie back in the bottle, but we also don’t necessarily have to accept the worst-case scenario in social, cultural, environmental, and political effects either. Structural change, not individual choice, is the only way to meaningfully improve systems, and if we’re informed about the ethical problems, we can participate in systemic pushes toward improvement.


Read more of my work at www.stephaniekirmer.com. I’m also speaking at ODSC East at the end of April 2026, on the topic of evaluation strategies for LLM development.
