
Software developers, anyone still writing code?

Posts: 242
Full Member
Topic starter
 

Over the last few months where I worked* they have gone AI (LLM) mad, coming to a head last week when it was decreed that all coding must be AI-first: we are not to start writing code ourselves, and deadlines have been set for productivity increases. Now I'm not a Luddite and quite like using AI in the editor, as it saves a lot of tedious typing, but using agentic AI to generate masses of code and tests for a long-term product feels a lot like trading velocity for maintainability and stability. Also, the reality of what the coders are finding using AI and the expectations of management are different, and that's before we get into how long someone can work at the productivity level expected, or how long before they lose their coding ability.

I'm interested in how other devs are finding it.

*took voluntary redundancy before the latest directives. 


 
Posted : 30/03/2026 10:12 am
kelvin reacted
Posts: 8199
Full Member
 

Only using it for reducing tedious bits in the editor and for suggestions on how to get something to work (the cases when you know it is possible but can't quite nail the syntax). In the last case it frequently generates crap.


 
Posted : 30/03/2026 10:21 am
Posts: 10956
Full Member
 

Not a proper dev, but I'm using it to write a Kotlin app for myself (knowing nothing about Kotlin or app development). I've used a combination of pretty much all the public free ones and none of them are error-free. I was also lazy a while back and couldn't be bothered to look up the syntax for a bit of M, so I asked Copilot, which completely made up a function that does not and never has existed in M. Python I'll still hand-crank, but VS Code sometimes helps with filling in stuff, and sometimes just goes off on a tangent.

So no: unless commercial versions are substantially ahead of the public offerings, I wouldn't trust it for any large project with real clients.


 
Posted : 30/03/2026 10:28 am
Posts: 2598
Full Member
 

We are pretty much AI-first now. Through the lens of an old fart who's been doing this a long time, my realisations are these...

- We stopped writing assembly code a long time ago and moved to modern, managed languages, so appropriately asking AI to write the code could look like another stop along this journey.  Just zoomed out another level.

- Writing the code was never the bottleneck, and if anyone wants enterprise-grade, maintainable solutions, then currently there really does need to be a human in the loop, designing, reviewing and building.  I now spend way more time reviewing monster PRs and explaining mine to others, which takes significant time to do properly.

- The flow state of physically writing code is something I miss, and something that really imprints new frameworks (or entire stack) into my mental understanding.  I'm finding it harder to really understand newer stuff without that.  But then the flip side is a massive leg up in near instant productivity.

- With those points in mind, I wonder how we get new juniors up to the experience / knowledge of currently senior or lead engineers where they've only experienced agentic development.  Perhaps they won't need to, such is the pace of AI improvement.

 

 

/ scurries off to the "how much money do I need to retire" thread.


 
Posted : 30/03/2026 10:30 am
dhague and nicko74 reacted
 poly
Posts: 9128
Free Member
 

We don't routinely write code with AI, but we do use AI for code reviews, and it's way more effective and robust than human developers.


 
Posted : 30/03/2026 10:38 am
Posts: 242
Full Member
Topic starter
 

Posted by: sam_underhill

- Writing the code was never the bottleneck, and if anyone wants enterprise-grade, maintainable solutions, then currently there really does need to be a human in the loop, designing, reviewing and building.  I now spend way more time reviewing monster PRs and explaining mine to others, which takes significant time to do properly.

/ scurries off to the "how much money do I need to retire" thread.

This is exactly what we've found: it generates so much code that the bottleneck becomes massive code reviews. We've been pushing to break the tasks into smaller chunks that are properly reviewable, rather than massive PRs where, in all honesty, any human is just going to skim to check it looks alright. Management have already asked if AI can code-review its own work. I'm glad I've stepped out of this; I'm not quite in the retirement window yet, but any future code work is going to be small-company only, and NO American management. 

 


 
Posted : 30/03/2026 10:38 am
Posts: 1623
Free Member
 

I'm a Data Engineer and I have been writing scripts to automatically generate elements of my code for the last decade anyway. I've challenged myself to be AI-first now. A good approach I've found is to use AI/manual coding to create a template/style and then ask AI to amend it for different requirements. This way I have more control over what it is doing, and it also makes the output code look handwritten. Right now I'm using AI to double the number of clients I can work with. I believe the next few years are a golden opportunity to gain an advantage while organisations and people are getting to grips with AI.


 
Posted : 30/03/2026 10:45 am
Posts: 5149
Full Member
 

Posted by: poly

We don't routinely write code with AI, but we do use AI for code reviews, and it's way more effective and robust than human developers.

How have you assessed that? I don't doubt it's true, but I'd be interested to learn how you have proved it.

 


 
Posted : 30/03/2026 10:47 am
Posts: 1243
Free Member
 

It's over.*

 

* unless you have a niche that's not represented well in the training data, e.g. my BIL, who works in genomics. They use LLMs for all their dashboards and whatnot, but the actual science stuff AI can't handle at all. 


 
Posted : 30/03/2026 11:18 am
Posts: 7097
Free Member
 

Safety critical engineer here. Coding was only ever 10% of a project effort. So there's that.

AI is changing how software gets made in the first place, for sure. As noted above, we used to punch cards and write assembler code, now we use high-level languages, and soon we'll use a quasi-legal style of English.

AI can be really quick to get first drafts off the ground. It's astonishingly good at getting cookie cutter application type code together. The more your thing looks like other things the more useful it is.

But.

It's astonishingly bad at the difficult stuff. It produces mountains of code. It's really quick at always doing something. That last point is the killer: you can travel at light speed in the wrong direction. Experience is absolutely needed here.

So yeah, whilst AI can help with most of the SW design, development and testing process, it won't replace humans. The bus still needs driving. The bus just got faster. You also still need experienced people to drive that bus and to teach the juniors where the steering wheel is and what the brakes look like.

Last thought, any company that hard pivots to AI is going to find life easy for a short time and then very difficult, very quickly, I would guess.


 
Posted : 30/03/2026 11:25 am
TedC and nicko74 reacted
 poly
Posts: 9128
Free Member
 

Posted by: oldtennisshoes

Posted by: poly

We don't routinely write code with AI, but we do use AI for code reviews, and it's way more effective and robust than human developers.

How have you assessed that? I don't doubt it's true, but I'd be interested to learn how you have proved it.

The main indications are:

  • time to get the code reviewed - fallen from "when someone got a chance" to "as quick as the AI can process it" = a few minutes
  • number of times errors are detected in formal testing and sent back for rework (which hits release schedules)
  • number of "escaped bugs" reported by customers after release

But anecdotally the devs are also reporting:

  • it's spotting stuff we would have missed before
  • it is not reading the requirements with the inherent bias we have, and so is forcing better requirement writing / clarification too

From a people management perspective it also seems to keep me out of the politics of personalities: who is being a ****, or who is perceived as being an arse about feedback.  We do have a human escalation process for when the developer doesn't like the AI response, and it's used very little.  But more code actually goes through an improvement/change at the code review rather than being rubber-stamped.  

 

 


 
Posted : 30/03/2026 11:39 am
TedC and oldtennisshoes reacted
Posts: 7097
Free Member
 

mostly matches my experience TBH

it is "fair" at reviews, although not perfect - like most other aspects of its use. It is excellent at the "simple" checks - verifying against standards and styles - so all the boring cruft is cleaned up by these tools

but

"did this code achieve the design intent", "is the code clear to understand", "has it been written with testability in mind" type questions... dunno, ask again in several years, it isn't there yet on those.

 


 
Posted : 30/03/2026 11:46 am
nicko74 reacted
Posts: 242
Full Member
Topic starter
 

Posted by: mrmonkfinger

It's astonishingly bad at the difficult stuff. It produces mountains of code.

As part of me leaving, they've been trying to get AI to summarise some of the specialist weather work I do, and it produces a not-insubstantial amount of made-up stuff. The problem is that someone outside of this field would probably not notice, because the made-up stuff is so well knitted in with correct information: the most believable lies are those served with truths.  

 


 
Posted : 30/03/2026 11:58 am
kelvin reacted
Posts: 6989
Full Member
 

Software Tester here.

I think the role of Test Automation engineer is on borrowed time.  LLMs can put together Selenium scripts much quicker than I can.

What I've chosen to do is take a much more active role in implementing Shift Left principles where my job is to act as a QA on all stages of development, from analysing business requirements, risk assessments, threat modelling, etc and ensure risks and requirements are documented and there is a clear link from these items through to the Test Report.

Either I'm ahead of the curve or completely wrong as most employers still seem to be more interested in finding out if you know Playwright rather than if you can find issues before anyone has even started writing code.

I haven't tried but I suspect things like embedded systems and Hardware in the Loop testing are going to be safe from LLMs for a while.  Maybe.


 
Posted : 30/03/2026 12:27 pm
kelvin reacted
 poly
Posts: 9128
Free Member
 

Posted by: mrmonkfinger

but

"did this code achieve the design intent", "is the code clear to understand", "has it been written with testability in mind" type questions... dunno, ask again in several years, it isn't there yet on those.

Actually we find it does a pretty good job on those topics too - faultless, no, but still better than a bored dev who wants to do something more interesting than read someone else's code and have the same argument they had last week.  We are experimenting with it writing automated test scripts etc. at the moment - signs are positive; again perhaps not 100%, but if you look at what real testers do, they're far from perfect either.  

 

 

 


 
Posted : 30/03/2026 12:30 pm
Posts: 2622
Full Member
 

I'm an embedded software engineer and I still get to write code. I think one or two of my coworkers are using AI for some coding work but I don't think it's particularly widespread with us yet.


 
Posted : 30/03/2026 12:34 pm
nicko74 reacted
Posts: 78441
Full Member
 

Out of interest, has anyone stopped to consider whether it's developing secure code?

Or are we all about to be molested by a Russian bot farm?


 
Posted : 30/03/2026 12:44 pm
nicko74 reacted
Posts: 6989
Full Member
 

Posted by: Cougar

Out of interest, has anyone stopped to consider whether it's developing secure code?

I think what you are talking about here is production ready code developed using the principles of a Secure Software Development Lifecycle vs 'Hacked together as a Proof of Concept that we showed to management and they said, "Great, we're releasing it on Monday!"'

It doesn't really matter if you use LLMs or not.  One approach is going to reduce the risks as low as is reasonably achievable and the other is... not.

Guess which approach senior management generally prefers.


 
Posted : 30/03/2026 12:52 pm
Posts: 4415
Full Member
 

I'm not a software developer but I am trying to build an ML model for my MSc project at the moment. I'm trying to be mindful of the fact that I want to learn as much as possible in my project so am using LLMs for the boring bits but trying to learn to build the interesting bits by myself, which takes longer but has more value to me.

In my experience the LLMs I've used (ChatGPT and Claude) are great for boilerplate stuff but get bogged down quickly if you try to do anything multi-step. They're great at translating error messages into something easier to act on, but I've often found that their solutions to the errors just add extra steps to 'fix' the code rather than identify a different approach that would work better. 

I can imagine that if I was a software engineer and the management were saying to go AI first on coding I'd be worried about the competency of my management.


 
Posted : 30/03/2026 1:28 pm
Posts: 78441
Full Member
 

Posted by: BruceWee

Guess which approach senior management generally prefers.

Well quite.

I have seen many, many "proof of concept" systems live in production.  Often taking far more man-hours keeping the fkn thing working - or trying to make it work at all - than it would've done just to tear down the demo and build it properly in the first place.  "But everyone's using it now!"  Tough, stop it.

The notion that we're now applying this mindset to raw code gives me The Fear.  We're sleepwalking into oblivion, because it's easier.


 
Posted : 30/03/2026 2:13 pm
hot_fiat reacted
Posts: 2085
Free Member
 

We're writing code manually but are allowed to use AI if we want. It's on us if there's a problem though. I'm fine with that approach - it's just a tool that I can use or not as appropriate.  AI first would be a deal-breaker for me personally, it takes all pride and enjoyment out of the work. 

It also produces really naive code a lot of the time which I begrudge inflicting on my users.

A recent example which I was asked to review was a class performing batch processing on large files.  My old hand-written code worked in chunks of 1 MB; it was fast but old-fashioned.  Somebody used AI to rewrite it to use a modern library, and it changed it to scanning a single char at a time. There were other massive bottlenecks too, such as loading the entire data set into memory. Overall it was approx 100x slower and kept causing the server to panic with OOM errors. 
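To illustrate the difference (a minimal sketch in Python, not the code from the post): reading in fixed-size chunks keeps memory bounded and amortises per-call overhead, while reading one character at a time pays that overhead on every single byte.

```python
def process_chunked(path, chunk_size=1024 * 1024):
    """Stream the file in 1 MB chunks; memory use stays around chunk_size."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += len(chunk)  # stand-in for the real per-chunk work
    return total


def process_char_by_char(path):
    """Same result, but one read call per byte - dramatically slower."""
    total = 0
    with open(path, "rb") as f:
        while f.read(1):
            total += 1
    return total
```

Both produce the same answer; the chunked version simply makes a few thousand read calls where the other makes millions, which is the kind of regression a skim review of a big AI-generated PR can easily miss.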


 
Posted : 30/03/2026 2:50 pm
Posts: 5837
Full Member
 

Software to control hardware, in a company that is developing something (on investment funding) and is terrified of IP leaking out 'to the internet' - my team are still writing software. There is also a large biological element to what we are doing, so it's not overly simple/straightforward and often involves a hardware change as well as a software solution. I have a smaller team than I did a few months ago; I've lost almost all of my test team, partly through 'cost-savings' and partly through people changing jobs. So I'm at a point where I feel I need AI to help with some things, but I also don't really know where to start. It'd probably tell me to spend less time on here, to be fair! 

Last week I was moaning about it only being Wednesday, and a mate asked his AI engine whether any days would overlap if we had some people on weeks that missed a Wednesday and others who kept it. It said one set was mod 6 and one was mod 7, so there would be no overlap. I called bu****** and it recanted, agreeing there were in fact 42 days in a year that crossed over. AI is awesome! 
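For what it's worth, the "no overlap" claim falls over with a couple of lines of arithmetic: a 6-day cycle and a 7-day cycle realign every lcm(6, 7) = 42 days, so the two rotas cannot avoid each other indefinitely. A minimal sketch (illustrative only - `phases_coincide` is a made-up helper, not anything from the anecdote):

```python
from math import lcm


def phases_coincide(cycle_a, cycle_b, horizon):
    """Days within `horizon` on which both cycles are back at phase 0."""
    return [d for d in range(horizon)
            if d % cycle_a == 0 and d % cycle_b == 0]


# Two cycles that start together realign every lcm(6, 7) = 42 days,
# so "mod 6 vs mod 7, therefore no overlap" is exactly backwards.
assert lcm(6, 7) == 42
```

Whether the recanted "42 days in a year" figure is right is another question entirely, which rather proves the poster's point.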


 
Posted : 30/03/2026 5:05 pm
Posts: 7097
Free Member
 

Posted by: Cougar

more man-hours keeping the fkn thing working - or trying to make it work at all - than it would've done just to tear down the demo and build it properly in the first place

Ironically, LLMs are about the quickest thing going if you want to refactor your code base.


 
Posted : 30/03/2026 7:42 pm
Posts: 3329
Full Member
 

Posted by: mrmonkfinger

any company that hard pivots to AI is going to find life easy for a short time and then very difficult, very quickly

100% agree on this. 

I think many organisations aren't going to be able to resist the lure to churn stuff out. Often they won't even anticipate the danger. This depends on how experienced/competent the management are (often not very!) and how familiar they are with concepts like software entropy, technical debt, and the fact that the initial build of software is only a small percentage of the total cost once you factor in maintenance over time.

I could see a lot of very big maintainability problems surfacing 5-10 years out from now.


 
Posted : 30/03/2026 9:07 pm
Posts: 9038
Free Member
 

Out of interest, how do you guys using AI to write code get it to adhere to company specific guardrails - security standards, development standards etc? I'm an architect but getting (some, mainly outsourced) humans to adhere to guardrails is bad enough...


 
Posted : 31/03/2026 8:43 am
Posts: 1243
Free Member
 

Put it in the agents.md, have a sub-agent whose entire job is to validate against standards, and augment with deterministic processes (lint, automated tests and coverage). If it derps, keep updating the agents.md and the sub-agent prompt. It won't be perfect, but it won't be much worse than an average dev, and it will be a lot cheaper.
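As a sketch of the "deterministic processes" part (tool names here are placeholders - substitute whatever linter and test runner your stack actually uses), a CI step that runs after the agent's changes might look like:

```shell
#!/bin/sh
# Hypothetical guardrail step: deterministic checks the agent cannot
# talk its way past, run on every change regardless of the prompt.
set -e                                   # abort on the first failure
ruff check .                             # lint against the coded standards
pytest --cov=src --cov-fail-under=80     # tests plus a hard coverage floor
```

The point is that the agents.md steers the model, but the lint/test/coverage gate is what actually enforces the rules.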


 
Posted : 31/03/2026 8:53 am
mmannerr reacted
Posts: 242
Full Member
Topic starter
 

Posted by: el_boufador

100% agree on this. 

I think many organisations aren't going to be able to resist the lure to churn stuff out. Often they won't even anticipate the danger. This depends on how experienced/competent the management are (often not very!) and how familiar they are with concepts like software entropy, technical debt, and the fact that the initial build of software is only a small percentage of the total cost once you factor in maintenance over time.

I could see a lot of very big maintainability problems surfacing 5-10 years out from now.

It's going to be like the hole Boeing got themselves into when the C-suite decided it was cheaper to contract out engineering and get rid of the expensive but knowledge-heavy engineers, and they ended up with numerous very well-publicised issues. I wonder how many ISO-software-standards-certified companies have even thought about how AI fits in, given managers say things like "does it matter what the code looks like if it works" - well, that very much depends on your specification of "works". 

 


 
Posted : 31/03/2026 9:12 am
el_boufador and hot_fiat reacted
 poly
Posts: 9128
Free Member
 

Posted by: DaveyBoyWonder

Out of interest, how do you guys using AI to write code get it to adhere to company specific guardrails - security standards, development standards etc? I'm an architect but getting (some, mainly outsourced) humans to adhere to guardrails is bad enough...

As above, we don't actually use it to write code - but we do use it for other parts of the development lifecycle.  It absolutely is told the rules / behaviours it needs to follow etc.  I'm sure it is not perfect, but by and large it's far easier to get it to stick rigidly to the rules than it is to get a human developer to.   

 


 
Posted : 31/03/2026 10:55 am
Posts: 8003
Full Member
 

Posted by: DaveyBoyWonder

I'm an architect but getting (some, mainly outsourced) humans to adhere to guardrails is bad enough...

I would rate it as easier. You write the rules and it, mostly, obeys them, vs the outsourced humans who completely ignore them. It's a long way from reliable, but it beats many of the devs.

In terms of rolling it out, one of our main things currently is setting up sub-agents with the appropriate rules that the devs can run themselves before asking us for review.

The bit that has me curious is what happens when they have to increase the prices. Currently it seems to be priced as a gateway drug, but the prices are going to need to go up sooner or later.

They're already playing games with peak/off-peak times and an opaque billing strategy, so you have no idea what they are up to.


 
Posted : 31/03/2026 11:09 am
Posts: 1243
Free Member
 

Posted by: dissonance

The bit that has me curious is what happens when they have to increase the prices. Currently it seems to be priced as a gateway drug, but the prices are going to need to go up sooner or later.

 

Inference is already profitable for Google, at least.

 


 
Posted : 31/03/2026 12:15 pm
 dazh
Posts: 13390
Full Member
 

- We stopped writing assembly code a long time ago and moved to modern, managed languages, so appropriately asking AI to write the code could look like another stop along this journey.  Just zoomed out another level.

This. I've had long debates with devs in my team about the need for human review, and I always end up making this argument. In my experience the models write way better code than I have ever done (not hard, admittedly!), so the need for code reviews is minimal. More important is a review of the design, requirements and architecture. It might be specific to my team, but we seem to have more devs who obsess about writing code and tend not to think too much about the bigger picture. Those sorts of people are going to find themselves out of a job very quickly, I think. Longer term I suspect the future for most devs is becoming more like business analysts and product owners, using the AI models to build the software. The roles of business analysts, architects and product owners are also going to change massively though. When the cost of building is orders of magnitude smaller than it was before, there's more freedom to iterate and put less effort into getting requirements and design right first time. 


 
Posted : 31/03/2026 4:48 pm
Posts: 180
Free Member
 

Reading this avidly with a massive amount of nervousness for my son.  He graduated last year in Computer Science and his ambition was to get into software development, but the huge number of grad roles that were there when he started his course have vaporised, and he is now competing with literally thousands of applicants for each role.

It feels like he has ended up in the vacuum left by AI and I'm struggling to see a path I can guide him toward which has prospects of a job in a field he is interested in that isn't a short - medium term dead end.

Are there any roles that he should be looking for that are a) still recruiting for junior roles and b) going to allow him to gain the experience to get past the "AI dead zone" in terms of coding skills or is it a lost cause and he needs to accept that he needs to pursue a different career path and possibly retrain before he has even secured his first "proper" job?


 
Posted : 31/03/2026 9:27 pm
 dazh
Posts: 13390
Full Member
 

Reading this avidly with a massive amount of nervousness for my son.

Totally justified unfortunately. I really feel for your son and thousands of others like him who've spent years studying computer science or something related with the aim of a career in software development, only to have the rug pulled from under them just as they're about to graduate. I think the only advice I could give would be to tell him to focus on devops and infrastructure, comms/networking, hardware or something like that as they're further behind on the AI curve. Basically anything where writing code isn't the core activity. If he still wants to do software dev, then invest in subscriptions to Claude Code and/or ChatGPT Premium and spend as much time as he can mastering the art of orchestrating agents. Even then though the reality is that there'll be far fewer jobs so it'll be insanely competitive. 

I've been telling my junior team members to have a hard think about the future and what they want to do, and try to have a plan B if it all goes to shit. The problem is no one knows how it will all pan out. The optimistic half of the software dev profession think AI will be a multiplier and create a boom in new jobs/opportunities, the pessimistic half (of which I am one) think we're all f*****. I'm very glad I'm at the end of my career!


 
Posted : 01/04/2026 10:17 am
el_boufador reacted
Posts: 31062
Full Member
 

AI is not just another layer of abstraction. Thinking of it that way is the path to creating a whole heap of disposable code. That's fine if it's genuinely going to have a short deployment life, but the danger is we're about to have a couple of years where the industry churns out a whole heap of the worst possible legacy code for the future.


 
Posted : 01/04/2026 10:37 am
el_boufador reacted
Posts: 242
Full Member
Topic starter
 

Yeah, people saying it's like a super-high-level language don't get that it's non-deterministic: you can give it a problem twice and get different results that will work and not work in different ways.  


 
Posted : 01/04/2026 8:53 pm
el_boufador reacted
Posts: 8003
Full Member
 

Posted by: matt303uk

Yeah, people saying it's like a super-high-level language don't get that it's non-deterministic

I have been on a POC looking at converting some truly ancient software to a modern stack. Two of us are working on it, and it's funny looking at how the WIP branch names have evolved over time: lots of "pass 2", "pass VXIIII" and so forth.

It's been interesting, since I have mostly been stuck in the role of architect/lead, where I get to smack business analysts round the head and then write notes for junior devs to build to (regardless of whether it would be quicker to do it myself), so using the AI tools is oddly familiar.

Has everyone seen the Claude Code leak? Some amusing stuff in there. Tad annoying that I think our "office hours" with the Claude Code reps have now finished, since that could have given some entertainment.


 
Posted : 01/04/2026 9:44 pm
Posts: 7097
Free Member
 

Posted by: DaveyBoyWonder

Out of interest, how do you guys using AI to write code get it to adhere to company specific guardrails - security standards, development standards etc? I'm an architect but getting (some, mainly outsourced) humans to adhere to guardrails is bad enough...

Just plonk a copy of the standards somewhere the LLM can access them and prompt it to apply them. We're using VS Code with an agent, so for us they just need to be part of the VS Code project.

 


 
Posted : 01/04/2026 10:24 pm
Posts: 7097
Free Member
 

Posted by: matt303uk

Yeah people saying it's like a super high level language don't get it's non-deterministic, you can give it a problem twice and get different results that will work and not work in different ways.  

LLMs are probability machines. 

You ask it something, it probably interprets it about right and then probably churns out something that is probably right. Maybe. But definitely not certainly. Something something mostly something.

 


 
Posted : 01/04/2026 10:28 pm
Posts: 8003
Full Member
 

Posted by: mrmonkfinger

We're using VS Code with an agent, so for us it just needs to be part of the VS Code project.

Most of the time. In my experience it still needs consistent prompting:

Me: "those rules you should be following? Are you?"

It: "Why, oh wise and all-knowing one, I failed to do so this time. You are so amazing to have seen this. I will fix this."

Part of my POC stuff is trying to turn it into useful stuff for a broader team. One thing which really does stand out is how fast it changes. In the wiki I have had to stick a note along the lines of "by the time you read this, some stuff I say isn't an option probably now is. Please update it". The other is trying to figure out how to put checks in place, since it does do really stupid things a lot of the time, even with specific instructions not to.

 

 

 


 
Posted : 01/04/2026 10:56 pm
Posts: 242
Full Member
Topic starter
 

Same here. We even added some specific rules to one of Claude's skills files and it occasionally ignores them; you have to keep a proper eye on what it's generating. Management's view of what AI can do and what the software developers are seeing are quite different at the moment - you can guess which of these is seen as the correct view.

 


 
Posted : 02/04/2026 7:31 am
 dazh
Posts: 13390
Full Member
 

AI is not just another layer of abstraction. Thinking of it that way is the path to creating a whole heap of disposable code.

Not yet it's not, but I don't think it's far off. What's the problem with disposable code if it works and can be rebuilt with the minimum of effort? As I said, it's already at the stage where it can write better code than most devs can. Yes there will be slop, but I'm sure if you compare compiler-generated assembly with human-written assembly there's an awful lot of slop there too. I doubt there are many humans who could maintain compiler assembly, so why worry about maintaining AI-written TypeScript, Python or whatever? The days of developers putting huge effort into producing perfect code are over.

 

You ask it something, it probably interprets it about right and then probably churns out something that is probably right.

In my recent experience it interprets everything pretty much 100% and writes code that works first time. Maybe it's because I'm working in the data engineering sphere where the code is mostly procedural to implement data pipelines etc but I rarely have to check it in detail or change it manually. I get a lot of these sort of comments from people in my team and my first response is usually 'if it doesn't work is that because you haven't told it what to do in sufficient detail?'. 


 
Posted : 02/04/2026 11:12 am