Software/Data engineer here. I reckon I've got about a year - maybe 2 max - before I'm redundant. Anyone else?
When ChatGPT arrived I used it to create simple scripts and search for solutions to problems, which I had to review or debug. Now with the arrival of Opus 4.5 and 4.6 and Codex 5.2 and 5.3, all I have to do is throw some rough requirements at it and it builds entire solutions which are pretty much perfect out of the box. Haven't written a line of code in over a month. PRs are pretty much pointless box-ticking exercises, so hardly any review or QA required.
The above was getting me slightly concerned until I saw one of my colleagues had constructed a Ralph Wiggum implementation to build an entire app. 6 months of development done in 11 hours, costing $1,600 instead of $74k. Game over!
I seem to have drifted into a start-up/bio-tech area where people are too concerned about IP and investors to focus on what we could be using AI for, so they get people to do it instead.
So I'm OK for the time being, although with a dwindling team, mounting problems and investors on our backs it's going to be a challenging year. I do feel like we could be making good use of AI to build out our automation testing and to write/fix unit tests, which my team seem to spend a hell of a lot of their time doing these days.
It's just enabled me to add functionality to my commercial app that I simply couldn't have done myself, so it's actually made me money!
But I'm self-employed .... I can 100% see software jobs going because of it.
I can 100% see software jobs going because of it.
And writers
And musicians
And artists
And photographers
And a whole load more...
Not at all
Jevons' Paradox https://en.wikipedia.org/wiki/Jevons_paradox
Roles will change, but I don't expect huge job losses.
I presently work with a copywriter who’s just started retraining in occupational therapy as she also reckons she’s got 2 years, tops, before her job no longer exists.
As a graphic designer, I should be worried, but as we discussed on another thread the other day, the standard of ‘design’ presently produced by AI is utter shite and all looks exactly the same.
I’m both pleasantly surprised and amazed that it’s actually as bad as it is
I'm gonna get a job building flat pack furniture. It'll never take that over! I asked ChatGPT a question about a cabinet I'm building and it didn't have a clue 😀
My monitoring job could be AIed - in fact I looked into it. They couldn't (well, shouldn't!) make me redundant as I still would have to maintain the backend, but the other goons that work with me... see ya!
Software test manager in the NHS. I am 55 so I think I will make it to retirement at 60 but don't think my younger colleagues will.
Although I did ask ChatGPT how to change my van seat from a double to a single and it told me to saw it in half and then stitch it back up.
From my perspective...
Jobbing graphic designers - the type that knock up your local pub poster and menu - then yes, those days are numbered.
Top level graphic designers - they'll be fine. It's a different ball-game bringing a brand together.
Printers (like what I am) - we'll be fine too. The death of print has been predicted for decades but people still love a physical thing. I've got 10 years left and 'think' I'll be OK.
Unfortunately you still need to check that AI has created something valid and not dangerous. Some weapons-grade idiot made himself an AD script the other day with no idea if it was correct, and ran it. A whole evening wasted undoing something that should have been checked.
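For what it's worth, even a minimal dry-run guard in front of a generated script would catch that sort of thing. A rough sketch in Python (the object names and the `delete_object` stand-in are made up for illustration; a real AD script would call the actual directory API):

```python
import argparse

def delete_object(name: str) -> None:
    """Stand-in for the real destructive call (e.g. an AD removal)."""
    print(f"DELETED {name}")

def main(argv=None) -> list[str]:
    parser = argparse.ArgumentParser()
    parser.add_argument("--apply", action="store_true",
                        help="actually make changes (default: dry run)")
    args = parser.parse_args(argv)

    # Made-up target names for illustration
    targets = ["OU=Laptops/PC-0142", "OU=Laptops/PC-0143"]
    for t in targets:
        if args.apply:
            delete_object(t)
        else:
            # Default behaviour: report what WOULD happen, change nothing
            print(f"[dry run] would delete {t}")
    return targets

if __name__ == "__main__":
    main()
```

The point being: run anything AI-generated without `--apply` first, read the output, and only then let it loose.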
I reckon it'll make me redundant, but hopefully it'll be just about the time I was going to retire anyway.
Top level graphic designers - they'll be fine. It's a different ball-game bringing a brand together.
You miss the point that there won't be the jobbing graphic designers to become the top level designers.
You miss the point that there won't be the jobbing graphic designers to become the top level designers.
They'll just go in at a higher level and be trained accordingly. Properly good ones don't muck about with pub menus. And some of the best I've known were self taught.
Nope, pretty safe until an idiot decides less than average is good enough.
AI helps with some teaching things but is totally rubbish at being correct or even good enough.
Money men think kids are mini adults who want to do well, whereas teachers know they need to be engaged, otherwise they do what adults do... SFA
IT Project Manager here. I can use AI for all the boring stuff and as a bit of a sounding board. But for the herding of cats and random requirement changes we get I think it would struggle.
You miss the point that there won't be the jobbing graphic designers to become the top level designers.
That was exactly the point I made on the other thread. It’s the entry level jobs that are all being replaced, so how are younger designers (or writers, or anyone else) meant to get any experience and work their way up the ladder?
Well, judging by how many AI developed apps are heaving dumpster fires of security and privacy (at least from a technical PoV), my job of making sure _your_ shit stays safe, private and miscreant free while you do stupid things on the internet seems pretty safe.
If I do get the kick, I'll just have to go back to doing full time skydiving instructing or my old job as a government assassin.
Yes, I think it will and quite soon. I'm a very much mid-tier contract lawyer in a large public(ish) organisation and I think it already does large elements of my job pretty well. On a particularly optimistic day I can think of things that I am able to do that AI can't or maybe could never do but I don't think these elements are enough to save me.
We are already a very top heavy team (IMO) with 21 people and I'm sure we'd only need a couple of those roles to remain once AI starts automating everything except the most bespoke and complicated matters.
The thing is, I am not at all practically minded, so I'm genuinely quite worried about what I will pivot to in the event I'm made redundant in a couple of years.
I'm 44 so don't have time to waste, but at the moment I have no plan B, so it's pretty concerning.
No, I'm actively working in infosec around AI governance for financial firms, so I've got at least 7 years until there's a GRC/Auditing/DataProtection/CISO-in-kind AI tool. Even then, nuance and business pace matter, so it will likely hallucinate and still need human intervention. That works for my goal of being financially secure enough to drop to working 3 days a week and eke out toward early retirement.
From my 2 attempts at automation: basically, instead of doing a job, you'll end up maintaining whatever does the job, or sorting out the mess it created. Not all that dissimilar to people who now do some mouse clicks to make machinery make things, instead of hands and tools making it.
But with "AI" it mostly makes stuff up based on what it has seen, so will eventually hit the point where it can't make up any more stuff, and there won't be enough human intelligence left to progress things, unless...
If you're doing a PhD or actual research using a brain, then AI bots will pilfer all your work from OneDrive and Apple/Google clouds and it'll all be common knowledge (in the AI pool) before you can get LaTeX working to prepare the final thesis/report.
Should certainly be lots of vacancies for engineers and lawyers soon, to sort out all the mess that's been created.
IT Project Manager here. I can use AI for all the boring stuff and as a bit of a sounding board. But for the herding of cats and random requirement changes we get I think it would struggle.
Enterprise/ Solution architect here 👋
Similar thoughts to you
Maybe (I work in IT sales admin/account management), but the plan is to get rich selling all the AI platforms that will steal the existing jobs, so that I don't so much get made redundant as just retire.
I mostly manage people, so no, I don't think my role will be made redundant by current LLMs.
I might be behind the curve but I'm yet to see it being applied in a way that makes me concerned for my job.
Feels like the hype is reaching fever pitch at the moment though. Whether it's justified or just the result of what is surely an absolutely massive amount of pressure on the industry to justify the billions of investment it has received, I don't really know.
I hadn't heard of jevons paradox but it makes sense. If AI gives us the ability to increase productivity we'll use it to its maximum and in doing so create more jobs. Doubt we'll ever get to a point where we just sit back and let the machines do all the work. But if we do, sounds great. Presumably everything will be nice and cheap and we'll all live lives of luxury!
More likely, people will need to be agile. But if you're hard working and capable of learning new skills, there may be some huge opportunities along with the threats.
Nope, if anything I will be more busy as I can audit AI in regulated environments
Nope, if anything I will be more busy as I can audit AI in regulated environments
Aren't most governed under ISO 42k now?
Feels like the hype is reaching fever pitch at the moment though. Whether it's justified or just the result of what is surely an absolutely massive amount of pressure on the industry to justify the billions of investment it has received, I don't really know.
Yeah, I keep seeing the hype, my wife is getting more and more concerned, but whenever I see the real world implementations I'm cautiously relieved at just how bad they are.
But...
Nope, pretty safe until an idiot decides less than average is good enough.
I still worry that our bosses will look at it and think, yep, that's good enough, and sack us anyway. That said, our whole team has just been drafted in to manually carry out a task that AI was supposed to be able to do; it achieved something like a 20% accuracy rate at a relatively simple word matching and sorting task 😱
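As an aside, that kind of word matching and sorting is exactly what a deterministic script handles reliably. A rough sketch using Python's stdlib difflib (the word lists here are made up for illustration):

```python
import difflib

def match_and_sort(messy, canonical, cutoff=0.6):
    """Map each messy entry to its closest canonical term, then sort."""
    matched = []
    for word in messy:
        # Best fuzzy match above the similarity cutoff, if any
        hits = difflib.get_close_matches(word, canonical, n=1, cutoff=cutoff)
        matched.append(hits[0] if hits else word)  # keep unmatched as-is
    return sorted(matched)

# Made-up example data
canonical = ["invoice", "purchase order", "receipt"]
messy = ["reciept", "invioce", "purchase ordr"]
print(match_and_sort(messy, canonical))
# → ['invoice', 'purchase order', 'receipt']
```

No model needed, and it gives the same answer every time.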
One of our senior devs tried to use a bit of AI code to remove a device from AD - they bricked the device so badly (it was a fairly custom device) it took me a full day to get it back up & running.
That said I use it all the time to help with scripting things.
As a builder type, I hope it does. My knees only have a few more years left in them!
Self employed photographer who's spent decades shooting inaccessible, beautiful places in the best light. Had a very nice time doing it, sweating buckets and working hard, but now I'm screwed 10 years before retirement. The AI companies have taken my work and everyone else's and mushed it into a big stew that anyone can get a ladle of acceptable result out of that meets their needs.
Specific location images will never be correct with AI, since it's always an average, so they'll still sell, but anything else is done until AI starts to train on itself and ends up as grey goo. I'll never regret my creative career, but I will end up in poverty because of the greed and theft of the AI companies.
Happy days.
Use it at work for checking safety docs for errors. Have to create my own agents, so to a degree it isn't useful out of the box, and my line manager is suspicious of my actually using a tool that they supplied. It does the checks well, but adds far more work to my day than it removes. How so? Checking the output, more questions / refinement, more emails, more record keeping. It is also shockingly bad at recognising fakes - turns out it doesn't understand lies. It's years off being assimilated into my role, which given I am in my 60's, and doing the assimilating, means I really don't care about it. It's a mildly entertaining diversion is all.
Things I hear my more junior team members saying:
We'll still need humans who can write code to review AI generated code.
Only developers will be able to instruct the AIs to generate code/build apps.
AI code has inherent security risks and will need QA/approval.
AI generates bloated code.
All of them mostly wrong. The security one is still an issue, but probably not on the next iteration of models. I think they're grasping at straws to justify their jobs. It's quite soul-destroying seeing a group of younger people slowly realise that their chosen profession is about to disappear.
Software engineering isn't going to disappear, it's changing. The role will be more of a mishmash of biz analyst/product manager - to write specs and prompts - and tester - to validate the outputs. Atm we still need to review every single line the LLMs generate, but with every iteration fewer changes are required, and other models are better able to review the code themselves anyway. The coding bit is going away, but some version of the job remains. How many of these new devs we'll need remains to be seen.
I am worried about the software side of things, particularly from the perspective of my youngest who would really like to go into a job coding and I'm not sure that will exist as we now know it.
However
Churning out greenfield code is probably just about the easiest job a developer can do.
When an AI can do something actually hard, like plan and execute an extremely critical, risky and complex brownfield system migration involving highly toxic data, deeply coupled systems, multiple user bases and clients who are very easy to piss off...
...at that point I might take my bat and ball home
...but it seems a VERY long way off indeed
I'm in software engineering and while I'm not super impressed by Gemini it's obviously a worry. I'm gradually getting less hands on though so I think I'll probably be doing more or less the same thing to retirement (I'm in my fifties now). If I were 10 or 20 years younger I wouldn't be so sure.
Churning out greenfield code is probably just about the easiest job a developer can do.
When an AI can do something actually hard, like plan and execute an extremely critical, risky and complex brownfield system migration involving highly toxic data, deeply coupled systems, multiple user bases and clients who are very easy to piss off...
...at that point I might take my bat and ball home
...but it seems a VERY long way off indeed
I've found the reverse to be true: totally greenfield, the LLM has no guardrails and goes a bit wacky, with every feature implemented slightly differently. I've also worked on a legacy enterprise project that's almost impossible for a human to reason about, and the LLM has been able to add, fix and modify features easily while 'fitting' in with the code base as is.
I think me and my spanners are fine.
Those photos look quite organised and understandable TBH.
Many aged IT infrastructures are orders of magnitude more complex than that, I reckon. They are still going to need maintaining and evolving, and I don't see AI doing it in the foreseeable.
Everyone gets well excited (or scared) about the AI build but forgets about the way way harder problems that happen when it hits operation in a complex enterprise environment or needs maintaining.
I am worried about the software side of things, particularly from the perspective of my youngest who would really like to go into a job coding and I'm not sure that will exist as we now know it.
Software engineering will still exist, but I doubt it'll involve much code writing and will instead be more about architecture and systems engineering. I've made the point to my younger devs that if they're in this job because they like writing code or because they think that's what it's limited to, then they need to have a hard think about their career choices. If however they do it because they like building systems that do stuff then all good. And for someone like me who has always thought coding was the boring part and who was never much good at it, it's fantastic. 🙂
fix and modify features easily while 'fitting' in with the code base as is.
Yeah, I could see adding features to an existing bit of software working.
It's when the impacts start to broaden out across multiple (deeply coupled, legacy) enterprise solutions where it all gets a bit hard.
If you have a more modern architecture then it's probably easier (loosely coupled APIs etc)
A lot of enterprises are still very much legacy tho
A lot of enterprises are still very much legacy tho
That's not been what we've seen. Legacy monoliths have all the code in the same place, so it's easy for the LLM to work with. A lot of the metrics for 'quality' don't seem to apply for LLMs. They can make sense of any old jank.
For the modern service-oriented stuff it's harder for the LLM to work with, but moving to a monorepo fixes that.
@daz I'm surprised it's your juniors that are getting anxious. It's my mid-level folks who are having problems, what with their mortgages depending on their current skill set. The seniors spend more time speccing and writing than coding. The juniors are already yoloing into the future with custom agent swarms and tooling of their own making. If anything I'm having to hold them back (one had his agent wreck his dev machine running bizarro docker commands).
On the idea that there will be new jobs to replace the ones AI takes: that might be true on a large enough timescale. But if you're a certain age and your job disappears from under you, I don't reckon you're likely to be one of the people getting one of the shiny new ones.
I saw this and thought it was quite interesting -
https://www.reddit.com/r/ExperiencedDevs/comments/1r6olcv/an_ai_ceo_finally_said_something_honest/
in my job (HE) the hard bit is not the technical codey stuff, but knowing the ins and outs of a big complicated organisation with loads of politics, competing interests, egos, outside influence, etc etc. And I don't see AI figuring that out just yet, unless you plumb it in at a really fundamental level.