And writers
Poetry, don't forget poetry. I'm sure AI will produce astonishing poetry laced with authentic passion and emotion that hits just the right off-key notes to connect with other computers. Or I guess it might just be a bit binary. I bet Simon Armitage is quaking in his carpet slippers.
Nope, pretty safe until an idiot decides less than average is good enough.
This is key. People will make the decision that "below average but 0.1% of the cost and 0.001% of the time" is good enough, in so many fields, more and more of the time. Talk of the "exceptional" still being needed and valued... for sure... but that's like pointing to the money Taylor Swift makes and saying it's fine out there for musicians...
I've not really seen it do much in the space of formulation chemistry. It is too niche to invest in and the large public data sets do not exist.
I also can't see how it would take the inspirational step of seeing something that hasn't been done, or spot a crossover between two totally unrelated ideas.
Some things it could do far better than me as I churn through QC sheets wondering how to reduce our error rate. That is very much AI territory.
Poetry, don't forget poetry.
To be fair, anyone who's made a living out of writing poetry has been on a pretty sweet deal; it was only a matter of time before they were found out.
@dakuan just the software internals of legacy software - yeah I can see that working
But I'm not talking about software internals - I'm discussing broader impacts due to brittle enterprise-level legacy architectures that are unknowable without significant analysis.
E.g. someone stupidly bolted a critical 'shadow it' app onto the side of the DB in production - that the software delivery team didn't know anything about and isn't in a software repo etc.
How is AI going to discover that and negotiate a way through it exactly?
@dazim surprised it's your juniors that are getting anxious. It's my mid-level folks who are having problems, what with their mortgages depending on their current skill set.
It varies. Some have been quick to jump on it and are doing ok. Others are completely oblivious and hanging on to the old ways. I go around the office looking at what they're doing, and if I see them writing code I ask why.. 😀
What we haven't done yet is build our own agents and automate whole features (aside from the one Ralph Wiggum example, which is an R&D project). We need to get comfortable and proficient with prompt engineering first. At least now the business is releasing access to the latest models. Until a month ago they were only allowing access on request, supported by a business case, with restrictive token quotas. Now they've given up after a tsunami of complaints and people like me warning directors that if they didn't sort it out we'd have an exodus of developers on our hands.
E.g. someone stupidly bolted a critical 'shadow it' app onto the side of the DB in production - that the software delivery team didn't know anything about and isn't in a software repo etc.
If nobody knows about it then a human would have the same problem?
Is AI about to make you redundant?
No. Thanks for reading.
The slightly longer version: "AI" as it's currently marketed is nothing of the sort. It's large language models (LLMs) or, at a more basic level, probability models. For a given prompt or set of parameters, it goes through its petabytes of training data - scraped from the internet, scanned from books and the like - and asks the question "does this word often appear alongside these other words?". And it does that iteratively until it has something that looks like it fits within its training data.
There is no "thinking", there's no process of "understanding" why the answer may or may not be correct, other than that probability modelling; there's not even learning - as the wags have it, "the 'i' in LLM is for intelligence". It's a database-scanner looking at a ton of text and going "those words often appear near each other so I'll string them together here".
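For anyone curious, that "string likely words together" idea can be sketched in a few lines of toy Python: a word-pair counter that always picks the most common next word. It's a deliberately crude illustration - real LLMs use neural networks over tokens, not word-pair counts, and the training text here is made up - but the basic shape of "predict the next word from what came before" is the same:

```python
# Toy "which word tends to follow this one?" model.
# Not remotely how a real LLM is built, but it shows the core idea:
# count what followed what in the training text, then generate by
# repeatedly picking the most likely next word.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat"

# follows["the"] ends up as Counter({"cat": 2, "mat": 2})
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=5):
    """Greedily extend `start` with the most common next word."""
    out = [start]
    for _ in range(length):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:  # dead end: this word was never followed by anything
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("the"))
```

No understanding anywhere in there: just counts and lookups. The output looks plausible precisely because it's stitched together from what was already in the training text.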
More philosophically, 'AI' now is the microwave in your kitchen. Microwaves are invaluable, great for simple things like "defrost this meat I meant to get out earlier" or "reheat those leftovers"; but anyone who thinks their microwave removes the need to actually cook - sautéing, simmering, reducing sauces etc - knows nothing about food. And anyone who'll pay money for a meal made entirely with a microwave shouldn't be allowed to spend money.
ETA: There have been a couple of papers published recently in which several AI models (ChatGPT, Claude etc) were asked to solve mathematical problems for which proofs didn't exist on the internet (but which were solvable). Every single model failed - they couldn't just copy it from somewhere else, so they were unable to do anything with the problems. It was reassuring to see this actually published; hopefully it brings a bit of common sense back to these discussions.
There is no "thinking", there's no process of "understanding"
Yes, we all know it's clever maths and gargantuan amounts of data processing, but if you've ever used Codex5.3 or Opus4.6 you'd be hard pressed to distinguish what it does from 'thinking'. The first time I used Opus4.6 it blew my mind. I copied some bullet points from a Jira ticket into it as an 'it'll never work but I'll try anyway' test, and it generated perfect code that worked first time.
Doubt we'll ever get to a point where we just sit back and let the machines do all the work. But if we do, sounds great. Presumably everything will be nice and cheap and we'll all live lives of luxury!
Crikey. I presume this is a joke right?
We've had 1 human and a bunch of agents rewrite legacy Cobol to .net in about 25% of the time it would have taken a team of 10 humans to do it manually.
Currently, those humans would be tasked with doing something else, but I can see the next couple years being rough for some people as companies favour cost savings over output improvements.
I'm senior / product based enough that it hopefully won't affect my employment directly but there will be other indirect issues we'll all have to deal with (e.g. pension value if companies start dropping, loss of housing equity if people start getting laid off).
Snr Document Controller. Maybe.
Well that's kind of the point I am making. That you need a human to understand these kinds of things through talking to people and analysing the problem space fully.
Sure, and these won't ever go away; it's the coding part that's gone. In our example here, the unknown dependency will blow up the first time, but the next time it'll be in the monorepo (or some other way of putting it into the model ctx) and the AI will manage it just fine. The big difference being that before AI we might have hired someone to maintain this old system that's sprung out of the woodwork; with AI it's just more context for the model. Not much more effort to prompt for it.
but I doubt it'll involve much code writing and will instead be more about architecture and systems
This is my current job. No "proper coding" but lots of low and no code, data transformations and integrations across systems.
It's all fun and games until something breaks, and you realise you've fired all your good devs and/or infrastructure engineers and you have no idea how to fix the issue.
Bit like young people.. they have no idea how to check the oil or coolant on their cars, never mind put the spare wheel on if they get a puncture...
They just call the AA or RAC, for a price.
I'm a welder, and while automated robotic welders have already replaced many repetitive manufacturing welding jobs, I don't see actual human welding becoming obsolete for a while yet, even with the development of things like the Optimus robots Tesla plan to start selling in the near future.
When ChatGPT arrived I used it to create simple scripts and search for solutions to problems.
A mate who's a self-employed financial adviser was saying a while ago how much easier it's made his job. I pointed out that what he's actually doing (along with many others, no doubt) is teaching it how to do his job, and in a few years people will just use it to sort their own mortgages and insurance instead of paying people like him to do it for them.
Nothing to add other than I hope STWers jobs are safe!
Oh, and hopefully AI implodes. The frankly insane levels of "investment" - I think $2Tn floating between about 10 companies - and for what? The betterment of society, environment and planet?..
A mate who's a self-employed financial adviser was saying a while ago how much easier it's made his job. I pointed out that what he's actually doing (along with many others, no doubt) is teaching it how to do his job, and in a few years people will just use it to sort their own mortgages and insurance instead of paying people like him to do it for them.
Investment is the same thing... in the olden days investing in stocks, was for pension fund managers and rich people... now with platforms like invest engine and Trading212 you can do it all from your phone with a few clicks.
It's all fun and games until something breaks, and you realise you've fired all your good devs and/or infrastructure engineers and you have no idea how to fix the issue.
I could see this becoming a major issue and possibly even a growth area for people to come and sort out. I.e. as systems that AI implemented first time around become more brittle/lose architectural integrity* yet are now critical to the organisation, and still need to be maintained/enhanced/migrated etc.
* as above, people with no architectural understanding, empowered by AI to vibe-code-bolt shit onto the side of shit, resulting in epic mess to sort out
actually doing is teaching it how to do his job
No, he isn’t. The learning doesn’t really come from the (willing) users like him for systems like ChatGPT, but from content taken (often unwillingly) that was published by everyone in his field.
I'm in my early 30s, work in software consulting, and am very concerned to be honest.
The pace of improvement in the models has been breathtaking over the past few years, and the latest "extended thinking" models are able to give spectacular results. I played a puzzle game this weekend; Gemini Pro solved a complex riddle that's not in its dataset, which people got stuck on for days. Seems like the stuff of science fiction.
It's how fast the world has flipped on us that feels scary. Back in 2021 learning to code was one of the most valuable skills; barely five years later it feels like it's been heavily commodified. For me writing code was the most fun part of the job, a zen flow-state activity, and prompting is just not fun in the same way. Over the same period the tech job market has turned on its head, although that has a lot to do with interest rates.
I'm looking at various jobs for a plan B but nothing comes close to how much I enjoy my current line of work.
It's all fun and games until something breaks, and you realise you've fired all your good devs and/or infrastructure engineers and you have no idea how to fix the issue.
I could see this becoming a major issue and possibly even a growth area for people to come and sort out. I.e. as systems that AI implemented first time around become more brittle/lose architectural integrity* yet are now critical to the organisation, and still need to be maintained/enhanced/migrated etc.
* as above, people with no architectural understanding, empowered by AI to vibe-code-bolt shit onto the side of shit, resulting in epic mess to sort out
No one likes reverse engineering a big sloppy mess, I suspect there will be good money to be made for those with the patience!
It's all fun and games until something breaks, and you realise you've fired all your good devs and/or infrastructure engineers and you have no idea how to fix the issue.
That would be an ecumenical matter
Totally get your point but, in this particular use case we never had Devs (apart from the website guy) so didn't fire any.
Is AI about to make you redundant?
No. Thanks for reading.
The slightly longer version: "AI" as it's currently marketed is nothing of the sort. It's large language models (LLMs) or, at a more basic level, probability models. For a given prompt or set of parameters, it goes through its petabytes of training data - scraped from the internet, scanned from books and the like - and asks the question "does this word often appear alongside these other words?". And it does that iteratively until it has something that looks like it fits within its training data.
There is no "thinking", there's no process of "understanding" why the answer may or may not be correct, other than that probability modelling; there's not even learning - as the wags have it, "the 'i' in LLM is for intelligence". It's a database-scanner looking at a ton of text and going "those words often appear near each other so I'll string them together here".
More philosophically, 'AI' now is the microwave in your kitchen.
Yes but if it can rapidly produce results that are as good as what most employees come up with, does it matter in the eyes of executives? Aren't we essentially doing "next token prediction" based on our training data a lot of the time too?
Film production / corporate film / music films/epks
Surprisingly not currently as filming events and real stuff is still a thing.
(Also about to release our own feature film on streaming platforms in March as it's made this more possible.)
AI is being used all the time and certainly removing some people from employment. (Voice-over, CGI, comping etc)
(That said my own industry has been in a struggle since the pandemic.)
No one likes reverse engineering a big sloppy mess, I suspect there will be good money to be made for those with the patience!
Hey Claude, please reverse engineer this big sloppy mess. Sound like you enjoy it too.
but there will be other indirect issues we'll all have to deal with (e.g. pension value if companies start dropping, loss of housing equity if people start getting laid off).
And this is really what I don't get. What is the end game here?
If ultimately the aim is for AI to take all the jobs, who is actually going to pay for anything if nobody has got any money? (including paying for the AI itself)
Yes but if it can rapidly produce results that are as good as what most employees come up with, does it matter in the eyes of executives? Aren't we essentially doing "next token prediction" based on our training data a lot of the time too?
And doesn't that say more about 'most employees' and the quality of work they're doing than AI? If someone can be replaced - reliably and consistently, not just in a one-off thing - by a bot spitting out random words and phrases, they may need to rethink what value they're providing.
Right now, "hey copilot, publish this review of new wheels" kinda works, but you don't know what's in the backend, and you have a pretty good idea that eventually something will fail in it. And of course "hey Claude, write me a meaningful review of these new forks that I've been riding for the last 2 weeks" doesn't work. Even "please program this incredibly basic and repetitive thing" can be done, although someone still needs to actually know the code to find out what's underneath the bonnet; plus most of the time what the client thinks they want programming won't actually fix their problem.
These are all v simplistic obviously, but you get the point - we're adaptable, we can explain concepts different ways to people who don't want to hear it; and we can inherently look at something in our field and go "something about that's not right, and I need to work out what".
A fair bit of my job in infosecurity involves asking questions and sniffing out bullsh1tt3rs so hopefully I should be ok for a while.
It also depends on how deep the pockets of the employer are; the cost of the tools is going to rocket at some point soon, when the providers realise that the data centre costs are insane.
And this is really what I don't get. What is the end game here?
If ultimately the aim is for AI to take all the jobs, who is actually going to pay for anything if nobody has got any money? (including paying for the AI itself)
That's what I don't get, surely someone has some vision of how this is all supposed to play out, and it's going to end up looking like that movie Elysium if AI is as good as they say.
Alternatively, someone drunkenly posited to me that it's all just a massive short, those in bed with it build AI up in a massive way, and when it fails, guess who's shorting the stocks 🙄
I've no idea if that's actually feasible though, I don't understand share dealing etc. even remotely well enough. Can you bet on a bubble bursting if you're on the inside actively inflating it?
it generated perfect code that worked first time.
Did the 'AI' assess this 'perfection'? Or did you?
IDK, there is no way I'd trust a stochastic BS machine to produce robust and secure application code that a business would bet its future, and insurance premiums, on without making some knowledgeable sucker or cheap stand-in suffer the pain of 'controlling' it.
AI won't be making me, or my former roles, redundant. But I would not put it past a C-suite exec or results-driven underling to do so based on the lies and nonsense spewed out by AI boosters and the constant drive for 'number go up'.
And, as a few have identified, no AI is genuinely intelligent. Nor are general LLMs any real use.
I work in railway structures examining. We still need to hit structures with hammers. Until a drone can swing a hammer and produce a report I think I'm safeish.
I know of one guy who uses chatgpt to assist his work. I've tried with copilot but I don't have enough knowledge to properly drill down into it.
If ultimately the aim is for AI to take all the jobs, who is actually going to pay for anything if nobody has got any money? (including paying for the AI itself)
Given the tech industry's track record, how soon before those AI costs exceed the previous payroll?
My wife got out of recruitment last year and when she talks to her old work mates they're all panicking about AI. One of them who changed companies recently has just had her probation extended. Not because she's doing a bad job but because they think they might not need her to do it soon enough!
It could be a huge problem around here as we're a bit of a recruitment hotspot with plenty of people earning big salaries and bonus without the need for degrees or other qualifications. If the jobs go there will not be anything else available on anywhere near comparable money.
Anyone who believes that because an aspect of their job has a physical component it makes them immune to AI really needs to see the progress I'm seeing in humanoid robotics. It's amazing and frankly terrifying.
If your job has a significant amount of critical thinking, judgement, safety and/or cost associated with that judgement, I think you're safe for up to 10 years… 75% of everyone else is ****ed inside of 3y.
It also depends on how deep the pockets of the employer are; the cost of the tools is going to rocket at some point soon, when the providers realise that the data centre costs are insane.
Yeah. I would be curious how cheap AI actually is once the VCs start asking for a return on investment. Currently they are massively subsidised, and at some point it's going to be cheaper to use actual devs again. There is also the question of how much it can scale: when it takes people's power and water as well as their jobs, they might get a bit smoky.
Oddly enough I was in a meeting with Anthropic today, with them demoing the latest and greatest to a pilot group of us devs. It was quite funny how they kept going back to "don't worry, you won't lose your jobs" and that they are hiring more devs - while at the same time saying how they barely code nowadays.
Currently I get to use several tools and find the results mixed. It still makes lots of simple mistakes and needs good guidance. I still rate it as a rather keen junior dev who needs careful monitoring.
It is definitely a threat to various levels of software jobs, but I'm not sure exactly how it will work out. If it's given good requirements it generally does okay, but then again good luck getting those out of most BAs.
I do think for most people AI as it stands at the moment is a net negative by quite a long way. Misinformation, AI slop everywhere, deepfaked pron, job losses, environmental impact etc., control of much of it in the hands of people who really shouldn't have that power. Lots of money in it for some people, not so much for everyone else. And it's not like it's doing the crap jobs while everyone else gets to be creatives; it's taking the good stuff too.
I suppose you've got things like spotting cancers in the plus column, but I don't think that's in the same category of techniques.
I was thinking that a lot of trades are safe. But then I watched that Guy Martin programme the other night and I saw houses being mass-produced in a factory, in kit form. And a lot of it is done by robots, using new materials and techniques.
It may be hard to replace a bricklayer and lay bricks with ai/automation, but if you don't use bricks, you don't need a bricklayer at all.
It's easy to think a job might be safe cos it's hard to automate, but maybe not if in the future things are just done differently
It may be hard to replace a bricklayer and lay bricks with ai/automation, but if you don't use bricks, you don't need a bricklayer at all.
Prefabs have been around for quite a while now, some very cheap, some very expensive. Haven't yet caught on in a big way, maybe someone in construction has a reason for that?
I do think for most people AI as it stands at the moment is a net negative by quite a long way. Misinformation, AI slop everywhere, deepfaked pron, job losses, environmental impact etc., control of much of it in the hands of people who really shouldn't have that power. Lots of money in it for some people, not so much for everyone else. And it's not like it's doing the crap jobs while everyone else gets to be creatives; it's taking the good stuff too.
I suppose you've got things like spotting cancers in the plus column, but I don't think that's in the same category of techniques.
Yeah there are strong long term fundamentals there, from an investment perspective, its just in the short term, it's gonna get messy. Very messy.
To paraphrase Elon Musk - There's no point saving for retirement, AI will fix everything!
Yeah I don't see you building hospitals or solving world hunger, or the energy crisis with your zillions... the money flows uphill, not down hill.
They don't share.. that's the problem. Not the technology per se, but the product owners.
I don't really care what makes me redundant, just wish it would get on with it
The frankly insane levels of "investment", I think $2Tn floating between about 10 companies, and for what?
Ed Zitron writes a lot about all this. Too much really, he could do with a decent editor, but still. He makes a very compelling case that the major AI companies are going to run out of money sometime towards 2027. They are burning cash at an insane rate - even the paid plans are loss leaders - on top of which they're trying to build gigantic data centres for billions. Their income comes from VCs, who are running out of cash to invest, and from megacorps like Microsoft and NVIDIA, who are passing money back and forth. It really does look like a massive bubble.
And this is really what I don't get. What is the end game here?
If ultimately the aim is for AI to take all the jobs, who is actually going to pay for anything if nobody has got any money? (including paying for the AI itself)
This is the other thing. Sam Altman, if you can believe anything he says, said his plan is to create AGI and then ask it how to make money.
More realistically, the aim is to hype their companies enough to get contracts in government, defence etc, for huge and ongoing revenue. Which is also why Musk is merging Grok with SpaceX, to get govt money.
But as tech types keep saying, there's no moat! Maybe Claude and ChatGPT are the best around for now. But DeepSeek, Qwen, Mistral etc are all only months behind. Google and Facebook can subsidise their models for a long time. There won't be the opportunity to get everyone hooked and yank up the prices - people can just vote with their wallet.
So yeah, I definitely see a big deflation of the bubble before the decade is out, and it will hit Microsoft, NVIDIA, Oracle etc the hardest.
