Software/Data engineer here. I reckon I've got about a year - maybe 2 max - before I'm redundant. Anyone else?
When ChatGPT arrived I used it to create simple scripts and search for solutions to problems, which I then had to review or debug. Now, with the arrival of Opus 4.5 and 4.6 and Codex 5.2 and 5.3, all I have to do is throw some rough requirements at it and it builds entire solutions which are pretty much perfect out of the box. I haven't written a line of code in over a month. PRs are pretty much pointless box-ticking exercises, so hardly any review or QA is required.
The above was getting me slightly concerned until I saw one of my colleagues had constructed a Ralph Wiggum implementation to build an entire app. 6 months of development done in 11 hours, costing $1600 instead of $74k. Game over!
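For anyone who hasn't come across it, the "Ralph Wiggum" approach (as I understand it) is basically just re-running a coding agent in a loop with the same prompt until it decides the work is done, with progress living in the repo between passes. A rough sketch of the shape of it; the agent command, run_agent and is_done below are made-up placeholders, not any real tool or API:

```python
import subprocess
import time

# The same prompt is fed in on every pass; progress persists in the git repo.
PROMPT = "Read SPEC.md, pick the next unfinished item, implement it, run the tests, commit."

def run_agent(prompt: str) -> str:
    """Placeholder wrapper around whatever agent CLI you actually use."""
    result = subprocess.run(
        ["your-agent-cli", "--prompt", prompt],  # hypothetical command, swap in your own
        capture_output=True,
        text=True,
    )
    return result.stdout

def is_done(output: str) -> bool:
    """Crude stop condition: the agent reports everything in SPEC.md is finished."""
    return "ALL TASKS COMPLETE" in output

while True:
    output = run_agent(PROMPT)
    if is_done(output):
        break
    time.sleep(5)  # brief pause before the next pass
```

The whole trick is that there's no clever orchestration at all, just brute-force repetition, which is why the cost ends up being tokens and electricity rather than developer-months.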
I seem to have adapted into a start-up/bio-tech area where people are too concerned about IP and investors to focus on what we could be using AI for instead of getting people to do it.
So I'm OK for the time being, although with a dwindling team, mounting problems and investors on our backs it's going to be a challenging year. I do feel we could be making good use of AI to build out our automated testing and to write and fix unit tests, which my team seem to spend a hell of a lot of their time doing these days.
It's just enabled me to add functionality to my commercial app that I simply couldn't have done myself, so it's actually made me money!
But I'm self-employed .... I can 100% see software jobs going because of it.
I can 100% see software jobs going because of it.
And writers
And musicians
And artists
And photographers
And a whole load more...
Not at all
Jevons' Paradox https://en.wikipedia.org/wiki/Jevons_paradox
Roles will change, but I don't expect huge job losses.
I presently work with a copywriter who’s just started retraining in occupational therapy as she also reckons she’s got 2 years, tops, before her job no longer exists.
As a graphic designer, I should be worried, but as we discussed on another thread the other day, the standard of ‘design’ presently produced by AI is utter shite and all looks exactly the same.
I'm honestly both surprised and amazed that it's actually as bad as it is.
I'm gonna get a job building flat pack furniture. It'll never take that over! I asked ChatGPT a question about a cabinet I'm building and it didn't have a clue 😀
My monitoring job could be AIed - in fact I looked into it. They couldn't (well, shouldn't!) make me redundant as I still would have to maintain the backend, but the other goons that work with me... see ya!
Software test manager in the NHS. I am 55 so I think I will make it to retirement at 60 but don't think my younger colleagues will.
Although I did ask chatGPT how to change my van seat from a double to a single and it told me to saw it in half and then stitch it back up.
From my perspective...
Jobbing graphic designers - the type that knock up your local pub poster and menu - then yes, those days are numbered.
Top level graphic designers - they'll be fine. It's a different ball-game bringing a brand together.
Printers (like what I am) - we'll be fine too. The death of print has been predicted for decades but people still love a physical thing. I've got 10 years left and 'think' I'll be OK.
Unfortunately you still need to check that AI has created something valid and not dangerous. Some weapons-grade idiot made himself an AD script the other day with no idea if it was correct, and ran it. A whole evening wasted undoing something that should have been checked.
I reckon it'll make me redundant, but hopefully it'll be just about the time I was going to retire anyway.
Top level graphic designers - they'll be fine. It's a different ball-game bringing a brand together.
You miss the point that there won't be the jobbing graphic designers to become the top level designers.
You miss the point that there won't be the jobbing graphic designers to become the top level designers.
They'll just go in at a higher level and be trained accordingly. Properly good ones don't muck about with pub menus. And some of the best I've known were self taught.
Nope pretty safe until an idiot decides less than average is good enough.
AI helps with some teaching things but is totally rubbish at being correct or even good enough.
Money men think kids are mini adults who want to do well. Whereas teachers know they need to be engaged otherwise they do what adults do....SFA
IT Project Manager here. I can use AI for all the boring stuff and as a bit of a sounding board. But for the herding of cats and random requirement changes we get I think it would struggle.
You miss the point that there won't be the jobbing graphic designers to become the top level designers.
That was exactly the point I made on the other thread. It’s the entry level jobs that are all being replaced, so how are younger designers (or writers, or anyone else) meant to get any experience and work their way up the ladder?
Well, judging by how many AI developed apps are heaving dumpster fires of security and privacy (at least from a technical PoV), my job of making sure _your_ shit stays safe, private and miscreant free while you do stupid things on the internet seems pretty safe.
If I do get the kick, I'll just have to go back to doing full time skydiving instructing or my old job as a government assassin.
Yes, I think it will and quite soon. I'm a very much mid-tier contract lawyer in a large public(ish) organisation and I think it already does large elements of my job pretty well. On a particularly optimistic day I can think of things that I am able to do that AI can't or maybe could never do but I don't think these elements are enough to save me.
We are already a very top heavy team (IMO) with 21 people and I'm sure we'd only need a couple of those roles to remain once AI starts automating everything except the most bespoke and complicated matters.
The thing is, I am not at all practically minded, so I'm genuinely quite worried about what I will pivot to in the event I am made redundant in a couple of years.
I'm 44 so don't have time to waste, but at the moment I have no plan B, so it's pretty concerning.
No. I'm actively working in infosec around AI governance for financial firms, so I reckon I've got at least 7 years until there's a GRC/Auditing/DataProtection/CISO-in-kind AI tool. Even then, nuance and business pace matter, so it'll likely hallucinate and still need human intervention. That suits my goal of being financially secure enough to drop to working 3 days a week and eke out toward early retirement.
From my two attempts at automation: basically, instead of doing a job, you'll end up maintaining whatever does the job, or sorting out the mess it created. Not all that dissimilar to the people who now do some mouse clicks to make machinery make things, instead of making them by hand with tools.
But with "AI" it mostly makes stuff up based on what it has seen, so will eventually hit the point where it can't make up any more stuff, and there won't be enough human intelligence left to progress things, unless...
If you're doing a PhD or actual research using a brain, then AI bots will pilfer all your work from OneDrive and Apple/Google clouds and it'll all be common knowledge (in the AI pool) before you can get LaTeX working to prepare the final thesis/report.
Should certainly be lots of vacancies for engineers and lawyers soon, to sort out all the mess that's been created.
IT Project Manager here. I can use AI for all the boring stuff and as a bit of a sounding board. But for the herding of cats and random requirement changes we get I think it would struggle.
Enterprise/ Solution architect here 👋
Similar thoughts to you
Maybe (I work in IT sales admin/account management), but the plan is to get rich selling all the AI platforms that will steal the existing jobs, so that I don't so much get made redundant as just retire.
I mostly manage people, so no, I don't think my role will be made redundant by current LLMs.
I might be behind the curve but I'm yet to see it being applied in a way that makes me concerned for my job.
Feels like the hype is reaching fever pitch at the moment though. Whether it's justified or just the result of what is surely an absolutely massive amount of pressure on the industry to justify the billions of investment it has received, I don't really know.
I hadn't heard of Jevons paradox but it makes sense. If AI gives us the ability to increase productivity we'll use it to its maximum and in doing so create more jobs. Doubt we'll ever get to a point where we just sit back and let the machines do all the work. But if we do, sounds great. Presumably everything will be nice and cheap and we'll all live lives of luxury!
More likely, people will need to be agile. But if you're hard working and capable of learning new skills, there may be some huge opportunities along with the threats.
Nope, if anything I will be more busy as I can audit AI in regulated environments
Nope, if anything I will be more busy as I can audit AI in regulated environments
Aren't most governed under ISO 42k now?
Feels like the hype is reaching fever pitch at the moment though. Whether it's justified or just the result of what is surely an absolutely massive amount of pressure on the industry to justify the billions of investment it has received, I don't really know.
Yeah, I keep seeing the hype, my wife is getting more and more concerned, but whenever I see the real world implementations I'm cautiously relieved at just how bad they are.
But...
Nope pretty safe until an idiot decides less than average is good enough.
I still worry that our bosses will look at it and think, yep, that's good enough, and sack us anyway. That being said, our whole team has just been drafted in to manually carry out a task that AI was supposed to be able to do; it achieved something like a 20% accuracy rate at a relatively simple word-matching and sorting task 😱
One of our senior devs tried to use a bit of AI code to remove a device from AD - they bricked the device so badly (it was a fairly custom device) it took me a full day to get it back up & running.
That said I use it all the time to help with scripting things.
As a builder type, I hope it does. My knees only have a few more years left in them!
Self employed photographer who's spent decades shooting inaccessible, beautiful places in the best light. Had a very nice time doing it, sweating buckets and working hard, but now I'm screwed 10 years before retirement. The AI companies have taken my work and everyone else's and mushed it into a big stew that anyone can get a ladle of acceptable result out of that meets their needs.
Specific location images will never be correct with AI, since it's always an average, so they'll still sell, but anything else is done until AI starts to train on itself and ends up as grey goo. I'll never regret my creative career, but I will end up in poverty because of the greed and theft of the AI companies.
Happy days.
Use it at work for checking safety docs for errors. Have to create my own agents, so to a degree it isn't useful out of the box, and my line manager is suspicious of my actually using a tool that they supplied. It does the checks well, but adds far more work to my day than it removes. How so? Checking the output, more questions / refinement, more emails, more record keeping. It is also shockingly bad at recognising fakes - turns out it doesn't understand lies. It's years off being assimilated into my role, which given I am in my 60's, and doing the assimilating, means I really don't care about it. It's a mildly entertaining diversion is all.
Things I hear my more junior team members saying:
We'll still need humans who can write code to review AI generated code.
Only developers will be able to instruct the AIs to generate code/build apps.
AI code has inherent security risks and will need QA/approval.
AI generates bloated code.
Mostly, all of them are wrong. The security one is still an issue, but probably not on the next iteration of models. I think they're grasping at straws to justify their jobs. It's quite soul-destroying seeing a group of younger people slowly realise that their chosen profession is about to disappear.
Software engineering isn't going to disappear, it's changing. The role will be more of a mishmash of business analyst / product manager - writing specs and prompts - and tester - validating the outputs. At the moment we still need to review every single line the LLMs generate, but with every iteration fewer changes are required, and other models are better able to review the code themselves anyway. The coding bit is going away, but some version of the job remains. How many of these new devs we'll need remains to be seen.
I am worried about the software side of things, particularly from the perspective of my youngest who would really like to go into a job coding and I'm not sure that will exist as we now know it.
However
Churning out greenfield code is probably just about the easiest job a developer can do.
When an AI can do something actually hard, like plan and execute an extremely critical, risky and complex brownfield system migration involving highly toxic data, deeply coupled systems, multiple user bases and clients who are very easy to piss off...
...at that point I might take my bat and ball home
...but it seems a VERY long way off indeed
I'm in software engineering and while I'm not super impressed by Gemini it's obviously a worry. I'm gradually getting less hands on though so I think I'll probably be doing more or less the same thing to retirement (I'm in my fifties now). If I were 10 or 20 years younger I wouldn't be so sure.
Churning out greenfield code is probably just about the easiest job a developer can do.
When an AI can do something actually hard, like plan and execute an extremely critical, risky and complex brownfield system migration involving highly toxic data, deeply coupled systems, multiple user bases and clients who are very easy to piss off...
...at that point I might take my bat and ball home
...but it seems a VERY long way off indeed
I've found the reverse to be true: totally greenfield, the LLM has no guardrails and goes a bit wacky, with every feature implemented slightly differently. I've also worked on a legacy enterprise project that's almost impossible for a human to reason about, and the LLM has been able to add, fix and modify features easily while 'fitting' in with the code base as is.
I think me and my spanners are fine.
Those photos look quite organised and understandable TBH.
Many aged IT infrastructures are orders of magnitude more complex than that, I reckon. They are still going to need maintaining and evolving, and I don't see AI doing it in the foreseeable future.
Everyone gets well excited (or scared) about the AI build but forgets about the way way harder problems that happen when it hits operation in a complex enterprise environment or needs maintaining.
I am worried about the software side of things, particularly from the perspective of my youngest who would really like to go into a job coding and I'm not sure that will exist as we now know it.
Software engineering will still exist, but I doubt it'll involve much code writing and will instead be more about architecture and systems engineering. I've made the point to my younger devs that if they're in this job because they like writing code or because they think that's what it's limited to, then they need to have a hard think about their career choices. If however they do it because they like building systems that do stuff then all good. And for someone like me who has always thought coding was the boring part and who was never much good at it, it's fantastic. 🙂
fix and modify features easily while 'fitting' in with the code base as is.
Yeah, I could see adding features to an existing bit of software working.
It's when the impacts start to broaden out across multiple (deeply coupled, legacy) enterprise solutions where it all gets a bit hard.
If you have a more modern architecture then it's probably easier (loosely coupled APIs etc)
A lot of enterprises are still very much legacy tho
A lot of enterprises are still very much legacy tho
That's not been what we've seen. Legacy monoliths have all the code in the same place, so it's easy for the LLM to work with. A lot of the metrics for 'quality' don't seem to apply to LLMs. They can make sense of any old jank.
For the modern service-oriented stuff it's harder for the LLM to work with, but moving to a monorepo fixes that.
@daz I'm surprised it's your juniors that are getting anxious. It's my mid-level folks who are having problems, what with their mortgages depending on their current skill set. The seniors spend more time speccing and writing than coding. The juniors are already yoloing into the future with custom agent swarms and tooling of their own making. If anything I'm having to hold them back (one had his agent wreck his dev machine running bizarro Docker commands).
On the idea that there will be new jobs to replace the ones AI takes that might be true on a large enough timescale. But if you're a certain age and your job disappears from under you I don't reckon you're likely to be one of the people getting one of the shiny new ones.
I saw this and thought it was quite interesting -
https://www.reddit.com/r/ExperiencedDevs/comments/1r6olcv/an_ai_ceo_finally_said_something_honest/
in my job (HE) the hard bit is not the technical codey stuff, but knowing the ins and outs of a big complicated organisation with loads of politics, competing interests, egos, outside influence, etc etc. And I don't see AI figuring that out just yet, unless you plumb it in at a really fundamental level.
And writers
Poetry, don't forget poetry. I'm sure AI will produce astonishing poetry laced with authentic passion and emotion that hits just the right off-key notes to connect with other computers. Or I guess it might just be a bit binary. I bet Simon Armitage is quaking in his carpet slippers.
Nope pretty safe until an idiot decides less than average is good enough.
This is key. People will make the decision that "below average but 0.1% of the cost and 0.001% of the time" is good enough, in so many fields, more and more of the time. Talking about the "exceptional" still being needed and valued... for sure... but that's like pointing to the money Taylor Swift makes and saying it's fine out there for musicians...
I've not really seen it do much in the space of formulation chemistry. It is too niche to invest in and the large public data sets do not exist.
I also can't see how it would take the inspirational step of seeing something that hasn't been done, or the crossover between two totally unrelated ideas.
Some things it could do far better than me as I churn through QC sheets wondering how to reduce our error rate. That is very much AI.
Poetry, don't forget poetry.
To be fair, anyone who's made a living out of writing poetry has been on a pretty sweet deal, it was only a matter of time before they were found out.
@dakuan just the software internals of legacy software - yeah, I can see that working.
But I'm not talking about software internals - I'm discussing broader impacts due to brittle enterprise-level legacy architectures that are unknowable without significant analysis.
E.g. someone stupidly bolted a critical 'shadow it' app onto the side of the DB in production - that the software delivery team didn't know anything about and isn't in a software repo etc.
How is AI going to discover that and negotiate a way through it exactly?
@daz I'm surprised it's your juniors that are getting anxious. It's my mid-level folks who are having problems, what with their mortgages depending on their current skill set.
It varies. Some have been quick to jump on it and are doing ok. Others completely oblivious and hanging on to the old ways. I go around the office looking at what they're doing and if I see them writing code I ask why.. 😀
What we haven't done yet is build our own agents and automate whole features (aside from the one Ralph Wiggum example, which is an R&D project). We need to get comfortable and proficient with prompt engineering first. At least now the business is releasing access to the latest models. Until a month ago they were only allowing access on request, supported by a business case, with restrictive token quotas. Now they've given up after a tsunami of complaints and people like me warning directors that if they didn't sort it out we'd have an exodus of developers on our hands.
E.g. someone stupidly bolted a critical 'shadow it' app onto the side of the DB in production - that the software delivery team didn't know anything about and isn't in a software repo etc.
If nobody knows about it then a human would have the same problem?
Is AI about to make you redundant?
No. Thanks for reading.
The slightly longer version: "AI" as it's currently marketed is nothing of the sort. It's large language models (LLMs), or at a more basic level, probability models. For a given prompt or set of parameters, it goes through its petabytes of training data - scraped from the internet, scanned from books and the like - and asks the question "does this word often appear alongside these other words?". And it does that iteratively til it has something that looks like it fits within its training data.
There is no "thinking", there's no process of "understanding" why the answer may or may not be correct, other than that probability modelling; there's not even learning - as the wags have it, "the 'i' in LLM is for intelligence". It's a database-scanner looking at a ton of text and going "those words often appear near each other so I'll string them together here".
More philosophically, 'AI' now is the microwave in your kitchen. Microwaves are invaluable, great for simple things like "defrost this meat I meant to get out earlier" or "reheat those leftovers"; but anyone who thinks their microwave removes the need to actually cook - sauteeing, simmering, reducing sauces etc - knows nothing about food. And anyone who'll pay money for a meal made entirely with a microwave shouldn't be allowed to spend money.
ETA: There have been a couple of papers published recently in which several AI models (ChatGPT, Claude etc) were asked to solve mathematical problems for which proofs didn't exist on the internet (but which were solvable). Every single model failed - they couldn't just copy the answer from somewhere else, so they were unable to do anything with the problems. It was reassuring to see this actually published, and hopefully it brings a bit of common sense back to these discussions.
There is no "thinking", there's no process of "understanding"
Yes, we all know it's clever maths and gargantuan amounts of data processing, but if you've ever used Codex 5.3 or Opus 4.6 you'd be hard-pressed to distinguish what it does from 'thinking'. The first time I used Opus 4.6 it blew my mind. I copied some bullet points from a Jira ticket into it as an 'it'll never work but I'll try anyway' test and it generated perfect code that worked first time.
Doubt we'll ever get to a point where we just sit back and let the machines do all the work. But if we do, sounds great. Presumably everything will be nice and cheap and we'll all live lives of luxury!
Crikey. I presume this is a joke right?
We've had 1 human and a bunch of agents rewrite legacy Cobol to .net in about 25% of the time it would have taken a team of 10 humans to do it manually.
Currently, those humans would be tasked with doing something else, but I can see the next couple years being rough for some people as companies favour cost savings over output improvements.
I'm senior / product based enough that it hopefully won't affect my employment directly but there will be other indirect issues we'll all have to deal with (e.g. pension value if companies start dropping, loss of housing equity if people start getting laid off).
Snr Document Controller. Maybe.
Well that's kind of the point I am making. That you need a human to understand these kinds of things through talking to people and analysing the problem space fully.
Sure, and these won't ever go away; it's the coding part that's gone. In our example here, the unknown dependency will blow up the first time, but the next time it'll be in the monorepo (or some other way of putting it into the model context) and the AI will manage it just fine. The big difference being that before AI we might have hired someone to maintain this old system that's sprung out of the woodwork; with AI it's just more context for the model. Not much more effort to prompt for it.
but I doubt it'll involve much code writing and will instead be more about architecture and systems
This is my current job. No "proper coding" but lots of low and no code, data transformations and integrations across systems.
but I doubt it'll involve much code writing and will instead be more about architecture and systems
This is my current job. No "proper coding" but lots of low and no code, data transformations and integrations across systems.
It's all fun and games until something breaks, and you realise you've fired all your good devs and/or infrastructure engineers and you have no idea how to fix the issue.
Bit like young people.. they have no idea how to check the oil or coolant on their cars, never mind put the spare wheel on if they get a puncture...
They just call the AA or RAC, for a price.
I'm a welder, and while automated robotic welders have already replaced many repetitive manufacturing welding jobs, I don't see human welders becoming obsolete for a while yet, even with the development of things like the Optimus robots Tesla plans to start selling in the near future.
When ChatGPT arrived I used it create simple scripts and search for solutions to problems
A mate who's a self-employed financial adviser was saying a while ago how much easier it's made his job. I pointed out that what he's (along with many others, no doubt) actually doing is teaching it how to do his job and in a few years people will just use it to sort their own mortgages and insurance instead of paying people like him to do it for them.
Nothing to add other than I hope STWers jobs are safe!
Oh, and hopefully AI implodes. The frankly insane levels of "investment", I think $2Tn floating between about 10 companies, and for what? The betterment of society, environment and planet?..
When ChatGPT arrived I used it create simple scripts and search for solutions to problems
A mate who's a self-employed financial adviser was saying a while ago how much easier it's made his job. I pointed out that what he's (along with many others, no doubt) actually doing is teaching it how to do his job and in a few years people will just use it to sort their own mortgages and insurance instead of paying people like him to do it for them.
Investment is the same thing... in the olden days investing in stocks was for pension fund managers and rich people... now with platforms like InvestEngine and Trading212 you can do it all from your phone with a few clicks.
It's all fun and games until something breaks, and you realise you've fired all your good devs and/or infrastructure engineers and you have no idea how to fix the issue.
I could see this becoming a major issue and possibly even a growth area for people to come and sort out. I.e. as systems that AI implemented first time around become more brittle/lose architectural integrity* yet are now critical to the organisation, and still need to be maintained/enhanced/migrated etc.
* as above, people with no architectural understanding, empowered by AI to vibe-code-bolt shit onto the side of shit, resulting in epic mess to sort out
actually doing is teaching it how to do his job
No, he isn’t. The learning doesn’t really come from the (willing) users like him for systems like ChatGPT, but from content taken (often unwillingly) that was published by everyone in his field.
I'm in my early 30s, work in software consulting, and am very concerned to be honest.
The pace of improvement in the models has been breathtaking over the past few years, and the latest "extended thinking" models are able to give spectacular results. I played a puzzle game this weekend; Gemini Pro solved a complex riddle that's not in its dataset, which people got stuck on for days. Seems like the stuff of science fiction.
It's how fast the world has flipped on us that feels scary. Back in 2021 learning to code was one of the most valuable skills; barely five years later it feels like it's been heavily commodified. For me, writing code was the most fun part of the job, a zen flow-state activity, and prompting is just not fun in the same way. Over the same period the tech job market has turned on its head, although that has a lot to do with interest rates.
I'm looking at various jobs for a plan B but nothing comes close to how much I enjoy my current line of work.
It's all fun and games until something breaks, and you realise you've fired all your good devs and/or infrastructure engineers and you have no idea how to fix the issue.
I could see this becoming a major issue and possibly even a growth area for people to come and sort out. I.e. as systems that AI implemented first time around become more brittle/lose architectural integrity* yet are now critical to the organisation, and still need to be maintained/enhanced/migrated etc.
* as above, people with no architectural understanding, empowered by AI to vibe-code-bolt shit onto the side of shit, resulting in epic mess to sort out
No one likes reverse engineering a big sloppy mess, I suspect there will be good money to be made for those with the patience!
It's all fun and games until something breaks, and you realise you've fired all your good devs and/or infrastructure engineers and you have no idea how to fix the issue.
That would be an ecumenical matter
Totally get your point but, in this particular use case we never had Devs (apart from the website guy) so didn't fire any.
Is AI about to make you redundant?
No. Thanks for reading.
The slightly longer version: "AI" as it's currently marketed is nothing of the sort. It's large language models (LLMs), or at a more basic level, probability models. For a given prompt or set of parameters, it goes through its petabytes of training data - scraped from the internet, scanned from books and the like - and asks the question "does this word often appear alongside these other words?". And it does that iteratively til it has something that looks like it fits within its training data.
There is no "thinking", there's no process of "understanding" why the answer may or may not be correct, other than that probability modelling; there's not even learning - as the wags have it, "the 'i' in LLM is for intelligence". It's a database-scanner looking at a ton of text and going "those words often appear near each other so I'll string them together here".
More philosophically, 'AI' now is the microwave in your kitchen.
Yes but if it can rapidly produce results that are as good as what most employees come up with, does it matter in the eyes of executives? Aren't we essentially doing "next token prediction" based on our training data a lot of the time too?
Film production / corporate film / music films/epks
Surprisingly not currently as filming events and real stuff is still a thing.
(Also about to release our own feature film on streaming platforms in March as it's made this more possible.)
AI is being used all the time and certainly removing some people from employment. (Voice-over, CGI, comping etc)
(That said my own industry has been in a struggle since the pandemic.)
No one likes reverse engineering a big sloppy mess, I suspect there will be good money to be made for those with the patience!
Hey Claude, please reverse engineer this big sloppy mess. Sound like you enjoy it too.
but there will be other indirect issues we'll all have to deal with (e.g. pension value if companies start dropping, loss of housing equity if people start getting laid off).
And this is really what I don't get. What is the end game here?
If ultimately the aim is for AI to take all the jobs, who is actually going to pay for anything if nobody has got any money? (including paying for the AI itself)
Yes but if it can rapidly produce results that are as good as what most employees come up with, does it matter in the eyes of executives? Aren't we essentially doing "next token prediction" based on our training data a lot of the time too?
And doesn't that say more about 'most employees' and the quality of work they're doing than AI? If someone can be replaced - reliably and consistently, not just in a one-off thing - by a bot spitting out random words and phrases, they may need to rethink what value they're providing.
Right now, "hey copilot, publish this review of new wheels" kinda works, but you don't know what's in the backend, and you have a pretty good idea that eventually something will fail in it. And of course "hey Claude, write me a meaningful review of these new forks that I've been riding for the last 2 weeks" doesn't work. Even "please program this incredibly basic and repetitive thing" can be done, although someone still needs to actually know the code to find out what's underneath the bonnet; plus most of the time what the client think they want programming won't actually fix their problem.
These are all v simplistic obviously, but you get the point - we're adaptable, we can explain concepts different ways to people who don't want to hear it; and we can inherently look at something in our field and go "something about that's not right, and I need to work out what".
A fair bit of my job in infosecurity involves asking questions and sniffing out bullsh1tt3rs so hopefully I should be ok for a while.
It also depends on how deep the pockets of the employer are; the cost of the tools is going to rocket at some point soon when the providers realise that the data centre costs are insane.
And this is really what I don't get. What is the end game here?
If ultimately the aim is for AI to take all the jobs, who is actually going to pay for anything if nobody has got any money? (including paying for the AI itself)
That's what I don't get, surely someone has some vision of how this is all supposed to play out, and it's going to end up looking like that movie Elysium if AI is as good as they say.
Alternatively, someone drunkenly posited to me that it's all just a massive short, those in bed with it build AI up in a massive way, and when it fails, guess who's shorting the stocks 🙄
I've no idea if that's actually feasible though, I don't understand share dealing etc. even remotely well enough. Can you bet on a bubble bursting if you're on the inside actively inflating it?
it generated perfect code that worked first time.
Did the 'AI' assess this 'perfection'? Or did you?
IDK, there is no way I'd trust a stochastic BS machine to produce robust and secure application code that a business would bet its future, and insurance premiums, on without making some knowledgeable sucker or cheap stand-in suffer the pain of 'controlling' it.
AI won't be making me, or my former roles, redundant. But I would not put it past a C-suite exec or results-driven underling to do so based on the lies and nonsense spewed out by AI boosters and the constant drive for 'number go up'.
And, as a few have identified, no AI is genuinely intelligent. Nor are general LLMs any real use.
I work in railway structures examining. We still need to hit structures with hammers. Until a drone can swing a hammer and produce a report I think I'm safeish.
I know of one guy who uses chatgpt to assist his work. I've tried with copilot but I don't have enough knowledge to properly drill down into it.

