This topic has 338 replies, 67 voices, and was last updated 6 months ago by Klunk.
ChatGPT? On the cusp of a revolution, or just a fancy toy?
Cougar (Full Member)
This is mad AF
I gave GPT-4 a budget of $100 and told it to make as much money as possible.
I'm acting as its human liaison, buying anything it says to.
Do you think it'll be able to make smart investments and build an online business?
Follow along ? pic.twitter.com/zu4nvgibiK
— Jazz Fall (@JazzFall) March 15, 2023
chrisyork (Full Member)
I was dubious at first but then tried it for a few things, not as a replacement for learning something but as a supplement, and it’s been very useful!
What I don’t like is when I saw our junior team members having it as a favourite in their browser, bearing in mind these are people in the early stages of their careers in IT. I don’t believe there are shortcuts to learning, and luckily it’s still humans that do the promoting!
I will say, a few of these users even after passing basic industry-grade quals are still very poor at their job, and I’m struggling to understand how! It’s like the younger generation just don’t give a toss, want more money but can’t even get the fundamentals right! Yes, we have an issue! We now can’t seem to promote from within to our team, as the team that escalates work to us… are terrible at their own jobs! Sorry, this turned into a bit of a rant!!
Flaperon (Full Member)
I’ve found it incredibly useful for doing some simple tasks that would have taken me hours. For example, I needed to take a CSV full of addresses and prices, and show them all on a map using Javascript. A working solution popped out in about 5 seconds, but what really impressed me was a back-and-forth where I described something that wasn’t working as I expected, and I got a solution which wasn’t just a change of variable names or something, but the fix that I’d have done myself.
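For a flavour of the kind of task I mean, here’s a minimal sketch in Python using folium rather than the Javascript ChatGPT actually gave me; the file name and the lat/lon/price column names are made up purely for illustration:

```python
# Minimal sketch: plot points from a CSV on a map.
# Assumes a hypothetical listings.csv with 'lat', 'lon' and 'price' columns.
import csv
import folium

m = folium.Map(location=[51.5, -0.1], zoom_start=10)  # placeholder centre (London)

with open("listings.csv", newline="") as f:
    for row in csv.DictReader(f):
        folium.Marker(
            location=[float(row["lat"]), float(row["lon"])],
            popup=str(row["price"]),
        ).add_to(m)

m.save("map.html")  # open in a browser to view the markers
```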
I have the advantage of a degree in Computer Science which pushed a strong practical line on software development. I recognise ChatGPT as a very useful tool to bounce ideas off and get working solutions to simple problems. For anything more complicated it can still be handy, but ultimately if you can’t hold and conceive what you want to do in your head, it’ll hit a brick wall quickly.
The other big thing is that if you write the code yourself, you know how it works when you go back to make changes the next day. ChatGPT has forgotten it by then, so the existing code cannot be adapted or extended without an intimate knowledge of what it’s doing.
So if I were a half-arsed junior developer I’d be worried. If I ran the company, I’d be overjoyed.
hot_fiat (Full Member)
We’ve started to explore its capabilities at writing property generation rules and custom connectors in our identity management solution. Normally it’ll take a pretty good developer a few months to grasp the various nuances of our solution. We’ve been astonished at how quickly it can pick things up and generate really effective solutions.
I’ve not tried v4 yet, but from what I’ve heard, you can give it even more abstract concepts and it’ll churn out solutions just as efficiently. Scary stuff, but also fascinating.
PJay (Free Member)
Sorry, I’m very out of my depth with all of this, and have only just taken a look at the thread, so apologies if it’s not relevant, but I have a couple of queries.
The first is a ‘morality’ issue. I would guess that a totally unfettered AI system could be employed for any number of nefarious activities. Clearly there will be checks in place to stop it being used illegally, but what about legal, but, arguably, immoral activities? There are plenty of online stores and auction houses happily taking money for facilitating trade in pirated or invalidly licenced software, claiming only to be providing the marketplace. ChatGPT is a commercial concern; do you think they’ll be happy monetising the use of the AI for dubious purposes (thinking of the writing of dissertations that would pass the current crop of plagiarism checkers, or perhaps writing computer code that could be used for hacking or malware purposes, as examples)?
The second thought is about ownership of the ‘output’. Clearly the ChatGPT code is proprietary, but the information it learns from and uses to generate output (the internet) is not its property and quite possibly will be owned by other parties.
Following on from recent legal action from certain news publishers demanding reimbursement when Facebook posts their stories to people’s feeds, could the ChatGPT owners be similarly targeted by content/data providers if their data is used to generate a commercial ChatGPT solution?
copa (Free Member)
Clearly there will be checks in place to stop it being used illegally, but what about legal, but, arguably, immoral activities?
Yep, loads of potential to be used for immoral activities.
One that springs to mind is use as a political tool to generate messaging that’s based on a person’s likes/dislikes, history, prejudices etc. The ability to spam social media with highly effective and personalised messaging that’s impossible to distinguish from human interactions.
could the ChatGPT owners be similarly targeted by content/data providers if their data is used to generate a commercial ChatGPT solution?
I don’t think this will be a problem. It reinterprets and adapts from existing sources in much the same way that a human writer does. The copy it generates is designed to pass plagiarism tests.
thecaptain (Free Member)
ChatGPT will accelerate the decline of civilisation by flooding the world with bullshit to such an extent that many people will have no idea what to believe.
desperatebicycle (Full Member)
ChatGPT will accelerate the decline of civilisation by flooding the world with bullshit to such an extent that many people will have no idea what to believe.
Pretty sure Facebook/Twitter/TikTok etc have got us way along that path already.
oldnpastit (Full Member)
No, ChatGPT! You really don’t understand, do you?
I guess I’m on my own, and any wheelbuilders out there can sleep easy.
When building a bicycle wheel, how can I ensure that the hub logo lines up with the valve hole on the rim?
When building a bicycle wheel, you can ensure that the hub logo lines up with the valve hole on the rim by following these steps:
1. Start by lacing the spokes onto the hub according to your preferred spoke pattern. You can refer to a bicycle wheel building guide or manual to choose a spoke pattern and to know the correct spoke length and spoke count for your specific hub and rim combination.
2. Once the spokes are laced onto the hub, insert the axle into the hub and mount it onto a truing stand. This will allow you to align the rim with the hub and adjust the spoke tension.
3. Locate the valve hole on the rim and measure its position in relation to the hub. You can use a ruler or a measuring tape to get an accurate measurement.
4. Rotate the hub until the logo is in the desired position, aligning it with the valve hole on the rim. You may need to loosen some of the spokes and adjust their tension to achieve the correct position.
5. Once the logo is aligned with the valve hole, adjust the spoke tension to ensure that the wheel is true and round. Use a spoke tension meter to get accurate readings of the spoke tension.
6. Finally, check the alignment of the wheel by spinning it on the truing stand and checking the clearance between the rim and the brake pads. If the wheel is not perfectly aligned, adjust the spoke tension until it is.
By following these steps, you can ensure that the hub logo lines up with the valve hole on the rim and that the wheel is true and round, which will result in a smooth and reliable ride.
the00 (Free Member)
🤣
It might be right… but it sounds like a lot of work building the wheel twice. I’m surprised it didn’t just suggest applying the hub logo to align with the valve hole after the wheel is built.
fooman (Full Member)
which will result in a smooth and reliable ride
None of my rides are smooth and reliable because I didn’t build my wheels the ChatGPT way.
grahamt1980 (Full Member)
I just think that all of the large language models are going to fall foul of GDPR and will have to be heavily modified to make them compliant.
BruceWee (Free Member)
I’ve been playing about with it, trying to get it to write Python functions that I’m too lazy to do myself. For example:
Given the front x coordinate is 0 and the tail x coordinate is 1, write a python script that will return the upper and lower y coordinates for a given x coordinate of a NACA 4 digit airfoil
So far I’ve run the same query 5 times and it’s yet to give me a script that returns the same value as one of the other scripts. Most have been in the ballpark but some have been way off.
Interestingly, the first couple of scripts it gave me were wrong while the later ones were closer to being correct.
Edit: Just tried another one and it got it completely wrong. That’s slightly less worrying.
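For the record, here’s roughly the shape of answer I was fishing for: a minimal sketch of the standard NACA 4-digit equations, written by hand rather than by ChatGPT. The function name, the default ‘2412’ code and the simplification of evaluating y at the station x itself (rather than offsetting along the camber-line normal) are all my own choices:

```python
import math

def naca4_y(x: float, code: str = "2412", closed_te: bool = True) -> tuple[float, float]:
    """Return (y_upper, y_lower) of a NACA 4-digit airfoil at chordwise station x in [0, 1].

    Simplification: y is evaluated at the station x itself, ignoring the small
    chordwise offset that comes from measuring thickness normal to the camber line.
    """
    m = int(code[0]) / 100.0   # max camber as a fraction of chord
    p = int(code[1]) / 10.0    # position of max camber as a fraction of chord
    t = int(code[2:]) / 100.0  # max thickness as a fraction of chord

    # Half-thickness distribution (last coefficient -0.1036 closes the trailing edge).
    a4 = -0.1036 if closed_te else -0.1015
    yt = 5.0 * t * (0.2969 * math.sqrt(x) - 0.1260 * x
                    - 0.3516 * x**2 + 0.2843 * x**3 + a4 * x**4)

    # Mean camber line (zero for a symmetric 00xx section).
    if m == 0.0 or p == 0.0:
        yc = 0.0
    elif x < p:
        yc = m / p**2 * (2.0 * p * x - x**2)
    else:
        yc = m / (1.0 - p)**2 * ((1.0 - 2.0 * p) + 2.0 * p * x - x**2)

    return yc + yt, yc - yt
```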
thecaptain (Free Member)
Remember, there is no “intelligence” in the conventional sense underlying ChatGPT and similar models. It’s a predictive text generator, designed to produce superficially plausible bullshit. It’s worrying how many people seem to think it is some sort of search engine and take its output seriously.
prettygreenparrot (Full Member)
ChatGPT and its ilk are interesting. What seemed useful, sort of, was this example of LLM + LangChain + ‘expert’ sources: ChemCrow https://arxiv.org/abs/2304.05376
Though you could ask a chemist instead.
CountZero (Full Member)
I will say, a few of these users even after passing basic industry-grade quals are still very poor at their job, and I’m struggling to understand how! It’s like the younger generation just don’t give a toss, want more money but can’t even get the fundamentals right! Yes, we have an issue!
Sounds like me, talking about new recruits to the staff in the studio of the small print/publishing company I worked for. I never had any real experience or qualifications in the industry when I joined the company, but I did do technical drawing at school, which proved very useful. New people had been through design school, had all sorts of qualifications, etc, and I had to basically try to train them how to do the job properly and professionally, which was a bit awkward, ‘cos they were roughly the same age as me, or even slightly older.
That was fifty years ago… 🤪
BruceWee (Free Member)
Remember, there is no “intelligence” in the conventional sense underlying ChatGPT and similar models. It’s a predictive text generator, designed to produce superficially plausible bullshit. It’s worrying how many people seem to think it is some sort of search engine and take its output seriously.
Yeah, but I’ve been reading quite a bit about how it can be used for programming. The long and short of it seems to be that it can be used to produce individual functions although my experience shows that you really should do some thorough QA on those functions before relying on them.
Maybe I need to ask it for unit tests to go with the functions it gives me.
Anyway, it seems useful for spitting out the kind of annoying trigonometry based functions that always take me at least an hour of scribbling on paper and several attempts to get every symbol and sign in the right place. For that I see it as having some potential. I’m sure there’s more once I start digging into it.
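Something like this is what I have in mind for the unit tests, assuming the sketch above is saved as naca.py (the module name and the expected values are my own, taken from the textbook properties of a symmetric 0012 section):

```python
# Hypothetical unit tests for a naca4_y(x, code) function saved in naca.py.
import pytest
from naca import naca4_y

def test_symmetric_section_is_mirrored():
    # A 00xx section has no camber, so upper and lower surfaces should mirror each other.
    upper, lower = naca4_y(0.4, code="0012")
    assert upper == pytest.approx(-lower)

def test_0012_max_half_thickness_near_30_percent_chord():
    # NACA 0012: max thickness 12% of chord, i.e. half-thickness ~0.06, at ~30% chord.
    upper, _ = naca4_y(0.3, code="0012")
    assert upper == pytest.approx(0.06, abs=0.002)

def test_leading_and_trailing_edge_are_thin():
    # Thickness is zero at the leading edge and ~zero at a closed trailing edge.
    for x in (0.0, 1.0):
        upper, lower = naca4_y(x, code="0012")
        assert abs(upper - lower) < 1e-3
```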
steamtb (Full Member)
At least some university students are starting to get picked up by AI detection filters for assessed written work. I imagine detection will accelerate at the same pace as production, or not?!? Although if you know the students well, it does stand out like a sore thumb! It’s also bizarrely bad at referencing, in so far as it lists references that don’t appear to exist, which is rather odd…
willard (Full Member)
It’s not great for writing Python. I tried it for a few things, but the description you give it can be misinterpreted and the quality is really variable. I ended up writing the scripts myself as it was easier and cleaner.
zilog6128 (Full Member)
Creating the right prompts to get the result you want (or refine the answer) is very much a (new) skill though!
BruceWee (Free Member)
Creating the right prompts to get the result you want (or refine the answer) is very much a (new) skill though!
This is what I’m interested in right now, just playing about and getting a feel for the possibilities and limitations.
Product Owners should be forced to use it, I reckon. Hopefully that will help them realise they can’t just write a stream of consciousness on a Jira ticket and the developers will magically produce the picture they had in their head 🙂
It’s not great for writing Python.
Is there a language it’s more suited to? If so, why would you say that was?
thecaptain (Free Member)
it lists references that don’t appear to exist, which is rather odd…
That’s very much my point. It’s not at all odd when you understand what it’s doing. I asked it about me and it generated a couple of paragraphs that were broadly reasonable, and then told me I’d won a few grants/awards that I’d never heard of, still less applied for (though they did at least exist). I asked it for my most-cited paper and it blended a couple of things I’d done and generated a ref that didn’t exist. My wife became a mash-up of herself and another person of the same name. Etc etc. It’s plausible-looking bullshit based on associations between words and phrases; any information content is incidental to the design; there is no check of reliability whatsoever. It fools us because we are easily fooled by syntactically correct bullshit, not because it is intelligent. We already knew that since Eliza.
If/when its training data gets polluted with its own output without detailed human checking for correctness it will degrade to randomness.
stevextc (Free Member)
thecaptain:
It’s plausible-looking bullshit based on associations between words and phrases; any information content is incidental to the design; there is no check of reliability whatsoever. It fools us because we are easily fooled by syntactically correct bullshit, not because it is intelligent. We already knew that since Eliza.
If/when its training data gets polluted with its own output without detailed human checking for correctness it will degrade to randomness.
That isn’t really different to journalism though.
Take bikes as it’s STW….
SRAM release some stuff (output) about transmission to journalists, and some very detailed instructions/rules about how they can test it and write about it. Journalists wrap some plausible-looking words around this and produce some output…
Other journalists regurgitate the same
zilog6128 (Full Member)
I asked it about me and it generated a couple of paragraphs that were broadly reasonable, and then told me I’d won a few grants/awards that I’d never heard of, still less applied for (though they did at least exist). I asked it for my most-cited paper and it blended a couple of things I’d done and generated a ref that didn’t exist.
you’re asking it to do stuff it wasn’t designed for though and, in fairness, stuff the company behind it has never claimed it can do 🤷♂️ (They do have a specific AI model for writing code though, which has presumably been trained specifically for that, which may explain why this is the one technical task it’s actually OK at.)
doris5000 (Free Member)
I’m still hugely impressed with it, as a code noob. A month ago my Javascript didn’t extend much past console.log('hello world');.
Now I’ve just rolled out a little web tool for my team at work which uses a server function and API calls. A month ago I didn’t know what a server-side function was, and had never heard of AWS Lambda.
And yes, a competent web dev could probably have done the same thing in a quarter of the time it took me – there was still a lot of trial and error – but I am still amazed that I was able to get this done with such comparatively little fuss.
If I had been doing more advanced stuff then maybe it would have struggled more, but since it’s all so basic, and basic code stuff has been pretty well documented online over the years, it was pretty impressive.
Though I did still need to break it down into chunks, and then get it to assess whether those chunks would work when stitched together (frequently they needed amending).
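For anyone else starting from zero: the server-side function part is less scary than it sounds. This is a generic illustration of an AWS Lambda handler in Python, not my actual tool; the query parameter and the greeting are made up:

```python
import json

# Minimal AWS Lambda handler sketch (Python runtime), as invoked via API Gateway.
# The 'name' query-string parameter and the greeting are purely illustrative.
def lambda_handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```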
zilog6128 (Full Member)
Though I did still need to break it down into chunks,
yeah this is key, but it’s also good practice when writing code the old-fashioned way too IME!
Cougar (Full Member)
It’s alright folks, we’re saved. The UK government is building its own red, white and blue version. Because we want sovereignty without ethics, or something.
https://www.engadget.com/the-uk-is-creating-a-100-million-ai-taskforce-143507868.html
nickc (Full Member)
It fools us because we are easily fooled by syntactically correct bullshit, not because it is intelligent.
Aye, my wife studies 18thC literature and gave it a test to see what it would do. While doing some research she stumbled across several quotes about a politician’s nose in comparison to a bridge; it pops up in several satirical journals and even a novel. There’s no reference for why the joke’s funny or what the link is; she even asked her colleagues in the history dept and they haven’t a clue. It’s become an interesting diversion to hunt for it.
ChatGPT just completely made up an explanation and cited several papers, some of which exist but don’t mention it, let alone reference it, and some that are, again, just entirely imaginary.
It’s good at some things, she thinks; given clear instruction (say, a UG essay) she’d give it a solid 50, but for other things it’s just woeful.
zilog6128 (Full Member)
It’s alright folks, we’re saved. The UK government is building its own red, white and blue version. Because we want sovereignty without ethics, or something.
It’s easy to put a spin on this that it’s stupid, or a waste of money, but if you think that AI is going to become a fundamentally vital technology (hint: it is 😃) then it 100% makes sense that we (as a country) would want something that isn’t controlled by a (foreign) private company. Look at how absolutely everyone has dropped the ball re. being able to manufacture advanced computer chips. (That would currently be way more useful, but it’s going to cost a hell of a lot more than £100M to bridge that tech gap now).
BruceWee (Free Member)
but if you think that AI is going to become a fundamentally vital technology (hint: it is 😃) then it 100% makes sense that we (as a country) would want something that isn’t controlled by a (foreign) private company.
All very true.
However, we all know the money is going to be handed to some friends of Rishi Sunak and Michelle Donelan. I somehow doubt any of their friends are particularly tech-savvy, but I’m sure ‘ChatUK’ will be as successful as all the other initiatives that have involved handing cash to friends of Tories.
stevextc (Free Member)
nickc:
ChatGPT just completely made up an explanation and cited several papers, some of which exist but don’t mention it, let alone reference it, and some that are, again, just entirely imaginary.
Sounds more and more like journalists and politicians have been using a beta for a decade.
In fact I’m starting to wonder if Boris might not actually be an early android/chatbot…
Data could never use contractions and had skin issues… Boris can’t tell the truth and has hair issues?
References to non-existent EU laws, made-up quotes, £325M to the NHS???
stevextc (Free Member)
but I’m sure ‘ChatUK’ will be as successful as all the other initiatives that have involved handing cash to friends of Tories
Depends how you measure success?
If you’re the one being handed cash their record is excellent. Track and Trace to the new emergency thing? Bags of money with no requirement to produce anything useful?
oldnpastit (Full Member)
I just tried both Bard and ChatGPT on a question about C++ static ctor initialization order hacking. ChatGPT was pretty convincing but Bard was all over the place.
avdave2 (Full Member)
I think the idea that it’s going to take over is rubbish. I asked it to do something simple and all it came back with was “I’m Sorry Dave, I’m Afraid I Can’t Do That”.
Hardly something we need to worry about, is it?
copa (Free Member)
ChatGPT just completely made up an explanation and cited several papers, some of which exist but don’t mention it, let alone reference it, and some that are, again, just entirely imaginary.
One of the weird things is that despite its insane sophistication, it seems unable to simply say that it doesn’t know something. It may be a fairly trivial thing to fix or it could be a more fundamental problem, owing to the fact that AI doesn’t really understand what it generates.
Cougar (Full Member)
It doesn’t know that it doesn’t know. It probably can’t; it doesn’t ‘know’ anything, it’s a highly advanced predictive text generator.
nickjb (Free Member)
it seems unable to simply say that it doesn’t know something.
Does it post in the politics threads on here?
thols2 (Full Member)
I just tried the Microsoft Bing chat AI; it wrote me a poem about going for a bicycle ride.
I rode upon my bicycle
With joy and ecstasy
The wind was in my hair
The sun was shining on me

But soon I felt a dreadful pang
That pierced me to the core
A bowel movement was impending
That I could not ignore

I searched for a convenient spot
To ease my troubled state
But all was vain, for fate was cruel
And I was doomed by fate

I could not stop the flow of filth
That issued from my rear
It soaked my pants and stained my seat
And filled me with despair

Oh woe is me, oh woe is me
I’ve shat myself today
Oh woe is me, oh woe is me
What will the people say?

I cursed the gods and cursed myself
And cursed the bike I rode
I wished I had a better plan
Or a lighter load

But nothing could undo the deed
That I had done so ill
I had to face the consequence
And bear the bitter pill

Oh woe is me, oh woe is me
I’ve shat myself today
Oh woe is me, oh woe is me
There is no other way!