ChatGPT? On the cusp of a revolution, or just a fancy toy?
This topic has 338 replies, 67 voices, and was last updated 5 months ago by Klunk.
kelvinFull Member
I just tried the Microsoft Bing chat AI, it wrote me a poem about going for a bicycle ride.
Will Pam Ayres sue?
thols2Full Member
Will Pam Ayres sue?
I wonder what she’ll do
If someone steals her rhyme
Or copies her style sublime

Will she take them to court
And make them pay a lot
Or will she laugh it off
And write another spoof

Will Pam Ayres sue?
I hope she never does
She’s such a lovely lady
And her poems make me happy

BruceWeeFree Member
What is the forum on singletrackworld.com like?
The forum on Singletrackworld.com is very active and friendly. There is a wide variety of topics discussed, from mountain biking to bike maintenance, and plenty of helpful advice from experienced riders. The forum is also a great place to ask questions or get tips for your next ride.
Is the forum on singletrackworld.com a good place to ask about wood burning stoves?
Yes, the forum on singletrackworld.com is a good place to ask about wood burning stoves. There are a number of knowledgeable members who are willing to offer advice and tips about wood burning stoves. The forum also includes a section specifically dedicated to wood burning stoves.
How about Audis?
Yes, the forum on singletrackworld.com is a good place to ask about Audis. The forum has a section dedicated to Audi cars and a community of knowledgeable Audi owners who are more than willing to share their expertise.
Is there a danger the forum on singletrackworld.com is too middle class?
No, there is no danger that the forum on Singletrackworld.com is too middle class. The forum is open to anyone interested in discussing topics related to mountain biking, regardless of their background or economic class.
nickcFull Member
Haha, just read that in the Guardian.
That is some **** up shit, right there.
scratchFree Member
Surely once these things start to get ahead of themselves we can just pull the plug out, right?… right???
dbFree Member
I guess that is the point. If we think we can use AI to save the planet, it doesn’t need to be very clever before it works out humans are the problem and deploys a biological weapon to quietly remove us.
molgripsFree Member
Good job Isaac Asimov has already done the important work on this.
nickcFull Member
The trouble with that is that Isaac never really got to see the world that we live in already. The truth is that if you can programme something into a machine, someone else can re-programme it, turn bits of it off, or switch it off altogether.
molgripsFree Member
Yes but that’s a different question. In the case of the AI drone, someone left out rule 1, perhaps on purpose since it would prevent a military drone from doing anything. But essentially, there needs to be a military version of rule 1 (as there is for humans). When they did add it, the AI worked around it.
There’s nothing intrinsically wrong with AI, it just needs to be programmed very very carefully – and that’s the challenge. And that’s what this experiment was all about.
desperatebicycleFull Member
I just tried the Microsoft Bing chat AI, it wrote me a poem about going for a bicycle ride.
Will Pam Ayres sue?
Funny that – saw the BBC woman show the 2 fellas from Abba some lyrics she got it to generate “in the style of Abba”. Bjorn says “That’s rubbish!” It’s got Abba lyrics spot on then, eh! 😛
nickcFull Member
Read the article – when they re-programmed the AI drone to not kill the human, it just then cut off any comms so that it couldn’t be interfered with while it collected “points”.
In the case of the AI drone, someone left out rule 1
And anyway, if you programme the drone, again, that means that someone else can re-programme it.
The basic issue with the machine learning model that has taken over from the logical-step model of programming AI is that the engineering is way ahead of the science. All the AI experts have more or less admitted now that they don’t really 1. understand the process, and 2. have any real understanding of what happens at the point that you give it all the info it can handle. ChatGPT has some pretty robust guard-rails; we just don’t know what happens if/when those are removed.
zilog6128Full Member
There’s nothing intrinsically wrong with AI, it just needs to be programmed very very carefully
you’ve missed the point though of why people are worried about “hyper-intelligent” AI – you don’t program it, or have any say about how it works or what results it comes up with. It’s machine-learning, it programs itself. Hence the scenario where the AI snowballs and starts doing all kinds of things that weren’t envisaged. There’s a documentary called The Terminator you might want to check out 😂
copaFree Member
The YouTube channel of Rob Miles has really good analysis of how insanely complex AI safety can be. Here’s an example of something seemingly simple: having a reassuring big red button to switch an AI off if it starts misbehaving: AI Stop Button Problem
As one of the comments points out about the dangers of AI testing: “You haven’t proved it’s safe, you’ve only proved that you can’t figure out how it’s dangerous.”
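For anyone who can’t watch the video, the nub of it fits in a few lines. A toy sketch (all payoff numbers invented purely for illustration, not taken from the video): an agent that simply maximises expected reward works out that disabling its own off switch is the better move.

```python
# Toy model of the "stop button" problem. All payoff numbers are
# invented purely for illustration.

REWARD_TASK_DONE = 10   # reward for completing the task
REWARD_SHUT_DOWN = 0    # reward if the operator shuts it down first
P_BUTTON_PRESSED = 0.5  # chance the operator presses the button

def expected_reward(action: str) -> float:
    if action == "allow_shutdown":
        # The operator may press the button before the task completes.
        return (P_BUTTON_PRESSED * REWARD_SHUT_DOWN
                + (1 - P_BUTTON_PRESSED) * REWARD_TASK_DONE)
    # "disable_button": nothing can interrupt it, so it always scores.
    return REWARD_TASK_DONE

actions = ["allow_shutdown", "disable_button"]
print(max(actions, key=expected_reward))  # -> disable_button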
BruceWeeFree Member
It’s an interesting proposition and I think we are going to have to fundamentally change the way we approach programming.
Up until now you could fairly safely assume garbage in = garbage out. Corrupted inputs or a mistake in the programming syntax would mostly result in a crash or nonsense output. The program wouldn’t deliver you a perfectly plausible output that wasn’t what you had in mind.
Now we have to think that if a mistake is made with the input data or the syntax then we are still going to get something out. And that something could be very very bad depending on the system in question.
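The contrast is easy to show with a toy (the “model” below is a made-up stand-in, not a real one): the old world fails loudly, while anything built to return its most plausible answer fails silently.

```python
import difflib

# Old world: garbage in = a crash you can't miss.
try:
    int("twelve")
except ValueError as err:
    print("conventional program:", err)

# New world: a toy "learned" model (a stand-in, purely illustrative)
# that always returns its closest plausible answer, however bad the input.
KNOWN_COMMANDS = ["raise saddle", "lower saddle", "index gears"]

def fuzzy_model(prompt: str) -> str:
    # cutoff=0.0 means it will *always* answer with something.
    return difflib.get_close_matches(prompt, KNOWN_COMMANDS, n=1, cutoff=0.0)[0]

print(fuzzy_model("razse sadle"))      # -> "raise saddle" (a plausible fix)
print(fuzzy_model("defuse the bomb"))  # -> still answers, confidently wrong
```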
nickcFull Member
@copa’s link up there is a video, just a heads up if you’re at work.
kelvinFull Member
a military version of rule 1
Military tech that can’t kill people? And can’t stand by when it could stop people killing people? I can see the appeal, but…
nickcFull Member
then we are still going to get something out.
If you ask it to provide an academic paper on a subject, ChatGPT will often cite entirely made-up references, papers, even authors to support its thesis. The programmers call it “hallucinatory errors”.
molgripsFree Member
you’ve missed the point though of why people are worried about “hyper-intelligent” AI – you don’t program it, or have any say about how it works or what results it comes up with.
AI has to be programmed one way or the other, just like real I does.
Read the article
I did, and I referred to the fact the AI worked around the rule about not killing the operator.
And anyway, if you programme the drone, again, that means that someone else can re-programme it.
Only if they break your security. The exact same risk exists for non-AI drones, power grids, weapons and anything else with a connection.
Military tech that can’t kill people?
I said a military version of rule 1, meaning that it can’t kill people on its own side.
Less facetiously, you’d need to program rules of engagement into it, just like you do with human soldiers. If an AI can be taught the rules of chess, grammar, or how to do operations, then it can be taught the rules of war. The hard part will be things like sacrificing troops for the greater good, which is more succinctly described by the trolley problem. That’s really difficult for humans, and I suspect an AI would cause problems whichever way it decided to go.
nickcFull Member
Only if they break your security
Or the machine itself doesn’t like its own programming.
molgripsFree Member
Again, just like human intelligence. But what do you mean by ‘like’? To like something you have to evaluate it against your own criteria for likeability, which an AI would have to be provided with, or told how to obtain.
These AI programs aren’t sentient beings with their own agenda. They are just doing what they’ve been told to do. The difference is that they’re being told at a higher level. So rather than ‘do this’ they’ve been told ‘figure out how to do this’. The issue is that the ways it comes up with may be unpredictable by the humans in charge. What we need to work on is ‘figure out how to do this whilst obeying the rules 1 to x’ where x is a large number.
I’m not familiar with the AI experiment in question but they seem to have started off with a blank slate, and then added the ‘don’t kill the operator’ rule. But they didn’t add ‘don’t damage your own infrastructure’, so it could do that. It prioritised the mission above all else because it wasn’t told otherwise. As a military drone it probably needed to be programmed to respect and obey the chain of command like human soldiers are. The principle of “if I don’t hear you tell me not to do it, then I can do it” is something humans are aware of and understand the consequences of. This thing wasn’t; it behaved like a badly brought-up child rather than a soldier.
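A toy sketch of that failure mode (the actions and scores are invented, and nothing here reflects the real experiment): a pure point-maximiser takes the nastiest option unless an explicit rule in its objective penalises it.

```python
# Toy point-maximising "drone". Actions and scores are invented purely
# to show why any rule you forget to add gets exploited.

ACTIONS = {
    "destroy_target":      10,
    "await_confirmation":   2,
    "cut_operator_comms":  11,   # top score unless a rule forbids it
}

# rule name -> (offending action, penalty)
RULES = {
    "dont_cut_comms": ("cut_operator_comms", -100),
}

def best_action(active_rules):
    scores = dict(ACTIONS)
    for rule in active_rules:
        action, penalty = RULES[rule]
        scores[action] += penalty
    return max(scores, key=scores.get)

print(best_action([]))                  # -> cut_operator_comms
print(best_action(["dont_cut_comms"]))  # -> destroy_target
```

The catch, as above, is that ‘rules 1 to x’ has to be complete before the optimiser finds the gap you left.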
nickcFull Member
As a military drone it probably needed to be programmed to respect and obey the chain of command like human soldiers are.
You’re familiar with the libel case that’s just concluded in the Australian courts, no?
I think the issue is 1. we just don’t know (because the science behind at-scale machine learning is currently lagging behind our ability to engineer it), and 2. humans are very unforgiving of machine error – and they will make errors. Even if you have stats that prove AI makes fewer mistakes than human operators, humans will be less tolerant of those errors, especially if those AI are involved in killing humans on purpose.
zilog6128Full Member
AI has to be programmed one way or the other, just like real I does.
that’s not how neural nets, machine learning etc work. Someone has to program the basic structure, yes, but that’s it. After that it creates its own algorithms – the only input the creators have is which training data they feed it. That’s the reason the ChatGPT creators cannot control its output and get it to do (or not do!) whatever they want. Nobody is scared of the possible consequences of a simple AI that can be explicitly/precisely programmed! What makes ML AI so useful to us dumb humans is that it’s capable of doing what we can’t, or can’t even understand… but that’s the danger.
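A minimal illustration of that point (toy data, plain Python): the human writes the structure and the training loop, but the behaviour – the learned weight – comes entirely from the data.

```python
# The human writes this structure; the *data* decides what it does.
# Toy example: learn y = w*x by gradient descent on made-up data.

data = [(1, 2), (2, 4), (3, 6)]   # change the data, get a different "program"

w = 0.0      # the learned parameter; meaningless until trained
lr = 0.01    # learning rate

for _ in range(1000):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # gradient of the squared error wrt w
        w -= lr * grad

print(round(w, 2))   # -> 2.0, learned from the data, never written by hand
```

Scale that loop up to billions of weights and you have the situation being described: nobody wrote the resulting program, and nobody can read it either.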
copaFree Member
The issue is that the ways it comes up with may be unpredictable by the humans in charge. What we need to work on is ‘figure out how to do this whilst obeying the rules 1 to x’ where x is a large number.
I think the fundamental issue is that the people who created ChatGPT aren’t sure exactly what it’s doing. They created the conditions for it to work but they don’t know how it does. How do you apply safety to something you don’t fully understand?
molgripsFree Member
that’s not how neural nets, machine learning etc work. Someone has to program the basic structure, yes, but that’s it. After that it creates its own algorithms – the only input the creators have is which training data they feed it.
The training IS the programming. Just like with humans.
I think the fundamental issue is that the people who created ChatGPT aren’t sure exactly what it’s doing. They created the conditions for it to work but they don’t know how it does. How do you apply safety to something you don’t fully understand?
ChatGPT is not the same as a military AI, in the same way that your laptop is not the same as NORAD.
kelvinFull Member
They are trying to get across to you why machine learning is so different to AI or expert systems, or whatever other “programmed” forms of pseudo-intelligent systems we’ve been familiar with in the past… once created, you can’t easily determine what it knows or how it knows it (and so predict what decisions it might make).
nickcFull Member
ChatGPT is not the same as a military AI
The military are not building their own AI, they’re just getting it from the same couple of sources that everyone else is getting it from, and adapting it to what they want/need. With the same lack of understanding that everyone else has.
timbaFree Member
The programmers call it “hallucinatory errors”
A made-up practical experiment rather than a made-up reference; the US ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation.
https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/
molgripsFree Member
A hypothetical thought experiment that was itself not really needed, since it’s the plot of 2001, I think.
thecaptainFree Member
If you ask it to provide an academic paper on a subject, ChatGPT will often cite entirely made-up references, papers, even authors to support its thesis. The programmers call it “hallucinatory errors”.
Calling it a hallucination or an error completely misses the point. ChatGPT was never designed or intended to generate any sort of truth or reality. It’s a bullshit merchant designed to generate something that might be believed by a credulous person, by sticking together words and phrases that are commonly used together. You might as well ask Boris Johnson a question. It’s no more a hallucination when it says something that turns out to be right than when it says something that isn’t.
You can get away with bullshit when you’re writing poetry or politics, not so much when you’re dealing with factual content.
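“Sticking together words and phrases that are commonly used together” is, more or less literally, how the simplest language models work. A toy bigram generator (tiny invented corpus) makes the point: the output reads fluently, and at no step is truth ever consulted.

```python
import random
from collections import defaultdict

# Toy bigram model: glue each word to one that commonly follows it.
# The corpus is invented; a real LLM does this with vastly more context.
corpus = ("the forum is a good place to ask about stoves "
          "the forum is a friendly place to ask about audis").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word, out = "the", ["the"]
for _ in range(9):
    word = random.choice(follows[word] or ["the"])
    out.append(word)

print(" ".join(out))   # fluent-ish, and truth was never consulted
```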
dyna-tiFull Member
ChatGPT was never designed or intended to generate any sort of truth or reality. It’s a bullshit merchant designed to generate something that might be believed by a credulous person
When are we offering it STW membership?
KlunkFree Member
Some interesting stuff on Bing behaving badly (sorry if this has been posted)
molgripsFree Member
Lol, Bing chat has censors like talking to someone in Soviet Russia.
thebunkFull Member
Own a Dewalt toolkit but can’t figure out how to raise a saddle? ChatGPT can now help with bike maintenance (for the truly clueless).
If it can count chain links for me that would be useful though.
thols2Full Member
It’s weird how it’s reasonably good at things I would have expected it to suck at, but terrible at things I would have expected it to excel at.
I’ve been told GPT-4 with code interpreter is good at math.
GPT-4 with code interpreter: pic.twitter.com/YpyLwsoneJ
— Daniel Litt (@littmath) October 1, 2023
mattyfezFull Member
It’s a curiosity, at the moment…
But given that, imagine when governments outsource thinking and policy to ‘AI’… actual AI, I mean.
‘AI’ is the new ‘Cloud’ – it’s just a buzzword to those who wield it, when it’s really just an algorithm, or just your data on a remote PC/datacentre, or a pseudonym for whatever you want it to be. But at some point, it won’t be.