What's worrying / annoying is that it gives objectively wrong answers with such utter conviction.
I can do that, after significant amounts of alcohol.
There was talk that Apple was considering buying Perplexity, but it now appears that its AI is just as guilty of stealing information from copyright holders as all the others. So either the deal is off, or Apple buys it and then uses the technology for internal training.
I don’t really use AI apps; I’ve found Perplexity handy occasionally for giving me answers that would otherwise be full of random garbage or links to advertising sources, but I can’t be arsed to sit playing around creating random content or art - I’d rather read a book.
Quite often it just makes up its own sources.
Manchester Uni did some work on detecting plagiarism by students who've used ChatGPT and other LLMs to produce essays, and they realised that they don't need to check the actual essay: they just go directly to the references and citations and check those instead. Students don't tend to make up books or authors, so it's pretty much 100% accurate. Although I understand that the software that looks for plagiarism can now detect LLM use anyway.
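A minimal sketch of that citation-checking idea. The parsing and the `KNOWN_WORKS` set here are purely illustrative assumptions: a real checker would query an actual bibliographic source (a library catalogue, DOI resolver, etc.) rather than a hard-coded set, and real reference formats vary far more than this simple pattern allows.

```python
import re

# Hypothetical stand-in for a real bibliographic database lookup:
# pairs of (author surname, title), all lowercase.
KNOWN_WORKS = {
    ("orwell", "nineteen eighty-four"),
    ("darwin", "on the origin of species"),
}

# Matches the simple "Author, X. (Year). Title." style only.
CITATION_RE = re.compile(r"^(?P<author>[^(]+)\((?P<year>\d{4})\)\.\s*(?P<title>.+?)\.?\s*$")

def suspicious_citations(references):
    """Return references whose (surname, title) pair has no match in the catalogue."""
    flagged = []
    for ref in references:
        m = CITATION_RE.match(ref.strip())
        if not m:
            flagged.append(ref)  # unparseable reference: worth a manual look
            continue
        author = m.group("author").strip().lower()
        surname = author.split(",")[0].split()[0]  # text before comma/initials
        title = m.group("title").strip().lower()
        if (surname, title) not in KNOWN_WORKS:
            flagged.append(ref)  # no matching work found: possible fabrication
    return flagged
```

For example, a genuine citation like `"Orwell, G. (1949). Nineteen Eighty-Four."` passes, while a made-up one (invented author and title) gets flagged, which matches the observation above that fabricated references are the giveaway.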
So, it got there in the end without me giving it any answers. What's worrying / annoying is that it gives objectively wrong answers with such utter conviction. It might not be the future of scientific research any time soon, but there will be a general election in a couple of years.
This is the greatest problem: it’s being foisted on an unsuspecting public as something it’s not.
There’s no intelligence behind it, but to your average user it generates the appearance that there is, and someone who thinks they can lose staff or increase productivity gets excited.
Hallucinations without the use of psychedelic substances! How can I do that?
Non-AI search says:
https://www.sciencealert.com/how-to-create-hallucinations-without-drugs-surprisingly-easy-science
As with all current AI related things, it's less about the AI and more about how you phrase your query.
This.
I've been using it (Copilot) a fair bit at work. The most successful instances are when I drip-feed requests and slowly build up to what I need.
My wife has been using AI to check maths problems and sometimes it makes terrible mistakes.
Ahh, prompt engineers are so last week's bandwagon 🙂
It’s a great tool, but it unfortunately still requires programming skills and experience to get what you want, although it can save a lot of typing and sometimes you do get some good ideas from it.
I find it’s useful as a really good help system, but tbh as soon as you’re doing stuff where there’s not much material to plagiarise it can go well off the reservation.
Without already having the skills, it’s pretty much back to the good old days of that monster departmental Access DB app that someone coded years ago, which everyone’s too scared to touch as it’s the flakiest code thang ever 🙂
What's worrying / annoying is that it gives objectively wrong answers with such utter conviction.
I can 100% guarantee that it does not.
I can 100% guarantee that it does not.
How so? I can understand the semantics of querying the truth relationships between predicted words... but, as in my example, it fails to get the date of a well-known and extensively documented historical event correct to within a few decades. Literally hundreds of books have been written on the subject, plus reams of web-available reports from the time, so how can we describe it as anything other than 'objectively wrong'? It is very odd that it makes this mistake given the quantity of reliable, consistent data available from archives.
If it's getting confused over solid events from history, then what else is it hallucinating? In my mind I call it the F.I.: 'Fake Intelligence'.
Think you misunderstood the previous post.
It seems the Altman bubble is slowly deflating:
https://twitter.com/GaryMarcus/status/1954069722833805483
Think you misunderstood the previous post.
Oh, I thought we were getting into some philosophical semantics about how we determine what is real. Maybe it was just a joke.
The thing is... the misplaced Techno Optimism amongst the silicon avant-garde is so pervasive it is hard to differentiate - but yeah, that's pretty much what I look like.
The thing is... the misplaced Techno Optimism amongst the silicon avant-garde is so pervasive it is hard to differentiate
Yep, it's pretty hard to tell if a lot of stuff on the net is a parody or someone who is genuinely an idiot.
oh why bother reading that long document, I'll get Copilot to summarise it...
This irks me, as some of the online newspapers seem to be using this at the top of every story.
It's hardly like they need it, as the stories aren't that long.
Why bother writing a long article in the first place?
Seems to just be a way of making people dumber, or unnecessary use of AI.
I used to love Apple News but now find it disheartening as it’s so bad; I’m finding all the online news seems to be getting worse.
Seems to just be a way of making people dumber, or unnecessary use of AI.
But ‘summarise this’ is the killer app of the LLMs’ few-trick pony! You’ve got to use it, otherwise what have the billions been spent for? 😉
Why bother writing a long article in the first place?
As Blaise Pascal is alleged to have written:
Je n'ai fait celle-ci plus longue que parce que je n'ai pas eu le loisir de la faire plus courte.
[I have made this one longer only because I have not had the leisure to make it shorter — usually paraphrased as "I would have written a shorter letter, but I did not have the time".]
Constructing a concise and informative article would take skill and time. Many organizations are unwilling to pay enough for the former and ‘pushing content’ limits the latter. These pressures have always been present in journalism but seem accentuated in the current ‘attention economy’.
There are still some good articles out there.
It's all going to be OK: Sam Altman introduced ChatGPT 5 a few days ago, which he claims can now make mistakes at PhD level, with an image likening it to the Death Star... everything's going to be just fine.
So, to cut to the chase: is anyone here using AI for commercial gain? Either to increase revenue/profit, reduce costs, or reduce risk?