2.8% for NHS and teachers
poly (Free Member)
All I put in was something along the lines of “write a test for 14-year-olds on simple electrical circuits”. If you have the course spec or text, you can copy that in and ask it to generate questions based on that.
I’ve used it for this quite a bit and, although it produces the odd dud question, it’s generally pretty good and at least gives a framework. It also sometimes throws up something you wouldn’t have thought of, which can be very useful.
Oh, you actually thought those were helpful questions! I could pick fault with almost every question, but the howlers:
Q1 – “what are the three main components of a circuit” is a weird question, given that not all circuits have any of them except the wires – and then the answer is D, “all of the above”, which is four components!
Q5 and Q6 – it doesn’t tell you whether the third bulb is also in series or parallel; you have to assume.
Q9 – a stupid question: it doesn’t tell you if the bulbs are all the same etc., so you have to guess what the question setter meant.
Q10 – again, you have to guess that by “in homes” they mean your house wiring, because both series and parallel circuits are common within appliances – often both in the same appliance! Perhaps a reasonable assumption until you see answer D!
Part 2: A3 defines what a fuse is (which was not the question) and then adds “preventing a fire”, but a fuse could just as easily protect the components from damage as prevent a fire. A5 – it’s odd to me that metals and air/vacuum would not be given as legitimate examples.
Part 3: Q1 and Q2 seem to assume that all the bulbs have the same resistance – perhaps that’s reasonable, but then in Q3 they explicitly tell you that fact!
Yes, I’m being pernickety, and of course you could have given it a more specific prompt (although if this was for 14-year-olds I’d expect the switched-on ones to be just as annoying as me). What I’m not sure about is whether ChatGPT plus reviewing the results and revising the bad questions is actually a time saving over just setting good questions to start with!
Spin (Free Member)
“Oh you actually thought those were helpful questions!”
No, I’m not a physics teacher and can’t remember much of my school physics so I have no idea if they were useful.
As I said, and as you acknowledged, the input was pretty vague, so it’s no surprise if the output isn’t fantastic. It’s not some miracle device; the old adage “put crap in, get crap out” still applies.
I’ve used it in my subject (geography) with better input and found it generates helpful questions. The less good questions also generate interesting discussion. It’s also great for summarising text.
I’m really only dipping my toe into it, but I expect to make increasing use of AI in the coming years.
poly (Free Member)
“Why would you have a higher FP (false positive) or FN (false negative) rate? The whole point of AI is that it does a better job at reducing FP while not impacting FN than a human (thus freeing up more time, incidentally), otherwise why would you bother? No one should be deploying an ML model that performs worse than a human in a safety-critical scenario.”
That’s obviously the desired outcome. The reality of a lot of AI diagnostics is that either there is no “human benchmark” (so surely “early warning must be good compared to ignorance”, e.g. a wearable that detects signals in asymptomatic patients), or, to detect it earlier (which was what I was responding to), it is working with weaker “signals”, which frequently means balancing the FP/FN rates. AI will only ever work with the training set and inputs it’s given; a professional clinician can look at the patient and see stuff that gives them intuition (both that the patient is, or is not, sick).
Lots of “wellness” test stuff is presented as not being safety-critical, but it may alert a user to see a human professional. Is that good for the individual patient? Quite possibly: it may help detect disease if they have it. Is it good for the system? Let’s say my watch warns me about a pattern in my heart rate, and now I take up my GP’s time. I seem healthy, but my GP likely has no meaningful data on the accuracy of my watch, so now has to make a decision: refer me to a specialist and cover his ass, taking up more time for someone who does not seem ill, or risk being headline news in the local paper next month if I drop dead and the GP ignored my fancy AI wearable. Meanwhile Mrs Jones, who actually has some symptoms that a cardiologist would be interested in, is reading in the same local paper that it’s virtually impossible to get appointments at the GP, so puts it off…
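To put rough numbers on that FP/FN balancing act, here’s a back-of-envelope sketch – every figure (prevalence, sensitivity, specificity) is invented for illustration and describes no real device:

# An "accurate-sounding" wearable can still flood GPs with false alarms
# when the condition is rare among the wearers being screened.
users = 10_000
prevalence = 1 / 200    # assumption: 1 in 200 asymptomatic wearers have the condition
sensitivity = 0.95      # assumption: fraction of true cases the device flags
specificity = 0.95      # assumption: fraction of healthy wearers it correctly ignores

sick = users * prevalence                        # 50 people
true_pos = sick * sensitivity                    # 47.5 genuine alerts
false_pos = (users - sick) * (1 - specificity)   # 497.5 false alarms
ppv = true_pos / (true_pos + false_pos)          # ~0.09

print(f"{true_pos + false_pos:.0f} alerts, only {ppv:.0%} of them real")

So roughly nine out of ten wearable-prompted GP visits would be false alarms, even from a device that sounds 95% “accurate”.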
Do I believe AI has the potential to have a huge positive impact in healthcare – absolutely. Do I think that it’s suddenly going to be diagnosing people much earlier – no, I’m afraid not. Now, it might speed up drug discovery, identification of diagnostic biomarkers (particularly combinations of biomarkers), patient admin tasks, avoiding errors from repetitively staring at data, even alerting to errors or overlooked issues. But despite the hype, AI is really hard to train on diagnostics, and earlier diagnostics are harder still, as finding good data where people definitely do or don’t have the disease state at an early stage is non-trivial – even more so when you need to correct for age, gender, ethnicity, diet, weight, fitness and other factors. And if you believe it will be quick, you’ve clearly never worked in the world of regulated medical device development…
tonyf1 (Free Member)
A lot to unpack, but here goes.
1. AI Diagnostic Challenges:
– Early detection is complex due to weak signals and the need to balance false positives and false negatives
– AI is constrained by its training data and inputs, unlike human clinicians who can use intuition
2. Potential Unintended Consequences:
– Wellness technology might prompt unnecessary medical consultations
– GPs face difficult decisions about how to respond to AI-generated alerts
- Systemic inefficiencies could emerge, diverting resources from patients with more urgent needs
3. Realistic Perspective on AI in Healthcare:
– The author believes AI has significant potential but is skeptical about claims of dramatically earlier disease diagnosis
- More promising near-term applications include:
  - Speeding up drug discovery
  - Identifying diagnostic biomarkers
  - Improving patient administration
  - Reducing human error in data analysis
4. Major Obstacles:
– Obtaining high-quality training data for early-stage disease diagnosis is extremely challenging
– Need to account for numerous variables like age, gender, ethnicity, diet, fitness
– Medical device development is a slow, highly regulated process
The core message is one of cautious optimism: AI has transformative potential in healthcare, but current expectations often outpace technological and regulatory realities.
I think we are agreeing, BTW. Not trying to be a smart arse, but to demonstrate how useful AI can be at distilling meaning.
Edukator (Free Member)
The ex-physics teacher in me says 6B is the wrong answer, because the battery has an internal resistance and the voltage of the battery will decrease as the load increases: the bulbs will all dim a bit as you add more in parallel.
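A minimal illustration of that internal-resistance effect, with invented numbers (a hypothetical 6 V supply with 0.5 Ω internal resistance and identical 6 Ω bulbs – none of these values come from the quiz):

emf = 6.0          # assumed EMF of the supply, volts
r_internal = 0.5   # assumed internal resistance, ohms
r_bulb = 6.0       # assumed resistance of one bulb, ohms

for n in (1, 2, 3):
    r_load = r_bulb / n                        # n identical bulbs in parallel
    current = emf / (r_load + r_internal)      # total current drawn from the supply
    v_terminal = emf - current * r_internal    # voltage actually across the bulbs
    print(f"{n} bulb(s): {v_terminal:.2f} V at the terminals")
# 1 bulb: 5.54 V, 2 bulbs: 5.14 V, 3 bulbs: 4.80 V – every bulb dims a little each time one is added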
Madame Edukator noticed kids using AI for homework. The thing is that AI is shit and it’s really obvious. Kids soon learn that Madame Edukator is more intelligent than AI.
MoreCashThanDash (Full Member)
Looking like a better deal this morning with inflation at 2.6%…
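(Back-of-envelope, assuming inflation stays at 2.6%: 1.028 / 1.026 ≈ 1.002, so the 2.8% award works out at roughly a 0.2% rise in real terms.)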
poly (Free Member)
Edukator – there is no explicit mention of a battery in Q6 – other power supplies are available!
Flaperon (Full Member)
“Do I believe AI has the potential to have a huge positive impact in healthcare – absolutely. Do I think that it’s suddenly going to be diagnosing people much earlier – no, I’m afraid not.”
I can see multiple practical uses for AI in healthcare. I’d be very happy with my GP conversation being monitored by an LLM, which effectively acts as a third person in the discussion who only speaks up if they have legitimate concerns that something obvious is being missed.
Offload the time-consuming taking of history to the computer, which can present it in a standardised format prior to the appointment, with concerns flagged. Let the doctors do the difficult bit of decision making, and after the appointment the LLM can chat forever about next steps / timescales / risks / the weather / etc. A soundproof booth with a virtual doctor that you visit after the appointment would work nicely.
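As a rough sketch of what that “standardised format with concerns flagged” might look like – every field name here is invented for illustration, not taken from any real system:

from dataclasses import dataclass, field

@dataclass
class PreAppointmentHistory:
    # Hypothetical structure for an LLM-taken history; all fields are made up.
    presenting_complaint: str
    duration: str
    medications: list[str] = field(default_factory=list)
    flagged_concerns: list[str] = field(default_factory=list)  # items the GP should see first

history = PreAppointmentHistory(
    presenting_complaint="intermittent palpitations",
    duration="three weeks",
    medications=["salbutamol"],
    flagged_concerns=["family history of arrhythmia"],
)
print(history.flagged_concerns)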
In my industry we tend to use automation where we can, because it frees up cognitive capacity for decision making. If I were designing this I’d be having the conversation with GPs, finding out which bit of the job they hate, and dumping that on a computer.
poly (Free Member)
“I can see multiple practical uses for AI in healthcare. I’d be very happy with my GP conversation being monitored by an LLM, which effectively acts as a third person in the discussion who only speaks up if they have legitimate concerns that something obvious is being missed.”
Would you be happy if Google / Microsoft / some other healthco was “listening in” to your medical conversation?
Whilst I can see the attraction of an LLM “overseeing” the Dr and highlighting possible errors: Alexa can’t even tell the difference between “15 minutes” and “50 minutes” half the time, and I’ve never seen “AI” meeting minutes that were error-free, so I’m not sure I’d want the overseer to be hard of hearing!
What if the AI was saying, “Doc, probably don’t order those tests – they are expensive and there’s a <5% chance the patient has that”?
“Offload the time-consuming taking of history to the computer, which can present it in a standardised format prior to the appointment, with concerns flagged.”
IANAD – but is the time-consuming taking of a history actually a problem, or is it how you build patient rapport? Is it just about what they say, or the way that they say it?
ceepers (Full Member)
The thing is, taking a history is not just about the words: it’s about body language; it’s about the experience of knowing how people say one thing but may mean another; it’s about knowing how to ask each person the right (subtly different) questions to get the answers you want.
It’s also sometimes about the signs you can see by observing the patient – something that even telephone conversations with real human doctors don’t allow for.
Non-medical people often assume that diagnosis is a clear-cut thing. It can be, but it can also be about gut feelings based on years of experience. This is what experienced GPs do very well.