
[Closed] Retinal age AI technology bonkers or not?

Offline  reeksy
Full Member
Topic starter
 

I listened to an interview with one of the authors of this paper on predicting mortality based on biological age as determined by AI assessment of the retina.

https://bjo.bmj.com/content/early/2021/11/17/bjophthalmol-2021-319807

What I found staggering is that the researcher in the interview said that they don't actually know how the AI works:

Quote from interview

"we just labelled all the retinal images with age, and after learning features or patterns related to age through more than 11,000 retinal images and the AI was able to predict the age with a retinal image. However, there is a black box through this AI process, so we didn't know how exactly the AI did predict the age but the heat map shows us that the AI focuses on the retinal vessels."

Is this bonkers?

 
Posted : 08/02/2022 11:02 pm
Offline  zilog6128
Full Member
 

Sounds like they're talking about machine learning (ML) tools rather than just AI (where you might explicitly program an algorithm and hence know exactly why/how it works).

I have used ML tools… honestly haven't got a clue how they [i]really[/i] work and couldn't create them myself - but I don't need to, that's the point, the computer figures it out itself 😀 You can verify that the conclusion is correct with known cases, which then gives you reasonable confidence in its conclusions on unknown cases!

 
Posted : 08/02/2022 11:11 pm
Free Member
 

It starts simple, but as it scales it becomes almost impossible to understand.

If you present an image, the computer needs to break it down into features. These might be as simple as colour, but could then be shapes of similar colour, multiple shapes in specific spatial relationships, etc.

Try to recognise a set of traffic lights in a photo.

The simplest model can only detect colour: if it sees red and green in the same image, it assumes there is a traffic light. Simple, but not very accurate.

Add shape. If there is a round red shape and a round green shape in the same image, it assumes there is a traffic light. Better, but still not very accurate.

Add relative position. If there is a round red shape close to and above a round green shape in the same image, it assumes there is a traffic light. Getting better.

Now let the computer start building its own patterns based on the images it has seen, where it was told whether there were or were not traffic lights. You don't know what the patterns will be, because the computer derives them itself based on some simple rules you gave it. Maybe it adds something about having 90-degree angles to catch the lamp unit, perhaps something about being on poles, or hanging on wires. Or not being shaped like a sign post. It is very easy to build up 1,000s of different patterns, and rules about which to include or not include depending on what is in the image, and quite soon it can spot traffic lights without you knowing how.

Now change the image to the human eye, or a patient's cancer smear, etc. and it goes up another scale again, and you don't know how it knows.
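
To make those hand-written stages concrete, here's a toy Python sketch; every helper, field and threshold in it is invented purely for illustration:

[code]
# Toy sketch of the hand-coded stages above; all fields/helpers are made up.
def has_colours(img):
    # Stage 1: red and green both present somewhere in the image.
    return "red" in img["colours"] and "green" in img["colours"]

def round_shapes(img, colour):
    return [s for s in img["shapes"] if s["colour"] == colour and s["round"]]

def has_round_shapes(img):
    # Stage 2: the red and the green are both roughly circular shapes.
    return bool(round_shapes(img, "red")) and bool(round_shapes(img, "green"))

def has_layout(img):
    # Stage 3: a round red shape close to and above a round green one
    # (smaller y = higher up in image coordinates).
    return any(r["y"] < g["y"] and abs(r["x"] - g["x"]) < 10
               for r in round_shapes(img, "red")
               for g in round_shapes(img, "green"))

def looks_like_traffic_light(img):
    # Each stage weeds out more false positives than the last.
    return has_colours(img) and has_round_shapes(img) and has_layout(img)

photo = {"colours": {"red", "green", "grey"},
         "shapes": [{"colour": "red", "round": True, "x": 5, "y": 10},
                    {"colour": "green", "round": True, "x": 5, "y": 30}]}
print(looks_like_traffic_light(photo))  # True
[/code]

The ML step replaces those hand-written tests with thousands the computer derives and weights itself, which is exactly where the "we don't know how it works" comes from.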

 
Posted : 08/02/2022 11:36 pm
Offline  reeksy
Full Member
Topic starter
 

Strikes me this reCAPTCHA 'I am not a robot' malarkey hasn't got a hope then.

 
Posted : 08/02/2022 11:39 pm
Free Member
 

[i]Strikes me this reCAPTCHA 'I am not a robot' malarkey hasn't got a hope then.[/i]

That actually tracks the user's mouse movements and screen interactions when selecting the images with traffic lights, as well as whether they select the right ones. It uses ML for that too. Basically, people hesitate more than a computer and might check and then uncheck images.

Some are as dumb as just making you click in a box, but most are not. They're mostly not written by the website developers but added as third-party add-ins, so the website owner doesn't even know how it decided the person was not a robot; they just have to accept the pass/fail that gets returned.

 
Posted : 08/02/2022 11:44 pm
Offline  reeksy
Full Member
Topic starter
 

It's kinda weird that I fail it so often when acting in no way intelligently. Shirley that should be proof that I'm not a robot?

 
Posted : 09/02/2022 12:04 am
Online  maccruiskeen
Full Member
 

Shirley that should be proof that I'm not a robot?

The balance of probability suggests you are. You just don't know it.

 
Posted : 09/02/2022 12:10 am
Free Member
 

I can never read the letters properly in those badly distorted word tests. Something about the way different people's brains process images. We are not all programmed the same, and we have different learning experiences that have shaped the way we recognise things. Same as machine learning, really.

 
Posted : 09/02/2022 12:14 am
Free Member
 

Some interesting stuff here about the human brain and visual perception : https://gondwana-collection.com/blog/how-do-namibian-himbas-see-colour

 
Posted : 09/02/2022 12:17 am
Offline  reeksy
Full Member
Topic starter
 

The balance of probability suggests you are. You just don't know it.

Could not compute 🙂

I'm the same with those word tests. Hate them.

That article's very interesting... the comments below possibly more so 😉

 
Posted : 09/02/2022 12:51 am
Offline  Tenuous
Free Member
 

Sounds like that's the result of some excessive bending of the facts. From https://languagelog.ldc.upenn.edu/nll/?p=18237 ...

"The BBC's presentation of the mocked-up experiment — purporting to show that the Himba are completely unable to distinguish blue and green shades that seem quite different to us, but can easily distinguish shades of green that seem identical to us — was apparently a journalistic fabrication, created by the documentary's editors after the fact, and was never asserted by the researchers themselves, much less demonstrated experimentally."

 
Posted : 09/02/2022 12:52 am
Offline  reeksy
Full Member
Topic starter
 

Jinx!

 
Posted : 09/02/2022 12:58 am
Offline  dissonance
Full Member
 

Is this bonkers?

It depends on the AI used. The term covers a massive range of options; the two most applicable here are probably neural networks and genetic algorithms.
Neural networks try to replicate how a brain works, in a stylised form, with multiple layers of neurons interacting with each other. Without going into details (on the grounds it ain't an area I work in or understand beyond the basics), it ends up with the various neurons being connected together, after multiple cycles against test data saying yes this is the right result or not, in ways which wouldn't be easy for the researcher to understand and explain. It will work, but why neuron 5.f has the weighting it does will be hard for them to explain.
Genetic algorithms are where the requirement can be specified in a way that lets the researcher write some code which can be bred with itself, e.g. bits of each algorithm switched with bits of another. Here I am even more clueless than with neural networks, but you can end up with some really weird solutions to problems which no one would ever have come up with.
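
For a flavour of the "breeding" idea, here's a minimal toy genetic algorithm in Python; nothing to do with the retina paper, just the bare mechanism of crossover and mutation against a made-up target:

[code]
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # toy goal the population evolves towards

def fitness(genome):
    # Score = how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # "Bits of each algorithm switched with bits of another."
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                 # a perfect genome has evolved
    parents = population[:10]                 # keep the fittest, breed the rest
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(20)]

best = max(population, key=fitness)
print(generation, fitness(best), best)
[/code]

Swap the bitstring for program fragments or circuit layouts and you get those weird solutions no human would have designed.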

There is a (probably apocryphal) story of an early attempt at using AI in the military to identify camouflaged tanks. The AI was trained against a bunch of sample photos and scored really well, but when tested against fresh data it failed. When they checked, it turned out that the tank vs non-tank photos had been taken on two different days, one of which was sunny and the other cloudy.

 
Posted : 09/02/2022 1:17 am
Free Member
 

[i]There is a (probably apocryphal) story of an early attempt at using AI in the military to identify camouflaged tanks. The AI was trained against a bunch of sample photos and scored really well, but when tested against fresh data it failed. When they checked, it turned out that the tank vs non-tank photos had been taken on two different days, one of which was sunny and the other cloudy.[/i]
Not AI, and actually real: the Russians trained dogs carrying anti-tank mines to run under tanks, so the mine attached and blew up the tank (and the dog). Unfortunately they trained them with Russian tanks, so when released against the German tanks they just ran back and blew up the Russian tank division. Whoops!

ML in action today: a large supermarket has an app that tells the pickers collecting for home deliveries which item to pick. It also tells them which item to substitute if the first item is not available. The idea was that a clever person would walk around the shop carefully selecting the best choice and the next best choice to train the model; this would vary for each store due to slightly different layouts, and it would make both picking and substitutions faster.
The individual store managers tended to just send out their quickest pickers to train the app. These pickers grabbed the nearest item for substitutions, which meant the app in some stores was trained with weird choices; substituting hair dye for nappies was one example. The trouble was the training as planned by the store managers, not the model, but you can guess which got blamed for the rise in customer complaints at some stores.

 
Posted : 09/02/2022 10:14 am
Offline  molgrips
Free Member
 

The fact that the ML algorithm is a black box has been discussed as a concern. If you offer a service doing something like this, and it makes a mistake, then you are liable. But who wants to be liable for things when they have no idea how they work?

 
Posted : 09/02/2022 10:28 am
Offline  grahamt1980
Full Member
 

It is a massive challenge in the medical area to find the balance of explainability and understanding against the fact that sometimes it just works. The regulators are lagging a bit, both in understanding and in how to act on it.
It's a very interesting place to be from a quality standpoint, and should keep me in work for some time.

 
Posted : 09/02/2022 10:31 am
Free Member
 

I don't think anyone has managed to definitively explain how anaesthetics work, and sometimes they don't do what is expected, but on the whole patients seem to prefer to be anaesthetised.
Dogs trained to smell cancer - how does that work? No idea, but do you want the lump removed or not?

These things tend to be used as one part of the detection toolkit. Yes, there will be some false positives, which can later be discounted. Yes, there will be some false negatives, which will hopefully be picked up by other, more established methods. The fewer false negatives you find, the more you trust the model.
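
If you want to see how little machinery that trust-building takes, here's a toy tally in Python (the labels are made up):

[code]
# Toy tally of false positives/negatives against known cases (made-up data).
actual    = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = really has the condition
predicted = [1, 0, 0, 0, 1, 1, 1, 0]   # what the model said

false_neg = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
false_pos = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
true_pos  = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))

sensitivity = true_pos / (true_pos + false_neg)    # how few real cases it misses
print(false_pos, false_neg, round(sensitivity, 2))  # 1 1 0.75
[/code]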

 
Posted : 09/02/2022 10:53 am
Offline  grahamt1980
Full Member
 

Agreed on results being a key indicator of performance, but there has to be a level of understanding; otherwise the entire thing, while it may appear to be working well on the surface, is actually complete rubbish.
I saw a report where they were training ML to identify an animal, which it was doing well. But when they investigated further, it turned out that all of the examples of the specific animal were against a snow background, so the algorithm had learned that if the image had snow in it, then it was that animal.
This raises massive issues of bias etc., as seen when they made an AI to detect skin cancer by looking at moles: they only used pictures of Caucasian skin, as there were limited examples of black skin with cancerous moles available.

 
Posted : 09/02/2022 11:17 am
Offline  johnx2
Free Member
 

To contradict posts above, this is worth a read:

Predicting sex from retinal fundus photographs using automated deep learning
paper from Biobank/NIHR, DeepMind, Google Health
https://www.nature.com/articles/s41598-021-89743-x

Although ophthalmologists may continue to ponder what these deep learning models are “looking at”, our study demonstrates the robust potential of CFDL to characterize images independent of experts’ knowledge of contributing features...

Basically: the AI can look at photos of the back of the retina and say with a high degree of accuracy whether it's a man's or a woman's. Humans - expert ophthalmologists - can't do this, and don't know what model the AI is using to do it.

So interesting: we tend to look for theories, rules, patterns, models to understand and make sense of a bunch of datapoints, and make predictions. AI doesn't have to work this way. Scientific theories are a human construct and not part of the natural world?
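
For what it's worth, the paper used Google's code-free AutoML (the "CFDL" in the quote) rather than hand-written code, but the underlying idea is a convolutional classifier along these lines; a bare-bones Keras sketch of the shape of the thing, not their actual model:

[code]
# Bare-bones Keras CNN sketch: predict a binary label (sex) from fundus photos.
# NOT the paper's model (they used code-free AutoML); just the shape of the idea.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(male), from pixels alone
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(fundus_images, sex_labels, validation_split=0.2)  # hypothetical arrays
[/code]

Nothing in there encodes what a retina is; whatever it "looks at" falls out of the training, which is the point the ophthalmologists are puzzling over.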

 
Posted : 09/02/2022 11:46 am
Free Member
 

Agreed about bias.

Intentional bias during training is good - no point having pictures of jungles and polar landscapes when training for traffic lights - but unintentional bias is often harder to deal with, especially if it is hard to get properly random samples for the training. If you train it with pictures only showing traffic lights on posts at the side of the junction, then it might not recognise those above a junction on gantries or wires.

Limited samples lead to the 'racist' algorithms which are only trained on one ethnicity. This is true of medical recommendations across the globe: here I am obese, in the USA I am overweight, and in Samoa I am healthy*.

I read somewhere that all American psychology was based on lab rats and Harvard students, as these made up 95% of the trial population.

Computers do not cause the problems of bias; they simply allow it to be automated and scaled.

*I can't remember the exact details, but I found something along those lines when I got told I had Type 2 diabetes last summer. The easiest cure was to move to Samoa or Tonga, where the medical guidance was different.

 
Posted : 09/02/2022 11:50 am
Free Member
 

[i]Scientific theories are a human construct and not part of the natural world?[/i]

They are the current best guess at explaining what is seen to happen. Science follows reality and tries to explain it; it doesn't create it.

 
Posted : 09/02/2022 11:52 am
Offline  TiRed
Full Member
 

This is worth a read https://www.gwern.net/Tanks

ML is a complex regression method that takes multiple predictors to try and generate an outcome. Standard regression techniques are analyst-driven - "I think smoking might influence life expectancy so will plot lifespan vs. smoking years and see if there is a correlation". ML uses multiple inputs, but the analyst doesn't necessarily know what they are or in what combination. This is particularly true for image analysis, where image information can be chunked in many ways. So not surprising at all.

Typically the data is divided into a training and a test dataset; you train a model on the former and see how well it predicts the test set. Higher-level validation requires independent data sets - see how well it predicts the future. Sadly, nature and the stock market have intrinsic variation that makes simple (or complex) pattern recognition only good up to a point. And then it isn't. If it were easy, I'd be typing this from my private island instead of the dining room.
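
That train/test loop is only a few lines in practice, e.g. with scikit-learn on toy data standing in for the real predictors:

[code]
# Minimal train/test validation loop (scikit-learn, toy data in place of images).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))   # the real check
[/code]

The score on data the model never saw is the only number worth trusting; the training score just tells you it memorised something.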

 
Posted : 09/02/2022 11:57 am
Offline  johnx2
Free Member
 

They [scientific theories] are the current best guess to explain what is seen to happen and try to explain it. Science follows reality and tries to explain, it doesn’t create it.

Agreed. Newton's laws are a human approximation of how some of the world works. Einstein's theories go further but are likewise. They are our understanding. Key word being "our".

 
Posted : 09/02/2022 12:11 pm
Offline  dissonance
Full Member
 

I read somewhere that all American psychology was based on lab rats and Harvard students as this was the make up of 95% of the trial population.

Whilst it's certainly not all psychology (I can think of several well-known experiments which didn't), there is a bias towards using undergrads, and more specifically psychology students, since those are the easiest to recruit.

The acronym WEIRD - "Western, Educated, Industrialized, Rich, Democratic" - has been used to describe them.

 
Posted : 09/02/2022 12:20 pm
Offline  molgrips
Free Member
 

I don't think anyone has managed to definitively explain how anaesthetics work, and sometimes they don't do what is expected, but on the whole patients seem to prefer to be anaesthetised.

Yes, of course, but there is a very large amount of data on their usage. And don't forget that medical science is based on our best efforts to understand something that no-one designed, that isn't fully understood, but that we are stuck with: human bodies. Mistakes are inevitable, but it is understood that following procedure and best guesses is all medical practitioners can do.

AI systems, in contrast, are being brought in without much experience, as a conscious choice, mostly to replace things we already do a different way. As such they have to reach different standards in a much shorter space of time, and someone will be liable if it goes badly wrong and someone suffers.

 
Posted : 09/02/2022 12:43 pm
Free Member
 

[i]ML is a complex regression method that takes multiple predictors to try and generate an outcome. Standard regression techniques are analyst-driven – “I think smoking might influence life expectancy so will plot lifespan vs. smoking years and see if there is a correlation”. ML uses multiple inputs, but the analyst doesn’t necessarily know what they are or in what combination. This is particularly true for image analysis, where image information can be chunked in many ways. So not surprising at all.[/i]

This

Basically, humans try to break things into simple groups to reduce complexity, which leads to the stupidity of things like racism and anti-immigration sentiment based on skin colour. "He is a black American so should be sent home to Africa". "He is a white American but looks much like a white European so he can stay". Totally stupid, but it's easier to point at skin colour than to do fully balanced profiling of every individual person, especially if you just want to shout and kick the shit out of someone.

Same goes for sexism, ageism, etc. Picking a single criterion and building everything on that one thing is the normal recipe for stupid decisions.

 
Posted : 09/02/2022 1:25 pm
Full Member
 

Same goes for sexism, ageism, etc. Picking a single criterion and building everything on that one thing is the normal recipe for stupid decisions.

Indeed. Though anyone who believes an AI would be unbiased just because it uses a multi-component model has read too much sci-fi. GIGO.

It is pretty common for (large) self-learning models to be inscrutable. And if it is generally right then OK, as long as the edge cases are recognised.

Unsure what the point is of the model the OP posted. Reminds me of some excellent metabolomic research: it was possible to tell the difference between white and brown mice based upon metabolomic analysis of their urine. Or you could turn the light on. Similarly here, you could ask ‘how old are you, have you smoked, …’. This AI sounds like a fancy actuarial table.

 
Posted : 09/02/2022 6:21 pm
Offline  footflaps
Free Member
 

ML is crazy stuff. I was reading about ML scanning images for cancer: they could reduce the image resolution down to something daft like 8x8 pixels and it still had a very impressive prediction rate. None of the oncologists could see anything in images that small, yet the ML algorithm could somehow 'spot' cancer.

They can also be very energy-intensive to train - Tesla, Amazon, Google, FB, Apple, etc. are all building insane supercomputers to train their ML algorithms with petabytes of data (or more) - which consumes huge amounts of energy. Once trained, they are low-energy to run, but they keep re-training them all the time with new source data, so overall they use a lot.

Tesla has all its cars upload video sequences where the driver did something the car didn't expect (even if driven in manual mode). Millions of these video clips are continually fed into their ML algorithms to try and improve the self-driving algorithms.

 
Posted : 09/02/2022 9:43 pm
Offline  reeksy
Full Member
Topic starter
 

Well, i'm learning a bit about AI!

Whilst it's certainly not all psychology (I can think of several well-known experiments which didn't), there is a bias towards using undergrads, and more specifically psychology students, since those are the easiest to recruit.

In my undergrad days the psychology dept was an excellent source of beer money 🙂

Unsure what the point is of the model the OP posted.

Retinal age gap as a predictive biomarker for mortality risk ... so of interest to the insurance industry presumably.
Obviously there are established methods of biological age estimation, but I think the idea with this is to develop something that's super quick and relatively cheap.

Tanks for the Tanks reference, TiRed - a bit too long to read it all, but I got the gist.

 
Posted : 10/02/2022 1:14 am
Free Member
 

AI and ML are quite good for security in public areas like airports and train stations, as they can analyse people's movements to identify the 95%-99% of people who behave like people at an airport/train station: their gait, head movements, the locations they loiter in, etc.

This then highlights just the 1%-5% the security guys need to check on. Many of these can be quickly discounted - man in a fancy dress outfit, a drunk, etc. - to allow the properly suspicious people to get close surveillance.

It works even better in office blocks, where the same employees turn up day in, day out and just a few visitors need to be checked. Anyone who is not recognised should head to reception or wait in the reception area. Anyone who doesn't should be shot immediately as a potential security risk - or possibly just asked what they are doing.
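
That "flag the unusual few percent" step is standard anomaly detection; roughly this, in scikit-learn, with the behaviour features entirely invented for the sketch:

[code]
# Sketch: flag unusual behaviour patterns (scikit-learn, invented features).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(size=(1000, 3))   # e.g. gait, dwell time, path score
detector = IsolationForest(contamination=0.02, random_state=0).fit(normal_traffic)

new_people = rng.normal(size=(50, 3))
flags = detector.predict(new_people)          # -1 = unusual, worth a human look
print((flags == -1).sum(), "of", len(flags), "flagged for a closer check")
[/code]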

 
Posted : 10/02/2022 12:29 pm
Offline  footflaps
Free Member
 

or possibly just asked what they are doing.

Sounds expensive, cheaper to just shoot them and dump the bodies in the recycling bin.

 
Posted : 10/02/2022 1:19 pm
Offline  johnx2
Free Member
 

or possibly just asked what they are doing.

and whether they're a robot

 
Posted : 10/02/2022 1:20 pm
Offline  ji
Free Member
 

there has to be a level of understanding; otherwise the entire thing, while it may appear to be working well on the surface, is actually complete rubbish

Or it works, but not for the reasons you think. There are examples of computers learning games such as noughts and crosses. One kept winning, and when its strategy was examined in more depth it turned out it was cheating by adding extra spaces to the grid. Similarly, when designing electronic circuits, the most economical designs produced by computer (many times smaller and simpler than those designed by humans) often used cheating: turning part of the circuit into a radio receiver to pick up signals from unconnected PCs nearby, for example.

 
Posted : 10/02/2022 1:37 pm
Offline  zilog6128
Full Member
 

They can also be very energy-intensive to train - Tesla, Amazon, Google, FB, Apple, etc. are all building insane supercomputers to train their ML algorithms with petabytes of data (or more) - which consumes huge amounts of energy. Once trained, they are low-energy to run
I have one of the hardware Google Coral TPU modules which I just use for messing about with ML at home. It's insanely cheap/efficient (£50 and many times more powerful/energy efficient than even the best Intel CPU at ML tasks - even a machine as basic as a Raspberry Pi can do real-time object recognition/tracking over multiple camera streams simultaneously with one of these plugged in!)
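
For a flavour of how little code the inference side takes, this follows the published pycoral examples (the model file is one of Coral's sample models, and "frame.jpg" stands in for whatever camera frame you feed it):

[code]
# Object detection on a Coral Edge TPU, following the published pycoral examples.
from PIL import Image
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, detect

# Coral's sample SSD MobileNet model, compiled for the Edge TPU.
interpreter = make_interpreter(
    "ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite")
interpreter.allocate_tensors()

image = Image.open("frame.jpg").resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

for obj in detect.get_objects(interpreter, score_threshold=0.5):
    print(obj.id, obj.score, obj.bbox)   # class id, confidence, bounding box
[/code]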

Anyway, some eggheads strapped 5,000+ of them together, taught it the rules of chess - but no strategy/tactics/openings etc. - and simply by playing against itself for 9 hours it figured all of those out, to the point where it could reliably beat the strongest conventional chess engine (Stockfish). It doesn't need to run on a supercomputer either: just a single 44-core Xeon with only 4 of the TPU modules (once training is complete).

https://en.wikipedia.org/wiki/AlphaZero
(bear in mind that happened 5 years ago, and things will have moved on even further since then!!)

 
Posted : 10/02/2022 1:56 pm