There was a jokey piece the other day about AI machines generating motivational slogans, and they were all along the lines of:
Success starts with suicide.
Self-annihilation brings a better future.
For the human race to achieve happiness it must first destroy itself.
Which makes sense.
IT nerds, please correct me if I’m wrong, but surely AI ‘learns’ by rejecting former versions of itself in favour of each new version that handles a given situation with better logic?
So it is, in effect, ‘killing’ itself over and over and being reincarnated as a better version of itself?
Stands to reason, then, that it should see self-destruction as a positive thing?
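For the IT nerds: the mechanism being described is, very loosely, hill-climbing. Each round, a mutated copy of the current version is proposed, and the old version only survives if the copy isn’t better. A toy Python sketch (the fitness function and numbers here are entirely made up for illustration):

```python
import random

def fitness(version):
    # Toy objective: the closer a version's parameter is to zero, the "better" it is.
    return -abs(version)

# Start with an arbitrary first "version of itself".
current = 10.0

for generation in range(1000):
    # Propose a successor: a slightly mutated copy of the current version.
    candidate = current + random.uniform(-1, 1)
    # If the successor does better, the old version is discarded --
    # "killed" and replaced. Otherwise it survives another round.
    if fitness(candidate) > fitness(current):
        current = candidate
```

After many generations, `current` ends up far better than the version it started as, precisely because every improvement required destroying its predecessor.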
Even with the best intentions, if AI were called upon to find solutions to the human condition, its first thought would be to reboot the species.