I took the view that those in the vehicle were always less valuable than those outside. And I'm also speciesist, it appears.
Seems like I prefer to save young women at the expense of dogs.
http://moralmachine.mit.edu/results/388050087
Take with a lb of salt....
Of course the moral decisions are extreme in those circumstances; for most drivers, they just choose not to see... That, and the chances of the car being put in that situation are much, much lower if it's done right.
Those self-driving cars are very strange.
Who has "sudden brake failure" approaching a pedestrian crossing at such a speed that there's no option but to take out the pedestrians? Rather than engine braking, sounding the horn, and, as a last resort, veering into the kerb?
And how does it know whether the person crossing is a doctor, or just some guy? Does it connect to the phones of the pedestrians, find out their occupations, and use that to decide who to kill?
[url=https://en.wikipedia.org/wiki/Trolley_problem]Trolley Problem[/url]
How does the car know what a villain looks like? Are they as racist as a US cop?
Where is the option for "Hit the brakes"? I find that helps me to avoid a moral decision every time I reach a pedestrian crossing with people on it.
[i]Where is the option for "Hit the brakes"? I find that helps me to avoid a moral decision every time I reach a pedestrian crossing with people on it.[/i]
In all of these fantasy scenarios, the brakes have mysteriously failed.
I think the whole thing is total nonsense.
Basically I save doctors, and kill dogs.
I wonder what my score on this "moral machines" thing you posted would be - similar probably...
DrP
It's possible to share your results so we can get all judgemental... Just a shame there was no "close your eyes" button.
I'd also think it might be part of a psychology study in parallel...
I kill the car's occupants as they've chosen to buy and use a self-driving car. Those walking made a choice which didn't imperil anyone.
Where is the option for "Hit the brakes"?
It's there, along with the option to [i]read the bloody scenario properly[/i].
Interesting idea, but I hope they aren't using the data for anything meaningful, as they are making certain assumptions about why choices are made which may not be correct.
There's just doubling down on the idea that no one really cares about males dying. So 20 years down the line, when all cars, lorries, buses etc. are self-driving, there'll be no scandal if they just kill the odd man here or there. Just so long as they don't swerve to avoid another car and plough into a crowd of children. That would be unacceptable.
Not sure breaking it down via male/female and fit/unfit etc makes much sense.
Running it twice gave completely different results on whether I cared more about women than men, which isn't overly surprising since my actual rules were (rough sketch below):
• animals or people: bye bye Fido.
• people in car or people on pedestrian crossing: people in car lose.
• people on crossing vs other people on crossing: those who don't require the car to swerve lose.
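For what it's worth, those three rules are simple enough to write down. A rough Python sketch, where the Group fields are entirely made up for illustration (the real test just shows you pictures):
[code]
# Hypothetical encoding of the three rules above; the fields are
# invented, since the Moral Machine doesn't expose data like this.
from dataclasses import dataclass

@dataclass
class Group:
    is_human: bool          # people rather than animals
    in_car: bool            # vehicle occupants rather than pedestrians
    in_straight_path: bool  # hitting them requires no swerve

def who_loses(a: Group, b: Group) -> Group:
    """Return the group the car hits under the three rules."""
    # Rule 1: animals or people -> bye bye Fido.
    if a.is_human != b.is_human:
        return b if a.is_human else a
    # Rule 2: people in car vs people on the crossing -> car loses.
    if a.in_car != b.in_car:
        return a if a.in_car else b
    # Rule 3: those who don't require the car to swerve lose.
    return a if a.in_straight_path else b
[/code]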
oldnpastit - Member
Who has "sudden brake failure" approaching a pedestrian crossing at such a speed that there's no option but to take out the pedestrians? Rather than engine braking, sounding the horn, and, as a last resort, veering into the kerb?
That's just scenario building, pretty standard: you break the decision down into the simplest version of the outcomes you're looking for, even if that excludes viable options. It doesn't matter that it's not realistic; you're looking for very specific binary decisions.
I better not buy a dog... poor sod wouldn't last a day with me around.
This - not the technology - will be the biggest hurdle for autonomous cars.
Not really. An autonomous car, unlike a human, won't ignore the symptoms of failing critical systems, so is far less likely to be putting itself in a position where it needs to be making moral decisions.
Until Skynet becomes self aware that is. Then we're in trouble.
Yep, so many "accidents" have a heap of warning signs ahead of them. The point of autonomous is it doesn't let you get to the critical one. If it does, how does it better a human's judgement and survival instinct?
Hmm.. apparently I'm motivated by saving most lives and upholding the law.
Which accurately reflects the decisions I made (sketched in code below):
• choose the option that kills the fewest people (regardless of age/fitness/class etc)
• if that is equal then choose the option that kills people who were breaking the law
• if that is equal then kill the car occupants
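Since the rules are strictly ordered, that's just lexicographic tie-breaking. A minimal sketch, assuming a made-up option record with `deaths`, `victims_breaking_law` and `victims_are_occupants` fields:
[code]
# Illustrative only: Python compares tuples element by element,
# so min() applies the three rules in priority order.
def choose(options):
    return min(
        options,
        key=lambda o: (
            o["deaths"],                    # 1. fewest people killed
            not o["victims_breaking_law"],  # 2. then prefer lawbreakers
            not o["victims_are_occupants"], # 3. then prefer occupants
        ),
    )

straight = {"deaths": 2, "victims_breaking_law": True,  "victims_are_occupants": False}
swerve   = {"deaths": 2, "victims_breaking_law": False, "victims_are_occupants": True}
print(choose([straight, swerve]))  # straight: equal deaths, so jaywalkers lose
[/code]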
[i]If it does, how does it better a human's judgement and survival instinct?[/i]
Flashes up a big red warning on the dash: "DANGER TO MORALS"?
[b]Save cats, kill all humans.[/b]
[i]Sorry, my cat ran across the keyboard.[/i]
[i]Interesting idea, but I hope they aren't using the data for anything meaningful, as they are making certain assumptions about why choices are made which may not be correct.[/i]
This.
Never swerve, put the effort into stopping.
Huge pinches of salt required.
The first problem with this test is that it asks the subject to make a decision based on knowledge the machine does not possess.
*I* am told things like the 'social value'; in the scenario described, the machine does not have that information.
The second problem with the test is that it assumes an outcome which cannot be known. The deaths are not certain.
The third problem is that it's too rigidly defined: no other avoiding actions are contemplated or allowed.
Now obviously tests like this have to constrain the parameters, but that makes it even more important not to have issue number 1 above.
My results showed that I am apparently gender neutral, age neutral, prefer to save humans, am strongly motivated to save more lives but prioritise those outside the vehicle, and bizarrely have a preference for saving fit people, which is curious as I [i]deliberately[/i] didn't make any of my choices based on any of the 'social' factors, as I deemed them irrelevant due to point 1.
I have said it many times before, but these problems are largely theoretical, as the autonomous system will rarely if ever get into a situation where this kind of choice is required, and if it does it will have other options available to it due to increased levels of mechanical control and sensory input.
An autonomous car doesn't have to be perfect, it just has to be better than humans, and that in itself will lead to fewer incidents.
The machine won't get distracted, won't get tired, won't be emotional, won't have an ego, won't react late, won't have poor skills, won't take risks, won't be poorly maintained, won't have poor eyesight... blah blah...
[i]The machine won't get distracted, won't get tired, won't be emotional, won't have an ego, won't react late, won't have poor skills, won't take risks, won't be poorly maintained, won't have poor eyesight... blah blah...[/i]
Sarah Connor would approve 🙂
[i]Watching John with the machine, it was suddenly so clear. The machine would never stop, it would never leave him. And it would never hurt him, never shout at him. It would always be there and it would always protect him. Of all the transporters that had come and gone over the years, this thing, this machine was the only one that measured up. In an insane world, it was the sanest choice.[/i]
[i]I kill the car's occupants as they've chosen to buy and use a self-driving car. Those walking made a choice which didn't imperil anyone.[/i]
Similar outcome, but I worked on the principle that crashing the car into a barrier will probably kill fewer people, because the car has crumple zones and people don't. Although the choice aspect is probably the more important from a moral standpoint.
[i]It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear! And it absolutely will not stop, ever, until you are dead or have arrived safely at your destination.[/i]
amedias - Member
...The machine won't get distracted, won't get tired, won't be emotional, won't have an ego, won't react late, won't have poor skills, won't take risks, won't be poorly maintained, won't have poor eyesight... blah blah..
But it will be made by the same sort of people who made my last FIAT...
Oops!
Mercedes are working on this apparently:
https://www.fastcoexist.com/3064539/self-driving-mercedes-will-be-programmed-to-sacrifice-pedestrians-to-save-the-driver
It appears to have rumbled me for being at the end of the scale for upholding the law 😆
It will be full of bugs, because that's how software is. And even the hardware will have bugs - if you sell 10 million cars, and you have a 100ppm hardware failure rate, that's a thousand people dying every year just from that.
The project I'm currently working on (consumer electronics, thankfully) has a roughly 1000ppm hardware failure rate (we just have to live with the customer returns).
[i]if you sell 10 million cars, and you have a 100ppm hardware failure rate, that's a thousand people dying every year just from that.[/i]
You are assuming every failure results in death, rather than the car just refusing to start or gently slowing to a safe stop.
But even so, a thousand deaths a year would be a pretty significant improvement on the 1,700 deaths per year we currently have in the UK.
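The back-of-envelope arithmetic is easy enough to check. A quick sketch (treating every failure as fatal, which, as above, is the pessimistic worst case):
[code]
# Sanity-check of the figures in the posts above.
fleet = 10_000_000     # cars sold
failure_rate = 100e-6  # 100 ppm hardware failure rate
fatal_failures = fleet * failure_rate
print(fatal_failures)  # 1000.0, assuming every failure kills someone

uk_road_deaths = 1_700  # current UK deaths per year, per the post above
print(fatal_failures < uk_road_deaths)  # True: still an improvement
[/code]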
[quote=GrahamS]• if that is equal then choose the option that kills people who were breaking the law
• if that is equal then kill the car occupants[/quote]
I ignored what it said about people on the crossing breaking the law, because I don't live in such a country. Hence I just defaulted to saving pedestrians as lawbreaking became essentially irrelevant.
IMHO it says a lot about our society that killing pedestrians (who aren't introducing any danger to the world) is even seen as a possible moral option. Deciding they are expendable because they're "breaking the law" in a way which is of no threat to anybody else seems immoral.
Say the car is[b] spinning out of control[/b], and on course to hit a crowd queuing at a bus stop. [b]It can correct its course[/b], but in doing so, it'll kill a cyclist for sure. What does it do? Mercedes's answer to this take on the classic Trolley Problem is to hit whichever one is least likely to hurt the people inside its cars. If that means taking out a crowd of kids waiting for the bus, then so be it.
Spot the logic flaw here???
How about what happens now: the car is spinning out of control, but magically the driver, who has completed years of professional driver training, regains control in any way possible with the view spinning in front of them and hits a cyclist... accident mate, get over it.
The point being most drivers would not get out of the catastrophic situation in any way that enables them to make that important moral decision.
It also removes the bit where the human operator, who could be a young rich kid in their first car, is far more likely to cause damage or get into the situation in the first place. The bar is being set at perfect, ignoring that we are happy for drivers to kill, maim and ruin lives through basic negligence and incompetence every single day.
mikewsmith - Member
The point being most drivers would not get out of the catastrophic situation in any way that enables them to make that important moral decision.
It also removes the bit where the human operator, who could be a young rich kid in their first car, is far more likely to cause damage or get into the situation in the first place. The bar is being set at perfect, ignoring that we are happy for drivers to kill, maim and ruin lives through basic negligence and incompetence every single day.
It's not whether people can be rash and incompetent; we know this is the case. We also know machines can (in theory) be vastly superior drivers. The issue is that someday soon a machine may well have to choose between killing person A and person B.
How does it decide who lives and who dies? Does it decide? How do we feel about the decision?
A person losing control and causing an accidental death is one thing, but a product doing it has different implications. What if brands programme their cars with different biases to appeal to different markets? Given the choice a Ford will save women and children over men, a Toyota will favour men and an Audi will plough through lines of toddlers to minimise damage to the car......
It's also part of the wider question of how we deal with real AI. When we develop super-intelligent, self-aware machines, how do we give them morality? What version of morality, etc. etc.
I think it's utterly disgusting to prioritise the lives of those inside the vehicle over those outside. It's the fast-moving vehicle that introduces the danger in the first place, of course those responsible for this should be the first to suffer the consequences.
thecaptain - Member
I think it's utterly disgusting to prioritise the lives of those inside the vehicle over those outside. It's the fast-moving vehicle that introduces the danger in the first place, of course those responsible for this should be the first to suffer the consequences.
The vehicle contains a young mother and her newborn triplets. The computer-controlled vehicle they are travelling in is obeying the speed limit and maintaining a safe distance from other vehicles around it.
Oblivious to traffic, two drunks stumble into the road, having been engaged in a knife fight in an alleyway, unseen by the vehicle's scanners. A man and a woman. The car can avoid one or the other. Or hit a lamp post. What should the car do?
Sounds like you're suggesting the car should decide to kill or injure its occupants by default?
Drones shoot the drunks.
thecaptain - Member
I think it's utterly disgusting to prioritise the lives of those inside the vehicle over those outside. It's the fast-moving vehicle that introduces the danger in the first place, of course those responsible for this should be the first to suffer the consequences.
The whole marking system is flawed. I based all of my answers on whether the people were crossing the road legally (green light); I didn't read any of the character bios. Simply, if they had right of way then the car passengers died. Choosing the outcome this way, apparently I chose fit people as a priority.
Where I couldn't choose between legal or not, I chose to aim at the people most able to get out of the way if I sounded the horn.
As mentioned, it's impossible to know if the people on the crossing were criminals (unless we're talking cartoon world and they're walking around with 'swag' written on their bags).
Very poor test for brainiacs of MIT to host
Strange test. I couldn't even answer the first question as to who I would most like to kill.
The reality is that at the time I would not have enough time to process the information and would just slam on and hope for the best.
[i]Very poor test for brainiacs of MIT to host[/i]
If it works the way that most psychology experiments work it is probably a very good test for something else entirely, which is cunningly disguised as a very poor test for something irrelevant to the outcome. 🙂
Though really I think they just did it as a talking point.
I just took it as being a test of the test - I'm making notes on what preferences it thinks I have on something which didn't play any part in my decision making process.
I took the view that if the car is travelling so fast that it can't stop/slow down in time to avoid a fatal crash with a large concrete block *in its own lane*, all the occupants deserve to die.
And now I'm told that I'm a child-killer...
