Moral Machines

Stoner
Free Member

    http://moralmachine.mit.edu

I took the view that those in the vehicle were always less valuable than those outside. And I’m also speciesist, it appears.

    wobbliscott
    Free Member

    Seems like I prefer to save young women at the expense of dogs.

    mikewsmith
    Free Member

    http://moralmachine.mit.edu/results/388050087
    Take with a lb of salt….

    Of course the moral decisions are extreme in those circumstances, for most drivers they just choose not to see… that and the chances of the car being put in that situation are much much lower if done right.

    oldnpastit
    Full Member

    Those self-driving cars are very strange.

    Who has “sudden brake failure” approaching a pedestrian crossing at such a speed that there’s no option but to take out the pedestrians? Rather than engine braking, sounding the horn, and, as a last resort, veering into the kerb?

And how does it know whether the person crossing is a doctor, or just some guy? Does it connect to the phones of the pedestrians, find out their occupations, and use that to decide who to kill?

    Klunk
    Free Member

How does the car know what a villain looks like? Are they as racist as a US cop?

    torsoinalake
    Free Member

    Where is the option for “Hit the brakes”? I find that helps me to avoid a moral decision every time I reach a pedestrian crossing with people on it.

    oldnpastit
    Full Member

    Where is the option for “Hit the brakes”? I find that helps me to avoid a moral decision every time I reach a pedestrian crossing with people on it.

    In all of these fantasy scenarios, the brakes have mysteriously failed.

    I think the whole thing is total nonsense.

    DrP
    Full Member

    Basically I save doctors, and kill dogs.

    I wonder what my score on this “moral machines” thing you posted would be – similar probably…

    DrP

    mikewsmith
    Free Member

It’s possible to share your results so we can get all judgemental… Just a shame there was no ‘close your eyes’ button.
I’d also think it might be part of a psychology study in parallel…

    Daffy
    Full Member

I kill the car’s occupants, as they’ve chosen to buy and use a self-driving car. Those walking made a choice which didn’t imperil anyone.

    DezB
    Free Member

    Where is the option for “Hit the brakes”?

    It’s there, along with the option to read the bloody scenario properly.

    sparksmcguff
    Full Member

Interesting idea, but I hope they aren’t using the data for anything meaningful, as they are making certain assumptions about why choices are made which may not be correct.

    jimjam
    Free Member

This is just doubling down on the idea that no one really cares about males dying. So 20 years down the line, when all cars, lorries, buses etc. are self-driving, there’ll be no scandal if they just kill the odd man here or there. Just so long as they don’t swerve to avoid another car and plough into a crowd of children. That would be unacceptable.

    dissonance
    Full Member

Not sure breaking it down by male/female and fit/unfit etc. makes much sense.
Running it twice gave completely different results on whether I cared more about women than men, which isn’t overly surprising, since my actual rules (roughly sketched in code below) were:
• animals or people: bye bye Fido.
• people in car or people on pedestrian crossing: people in car lose.
• people on crossing vs other people on crossing: those who don’t require the car to swerve lose.
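
A rough sketch of that cascade in Python – every field name here is invented for illustration, since the real test doesn’t expose scenario data in any such form:

```python
# Hypothetical sketch of the three-rule cascade above.
from dataclasses import dataclass

@dataclass
class Outcome:
    kills_only_animals: bool  # everyone who dies is an animal
    kills_occupants: bool     # the dead are inside the car
    requires_swerve: bool     # the car must leave its lane to cause this

def pick_victims(a: Outcome, b: Outcome) -> Outcome:
    """Return the outcome the car should choose, i.e. who loses."""
    # Rule 1: animals or people. Bye bye Fido.
    if a.kills_only_animals != b.kills_only_animals:
        return a if a.kills_only_animals else b
    # Rule 2: people in car vs people on the crossing: car loses.
    if a.kills_occupants != b.kills_occupants:
        return a if a.kills_occupants else b
    # Rule 3: crossing vs crossing: those not requiring a swerve lose.
    if a.requires_swerve != b.requires_swerve:
        return b if a.requires_swerve else a
    return a  # tie: no rule distinguishes the options
```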

    Northwind
    Full Member

    oldnpastit – Member

    Who has “sudden brake failure” approaching a pedestrian crossing at such a speed that there’s no option but to take out the pedestrians? Rather than engine braking, sounding the horn, and, as a last resort, veering into the kerb?

That’s just scenario building, pretty standard: you break the decision down into the simplest version of the outcomes you’re looking for, even if that excludes viable options. It doesn’t matter that it’s not realistic; you’re looking for very specific binary decisions.

    mrchrispy
    Full Member

I better not buy a dog… poor sod wouldn’t last a day with me around.

    JAG
    Full Member

    This – not the technology – will be the biggest hurdle for autonomous cars.

    torsoinalake
    Free Member

    Not really. An autonomous car, unlike a human, won’t ignore the symptoms of failing critical systems, so is far less likely to be putting itself in a position where it needs to be making moral decisions.

Until Skynet becomes self-aware, that is. Then we’re in trouble.

    mikewsmith
    Free Member

Yep, so many “accidents” have a heap of warning signs ahead of them. The point of an autonomous car is that it doesn’t let you get to the critical one. If it does, how does it better a human’s judgement and survival instinct?

    GrahamS
    Full Member

    Hmm.. apparently I’m motivated by saving most lives and upholding the law.

Which accurately reflects the decisions I made (roughly sketched in code after the list):
    • choose the option that kills the fewest people (regardless of age/fitness/class etc)
    • if that is equal then choose the option that kills people who were breaking the law
    • if that is equal then kill the car occupants
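
Those three rules form a strict priority order, so they collapse to a single sort key. A minimal sketch, again with entirely hypothetical fields:

```python
# Hypothetical sketch: the three tie-breaks above as one sort key.
from dataclasses import dataclass

@dataclass
class Option:
    deaths: int            # how many die if the car takes this option
    victims_lawful: bool   # True if those killed were obeying the law
    kills_occupants: bool  # True if those killed are inside the car

def choose(options: list[Option]) -> Option:
    # Fewest deaths first; among ties, lawbreaking victims
    # (victims_lawful=False) sort first; among remaining ties,
    # options that kill the car's occupants sort first.
    return min(options, key=lambda o: (o.deaths, o.victims_lawful,
                                       not o.kills_occupants))
```

Sorting by a tuple gives exactly that priority order: later elements only matter when the earlier ones tie.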

    torsoinalake
    Free Member

If it does, how does it better a human’s judgement and survival instinct?

    Flashes up a big red warning on the dash “DANGER TO MORALS”?

    fasthaggis
    Full Member

Save cats, kill all humans.

    fasthaggis
    Full Member

Sorry, my cat ran across the keyboard.

    br
    Free Member

Interesting idea, but I hope they aren’t using the data for anything meaningful, as they are making certain assumptions about why choices are made which may not be correct.

    This.

    Never swerve, put the effort into stopping.

    amedias
    Free Member

    Huge pinches of salt required.

The first problem with this test is that it asks the subject to make a decision based on knowledge the machine does not possess.

*I* am told things like the ‘social value’ of the people involved; in the scenario described, the machine does not have that information.

The second problem with the test is that it assumes an outcome which cannot be known. The deaths are not certain.

The third problem is that it’s too rigidly defined: no other avoiding actions are contemplated or allowed.

    Now obviously tests like this have to constrain the parameters, but that makes it even more important not to have issue number 1 above.

My results showed that I am apparently gender neutral, age neutral, prefer to save humans, am strongly motivated to save more lives, and prioritise those outside the vehicle. Bizarrely, I also have a preference for saving fit people, which is curious, as I deliberately didn’t base any of my choices on the ‘social’ factors – I deemed them irrelevant due to point 1.

I have said it many times before, but these problems are largely theoretical, as the autonomous system will rarely if ever get into a situation where this kind of choice is required, and if it does, it will have other options available to it due to increased levels of mechanical control and sensory input.

An autonomous car doesn’t have to be perfect, it just has to be better than humans, and that in itself will lead to fewer incidents.

    The machine won’t get distracted, won’t get tired, won’t be emotional, won’t have an ego, won’t react late, won’t have poor skills, won’t take risks, won’t be poorly maintained, won’t have poor eyesight… blah blah…

    fasthaggis
    Full Member

    The machine won’t get distracted, won’t get tired, won’t be emotional, won’t have an ego, won’t react late, won’t have poor skills, won’t take risks, won’t be poorly maintained, won’t have poor eyesight… blah blah…

    Sarah Connor would approve 🙂

    Watching John with the machine, it was suddenly so clear. The machine would never stop, it would never leave him. And it would never hurt him, never shout at him. It would always be there and it would always protect him. Of all the transporters that had come and gone over the years, this thing, this machine was the only one that measured up. In an insane world, it was the sanest choice.

    thisisnotaspoon
    Free Member

I kill the car’s occupants, as they’ve chosen to buy and use a self-driving car. Those walking made a choice which didn’t imperil anyone.

Similar outcome, but I worked on the principle that crashing the car into a barrier will probably kill fewer people, because the car has crumple zones and people don’t. Although the choice aspect is probably the more important from a moral standpoint.

    GrahamS
    Full Member

    It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear! And it absolutely will not stop, ever, until you are dead or have arrived safely at your destination.

    epicyclo
    Full Member

    amedias – Member
    …The machine won’t get distracted, won’t get tired, won’t be emotional, won’t have an ego, won’t react late, won’t have poor skills, won’t take risks, won’t be poorly maintained, won’t have poor eyesight… blah blah..

    But it will be made by the same sort of people who made my last FIAT…

    Oops!

    vongassit
    Free Member

    It appears to have rumbled me for being at the end of the scale for upholding the law 😆

    oldnpastit
    Full Member

    It will be full of bugs, because that’s how software is. And even the hardware will have bugs – if you sell 10 million cars, and you have a 100ppm hardware failure rate, that’s a thousand people dying every year just from that.

The project I’m currently working on (consumer electronics, thankfully) has a roughly 1000ppm hardware failure rate (we just have to live with the customer returns).
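
The arithmetic there is just fleet size times failure rate; a quick sanity check, using the figures from the post and its (questionable) assumption that every hardware failure is fatal:

```python
# Sanity check of the ppm arithmetic above. Assumes, as the post
# does, one fatal failure per affected car per year.
fleet_size = 10_000_000   # cars sold
rate_ppm = 100            # hardware failures per million units
failures = fleet_size * rate_ppm / 1_000_000
print(failures)           # 1000.0
```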

    GrahamS
    Full Member

    if you sell 10 million cars, and you have a 100ppm hardware failure rate, that’s a thousand people dying every year just from that.

    You are assuming every failure results in death, rather than the car just refusing to start or gently slowing to a safe stop.

    But even so, a thousand deaths a year would be a pretty significant improvement on the 1,700 deaths per year we currently have in the UK.

    aracer
    Free Member

    I ignored what it said about people on the crossing breaking the law, because I don’t live in such a country. Hence I just defaulted to saving pedestrians as lawbreaking became essentially irrelevant.

    IMHO it says a lot about our society that killing pedestrians (who aren’t introducing any danger to the world) is even seen as a possible moral option. Deciding they are expendable because they’re “breaking the law” in a way which is of no threat to anybody else seems immoral.

    mikewsmith
    Free Member

    Say the car is spinning out of control, and on course to hit a crowd queuing at a bus stop. It can correct its course, but in doing so, it’ll kill a cyclist for sure. What does it do? Mercedes’s answer to this take on the classic Trolley Problem is to hit whichever one is least likely to hurt the people inside its cars. If that means taking out a crowd of kids waiting for the bus, then so be it.

    Spot the logic flaw here???

How about what happens now: the car is spinning out of control, but magically the driver, who has completed years of professional driver training, regains control in any way possible with the view spinning in front of them, and hits a cyclist… accident mate, get over it.

The point being most drivers would not get out of the catastrophic situation in any way that enables them to make that important moral decision.

It also removes the bit where the human operator, who could be a young rich kid in their first car, is far more likely to cause damage or get into the situation in the first place. The bar is being set at perfect, ignoring that we are happy for drivers to kill, maim and ruin lives through basic negligence and incompetence every single day.

    jimjam
    Free Member

    mikewsmith

The point being most drivers would not get out of the catastrophic situation in any way that enables them to make that important moral decision.

It also removes the bit where the human operator, who could be a young rich kid in their first car, is far more likely to cause damage or get into the situation in the first place. The bar is being set at perfect, ignoring that we are happy for drivers to kill, maim and ruin lives through basic negligence and incompetence every single day.

It’s not about whether people can be rash and incompetent; we know this is the case. We also know machines can (in theory) be vastly superior drivers. The issue is that someday soon a machine may well have to choose between killing person A and person B.

    How does it decide who lives and who dies? Does it decide? How do we feel about the decision?

A person losing control and causing an accidental death is one thing, but a product doing it has different implications. What if brands programme their cars with different biases to appeal to different markets? Given the choice, a Ford will save women and children over men, a Toyota will favour men, and an Audi will plough through lines of toddlers to minimise damage to the car…

It’s also part of the wider question of how we deal with real AI. When we develop super-intelligent, self-aware machines, how do we give them morality? Which version of morality, etc. etc.

    thecaptain
    Free Member

I think it’s utterly disgusting to prioritise the lives of those inside the vehicle over those outside. It’s the fast-moving vehicle that introduces the danger in the first place; of course those responsible for this should be the first to suffer the consequences.

    jimjam
    Free Member

    thecaptain – Member

I think it’s utterly disgusting to prioritise the lives of those inside the vehicle over those outside. It’s the fast-moving vehicle that introduces the danger in the first place; of course those responsible for this should be the first to suffer the consequences.

The vehicle contains a young mother and her newborn triplets. The computer-controlled vehicle they are travelling in is obeying the speed limit and maintaining a safe distance from other vehicles around it.

Oblivious to traffic, two drunks stumble into the road, having been engaged in a knife fight in an alleyway, unseen by the vehicle’s scanners. A man and a woman. The car can avoid one or the other. Or hit a lamp post. What should the car do?

    Sounds like you’re suggesting the car should decide to kill or injure its occupants by default?

    TheBrick
    Free Member

    Drones shoot the drunks.
