  • How long before a self drive car kills a cyclist?
  • ti_pin_man
    Free Member

    Food for thought self drive car versus cyclist article

    Interesting, but I think the most interesting question it raises is that of making a choice: does the self-drive car choose to protect its passengers, hit a cyclist, hit a pedestrian, or take some other action, and how does it make that decision? Based on what coding? Perhaps it’s a functionality the user chooses at set-up… errrr, I want to protect the passengers, then let’s see… then check the ages of the cyclist and the pedestrian and kill the oldest one.

    If the car has the chance to choose, in what order does it decide? Manufacturers are bound to protect passengers, but then who?

    slowoldman
    Full Member

    OK, let’s go back for a moment to you driving the car. What decision would you take?

    stevemuzzy
    Free Member

    The logical and cold answer, I would say, is whoever is at fault. If everyone were following the rules of the road, no accidents would happen… it’s still one heck of an argument…

    amedias
    Free Member

    haven’t we already done this discussion recently?

    The default answer is that a self-driving car of sufficiently advanced technology should be a lot less likely to get into a ‘no-win’ scenario. It would be both more ‘aware’ and able to take mitigating steps before it happens, and in more control when something sufficiently unexpected does go wrong.

    If it does end up in such a situation then the default is to try to bring the vehicle to a safe stop or avoid impact, i.e. do whatever a human would do, only better. The additional benefit of having other self-driving cars is that hopefully the other vehicles in the area would also be doing the same thing.

    Whenever this is trotted out it ends up degenerating into ever more hypothetical scenarios and constructed no-win situations trying to force a decision that simply wouldn’t be made. Whatever you program the software to do, you’re doing it based on known input, and such ethical decisions about who to ‘kill’ would rely on data that isn’t available. It would simply come down to trying to avoid the collision in the first place, and if that is impossible, having the collision at the slowest possible speed.

    The software would not ‘decide’ to kill anyone. It may kill or injure someone through failing to stop quickly enough or to perform a manoeuvre, but the two are not the same thing.

    And always, refer back to point 1 “a self driving car of sufficiently advanced technology should be a lot less likely to get into a ‘no-win’ scenario in the first place”
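
    To spell that out, here’s a toy sketch (my own invention, obviously not anyone’s real control code): the only inputs are speeds, decelerations and distances, so there’s nothing for a ‘who to kill’ decision to even hang off.

        import math

        def impact_speed(u, decel, distance):
            # v^2 = u^2 - 2*a*s under constant braking; zero if we stop in time
            v_sq = u * u - 2 * decel * distance
            return math.sqrt(v_sq) if v_sq > 0 else 0.0

        def plan_emergency_response(speed, options):
            # options: (name, achievable deceleration m/s^2, clear distance m)
            # for each candidate manoeuvre. Pick whichever meets the obstacle
            # slowest; a collision-free option scores 0 and always wins.
            return min(options, key=lambda o: impact_speed(speed, o[1], o[2]))

        # ~30mph, obstacle 8m ahead in lane, or 20m of room if we swerve
        print(plan_emergency_response(13.4, [("brake in lane", 8.0, 8.0),
                                             ("swerve and brake", 6.0, 20.0)]))
        # -> picks the swerve, the only option with zero impact speed

    Note what isn’t in there: any input for who or what the obstacle is.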

    lemonysam
    Free Member

    This is not a new question and it’s one which highlights many of the psychological and emotional difficulties relating to ethical reasoning:
    https://en.wikipedia.org/wiki/Trolley_problem

    scotroutes
    Full Member

    Zeroth Law

    jimdubleyou
    Full Member

    I suspect we will have teleportation devices before we have a computer that can make an ethical decision on its own.

    ahwiles
    Free Member

    it’s an interesting situation, but…

    1) the self-drive-car will not be speeding
    2) the sdc will be paying attention
    3) the sdc won’t panic

    ‘Stay on route, and brake hard’ isn’t far from a simple solution that’ll reduce ‘kills’ to merely ‘injures’.

    the car industry can have that protocol for free, it’s my gift to humanity.

    STATO
    Free Member

    It’s simple: it will never decide, it’s not an AI.

    If there is something in the way and parameters allow swerving as one option for avoiding the accident, it will swerve.
    If it cannot swerve, due to obstacles elsewhere, it will take action to stop as quickly as possible while staying in lane.

    It is no more complex than that: if little Jonny has wandered into the road, the designers have made the best reasonable efforts to have the car stop to prevent the accident. Anything more than this is unreasonable and would not be expected (and is not expected) of existing drivers.

    [ahwiles beat me to it]
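
    In rough pseudo-Python, the whole ‘decision’ is about this deep (names invented):

        def avoid(obstacle_ahead, swerve_path_clear):
            # no AI, no ethics table: just an ordered list of fallbacks
            if not obstacle_ahead:
                return "carry on"
            if swerve_path_clear:
                return "swerve around it"
            return "maximum braking, stay in lane"

        print(avoid(True, False))   # -> maximum braking, stay in lane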

    Bez
    Full Member

    Yup, basic trolley problem, assuming it’s coded to make the sort of evasive manoeuvre that might introduce the trolley problem.

    But what’s possibly worth noting is that even if such manoeuvres are coded as available decisions, a self-driving car (or, rather, its designer) is arguably less likely to find itself facing the trolley problem in the first place, because thus far they’ve been coded to be cautious and law-obeying (though having to co-exist both physically and politically with angry, impatient, error-prone humans means that the former is under threat).

    STATO
    Free Member

    amedias – Member
    haven’t we already done this discussion recently?

    Yes, months ago when the article about the fixie rider confusing the Google car first came out. This is just a paranoid scaremongering article from ‘the legal and policy specialist for The League of American Bicyclists’. Says it all really that he thinks cars should choose to crash into each other to avoid cyclists.

    ti_pin_man
    Free Member

    ahhh the sweet smell of optimism that owners won’t ‘up-chip’ their self-drive cars, that cars can’t be hacked, that AI won’t be with us and that human brains won’t write the code in these machines… your faith is reassuring 😉

    Bez
    Full Member

    Hence “arguably” 🙂

    amedias
    Free Member

    Says it all really that he thinks cars should choose to crash into each other to avoid cyclists.

    Indeed, I’d rather they chose not to crash at all, something they’d be ideally suited to do 😉 And if they must crash (due to what exactly?*) that they do it as slowly as possible.

    We must stop attributing to SDCs the same fallibilities as their current meat-based overlords.

    *The only really likely situations are things like unexpected mechanical failure (remember, they may be able to detect this in advance) causing a rapid change of direction, in which case having faster reflexes and better control than a human seems to be a good thing, and at least better than a human driver. This will also be a lot less likely to result in catastrophe if all the other cars nearby are also SDCs and can respond similarly quickly.

    Bez
    Full Member

    if they must crash (due to what exactly?*)

    Deer etc. leaping out of bushes? But yes, this is the point I was making above: the self-driving car represents an opportunity to largely avoid the scenario where the driver has added all but the final one or two ingredients of a collision (i.e. a more general form of Karr’s Choice).

    amedias
    Free Member

    Exactly Bez, and I think we’re on the same page there.

    You also might hope that a decent SDC might even have been aware of the deer and slowed down, in much the same way you or I might have if we could see through bushes.

    ahhh the sweet smell of optimism that owners won’t ‘up-chip’ their self-drive cars, that cars can’t be hacked,

    Well, those two points put the ‘decision’ and blame firmly in the hands of the owner/hacker and not the SDC, don’t they?

    If/when we get AI good enough to make the ethical decisions in our cars about who to kill, then I think humanity will be doomed much more directly, but that’s a whole different kettle of fish and the singularity is worthy of a thread on its own!

    downshep
    Full Member

    The human brain that writes the code that drives the car will be in a very different mood from the human brain that’s driving while distracted by kids in the car, late for an appointment, etc. There’ll be oodles of scenario testing and legal department involvement before companies market such machines. Personally, I’d much rather share the road with a computer-controlled car than white van man heading back to the depot to get changed for the pub.

    HoratioHufnagel
    Free Member

    It does appear to be scaremongering somewhat. Especially this bit…

    Some self-driven cars have trouble identifying a stopped bicycle to begin with. A report from Bicycling.com showed that, when faced with a stationary bike rider, Google’s own self-driving car was unsure whether the cyclist was a pedestrian or something else.

    Which is why they are test vehicles in prototype form, 10 years or so away from real-world use without a supervising driver.

    The final algorithms will be refined over the equivalent of trillions of miles or so of test footage.

    ahhh the sweet smell of optimism that owners won’t ‘up-chip’ their self-drive cars, that cars can’t be hacked,

    Surely you won’t “own” the car?? I think we need to stop thinking of autonomous cars as “cars” in the current sense at all, and simply view them as a new and different form of transport that will be used entirely differently.

    amedias
    Free Member

    The final algorithms will be refined over the equivalent of trillions of miles or so of test footage.

    And the brilliant thing about that is that every vehicle can learn from the collective experiences of the others too.

    Imagine if that worked for humans… you learn to identify a new scenario/threat/situation, and minutes or hours later EVERY other driver in the world also now knows how to identify that scenario and act accordingly.
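
    A toy illustration of the point (everything here invented, not how any real fleet works):

        # one car flags a new scenario; every other car gets it on its next sync
        shared_library = {}   # stands in for the manufacturer's central store

        def report(signature, learned_response):
            shared_library[signature] = learned_response

        def sync(local_library):
            local_library.update(shared_library)
            return local_library

        report("cyclist trackstanding at junction", "treat as stationary, pass slowly")
        print(sync({}))   # a brand-new car already 'knows' the fixie scenario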

    brooess
    Free Member

    +1 that the situation won’t arise in the first place, therefore it’s a false scenario.

    Mind you, I’d like to see something more scientific/empirical about this particular debate, cos it’s an important one if SDCs are to become the default form of transport, which would represent cycling paradise.

    I’m pretty sure it’ll get sorted – the size of the prize for Google, Apple and other tech companies to take the entire car manufacturing industry to pieces is so big you can bet they’ll keep working at it till they crack it. The legal liability of getting it wrong will be too high too, hence they’ll find a resolution… it’s one of the good things about capitalism IMO: when the money’s there for the taking, all barriers will be dealt with.

    mattyfez
    Full Member

    It would be very exceptional circumstances. The odds would be staggeringly small.

    Echoing the above: they will be doing 30 max in a 30 zone, and modern cars can stop exceptionally fast at that speed. It’s failing to observe potential hazards, driver distraction etc. that’s the main issue.

    A self-driving car is paying 100% attention 100% of the time.
    A self-driving car’s reactions can be measured in milliseconds rather than seconds.

    It will never be fazed by tiredness or crying children in the back seat, and it will never get annoyed at the actions of other road users, become frustrated and have its judgement clouded.

    Chances are it would have clocked a potential hazard before a human generally does, and according to its programming it will probably have dropped its speed from 30mph to 20mph, massively increasing its ability to react in time, and at a slower speed.

    If these cars communicate with each other via a central database over wifi or the internet, other cars connected to the system would be prewarned of any potential issues on the road ahead by reports from vehicles further ahead performing evasive action or having to brake hard.
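
    Something like this, very roughly (all names invented, just to show the shape of it):

        from dataclasses import dataclass

        @dataclass
        class HazardReport:
            road_id: str
            position_m: float   # how far along the road it happened
            kind: str           # e.g. "hard braking", "evasive action"

        reports = []   # stands in for the central database

        def broadcast(report):
            reports.append(report)

        def advised_speed_mph(road_id, my_position_m):
            # drop to 20 early if anyone ahead of us has reported trouble
            ahead = [r for r in reports
                     if r.road_id == road_id and r.position_m > my_position_m]
            return 20 if ahead else 30

        broadcast(HazardReport("A30", position_m=850.0, kind="hard braking"))
        print(advised_speed_mph("A30", my_position_m=400.0))   # -> 20, well before line of sight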

    Also, I believe self-driving cars would eliminate a lot of congestion, as there would be no late braking, which echoes back through the traffic causing a stop/start rubber-band effect. Traffic flow would be much smoother, which, whilst causing the odd journey to take a tiny bit longer, would more than likely decrease the average journey time.
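
    And a crude toy model of that rubber-band effect (numbers plucked from the air purely to illustrate): a driver who reacts late has to brake harder than the car in front, so each car’s dip gets deeper until somebody stops dead, whereas near-instant reactions keep the dip almost constant.

        def braking_wave(lead_speed_mph, overreaction_mph, n_cars=10):
            # each follower dips 'overreaction' mph below the car ahead;
            # overreaction is a crude stand-in for human reaction delay
            min_speeds = [lead_speed_mph]
            for _ in range(n_cars - 1):
                min_speeds.append(max(0.0, min_speeds[-1] - overreaction_mph))
            return min_speeds

        print(braking_wave(25, 3))     # human-ish: the tenth car stops dead
        print(braking_wave(25, 0.2))   # SDC-ish: the wave barely deepens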

    mrchrispy
    Full Member

    I’m with the ‘won’t happen in the first place’ camp.
    A human driver would see a danger, just push on and hope for the best, and then say “it all happened so fast, there was nothing I could do”; the SDC will have slowed down in preparation for something that could happen. You might get there ever so slightly slower, but you’ll get there very much alive.

    molgrips
    Free Member

    You also might hope that a decent SDC might even have been aware of the deer

    It would be fairly cheap and pretty damn useful to fit infra-red cameras to these things, wouldn’t it?

    Solve the deer problem right away.

    Northwind
    Full Member

    brooess – Member

    +1 that the situation won’t arise in the first place, therefore it’s a false scenario.

    It probably isn’t quite that simple. But it doesn’t have to be perfect, it just has to be better than the mean.

    kcr
    Free Member

    That’s a really poor article. It doesn’t actually pose any serious questions about why self driving cars would be inherently more dangerous than manually driven cars for cyclists (or anyone else).


The topic ‘How long before a self drive car kills a cyclist?’ is closed to new replies.