Viewing 25 posts - 41 through 65 (of 65 total)
  • Driverless cars in the UK!
  • IA
    Full Member

    some manoeuvre that would minimise the losses involved, be it property or human life.

    Right, but say the car has a choice: kill you (in the car) or kill two pedestrians. It’s clever, it knows these are the most likely outcomes to choose between. Seems like it should kill you, right?

    But then would people want to ride in a car that didn’t share their built-in self preservation instincts?

    cookeaa
    Full Member

    I had a think about it IA and I reckoned the car would probably have a hierarchy of things it should avoid, people probably being near the top. In that case there would surely be some manoeuvre that would minimise the losses involved, be it property or human life. The car may very well be able to pull off some mental move that manages to not kill any pedestrians but still avoids killing the driver(s).
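    The "hierarchy of things to avoid" idea can be sketched as a cost function: score each candidate manoeuvre by its expected harm, with people weighted far above property, and pick the cheapest. Everything below – names, weights, probabilities – is made up for illustration; it’s nowhere near a real control system:

    ```python
    # Toy harm hierarchy: people weighted vastly above property (illustrative only).
    HARM_WEIGHTS = {"pedestrian": 1_000_000, "occupant": 1_000_000, "property": 1}

    def expected_harm(outcome):
        """Expected harm of one manoeuvre: sum of P(harm) * weight."""
        return sum(prob * HARM_WEIGHTS[kind] for kind, prob in outcome)

    def choose_manoeuvre(options):
        """Pick the manoeuvre whose predicted outcome minimises expected harm."""
        return min(options, key=lambda opt: expected_harm(opt[1]))[0]

    options = [
        # (manoeuvre, [(what is at risk, probability of harming it), ...])
        ("brake_straight", [("pedestrian", 0.9)]),
        ("swerve_left",    [("occupant", 0.3), ("property", 1.0)]),
        ("swerve_right",   [("property", 1.0)]),
    ]
    print(choose_manoeuvre(options))  # -> swerve_right (property damage only)
    ```

    Note the whole ethical argument in this thread is hiding inside those weights: set occupants above pedestrians (or vice versa) and the "best" manoeuvre changes.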

    Maybe it should just be user configurable based on how altruistic the driver is feeling on any given day?

    Asimovian Laws Vs owners being able to configure their cars as autonomous killing machines if they fancy?

    Edit: actually “IA” is that you?

    if so I’m sure a tweaked version of the “Zeroth Law” could be invoked whereby all Ford Fiestas would deliberately target reality TV stars for the “Good of humanity” or similar…

    This could be of major benefit to our species as no Human could ever be prosecuted for these AI initiated culls…

    IA
    Full Member

    Ah but we have a conflict here, if there are two courses of action and both result in harm to humans?

    Ethics and AI always raise interesting problems, as humans on the whole aren’t rational. E.g. if you ask whether humans or computers should control nuke plant processes, people want the computers; they don’t want a person to mess up and end up killing millions.

    But flip it around and ask about a robot with a gun… or a person controlling the trigger, and they want the person. After all, a robot might mess up and kill one person… and there’s the contradiction.

    andyl
    Free Member

    Surely the programming would be to avoid causing an accident so it would not be able to allow itself to have one type of accident in order to avoid another.

    Obviously if you were driving along and saw a big HGV coming at you, a cliff to the left and a big pile of cushions to your right you would choose to swerve into the big pile of cushions but AI can’t be expected to make that call unless it knows there is a big pile of cushions somehow.

    So in the case of heading down a road and seeing a pedestrian step out, but nowhere to swerve to, surely its only option would be to continue on the current route and brake hard but safely. Of course the sensors mean that it should hopefully detect such a problem before it arises, but if not, the person who steps out is going to have to be responsible for their own self-preservation and move out of the way!

    Most types of accidents, such as vehicles failing to give way at junctions, not stopping in time, or changing lanes when not clear, should all be avoided unless something goes wrong, so the accident types that are left are either acts of God (meteor, tree etc) or humans/animals suddenly getting in the way (on foot or otherwise). The radar systems will detect risks and help avoid them better than a human might, except in rare circumstances where there has been a slight indication of intent that only a human may notice – eg someone clearly waiting to jump out on purpose.

    whatnobeer
    Free Member

    What are the chances that the government would legislate for that situation and take the decision away from the programmers?

    Tbh, I’m finding it hard to argue either way with myself. Firstly because I share the desire for my car to choose not to kill me deliberately, but also because I’m struggling to see a situation where it’s a binary outcome, kill the pedestrians or kill the driver.

    cookeaa
    Full Member

    Like I said Zeroth Law territory innit:

    0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

    So that gun toting Robot will only kill/harm a human if they’re poised to harm a larger portion of Humanity, for instance if they were about to initiate the meltdown of a manually controlled nuclear reactor, the AI is compelled to act for the greater good…

    Going back to the computer controlled car, “Humanity” or multiple humans will trump a single human life, and at the bottom of the hierarchy comes the AI itself; if it’s a bus stop full vs one passenger, the passenger loses.

    Given a choice between endangering a single pedestrian/cyclist or a single passenger, weighing both lives as having “equal value”, it will assume the passenger’s chances of survival are marginally improved by being in the vehicle and opt to save the pedestrian/cyclist. That’s an improvement actually, because a computer should never put itself, or the financial cost of an accident, ahead of a human life; human drivers might well weight their emergency manoeuvres differently…

    Of course I doubt there’s anything like that programmed into Google’s Priuses… But if Asimovian laws are called up, yes, it may have to kill its owner…

    IA
    Full Member

    cliff to the left and a big pile of cushions to your right you would choose to swerve into the big pile of cushions but AI can’t be expected to make that call unless it knows there is a big pile of cushions somehow.

    Ah, but this is exactly the sort of thing that an autonomous car can be better at – knowing its surroundings through 360 degrees the whole time, whereas a human can only perceive a relatively narrow cone in front of them (or via the mirrors, but not simultaneously).

    I’m struggling to see a situation where it’s a binary outcome, kill the pedestrians or kill the driver.

    As I initially said, oncoming car in lane (for whatever reason). Hit the car, mount the kerb or veer into oncoming traffic. Could take your chances with the head on (60mph+ combined speed, less hard braking) or with the pedestrians (after all, you’ll be slowing…).

    But what if the computer calculates low chances of survival for someone either way? You have to make the call; it might never be clear cut, but you still have to choose (at design time) how you weight the probabilities.

    You can just do the “easy” thing – unseen person with pram, say (so 2 lives), steps out from between cars. Brake hard, hope you don’t hit them; if you do, their fault, right?

    But then the computer will *know* it can’t brake in time, and it’s going to hit. But it could swerve and pile you into the parked cars, avoiding them. Etc…
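    The “knows it can’t brake in time” part is at least easy to picture: compare stopping distance (reaction distance plus v²/2a) against the range to the obstacle. A toy sketch – the reaction time and deceleration figures are illustrative guesses, not real vehicle data:

    ```python
    # Can the car stop before the obstacle? Stopping distance = distance covered
    # during the (short, machine-speed) reaction time + braking distance v^2/(2a).
    # All figures are illustrative, not real vehicle data.

    def stopping_distance_m(speed_mps, reaction_s=0.1, decel_mps2=8.0):
        """Reaction distance plus braking distance, in metres."""
        return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

    def can_brake_in_time(speed_mps, range_m):
        """True if the car can stop before covering range_m."""
        return stopping_distance_m(speed_mps) <= range_m

    # ~30mph (13.4 m/s), pedestrian steps out 10m ahead: it can't stop,
    # so the planner would have to weigh the swerve options instead.
    print(can_brake_in_time(13.4, 10))  # False
    ```

    It’s the moment that check comes back False that forces the design decision being argued about above – brake anyway, or swerve into something else.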

    IA
    Full Member

    So that gun toting Robot will only kill/harm a human if they’re poised to harm a larger portion of Humanity, for instance if they were about to initiate the meltdown of a manually controlled nuclear reactor, the AI is compelled to act for the greater good…

    Yeah, but people are uncomfortable with the idea of the gun-toting robot in case it goes wrong and kills someone – yet that’s a different standard to the nuke plant, where the consequences of a mistake are far worse…

    I wasn’t making a point about Asimov’s laws there, more about people’s (ir)rationality.

    whatnobeer
    Free Member

    As I initially said, oncoming car in lane (for whatever reason). Hit the car, mount the kerb or veer into oncoming traffic. Could take your chances with the head on (60mph+ combined speed, less hard braking) or with the pedestrians (after all, you’ll be slowing…).

    But this assumes that there are no other options regarding moving the car’s position, no other road users, etc. Obviously in the case where there are genuinely no other options you need to make a design decision (or hand back control to the user to avoid being sued 😉 ), but in practical terms I think the other road users/pedestrians/computer might react in a way to make the collision (or major damage) avoidable.

    If I saw the car in front of me drift into the other lane I’m going to be slowing down as well; not knowing what’s about to happen, I want to be able to react.

    In general though, I accept that there might be the possibility of a situation which is no-win. In that case I want the car to save me from a selfish point of view, but put my trust in the designers and other drivers that it would hopefully never come to pass.

    cookeaa
    Full Member

    You’re right, people are irrational, that’s why we need the computers to take over…. Everything!

    Of course software is not infallible; it is, after all, only as good as its programming, which is only as good as its authors. If they don’t consider these sorts of situations, and give the control systems a means of addressing/weighing their responses, then the outcome isn’t really predictable…

    That is why software development is often iterative: take your best stab in-house, beta test, revise, release, and then address all the real-world problems you hadn’t predicted…

    IA
    Full Member

    Interesting article on this issue here BTW:

    http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/

    Particularly the point about the motorcyclists. To summarise: a choice of which of two cyclists to hit (shall we say, as we’re on STW). One with a helmet and one without.

    So you choose the one with, as they’re more likely to survive*. But now the more responsible* cyclist has been penalised for their choice….

    *helmet debaters, shush! Lets imagine a future where this has been proven 😉

    coolhandluke
    Free Member

    Wonder if Audi driverless cars will zoom up behind you and sit about 3ft off your tail whilst you are doing 70 in the outside lane overtaking things in the middle lane….

    wilburt
    Free Member

    How will we display our wealth and social status ( ability to get credit) if no one owns a car?

    julianwilson
    Free Member

    I’d be interested to know how the environmental and economic side of them stacks up; surely running some meaty processors and sensors has an impact on vehicle MPG, and I’m sure such systems won’t be cheap…

    Will this be offset by recovering the mpg that impatient drivers lose by accelerating too hard and braking too late?

    +1 to the capacity of roads and improved traffic flow. Hands up who has never been in a traffic jam on a motorway that is purely down to too many cars ‘caterpillering’ faster and slower, or a rubberneckers’ jam on the opposite (and totally clear!) side of the crash barriers to the actual RTC.

    IA
    Full Member

    traffic jam on a motorway that is purely down to too many cars ‘caterpillering’ faster and slower,

    [video]http://www.youtube.com/watch?v=Suugn-p5C1M[/video]

    Processing power trends toward “free” for a given task with time anyway; computers get faster… There’s quite a lot of compute in a Google car just now (not to mention 60k worth of Velodyne on the roof), but not all driverless cars need so much processing power or such sensors.

    crazy-legs
    Full Member

    Not sure what the fuss is about really; there are already driverless trains (DLR for example), and most modern aircraft can pretty much do all the flying themselves. Some (depending on type, and on the airport having the necessary ground-based kit) can land, taxi to the terminal and park without the pilot touching anything.

    Yes, there’s still a pilot there and obviously the train example is very simplistic given it runs on rails but the technology exists.

    About the only conflict is actually having “other road users” take the piss. I’m sure some people would find it hilarious to pull out from a junction knowing that the oncoming car will slam its brakes on automatically.

    cloudnine
    Free Member

    If there were ejector seats fitted to the cars it would solve many problems

    brooess
    Free Member

    How will we display our wealth and social status ( ability to get credit) if no one owns a car?

    ^^ this. You can bet your bottom dollar that despite there being many, many rational arguments for self-driving cars, there’ll be some people/interest groups who’ll be against them. Their arguments will be ostensibly rational, but it’ll just be a cover story for their desire to keep getting a feeling of self-worth from the car they own.

    KPMG report

    I read this last year. IIRC they reckon self-driving cars will be a reality in 10 years. The trial in Milton Keynes is already planned for next year as a proof of concept.

    The moral and legal questions are known aspects which will need to be worked through.

    I say bring it on. Mass and excessive use of cars is destroying our communities, relationships, and mental and physical health…

    mikewsmith
    Free Member

    Given you can detect the people/hazards around you, should the system kill the occupants of the car to save more lives?

    I honestly don’t know how you’d make that design choice. It’s easy to say it’d never happen, or to just ignore it, but you can’t really – it’s easy to find counter-examples. Crash the car, or swerve and hit a group of cyclists? Humans currently make the same “decisions”… only they don’t; we just react, normally to save ourselves. But when you can make a cold, calculated design decision – what do you do?
    In most cases the computer will be reacting to the same information that the person gets, just quicker. As with all moral questions like this: what would you do?
    As currently society values the human in the box over the rest then I’d assume most legal/government people would sign off on pedestrians as being acceptable collateral damage.
    The big advantage would however be that the number of these “accidents” should be significantly reduced. In fact, as most car “accidents” are actually crashes due to somebody doing something wrong, if those mistakes can be removed the place will be safer.
    Unfortunately there are plenty of complex, PhD-worthy arguments in there, but simply look at the state of the roads. Changes need to be made, and removing the driver is one of the best ideas.

    IA
    Full Member

    the computer will be reacting to the same information that the person gets just quicker.

    Right, only this isn’t quite right. The person is *reacting*, whereas the computer is simply enacting a premeditated decision based on the information. The person isn’t weighing up the odds in a calculated fashion.

    FWIW I’m very much in favour of autonomous vehicles and AI, it’s literally what I’m spending my life working on. I’m just also in favour of arguing on the internet 😉

    As currently society values the human in the box over the rest then I’d assume most legal/government people would sign off on pedestrians as being acceptable collateral damage.
    The big advantage would however be that the number of these “accidents” should be significantly reduced

    I think you’re right in the initial case, though as the technology improves I can see things swaying toward the more complex (ethically) situations I envisage.

    FWIW the way I see things going is more like this:

    First we’ll see either “autonomous only” roads, or perhaps lanes on the motorway (perhaps segregated?), the aim being to encourage their use (potentially for vastly reduced emissions/congestion). I’ve always thought that the haulage industry is more likely to go auto first – the bulk of their miles is on motorway anyway (and they’d keep the driver for the end parts).

    You avoid a lot of the issues here that way too, on the motorway it’s a reasonable assumption that people shouldn’t be there (indeed, that’s why we currently whisk along at 70mph). An AI can potentially* see further, and certainly react faster, not be as blinded in weather* and thus reduce accidents, smooth out traffic jams thus saving fuel etc.

    *depending on sensor configurations… lot of issues/debate here too.

    whatnobeer
    Free Member

    Looks like the STW massif won’t be able to buy a driverless car in good conscience any more:

    Google’s self-driving cars are programmed to exceed speed limits by up to 10mph (16km/h), according to the project’s lead software engineer.

    😛

    darrenspink
    Free Member

    Kind of related, very interesting watch.

    [video]http://youtu.be/7Pq-S557XQU[/video]

    thisisnotaspoon
    Free Member

    You avoid a lot of the issues here that way too, on the motorway it’s a reasonable assumption that people shouldn’t be there (indeed, that’s why we currently whisk along at 70mph). An AI can potentially* see further, and certainly react faster, not be as blinded in weather* and thus reduce accidents, smooth out traffic jams thus saving fuel etc.

    I can see this being more mainstream: program the sat-nav, drive to the motorway slip road, engage the autopilot[i]driver[/i], and it sounds an alarm a few minutes before your exit so the driver takes over again; if you don’t respond, it stops on the hard shoulder.

    Whilst I’d trust it with nice predictable things like other cars, I’m not sure I’d trust it going past, say, a school at 8:30. A driver can look at someone’s expression and body language and see if they’re about to walk out; a car would just see someone stood there.

    Frankenstein
    Free Member

    Cool


The topic ‘Driverless cars in the UK!’ is closed to new replies.