What are the chances that the government would legislate for that situation and take the decision away from the programmers?
Tbh, I'm finding it hard to argue either way with myself. Firstly because I share the desire for my car to choose not to kill me deliberately, but also because I'm struggling to see a situation where it's a binary outcome, kill the pedestrians or kill the driver.
Like I said, Zeroth Law territory, innit:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
So that gun-toting robot will only kill or harm a human if they're poised to harm a larger portion of humanity; for instance, if they were about to initiate the meltdown of a manually controlled nuclear reactor, the AI is compelled to act for the greater good...
Going back to the computer-controlled car: "humanity", or multiple humans, will trump a single human life, and at the bottom of the hierarchy comes the AI itself. If it's a bus stop full of people vs one passenger, the passenger loses.
Given a choice between endangering a single pedestrian/cyclist or a single passenger, weighing both lives as having "equal value", it will opt for self-sacrifice: it will assume the passenger's chances of survival are marginally improved by being in the vehicle, and opt to save the pedestrian/cyclist. That's an improvement, actually, because a computer should never put itself, or the financial cost of an accident, ahead of a human life. Human drivers might well weight their emergency manoeuvres differently...
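Just to make that hierarchy concrete, here's a toy Python sketch of the kind of rule I mean. Everything in it is made up for illustration: the function names, the survival probabilities, and especially the idea that a car could estimate such probabilities reliably. It's not a claim about how Google (or anyone) actually does it.

```python
# Toy sketch of the hierarchy above: minimise expected human deaths,
# and on a tie, sacrifice the (marginally protected) occupant rather
# than an unprotected pedestrian. All numbers are hypothetical.

def choose_manoeuvre(options):
    """Pick the manoeuvre with the lowest expected death count,
    preferring occupant-risking options when expectations tie."""
    def cost(option):
        expected_deaths = sum(1.0 - p for p in option["survival_probs"])
        tiebreak = 0 if option["risks_occupant"] else 1
        return (expected_deaths, tiebreak)
    return min(options, key=cost)

options = [
    {"name": "hit pedestrian", "survival_probs": [0.40], "risks_occupant": False},
    {"name": "swerve into wall", "survival_probs": [0.55], "risks_occupant": True},
]
print(choose_manoeuvre(options)["name"])  # swerve into wall
```

The interesting design choice is all in that `cost` function, which is exactly the bit nobody agrees on.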
Of course, I doubt there's anything like that programmed into Google's Priuses... But if Asimovian laws are invoked, yes, it may have to kill its owner...
If there's a cliff to the left and a big pile of cushions to your right, you would choose to swerve into the cushions, but AI can't be expected to make that call unless it somehow knows the pile of cushions is there.
Ah, but this is exactly the sort of thing that an autonomous car can be better at: knowing its surroundings in 360 degrees the whole time, whereas a human can only perceive a relatively narrow cone in front of them (or via the mirrors, but not both simultaneously).
I'm struggling to see a situation where it's a binary outcome, kill the pedestrians or kill the driver.
As I initially said, oncoming car in lane (for whatever reason). Hit the car, mount the kerb or veer into oncoming traffic. Could take your chances with the head on (60mph+ combined speed, less hard braking) or with the pedestrians (after all, you'll be slowing...).
But what if the computer calculates low chances of survival for someone either way? You have to make the call; it might never be clear cut, but you still have to choose (at design time) how you weight the probabilities.
You could just do the "easy" thing: an unseen person with a pram, say (so two lives), steps out from between parked cars. Brake hard, hope you don't hit them, and if you do, it's their fault, right?
But then the computer will *know* it can't brake in time and that it's going to hit them. It could swerve and pile you into the parked cars instead, avoiding them. Etc...
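Here's the pram scenario as a back-of-envelope expected-harm sum, with completely invented probabilities (nobody has real numbers like these, which is sort of the point). Under these made-up figures, "brake and hope" comes out worse in expectation than "swerve into the parked cars":

```python
# Hypothetical illustration only: how a design-time weighting might compare
# "brake hard" vs "swerve" by expected number of people seriously harmed.

def expected_harm(outcomes):
    """outcomes: list of (probability_of_serious_harm, number_of_people)."""
    return sum(p * n for p, n in outcomes)

brake_hard = expected_harm([(0.7, 2)])  # likely still hits the pram + parent
swerve     = expected_harm([(0.2, 1)])  # occupant, protected by the car
print(brake_hard, swerve)  # 1.4 0.2
```

Change the invented probabilities and the "right" answer flips, which is why the design decision can't just be dodged.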
So that gun-toting robot will only kill or harm a human if they're poised to harm a larger portion of humanity; for instance, if they were about to initiate the meltdown of a manually controlled nuclear reactor, the AI is compelled to act for the greater good...
Yeah, but people are uncomfortable with the idea of the gun-toting robot in case it goes wrong and kills someone; that's a different standard from the nuke plant, where the consequences are far worse...
I wasn't making a point about Asimov's laws there, more about people's (ir)rationality.
As I initially said, oncoming car in lane (for whatever reason). Hit the car, mount the kerb or veer into oncoming traffic. Could take your chances with the head on (60mph+ combined speed, less hard braking) or with the pedestrians (after all, you'll be slowing...).
But this assumes there are no other options regarding the car's position, other road users, etc. Obviously, in the case where there are genuinely no other options you need to make a design decision (or hand back control to the user to avoid being sued 😉 ), but in practical terms I think the other road users/pedestrians/computer might react in a way that makes the collision (or major damage) avoidable.
If I saw the car in front of me drift into the other lane I'd be slowing down as well; not knowing what's about to happen, I want to be able to react.
In general though, I accept that there might be the possibility of a situation which is no-win. In that case I want the car to save me from a selfish point of view, but put my trust in the designers and other drivers that it would hopefully never come to pass.
You're right, people are irrational, that's why we need the computers to take over.... Everything!
Of course software is not infallible; it is, after all, only as good as its programming, which is only as good as its authors. If they don't consider these sorts of situations and give the control systems a means of addressing and weighing their responses, then the outcome isn't really predictable...
That is why software development is often iterative: take your best stab in-house, beta test, revise, release, and then address all the real-world problems you hadn't predicted...
Interesting article on this issue here BTW:
http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/
Particularly the point about the motorcyclists. To summarise: a choice between hitting one of two cyclists (shall we say, as we're on STW), one with a helmet and one without.
So you choose the one with the helmet, as they're more likely to survive*. But now the more responsible* cyclist has been penalised for their choice...
*helmet debaters, shush! Let's imagine a future where this has been proven 😉
Wonder if Audi driverless cars will zoom up behind you and sit about 3ft off your tail whilst you are doing 70 in the outside lane overtaking things in the middle lane....
How will we display our wealth and social status (ability to get credit) if no one owns a car?
I'd be interested to know how the environmental and economic side of them stacks up; surely running some meaty processors and sensors has an impact on vehicle MPG/MPA, and I'm sure such systems won't be cheap...
Will this be offset by the mpg that impatient drivers currently lose in accelerating too hard and braking too late?
+1 to the capacity of roads and improved traffic flow. Hands up who has never been in a traffic jam on a motorway that is purely down to too many cars 'caterpillaring' faster and slower, or a rubberneckers' jam on the opposite (and totally clear!) side of the crash barriers from the actual RTC.
traffic jam on a motorway that is purely down to too many cars 'caterpillaring' faster and slower,
Processing power trends toward "free" for a given task with time anyway; computers get faster... There's quite a lot of compute in a Google car just now (not to mention 60k worth of Velodyne on the roof), but not all driverless cars need that much processing power or those sensors.
Not sure what the fuss is about, really; there are already driverless trains (the DLR, for example), and most modern aircraft can pretty much do all the flying themselves. Some (depending on type, and on the airport having the necessary ground-based kit) can land, taxi to the terminal, and park without the pilot touching anything.
Yes, there's still a pilot there, and obviously the train example is very simplistic given it runs on rails, but the technology exists.
About the only conflict is actually having "other road users" take the piss. I'm sure some people would find it hilarious to pull out from a junction knowing that the oncoming car will slam its brakes on automatically.
If there were ejector seats fitted to the cars it would solve many problems
How will we display our wealth and social status (ability to get credit) if no one owns a car?
^^ This. You can bet your bottom dollar that despite the many, many rational arguments for self-driving cars, there'll be some people/interest groups who'll be against them. Their arguments will be ostensibly rational, but they'll just be a cover story for the desire to keep getting a feeling of self-worth from the car they own.
[url= http://www.kpmg.com/US/en/IssuesAndInsights/ArticlesPublications/Documents/self-driving-cars-are-we-ready.pdf ]KPMG report[/url]
I read this last year. IIRC they reckon ten years and self-driving cars will be a reality. The trial in Milton Keynes, clearly a proof of concept, is already planned for next year.
All the moral and legal stuff consists of known issues which will need to be worked through.
I say bring it on. Mass usage and excessive usage of cars is destroying our communities, relationships, mental and physical health...
Given you can detect the people/hazards around you, should the system kill the occupants of the car to save more lives?
I honestly don't know how you'd make that design choice. It's easy to say it'd never happen, or to just ignore it, but you can't really; it's easy to find counter-examples. Crash the car, or swerve and hit a group of cyclists? Humans currently make the same "decisions"... only they don't; we just react, normally to save ourselves. But when you can make a cold, calculated design decision, what do you do?
In most cases the computer will be reacting to the same information that the person gets, just quicker. As with all moral questions like this: what would you do?
As society currently values the human in the box over everyone else, I'd assume most legal/government people would sign off on pedestrians being acceptable collateral damage.
The big advantage, however, is that the number of these "accidents" should be significantly reduced. In fact, as most car "accidents" are actually crashes caused by somebody doing something wrong, removing those errors would make the roads safer.
Unfortunately there are plenty of complex PhD-worthy arguments in there, but simply look at the state of the roads. Changes need to be made, and removing the driver is one of the best ideas.
the computer will be reacting to the same information that the person gets just quicker.
Right, only this isn't quite right. The person is *reacting*, whereas the computer is simply enacting a premeditated decision based on the information. The person isn't weighing up the odds in a calculated fashion.
FWIW I'm very much in favour of autonomous vehicles and AI, it's literally what I'm spending my life working on. I'm just also in favour of arguing on the internet 😉
As currently society values the human in the box over the rest then I'd assume most legal/government people would sign off on pedestrians as being acceptable collateral damage.
The big advantage would however be that the number of these "accidents" should be significantly reduced
I think you're right in the initial case, though as the technology improves I can see things swaying toward the more complex (ethically) situations I envisage.
FWIW the way I see things going is more like this:
First we'll see either "autonomous only" roads, or perhaps lanes on the motorway (perhaps segregated?), the aim being to encourage their use (potentially for vastly reduced emissions/congestion). I've always thought that the haulage industry is more likely to go autonomous first; the bulk of their miles are on motorways anyway (and they'd keep the driver for the end parts).
You avoid a lot of the issues here that way too, on the motorway it's a reasonable assumption that people shouldn't be there (indeed, that's why we currently whisk along at 70mph). An AI can potentially* see further, and certainly react faster, not be as blinded in weather* and thus reduce accidents, smooth out traffic jams thus saving fuel etc.
*depending on sensor configurations... lot of issues/debate here too.
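The "react faster" bit is easy to put numbers on, at least roughly. Here's a back-of-envelope Python sketch; the human reaction time (~1.5 s), machine reaction time (~0.1 s) and braking deceleration (~7 m/s², dry tarmac) are all rough assumptions, not measured figures for any real car:

```python
# Rough stopping-distance comparison at motorway speed, using the
# standard thinking-distance + braking-distance split. All figures
# are ballpark assumptions for illustration.

def stopping_distance(speed_mps, reaction_s, decel=7.0):
    thinking = speed_mps * reaction_s       # distance covered before braking starts
    braking = speed_mps ** 2 / (2 * decel)  # v^2 / (2a)
    return thinking + braking

v = 70 * 0.44704  # 70 mph in m/s (~31.3 m/s)
human = stopping_distance(v, reaction_s=1.5)
computer = stopping_distance(v, reaction_s=0.1)
print(round(human - computer, 1))  # ~43.8 m saved
```

Even if the braking figure is off, the reaction-time term alone is worth several car lengths at 70 mph, which is the point about smoothing out jams.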
Looks like the STW massif won't be able to buy a driverless car in good conscience any more:
[url= http://www.bbc.co.uk/news/technology-28851996 ]Google's self-driving cars are programmed to exceed speed limits by up to 10mph (16km/h), according to the project's lead software engineer.[/url]
😛
Kind of related, very interesting watch.
You avoid a lot of the issues here that way too, on the motorway it's a reasonable assumption that people shouldn't be there (indeed, that's why we currently whisk along at 70mph). An AI can potentially* see further, and certainly react faster, not be as blinded in weather* and thus reduce accidents, smooth out traffic jams thus saving fuel etc.
I can see this being more mainstream: program the sat-nav, drive to the motorway sliproad, engage the auto[s]pilot[/s][i]driver[/i], and it sounds an alarm a few minutes before your exit for the driver to take over again; if you don't respond, it stops on the hard shoulder.
Whilst I'd trust it with nice predictable things like other cars, I'm not sure I'd trust it going past, say, a school at 8:30. A driver can look at someone's expression and body language and see if they're about to walk out; a car would just see someone stood there.
Cool
