We don't live in a perfect world; no software will be perfect and no human will be. In both situations we still don't have a need for software-driven cars.
Except the approach you're advocating, maintaining the status quo, leaves the USA killing 36,000 people a year on their roads. That's even more than get killed with guns. Shouldn't we try to fix this problem if we can? If the introduction of driverless cars means you kill one less human being prematurely, isn't it worth it?
The police said that the vehicle was traveling 38 miles per hour in a 35 mile-per-hour zone, according to the Chronicle, though a Google Street View shot of the roadway taken last July shows a speed limit of 45 miles per hour along that stretch of road.
We don't live in a perfect world; no software will be perfect and no human will be. In both situations we still don't have a need for software-driven cars.
Whilst there are all kinds of arguments for and against, it cannot be denied that our roads are a dangerous place, and always have been, with thousands of people killed and seriously injured in the UK alone, every year. And it might come as a surprise to some, but we have a pretty good standard of driving compared to many countries.
Most of those accidents are caused by human factors, so there is a lot of sense in removing the human.
Clearly software, and the engineers creating it, are not infallible. It comes with its own set of inadequacies, which are still somewhat unknown. However, with the danger already present on our roads, we should always strive to improve conditions.
In a capitalist society money will always be the driving force. But to make money, a product must have value. There is very much a need to make our roads safer.
Butcher, that's right. But software won't be the answer when used in the real world.
In the real world software will fail.
Better training of drivers would be a good idea. Or we just accept that driving can be dangerous. Bit like riding a mountain bike, you can kill yourself.
If technology can make cars safer then we should be focusing on how to integrate it into cars with drivers, e.g. making speed limits compulsory.
Who the **** let Uber put driverless cars on the roads?
Every single thing about that company's MO is dodgy as hell; it will use every legal and illegal trick in the book to further its own ends, acting fast to stay ahead of regulation. The most irresponsible form of capitalism.
As a software engineer, I think it's a bonkers idea for so many reasons.
Top reason: machine vision just isn't sophisticated enough. I don't mean the cameras... no processor and software algorithm will ever get anywhere close to beating the human brain in its ability to make sense of the world around it. The road network and the world in which it exists is just too complicated. There is too much going on, too many objects look too similar, an infinite number of possible scenes to try to interpret. The human brain can make sense of all this clutter in an instant, put it all in context, filter the noise and immediately derive meaning in order to react. It's hard enough to get a machine to recognise "this is a lamp post", "this is a person". It's always going to be screwing up, misinterpreting and making mistakes, fatal ones, and it will do for at least the next 100 years I reckon.
Second reason (and this is primarily because machine vision doesn't work): the framework that the car exists in (the roads) needs to be vastly simplified and standardised before you even try this. As in, the roads need to be designed for autonomous cars (not the other way around). There need to be sensors that the car can talk to every 100 yards to work out where it is. Realistically, pedestrians can't be allowed anywhere near. You need to limit to an absolute minimum the variables that the car has to deal with. Think about it: we still have people driving trains! All they have to control is a throttle and a brake; the only time they are near people is in a few well defined locations (stations).
We do have a few roads currently that driverless cars have a chance of working on: motorways. I think it might be possible for motorways. But why? What is so terrible about driving a car? I get really bored sitting in the passenger seat. I don't get the need at all.
Nothing to add, but if you think about it, this car has now killed more people than the Terminator... cos the Terminator is a science fiction character... but maybe Skynet is coming...
Butcher, that's right. But software won't be the answer when used in the real world.
In the real world software will fail.
Better training of drivers would be a good idea. Or we just accept that driving can be dangerous. Bit like riding a mountain bike, you can kill yourself.
In the real world people fail all the time. Like it or not, this is happening.
As a software engineer, I think it's a bonkers idea for so many reasons.
Top reason: machine vision just isn't sophisticated enough. I don't mean the cameras... no processor and software algorithm will ever get anywhere close to beating the human brain in its ability to make sense of the world around it.
A number of massive tech corporations (not least of which Google) disagree with you. As above, this is going to happen. It won't happen for every household, everywhere in the world at once, but it is happening.
Second reason (and this is primarily because machine vision doesn't work): the framework that the car exists in (the roads) needs to be vastly simplified and standardised before you even try this...
They are already trying it. There are fully autonomous cars on our roads, and more and more US states and countries are opening up their legislation in order to facilitate the testing and ultimately the switch to autonomous cars. The iPad generation aren't interested in driving; they'd rather play with their phone, which is what they do now, while driving.
If self-driving cars take over as our primary form of transport, they'll kill an awful lot of people. Just not in a spectacular way that generates news headlines.
We need to stop building towns and cities on the self-fulfilling assumption people will travel by car. There is no future in which humans can sit down all day without paying an enormous health price. If driverless cars appear in streets anything like today's, we risk falling into the most pathetic of robot uprisings, where they transport us helpfully from place to place while we remain inactive, growing fat and increasing our risk of cancer and diabetes.
"As a software engineer, I think it's a bonkers idea for so many reasons."
Google's car uses LIDAR as well. This is what it sees:
[image: google_car.jpg (the car's LIDAR view)]
Top reason: machine vision just isn't sophisticated enough. I don't mean the cameras... no processor and software algorithm will ever get anywhere close to beating the human brain in its ability to make sense of the world around it.
Is it not a case of they don't necessarily need to know what they're seeing, but they do need to know it's there and not hit it?
As an aside, an algorithm has now been developed which means that an AI using LIDAR can see round corners (the processing part of the algorithm is fast but the data-collecting bit is very slow, so it's a ways off being useful yet), which I think is pretty cool!
They are already trying it
I know - that is why it is bonkers!
Expect more carnage, or more likely, expect a lot of money to be wasted producing something that isn't safe and is eventually scrapped.
As a human I think putting human-driven cars on the road is a terrible idea and the sooner they are replaced by autonomous vehicles the better for everyone.
[i]Trimix wrote:[/i]
Or we just accept that driving can be dangerous.
So we shouldn't bother doing anything to make it less dangerous?
[i]ndthornton wrote:[/i]
As a software engineer โ I think its a bonkers idea for so many reasons.
All of which appear to be based on the premise that driverless cars have to be perfectly safe. I've got news for you: the current system screws up, misinterprets and makes mistakes, fatal ones. It's kind of like the scenario where you're out walking and come across a bear: you only have to be able to run faster than the other person, not the bear. As mentioned numerous times on this thread, we already accept a ridiculously high death rate on the roads, and any argument against autonomous cars on safety grounds applies even more to human drivers. I'm fairly confident that current autonomous cars are already safer on average than human drivers.
Think about it: we still have people driving trains! All they have to control is a throttle and a brake; the only time they are near people is in a few well defined locations (stations)
DLR. Though a lot of the trains on the mainline network could run completely autonomously (and largely do) - the tech is all in place.
But why? What is so terrible about driving a car? I get really bored sitting in the passenger seat. I don't get the need at all.
Well apart from the thousands of people killed on the roads every year, no, no need at all 🙂
expect a lot of money to be wasted producing something that isn't safe and is eventually scrapped
That's pretty much the opposite of what I'm expecting. And as pointed out (repeatedly) it doesn't need to be perfectly safe, just safer than the current mess we have.
[i]ndthornton wrote:[/i]
expect more carnage, or more likely, expect a lot of money to be wasted producing something that isn't safe and is eventually scrapped
What, more than we have already? I'm still not sure you appreciate just how flawed the current system is.
no processor and software algorithm will ever get anywhere close to beating the human brain in its ability to make sense of the world around it
As a software engineer I'm surprised you take that view. You'd be denying the idea that software and processors develop, which clearly they do, very quickly.
Google's car uses LIDAR as well. This is what it sees:
But what does it interpret from all that? That's the important bit. I can interpret that as an intersection in a busy town. Not quite so easy for a machine.
Even if it interprets perfectly, I think you can agree it's lacking in detail. Would you be comfortable driving past a school if that was the view through your eyes?
Is it not a case of they don't necessarily need to know what they're seeing, but they do need to know it's there and not hit it?
Then expect to spend a lot of time sat in the road not moving, drumming your fingers on the dashboard 🙂
I'm fairly confident that current autonomous cars are already safer on average than human drivers.
And what are you basing that on? There's not exactly a huge pool of data out there in terms of autonomous car safety in the real world, is there? In comparison, there are far too many cars on the road for you to make a judgement on how dangerous they are just by reading the news.
Last year in the US there were 1.18 fatalities per 100 million vehicle miles.
Have we had 100 million autonomous vehicle miles yet for this one fatality?
I don't know the answer to that, and it's statistically insignificant anyway with just one fatality, but I'd be interested to know.
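For what it's worth, the arithmetic implied by those figures is easy to sketch. A quick back-of-envelope calculation (the 1.18 figure is the US rate quoted above; the autonomous mileage total is a made-up placeholder, since no reliable public figure is given here):

```python
# Rough comparison of human vs autonomous fatality rates per mile.
# The human rate is the US figure quoted above; the autonomous mileage
# is a made-up placeholder, since no reliable public total is given.
human_rate = 1.18 / 100_000_000  # fatalities per vehicle mile

# Miles of autonomous driving needed before ~1 fatality would be
# "expected" at the human rate:
miles_for_one_expected_fatality = 1 / human_rate
print(f"{miles_for_one_expected_fatality:,.0f} miles")  # ~84.7 million

# With only 1 observed fatality, the implied autonomous rate depends
# entirely on total miles driven. E.g. if the fleet had done 10 million:
assumed_autonomous_miles = 10_000_000  # placeholder assumption
implied_rate = 1 / assumed_autonomous_miles
print(f"Implied rate is {implied_rate / human_rate:.1f}x the human rate")
```

At the quoted human rate you'd "expect" roughly one fatality every 85 million miles, so until autonomous fleets have logged mileage on that scale, one fatality tells you very little either way.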
I wonder what rules the people in the 'driverless' cars have, in terms of when they should take control.
For example if you see a pedestrian looking like they might cross without looking properly (on the phone, drifting to kerb edge), do you:
a). Take over and slow down just in case (as you probably would if driving a normal car)
b). Wait until the car is a certain distance away to see if the car spots the issue (probably not if the pedestrian is still on the pavement but maybe it's a factor in some systems)
c). Wait and see what happens hoping you can brake quickly enough if the pedestrian does suddenly step out onto the road and the car doesn't auto brake?
Maybe it depends how experienced the guardian 'driver' is. I can see that if you keep stepping in early (before a situation occurs) the car is never going to learn and you're not going to gather useful data. Over time you probably start to trust the car's decision-making more and give it more leeway, but obviously leave it too late and an accident is going to happen.
But what does it interpret from all that
Probably more than you do.
It'll have seen everything on and around the junction (and for 100m down the road as well) and also be tracking it in real time.
I'm fairly confident that current autonomous cars are already safer on average than human drivers.
This might have been true last week. Doubtful now.
We need to stop building towns and cities on the self-fulfilling assumption people will travel by car.
This I do agree with. Ultimately we will accept a number of accidents because of the benefits the road network brings to our society. It goes without saying that it's a fundamental piece of infrastructure, so much so, that any alternative is to most people unthinkable.
We probably really should be thinking about the alternatives though. Our roads are a hostile and unpleasant environment, and in an age where we can be in various places without actually physically travelling, there is less need for the cars to be on them. We have the potential to work local again and reduce our reliance on cars. We can encourage healthier, less damaging forms of transport. And something where an autonomous model might excel is in public transport. There's a lot of scope for improvement in the wider picture and I think we have to look well beyond the way we travel now.
There's some more context here: Tempe police chief says early probe shows no fault by Uber. Seems like she stepped out in front of the car.
Edit: to remove random garbage put in by the forum software.
Please, don't let whoever coded this forum anywhere near the team developing the driverless car software
I wonder what rules the people in the 'driverless' cars have, in terms of when they should take control.
For example if you see a pedestrian looking like they might cross without looking properly (on the phone, drifting to kerb edge), do you:
People often assume that the decision is the difficult bit. In reality, just identifying the fact that there IS a human and it IS looking at its phone... this level of detail is impossible to start with.
I don't understand why we think making road transport autonomous is just round the corner when we haven't even been able to make railways autonomous, which should be several orders of magnitude easier than making cars and trucks autonomous.
My view is that there is no way an autonomous vehicle can interpret the road and its surroundings as well as a good, experienced driver. Without knowing it, good drivers often spot hazard cues long before they are identifiable as actual hazards, and the brain is a very good filter of useful/irrelevant information.
However, it's likely that an autonomous vehicle will be significantly better than a large number of the other drivers who grace our roads.
If we ever get to a future where autonomous vehicles are far more prevalent, one side effect is that there will gradually be fewer of these experienced drivers, as people generally spend less of their time behind the wheel. You'll end up with an average standard which may be slightly higher than at present, and hopefully with a slightly lower mortality.
Expecting these vehicles to cut massively the number of casualties is unrealistic.
[i]ndthornton wrote:[/i]
People often assume that the decision is the difficult bit. In reality, just identifying the fact that there IS a human and it IS looking at its phone... this level of detail is impossible to start with.
No, it really isn't. What sort of software do you do? Because you seem to have a very poor understanding of just how good sensing and AI systems are nowadays. It's an area where a lot of work has been done as part of this development (one I touched upon with work I did many years ago, but it's moved on a lot since then) and it's pretty amazing how well they can interpret the environment.
I'm basing my judgement of the relative safety on my knowledge of how good those systems are and how poor the average human is (if you search you'll find plenty of reports of how they did better than humans would have in incidents they encountered, and it's a piece of piss to find reports of where people were killed by human drivers in incidents autonomous cars would easily have avoided).
[i]chakaping wrote:[/i]
This might have been true last week. Doubtful now.
🙂 You're basing that on a single incident which, from all available information, it's very unlikely would have resulted in a different outcome had a human been driving?
We make judgements on pedestrians as we drive and drive accordingly. We even change according to the time of day and who is likely to be about.
A 40 year old walking along in work clothes at lunch time is pretty low risk unless Greggs is on the other side of the road.
Kids could do anything anytime.
Old people are slow but sometime deaf, can't see well and don't judge speed so well.
Drunks and bag ladies are likely to step out and wave a digit at you
Pub and nightclub turnout is best crawled past.
So we make a calculation, adapt speed, and maybe cover the brake, and maybe give a wide berth, and maybe even put the hazards on to warn people behind us.
We're so good at it that people who really want to kill themselves generally choose a train rather than a car or a truck. We might stop or slow down enough to only injure them. In future suicidal people will be able to choose any vehicle which will at least give train drivers a less traumatic time.
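The kind of judgement being described could be caricatured as a risk table plus a speed adjustment. A toy sketch of that idea in Python (every category, factor and default here is invented purely for illustration; no real system works off a table like this):

```python
# Toy hazard heuristic mirroring the judgements described above:
# classify the pedestrian, then adapt speed and behaviour.
# All categories and response values are invented for illustration.

RISK_RESPONSES = {
    "adult_commuter": {"speed_factor": 1.0, "cover_brake": False},
    "child":          {"speed_factor": 0.6, "cover_brake": True},
    "elderly":        {"speed_factor": 0.7, "cover_brake": True},
    "intoxicated":    {"speed_factor": 0.5, "cover_brake": True},
}

def plan(pedestrian_type: str, limit_mph: float) -> dict:
    """Return a target speed and whether to cover the brake."""
    response = RISK_RESPONSES.get(
        pedestrian_type,
        {"speed_factor": 0.5, "cover_brake": True},  # unknown => cautious
    )
    return {
        "target_mph": limit_mph * response["speed_factor"],
        "cover_brake": response["cover_brake"],
    }

print(plan("child", 30))           # slows well below the limit
print(plan("adult_commuter", 30))  # carries on at the limit
```

Whether encoded by hand like this or learned from data, the point is the same: the behaviour is a mapping from observed cues to a cautious response, which is exactly what experienced drivers do without thinking about it.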
[i]uponthedowns wrote:[/i]
I don't understand why we think making road transport autonomous is just round the corner when we haven't even been able to make railways autonomous, which should be several orders of magnitude easier than making cars and trucks autonomous.
Already done that one. DLR. I also know some people who work on systems for mainline trains, and those could be fully autonomous now on some lines, the driver is pretty much redundant in reality.
So what's stopping it happening? ASLEF?
Martinhutch
My view is that there is no way an autonomous vehicle can interpret the road and its surroundings as well as a good, experienced driver.
What about mathematics? Can a machine be better at calculations than a human? What about chess? Can a machine be better than the best human at chess? What about Go?
There are superhuman AIs which are better than humans in many ways. Driving will just be the next thing on the list machines will be better at.
In almost any critical life or death decision, I'll take an algorithm over human "judgement". There will come a time when humans will look back and think it reckless that we ever got behind the wheel of non-autonomous vehicles.
aracer
Are you saying that it's possible, both in terms of hardware and software, to create a system that can resolve...
a = "Child distracted by social media"
b = "Child looking carefully at traffic"
c = "Any one of an infinite number of similar looking scenarios"
...and be able to prove compliance 100% of the time in 100% of these scenarios with 100% reliability (because that is the level of verification required to get new technology onto a production vehicle)
I'll go back to the trains... automation would be easy, almost trivial. The fact that it hasn't happened should be a big alarm bell, considering the problem and the risk is many orders of magnitude bigger with cars.
As for what I do: can't say, as a lot of it is classified, but I've worked on autonomous vehicles that use LIDAR, machine vision and many other sensors. Would I still cycle to work if these vehicles were driving on the road? Hahahahahano.
I've also spent a lot of time in the automotive sector, which is safety critical, so I know how much verification and testing is required to change one line of code to modify the colour of the handbrake warning light!
Driverless cars...
You're living in a dream world, Neo 🙂
[i]FuzzyWuzzy wrote:[/i]
For example if you see a pedestrian looking like they might cross without looking properly (on the phone, drifting to kerb edge), do you:
a). Take over and slow down just in case (as you probably would if driving a normal car)
🙂 You're having a laugh. A lot of people on here might, because we're interested enough in driving safely to care about and pay attention to these things. I'm less than convinced that even an average driver would. Certainly the norm is that pedestrians stepping out is treated as a completely unpredictable thing and would be a successful defence.
[i]Edukator wrote:[/i]
So we make a calculation, adapt speed, and maybe cover the brake, and maybe give a wide berth, and maybe even put the hazards on to warn people behind us.
We're so good at it that people who really want to kill themselves generally choose a train rather than a car or a truck. We might stop or slow down enough to only injure them. In future suicidal people will be able to choose any vehicle, which will at least give train drivers a less traumatic time.
And again, good drivers might, but they're not the ones killing people mostly, and they're outnumbered. FWIW I also think you're wrong to assume that autonomous cars won't make those judgements and act accordingly.
[i]martinhutch wrote:[/i]
My view is that there is no way an autonomous vehicle can interpret the road and its surroundings as well as a good, experienced driver. Without knowing it, good drivers often spot hazard cues long before they are identifiable as actual hazards, and the brain is a very good filter of useful/irrelevant information.
I fundamentally disagree there - I don't see an obvious reason why computers shouldn't be able to perform that well at processing the information (with the advantage of far superior sensors).
However, it's likely that an autonomous vehicle will be significantly better than a large number of the other drivers who grace our roads.
Which is the fundamental truth
Expecting these vehicles to cut massively the number of casualties is unrealistic.
Not even when the vast majority of casualties are caused by poor drivers, or drivers failing to look properly etc.? What is the realistic chance of an autonomous car completely failing to spot a cyclist in front of them and running straight into them, or of one pulling out of a side road in front of a cyclist?
What about mathematics? Can a machine be better at calculations than a human? What about chess? Can a machine be better than the best human at chess? What about Go?
If you can turn my High Street into a board where grannies can only go forwards, not backwards, and schoolkids move three steps forward and one to the side, then you're onto a winner. Chess is a game with very simple rules and multiple possibilities all deriving from those simple rules. Live traffic is a game with some rules, which are frequently not followed in a variety of ways, and pieces that sprint onto the board mid-game or hide behind roadsigns.
I'm sure the ability of these autonomous vehicles will improve. The tech is still relatively in its infancy. But unlike chess, where the ability to beat humans simply meant outmatching them in terms of pure calculations, computers have to find a way to get better than a species with millions of years of adaptations designed to detect and predict hazards and sift out the genuine threat from all the foliage.
[i]uponthedowns wrote:[/i]
So what's stopping it happening? ASLEF?
I don't know exactly, but I suspect that's part of it. Though there's also the issue that trains are held to a far higher safety standard than roads and there's the perception that having a driver as well as the autonomous system results in another layer of protection.
I note that I'm also not suggesting that the drivers do nothing - AFAIK they still do a lot of the easy stuff themselves, it's just that the autonomous systems would take over in the event of a safety related issue. Certainly there's not a mainline track in the country where a driver heart attack should result in a crash even without a dead man's handle.
[i]ndthornton wrote:[/i]
aracer
Are you saying that it's possible, both in terms of hardware and software, to create a system that can resolve...
a = "Child distracted by social media"
b = "Child looking carefully at traffic"
c = "Any one of an infinite number of similar looking scenarios"
...and be able to prove compliance 100% of the time in 100% of these scenarios with 100% reliability (because that is the level of verification required to get new technology onto a production vehicle)
I'm saying they can not only do better than the average driver right now (the one who isn't even paying any attention to the child), but that even if we're not already there now it can do better than even the best driver. Though there we go again with the attempt to run faster than the bear rather than just run faster than the other bloke. The sort of technology you're applying those rules to is also the sort of thing which adds another point of failure rather than replacing the biggest existing weakness.
I'll go back to the trains... automation would be easy, almost trivial. The fact that it hasn't happened should be a big alarm bell, considering the problem and the risk is many orders of magnitude bigger with cars.
You appear to be ignoring that they can and are. DLR.
In almost any critical life or death decision, I'll take an algorithm over human "judgement".
Whose judgement designed the algorithm?
[i]dissonance wrote:[/i]
In almost any critical life or death decision, I'll take an algorithm over human "judgement".
Whose judgement designed the algorithm?
Somebody* with plenty of time to consider the options from a full set of information rather than trying to decide in a split second from a limited subset.
*actually we're talking multiple levels of review here over the course of months
What about mathematics? Can a machine be better at calculations than a human? What about chess? Can a machine be better than the best human at chess? What about Go?
If you can turn my High Street into a board where grannies can only go forwards, not backwards, and schoolkids move three steps forward and one to the side, then you're onto a winner. Chess is a game with very simple rules and multiple possibilities all deriving from those simple rules.
That wasn't really my point. In really simple terms, driving is about judging speed and distance. Every conceivable variable that a human is considering will be judged by a computer that isn't guessing, or if it is, it's guessing based on better information than the average driver has, and it won't get distracted, tired or angry.
Autonomous cars will be anticipating every possible scenario and variable and using information a human doesn't have access to - a really simple example, people don't have night vision or 360 degree vision.
computers have to find a way to get better than a species with millions of years of adaptations designed to detect and predict hazards and sift out the genuine threat from all the foliage.
Lol. It's almost like you think humans are good at driving.
My main worry with self-driving vehicles is not with whatever new perfectly functioning models are put on the roads, it's what happens when stuff goes wrong. The cars will be dependent on a mass of sensors feeding signals into a computer. We have examples of extremely well maintained machines where this is the case: planes. And they crash just because of a bit of ice in a tube. And even if there are still two pilots on the plane, it still crashes because the pilots get confused when they have to go back to manual.
Take a look at current cars; most of the problems are electronic. "Fail safe" will be programmed in, you say. But will it? Will the car stop every time there's a bit of mud on the lens or the radar signal is jammed? Will it just slow down or stop in the middle of the lane, or will it continue? If it does something, will all the other cars make the right decisions about what to do? Because we're talking about lots of them, and maybe they all have their radars messed up by the same parasite. When one car runs a red light because it got confused, will all the others respond appropriately?
I can see a future for autonomous traffic systems but I think that speed limits will have to be dropped to make it work. Autonomous trams work at low speed because they are slow enough and cautious enough to be safe around pedestrians. Allow autonomous vehicles to go as fast as humans go currently and there will be fatalities: maybe fewer than with humans at the wheel, but a whole lot less acceptable.
Whose judgement designed the algorithm?
There isn't "an algorithm". It's machine learning, completely different kettle of fish.
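To illustrate the difference: in a machine-learning system nobody hand-writes the decision rule; generic update rules fit it to labelled examples. A toy perceptron in plain Python makes the point (the features, training data and labels are all invented for illustration):

```python
# Toy perceptron: the "algorithm" is just generic update rules; the
# actual decision boundary is learned from labelled examples rather
# than hand-coded. Features and data are invented for illustration.

def train(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Features: (distance_to_kerb_m, facing_road). Label: 1 = may step out.
X = [(0.2, 1), (0.3, 0), (2.0, 1), (3.0, 0), (0.1, 0), (2.5, 1)]
y = [1, 1, 0, 0, 1, 0]
w, b = train(X, y)

print([predict(x, w, b) for x in X])  # → [1, 1, 0, 0, 1, 0], matching y
```

The `train` function is the same whatever the task; only the examples change, which is why "whose judgement designed the algorithm" doesn't quite capture how these systems are built.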
Edukator
My main worry with self-driving vehicles is not with whatever new perfectly functioning models are put on the roads, it's what happens when stuff goes wrong.
Limp mode.
Will the car stop every time there's a bit of mud on the lens or the radar signal is jammed? Will it just slow down or stop in the middle of the lane, or will it continue?
Self cleaning lenses aside, perhaps in the event of a sensor failure the car might prompt the useless meatbag in the driver's seat to put down his or her phone and drive the car?
I can see a future for autonomous traffic systems but I think that speed limits will have to be dropped to make it work.
I know you live in eternal hope of lowered speed limits, Edukator, but AVs will mean speed limits are increased massively. Current speed limits are based around general human reaction times and abilities. Once this tech becomes commonplace it'll only be old, dumb human drivers who have to adhere to current limits. Everyone else will be able to go as fast as their tax band allows.