Quote all the articles that have the exact same content:
"In all cases, there was only light damage and no injuries. And not once did a Google car cause the accident, he wrote."
Emphasis - ...not once did a Google car cause the accident...
Is it perfect? No, of course not. But it's already safer than human drivers. One could easily make an argument that it would be immoral to even allow faulty humans to continue to drive when a safer alternative is available.
Yes, and that 1.7 million miles figure you quote isn't self-driving, it's a combination of self-driving and human miles, the sum of the odometers rather than an actual figure of how many miles they've driven on their own without assistance. But you're perfectly happy to quote it as part of their safety record.
In reality - Google claims they've never had the self-driving car cause an accident, but there's at least one incident where witnesses say otherwise. And for all their bluster, they also refuse to divulge any data, and keep falling back on the confidentiality of accident reports in California - which is true, but disclosure is at the discretion of the involved parties.
If they're willing to make statements absolving their product of any responsibility, but not release any information, that doesn't tell us anything specific, but it does indicate that there's something about that information they don't want public for some reason - which is a pretty big tell when it comes to these matters. If it's truly provable that their product is not at fault - which it would be - they could very easily make themselves completely bulletproof, trivially, and end all speculation on the matter. So why would they allow damaging speculation that could influence lawmakers (and their eventual bottom line) to continue?
Is it perfect? No, of course not. But it's already safer than human drivers. One could easily make an argument that it would be immoral to even allow faulty humans to continue to drive when a safer alternative is available.
You are 100% wrong. This will be true in the future, but right now, that's simply wrong. If it can only be safer under specific circumstances, can't deal with common problems that human drivers handle the majority of the time, and isn't safe to use unless conditions are ideal (Google currently refuses to test them in rain due to safety concerns, for example), then it isn't safer than humans.
Tossed up talking about morals, decided against it.
Yes, and that 1.7 million miles figure you quote isn't self-driving, it's a combination of self-driving and human miles, the sum of the odometers rather than an actual figure of how many miles they've driven on their own without assistance. But you're perfectly happy to quote it as part of their safety record.
In reality - Google claims they've never had the self-driving car cause an accident, but there's at least one incident where witnesses say otherwise. And for all their bluster, they also refuse to divulge any data, and keep falling back on the confidentiality of accident reports in California - which is true, but disclosure is at the discretion of the involved parties.
If they're willing to make statements absolving their product of any responsibility, but not release any information, that doesn't tell us anything specific, but it does indicate that there's something about that information they don't want public for some reason - which is a pretty big tell when it comes to these matters. If it's truly provable that their product is not at fault - which it would be - they could very easily make themselves completely bulletproof, trivially, and end all speculation on the matter. So why would they allow damaging speculation that could influence lawmakers (and their eventual bottom line) to continue?
So your evidence is an anecdotal conspiracy theory. And even if it's true, it's still just one single accident.
Is it perfect? No, of course not. But it's already safer than human drivers. One could easily make an argument that it would be immoral to even allow faulty humans to continue to drive when a safer alternative is available.
You are 100% wrong. This will be true in the future, but right now, that's simply wrong. If it can only be safer under specific circumstances, can't deal with common problems that human drivers handle the majority of the time, and isn't safe to use unless conditions are ideal (Google currently refuses to test them in rain due to safety concerns, for example), then it isn't safer than humans.
Tossed up talking about morals, decided against it.
If we banned human drivers from the road, those specific circumstances would largely disappear. Those weird situations are almost entirely due to faults in humans.
Not going in the rain? OK, that's a pretty serious limitation. However, think about this: suppose we immediately banned human driving and allowed only self-driving cars. Ignoring the fact that there aren't yet enough such cars to drive everyone, think of the number of human lives that would be saved. Even if everyone had to stay put during the rain, it would be but a slight inconvenience that would prevent many, many deaths. Imagine what this could do in those countries where basically nobody obeys traffic laws.
Even the shitty imperfect version is so vastly superior to the average human driver.
The issue isn't only "shitty human drivers" but also wonky pedestrian detection last I knew. Kids running into the street after a ball...
Even if you remove every human driver from the road, you can't remove every human from the roadway. And even if it's that 5-year-old kid's own "fault" he got run over by one of dem dere Goggle cars, it'll still be a media shit storm at best.
I'm all for automating the roadways but it's still premature for a variety of reasons.
Shit, I'm all for just replacing roadways with rails already wherever possible (with a very aggressive interpretation of "possible.") Way too much of this planet is paved as it is.
The issue isn't only "shitty human drivers" but also wonky pedestrian detection last I knew. Kids running into the street after a ball...
From the non-Google people who have ridden in them and written/talked about it, the cars are paranoid about hitting pedestrians/kids. They are extremely overcautious, and stop on a dime if there's any chance of a pedestrian collision.
The issue isn't only "shitty human drivers" but also wonky pedestrian detection last I knew. Kids running into the street after a ball...
From the non-Google people who have ridden in them and written/talked about it, the cars are paranoid about hitting pedestrians/kids. They are extremely overcautious, and stop on a dime if there's any chance of a pedestrian collision.
I can't drum up a link at present, but while I agree with you that Google cars are spec'ed as paranoid drivers, I know I've read that pedestrian detection isn't quite reliable. Since I can't find a source right now it's moot. Maybe it's a solved problem.
Google knows that scrutiny on this is gonna be HUGE. I've read a lot of naysaying about it.
Google already tells me what restaurants are nearby at lunchtime, offers to listen to my television and serve up context-sensitive data... do I want them to know exactly where I'm driving at all times, too? I dunno... the panopticon "we're" building is more than a little scary.
So your evidence is an anecdotal conspiracy theory. And even if it is true, is still just one single accident.
I'm sorry, did I need strong evidence to disprove "This period with humans driving and sometimes AI driving definitely proves AI is better than humans!"? Without even an estimate of how much of that was the car driving?
Get your hand off it. You offer zero evidence, and we all know that no evidence that could be offered would be good enough, because you'd always find an out. Don't even get me started on the weirdness of someone who has said more than once (though obviously in more words) that the tech press are mostly clueless suckers who rarely know shit now citing the tech press as evidence, on a topic where they have no more access than you do - namely, Google PR statements. It's also a position that some prominent robotics and autonomous vehicle researchers are taking: that Google REALLY needs to start releasing actual data, rather than spin.
If we banned human drivers from the road, those specific circumstances would largely disappear. Those weird situations are almost entirely due to faults in humans.
That's so wrong it's not even wrong, it's just utter nonsense. The problem isn't with human drivers; the problem is that the technology doesn't fucking work yet, and according to the people making the thing, the absolute earliest prediction is 2020, give or take a few years.
What, you're going to ban human drivers from the road, and that's somehow going to mean that the Google cars can suddenly, instantly drive on roads that haven't already been thoroughly pre-scanned for lidar, as required? Or that they'll suddenly, instantly fix the problem that they can't identify harmless obstructions (like, say, light trash on the road), causing them to swerve around wildly?
To borrow the words of someone smarter than me: "If your opinion is indistinguishable from trolling, your opinion is stupid and likely worthless."
Not going in the rain? Ok, that's a pretty serious limitation. However, think about this. If we immediately banned human driving and allowed only self-driving cars. Ignoring the fact that there aren't yet enough such cars available to drive everyone, think of the numbers of human lives that would be saved. Even if everyone had to stay put during the rain, it would be but a slight inconvenience that would prevent the deaths of many many people. Imagine what this could do in those countries where basically nobody obeys traffic laws.
Allow me to retort - no. I'm not going to think about that right now, because it's an irrelevant hypothetical with zero bearing on the situation. What if everyone was banned from driving in the rain? Great! If a slow loris shot out of my arse every time I farted, they wouldn't be endangered, either. Judging by my diet lately, we'd be drowning in the poison-elbowed little bastards.
Also, that's literally a single problem among a great many. If you're really going to be thorough about banning people from things the Google car can't handle yet, you have to ban any driving under anything but perfect conditions, on a small handful of roads (relative to all the roads human drivers can drive on), with no driving in parking garages or open parking lots, only at certain times of day, and only on roads without any potholes, damage, roadworks, uncovered manholes, construction zones, or other non-mapped changes (i.e., new traffic lights and signs, barring some stop signs in non-complex situations, etc.) that have appeared since the last scan - and you're not allowed to park anywhere, ever. Under those conditions, I'd wager humans would be just as safe, because you're making it practically impossible for anyone to drive anywhere.
And before you start on with the "Oh, you just fear change, you just don't like technology, blah blah luddite" track: these are all things that the Google cars can't handle at this time, as volunteered by Chris Urmson, the head of the self-driving car team.
I'm going to cut straight to the chase for a moment, do you have an argument that amounts to something other than "I'm correct, if you control for all the variables that would make me incorrect"? Because if this is what you've got, then let's just not and say we did, ey?
Fewer cars driving fewer miles, so even one accident is going to be statistically significant.
That's not... entirely accurate.
Frankly, with how little sleep I was running on at that point, you're lucky it's even a complete sentence. Still, fill me in. I won't bother with being pedantically correct about the term "statistically significant", because we both know that would just be trying to save a mistake.
I can't drum up a link at present, but while I agree with you that Google cars are spec'ed as paranoid drivers, I know I've read that pedestrian detection isn't quite reliable. Since I can't find a source right now it's moot. Maybe it's a solved problem.
You're both kind of right. Rym's right - they're super paranoid about anything even remotely human. On the other hand, you're correct - their pedestrian detection is so easy to fool and send into paranoid mode, because by their perception, literally any vertical, column-ish shaped block of pixels is a human. If you whack a long strip of foamcore on a skateboard and roll it past, it'll think that's a human.
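To make that failure mode concrete, here's a deliberately crude sketch of a "tall, narrow thing = pedestrian" heuristic. This is purely a caricature of the behavior described above, not Google's actual detector; the size thresholds are invented for illustration.

```python
# Toy caricature of the over-eager pedestrian heuristic described above.
# NOT a real detector - just shows why "person-sized vertical column"
# triggers on anything person-shaped, foamcore on a skateboard included.

def looks_like_pedestrian(width_m: float, height_m: float) -> bool:
    """Naive heuristic: roughly person-sized and clearly taller than wide."""
    person_width = 0.3 <= width_m <= 1.0
    person_height = 1.0 <= height_m <= 2.2
    upright = height_m > 2 * width_m
    return person_width and person_height and upright

print(looks_like_pedestrian(0.5, 1.7))  # an actual person -> True
print(looks_like_pedestrian(0.4, 1.5))  # foamcore strip on a skateboard -> also True
print(looks_like_pedestrian(2.0, 1.0))  # a wide crate -> False
```

The point is that a shape-only heuristic can't distinguish the two True cases, which is exactly the "anything column-shaped is a human" paranoia.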
I suddenly want to know what it thinks when that column is acting oddly, for example, cartwheels. Obviously, it still knows there's something there, but what does it think that something is?
Oh yeah, I didn't intend to say that - I was basically saying that the number of incidents compared to the number of cars and miles driven looks bigger, but isn't necessarily worse, because we're looking at such vastly different sample sizes. With such a vast difference, it's not so easy to compare as saying "Well, X per Y per Z is this for one, and that for the other."
Unless, I suppose, you're going to scale down the numbers for human driven cars to an equivalent sample size. I have no idea how you'd go about that properly.
You do remember how successful the Furby was on release? It was crazy expensive for a toy, and everyone from adults to children was getting them. Our family had two.
I wouldn't say "it's out"; however, it has been specified for developers. But yes, it's great. I especially like Now on Tap at any point of use. The contextual nature is pretty great.
So they are now releasing reports. They admittedly don't have a ton of detail, but supposedly cover every incident involving the Google cars. google.com/selfdrivingcar/reports/
If it's a choice, then fine. If it's not, then it's kinda fucked up in a "don't mess with my bits" sorta way.
The web developer can deny transforms directly in the page. The user can also turn it off or on. Really good if you're in a place with poor connectivity but still want to read the news or click a link. For example, if I wanted to visit herecomestheairplane.co and just wanted to get to the opt-in survey, I wouldn't have to load all the crazy awesome site bits and would still be able to reserve my premium spot for a pre-chewed food service, even if I'm in a tunnel or a dead zone for 4G bands.
Google is doing a survey of Google Voice users. (You'll find a link on the Google Voice website.) I hope this doesn't mean it's going away.
*does a search for it*
It's just another generic "give us your feedback" survey. (It's right here, by the way.) Lots of companies do this. I wouldn't be too worried about it.
You can get a free Chromecast for filling it out, so if you're ever interested in one...
Comments
"In all cases, there was only light damage and no injuries. And not once did a Google car cause the accident, he wrote."
Emphasis - ...not once did a Google car cause the accident...
Is it perfect, no of course not. But it's already safer than human drivers. One could easily make an argument that it would be immoral to even allow faulty humans to continue to drive when a safer alternative is available.
In reality - Google claims they've never had the self driving car cause an accident, but there's at least one incident where witnesses say otherwise, and for all their bluster, they also refuse to divulge any data, and keep falling back on the confidentiality of accident reports in California - which is true, but is at the discretion of the involved parties.
If they're willing to make statements absolving their product of any responsibility, but not release any information. that doesn't tell us anything specific, but it does indicate that there's something about that information they don't want public for some reason - which is a pretty big tell, when it comes to these matters. If it's truly provable that their product is not at fault - which it would be, they could very easily make themselves completely bulletproof, trivially, and end all speculation on the matter - then why would they allow damaging speculation that could influence lawmakers(and their eventual bottom line) to continue? You are 100% wrong. This will be true in the future, but right now, that's simply wrong. If it can only be safer under specific circumstances, can't deal with common problems that human drivers can the majority of the time, and isn't safe to use unless conditions are ideal(Google refuses to test them in rain due to saftey concerns at this time, for example), then it isn't safer than humans.
Tossed up talking about morals, deciding against it.
Not going in the rain? Ok, that's a pretty serious limitation. However, think about this. If we immediately banned human driving and allowed only self-driving cars. Ignoring the fact that there aren't yet enough such cars available to drive everyone, think of the numbers of human lives that would be saved. Even if everyone had to stay put during the rain, it would be but a slight inconvenience that would prevent the deaths of many many people. Imagine what this could do in those countries where basically nobody obeys traffic laws.
Even the shitty imperfect version is so vastly superior to the average human driver.
Even if you remove every human driver from the road, you can't remove every human from the roadway. And even if it's that 5 year old kid's own "fault" he got run over by one of dem dere Goggle cars, it'll still be a media shit storm at best.
I'm all for automating the roadways but it's still premature for a variety of reasons.
Shit, I'm all for just replacing roadways with rails already wherever possible (with a very aggressive interpretation of "possible.") Way too much of this planet is paved as it is.
Google knows that scrutiny on this is gonna be HUGE. I've read a lot of nay saying about it.
Google already tells me what restaurants are nearby at lunch time, offers to listen to my television and offer context sensitive data.. do I want them to know exactly where I'm driving at all times, too? I dunno.. the panopticon "we're" building is more than a little scary.
Get your hand off it. You offer zero evidence, and we all know that no evidence that could be offered would be good enough, because you'd always find an out. Don't even get me started on the weirdness that the person who has said more than once (Though obviously in more words) that the tech press are mostly clueless suckers who rarely know shit, is citing the tech press as evidence, on a topic where they have no more access than you do, which is Google PR statements. It's also a position that some prominent robotics and autonomous vehicles researchers are taking - that google REALLY needs to start releasing actual data, rather than spin. That's so wrong it's not even wrong, it's just utter nonsense. The problem isn't with human drivers, the problem is that the technology doesn't fucking work yet, and according to the people making the thing, the absolute earliest prediction is 2020, give or take a few.
What, you're going to ban human drivers from the road, and that's somehow going to mean that the Google cars can suddenly, instantly drive on roads that they haven't already thoroughly pre-scanned for lidar as is required, or that they'll suddenly, instantly fix the problem that they can't identify harmless obstructions(like, say, light trash on the road) causing them to swerve around wildly?
To borrow the words of someone smarter than me: "If your opinion is indistinguishable from trolling, your opinion is stupid and likely worthless." Allow me to retort - No. I'm not going to think about that right now, because it's an irrelevant hypothetical with zero bearing on the situation. What if everyone was banned from driving in the ran? Great! If a slow loris shot out of my arse every time I farted, they wouldn't be endangered, either. Judging by my diet lately, we'd be drowning in the poison-elbowed little bastards.
Also, that's literally a single problem among a great many. If you're really going to be complete about banning people from things the google car can't handle yet, you have to ban any driving under anything but the perfect conditions, on a small handful of roads(relative to all the road that human drivers can drive on), no driving in parking garages or open parking lots, and only at certain times of day, and only on roads without any potholes, damage, roadworks, uncovered manholes or other non-mapped changes(Ie, new traffic lights, signs barring some stop signs in non-complex situations, etc etc) that have appeared since the last scan, construction zones, and you're not allowed to park anywhere ever. Under those conditions, I'd wager humans would be just as safe. Because you're making it practically impossible for anyone to drive anywhere.
And before you start on with the "Oh, you just fear change, you just don't like technology, blah blah luddite" track, these are all things that the google cars can't handle at this time, as volunteered by Chris Urmson, the head of the Self driving car team.
I'm going to cut straight to the chase for a moment, do you have an argument that amounts to something other than "I'm correct, if you control for all the variables that would make me incorrect"? Because if this is what you've got, then let's just not and say we did, ey? You are still 100% wrong, but feel free to say it again. Frankly, with how little sleep I was running on at that point, you're lucky it's even a complete sentence. Still, fill me in. I won't bother with being pedantically correct about the term "statistically signifigant", because we both know that would just be trying to save a mistake. You're both kind of right. Rym's right - they're super paranoid about anything even remotely human. On the other hand, you're correct - their pedestrian detection is so easy to fool and send into paranoid mode, because by their perception, literally any vertical column-ish shaped block of pixels in a human. If you whack a long strip of foamcore on a skateboard and roll it past, it'll think that's a human.
I suddenly want to know what it thinks when that column is acting oddly, for example, cartwheels. Obviously, it still knows there's something there, but what does it think that something is?
Intuitively, if you're testing whether a coin is fair, 3 heads on 4 flips doesn't tell you much. 750 heads on 1000 flips is much more convincing.
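That intuition is easy to check numerically: under a fair coin, 3 heads in 4 flips happens all the time, while 750 in 1000 is essentially impossible. A quick sketch with only the standard library:

```python
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of a result at least this lopsided."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binom_tail(3, 4))      # 0.3125: 3+ heads in 4 flips is unremarkable
print(binom_tail(750, 1000)) # vanishingly small: overwhelming evidence of bias
```

Same observed proportion of heads, wildly different evidential weight - which is exactly why raw rates from tiny samples shouldn't be compared directly against rates from huge ones.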
They done fucked up.
"Dispatch the helicopters..."
google.com/selfdrivingcar/reports/
What do you guys think? Good thing? Bad thing?
bleh
Youtube Gaming
Twitch's response on Twitter is pretty good.