- cross-posted to:
- fuckcars@lemmy.world
Just to be clear, I do think the obvious solution to terrible things like this is vastly expanded public transit so that people don’t have to rely on cars to get everywhere, not overhyped technology and driving aids that are still only marginally better than the driver. I just thought the article was interesting.
They only have to work better and more consistently than humans to be a net positive, which I believe most of these systems already do by a wide margin. Psychologically it’s harder to accept a mistake from technology than from a human because of the lack of control, but if the goal is to save lives, these safety systems accomplish that.
Evidence, please.
I have literally been in thousands of driving situations where a human has not randomly driven into a tree.
You are making a claim here: that these AI systems are safer than humans. There is at least one clear counterexample to your claim (which I cited - https://youtu.be/frGoalySCns if anyone wants to try to figure out what this AI was doing), and there are others, including cases where they have driven into the sides of tractor trailers. I assume you will make an argument about aggregates, but the sample size we have for these AI driving systems is many orders of magnitude smaller than the sample size we have for humans. And having now seen years of these incidents continue to pile up, I believe there needs to be much more rigorous research and testing before you can validly claim these systems are somehow safer.
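To make the aggregates point concrete, here is a rough sketch in Python (using scipy; every count and mileage in it is made up for illustration, not taken from any real fleet or dataset) of why exposure matters: an exact (Garwood) Poisson confidence interval for a crash rate collapses to nearly a point with billions of miles of human data, but stays wide on a comparatively tiny AV sample.

```python
# Rough sketch: why sample size matters when comparing crash rates.
# All counts and mileages below are invented for illustration only.
from scipy.stats import chi2

def poisson_rate_ci(events: int, exposure: float, alpha: float = 0.05):
    """Exact (Garwood) confidence interval for a Poisson rate,
    expressed as events per unit of exposure (here: per million miles)."""
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return lo / exposure, hi / exposure

# Hypothetical figures: humans drive ~3 trillion miles/year (3,000,000
# million-miles) with ~5 million crashes; an AV fleet logs ~50 million
# miles (50 million-miles) with 60 incidents.
human_lo, human_hi = poisson_rate_ci(events=5_000_000, exposure=3_000_000)
av_lo, av_hi = poisson_rate_ci(events=60, exposure=50)

print(f"human crashes per million miles: {human_lo:.3f} to {human_hi:.3f}")
print(f"AV crashes per million miles:    {av_lo:.3f} to {av_hi:.3f}")
```

On numbers like these the human interval is razor thin (about 1.665 to 1.668) while the AV interval spans roughly 0.9 to 1.5, so a point estimate of “safer” can easily flip as more miles accumulate.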
It’s all in how you combine the numbers, and yes, we need a lot more progress, but… when was the last time an AI caused a collision because it was texting? How often does a self-driving vehicle threaten or harm others with road rage?
I don’t know what the numbers are, but human driving sets a very low bar, so it’s easy to believe even today’s inadequate self-driving is safer.
This is the same anecdotal appeal we get over and over while AI cars drive into firetrucks and trees in ways even the most basic licensed driver would not. Then we are told these are safer because people text or become distracted. I am over this garbage. Get real numbers and find a way to do it that doesn’t put me and my family at risk.
I always said this would be the problem. Self-driving cars will never be perfect; they’ll always have different failure modes than human drivers. So at what point is the increased safety worth the trade-off of new ways to die? Are we there yet?
At what point is it acceptable to the rest of us? Humans will always prefer the risk they know over the one they don’t, even when that preference is objectively wrong.
There are six SAE-defined levels of driving automation (0–5). At the lower levels of automation, the very article you are responding to quotes this evidence for you. Here is another article that goes deeper into it; I haven’t read it all, so feel free to draw your own conclusions, but this data has been available and well reported on for many years. https://www.consumeraffairs.com/automotive/autonomous-vehicle-safety-statistics.html