• andyburke@fedia.io · 5 hours ago

    A Tesla in FSD randomly just veered off the road into a tree. There is video. It makes no sense; it is very difficult to work out why the AI thought that looked like a good move.

    The tools this author says we have do not work the way people claim they do.

    • AA5B@lemmy.world · 4 hours ago

      Tesla gets telemetry that should show exactly what happened. We need to require that it be collected for each accident so someone can look for patterns and improvements.

      But I’ll agree with the other guy that it’s still quite possible this is safer than human drivers already. It makes news because it seems like a ridiculous failure. But what happens when you compare it to the number of accidents caused by people falling asleep, getting distracted, or letting their rage out?

      The critical data is the cost in human lives, and it’s quite possible for technology to fail spectacularly while saving lives overall.

      • aesthelete@lemmy.world · 4 hours ago

        Tesla self-driving failures are in a class of their own because the asshat in charge didn’t want to outfit the cars with the sensors needed to provide reasonable self-driving capabilities.

    • MonkRome@lemmy.world · 5 hours ago

      They only have to work better and more consistently than humans to be a net positive, which I believe most of these systems already do by a wide margin. Psychologically it’s harder to accept a mistake from technology than from a human because of the lack of control, but if the goal is to save lives, these safety systems accomplish that.

      • andyburke@fedia.io · 5 hours ago

        Evidence, please.

        I have literally been in thousands of driving situations where a human has not randomly driven into a tree.

        You are making a claim here: that these AI systems are safer than humans. There is at least one clear counterexample to your claim (which I cited, https://youtu.be/frGoalySCns, if anyone wants to try to figure out what this AI was doing), and there are others, including ones where they have driven into the sides of tractor trailers. I assume you will make an argument about aggregates, but the sample size we have for these AI driving systems is many orders of magnitude smaller than the sample size we have for humans. And having now watched years of these incidents pile up, I believe there needs to be much more rigorous research and testing before you can make valid claims that these systems are somehow safer.
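
        To make the sample-size point concrete, here is a rough back-of-the-envelope sketch. Every count and mileage in it is invented purely for illustration; the point is only how wide the uncertainty stays when the AI-driven mileage is tiny compared to human mileage.

        ```python
        # Rough sketch (not real data): comparing crash rates when one side has
        # enormous exposure and the other side has very little.
        # All counts and mileages below are hypothetical, for illustration only.
        from scipy.stats import chi2

        def poisson_rate_ci(events: int, exposure: float, conf: float = 0.95):
            """Exact (Garwood) confidence interval for a Poisson event rate,
            given `events` observed over `exposure` units (e.g. million miles)."""
            alpha = 1 - conf
            lower = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
            upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
            return lower / exposure, upper / exposure

        # Hypothetical: humans with huge exposure, AI drivers with very little.
        human_ci = poisson_rate_ci(events=200_000, exposure=100_000)  # per million miles
        av_ci = poisson_rate_ci(events=5, exposure=3)                 # per million miles

        print("human crash-rate CI per million miles:", human_ci)
        print("AI crash-rate CI per million miles:   ", av_ci)
        # The human interval is razor thin; the AI interval spans values both
        # well below and well above the human rate, so data like this cannot
        # settle the "safer than humans" claim either way.
        ```

        With numbers like these, the human rate is pinned down almost exactly while the AI interval still covers everything from "much safer" to "much more dangerous", which is exactly why the aggregate comparison is premature.
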

        • AA5B@lemmy.world · 4 hours ago (edited)

          It’s all in how you combine the numbers, and yes, we need a lot more progress, but… when was the last time an AI caused a collision because it was texting? How often does a self-driving vehicle threaten or harm others with road rage?

          I don’t know what the numbers are, but human driving sets a very low bar, so it’s easy to believe even today’s inadequate self-driving is safer.

          • andyburke@fedia.io · 4 hours ago

            This is the same anecdotal appeal we get over and over while AI cars drive into firetrucks and trees in ways even the most basic licensed driver would not. Then we are told these are safer because people text or become distracted. I am over this garbage. Get real numbers and find a way to do it that doesn’t put me and my family at risk.