• froztbyte@awful.systems · 1 day ago

    pin it on the AI developers or something and not the practice that used it and didn’t double check its work

    okay so, what, you’re saying that all those people who say “don’t employ the bullshit machines in any critically important use case” have a point?

    but at the same time, you still think the creators (who are very much still building this shit with years of feedback about the problems) are just innocent smol beans?

    my god, amazing contortions. your brain must be so bendy!

    • YourNetworkIsHaunted@awful.systems · 1 day ago

      Yeah. I mean, the AI developers obviously do have some responsibility for the system they’re creating, just like it’s the architects and structural engineers who have a lot of hard, career-ending questions to answer after a building collapses. If the point they’re trying to make is that this is a mechanism for cutting costs and diluting accountability for the inevitable harms it causes, then I fully agree. The best solution would be to ensure that responsibility doesn’t get diluted, and to say that all parties involved in the development and use of automated decision-making systems are jointly and severally accountable for the decisions they make.