• TrackinDaKraken@lemmy.world · 2 months ago

    The fact that AI is “not perfect” is a HUGE FUCKING PROBLEM. Idiots across the world, and people who we’d expect to know better, are making monumental decisions based on AI that isn’t perfect, and routinely “hallucinates”. We all know this.

    Every time I think I’ve seen the lowest depths of mass stupidity, humanity goes lower.

    • Skyline969@piefed.ca · 2 months ago

      Think of the dumbest person you know. Not that one. Dumber. Dumber. Yeah, that one. Now realize that ChatGPT has said “you’re absolutely right” to them no less than a half dozen times today alone.

      If LLMs weren’t so damn sycophantic, I think we’d have a lot fewer problems with them. If they could be like “this could be the right answer, but I wasn’t able to verify” and “no, I don’t think what you said is right, and here are reasons why”, people would cling to them less.

      • Canonical_Warlock@lemmy.dbzer0.com · 2 months ago

        If LLMs weren’t so damn sycophantic,

        Has anyone made a nonsycophantic chat bot? I would actually love a chatbot that would tell me to go fuck myself if I asked it to do something inane.

        Me: “What’s 9x5?”

        Chatbot: “I don’t know. Try using your fingers or something?”

        Edit: Wait, this is just GLaDOS.

        • Darkenfolk@sh.itjust.works · 2 months ago

          I am not a chatbot, but I can do daily “go fuck yourself’s” if your interested for only 9,99 a week.

          14,95 for premium, which involves me stalking your onlyfans and tailor fitting my insults to your worthless meat self.

          • Slashme@lemmy.world · 2 months ago

            I am not a chatbot

            Citation needed

            if your interested

            Ah, no, that’s a human error. Not a bot.

        • Zos_Kia@jlai.lu · 2 months ago

          Honestly, Claude is not that sycophantic. It often tells me I’m flat-out wrong, and it generally challenges a lot of my decisions on projects. One thing I’ve also noticed on 4.6 is how often it will tell me “I don’t have the answer in my training data” and offer to do a web search rather than hallucinating an answer.

            • Zos_Kia@jlai.lu · 1 month ago

              Yes, I saw that benchmark and was honestly not surprised by the results. It seems that Anthropic really focused on those issues, above and beyond what was done in other labs.

    • Restaldt@lemmy.world · 2 months ago

      If you thought people were dumb before LLMs… just know that now those people have offloaded what little critical thinking they were capable of to these models.

      The dumbest people you know are getting their opinions validated by automated sycophants.