• NutWrench@lemmy.world
    5 months ago

    The whole point of the Turing test is that you should be unable to tell whether you’re interacting with a human or a machine. Not 54% of the time. Not 60% of the time. 100% of the time. Consistently.

    They’re changing the conditions of the Turing test to promote an AI model that would get an “F” on any school test.

    • bob_omb_battlefield@sh.itjust.works
      5 months ago

      But you have to select whether it was human or not, right? So if you genuinely can’t tell, you’d expect 50%. That’s different from “I can tell, and I know this is a human” but being wrong.

      Now that we know the bots are this good, I’m not sure how people will decide how to answer these tests. They’ll encounter something that seems human-like and essentially guess based on minor clues, so there will be inherent randomness. If something were a really crappy bot, it would never fool anyone and the result would be 0%.
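
      The 50% baseline above can be checked with a quick simulation. This is just a sketch of the commenter's point, assuming a judge who can't distinguish at all and flips a coin, with humans and bots equally likely as partners (both assumptions, not part of any real study design):

      ```python
      import random

      random.seed(0)

      # Hypothetical setup: half the conversation partners are human,
      # half are bots, and the judge guesses purely at random.
      trials = 100_000
      correct = 0
      for _ in range(trials):
          partner_is_human = random.random() < 0.5
          guess_human = random.random() < 0.5  # coin-flip judge
          if guess_human == partner_is_human:
              correct += 1

      accuracy = correct / trials
      print(f"{accuracy:.2f}")  # hovers around 0.50
      ```

      So a score near 50% is exactly what total indistinguishability looks like under forced binary choice, while 0% would mean the judge could reliably spot the bot every time.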