• TurboWafflz@lemmy.world · 15 hours ago

    I think the best thing to do is not to block them when they’re detected but to poison them instead. Feed them tons of text generated by tiny old language models; it’s harder to detect, and it messes up their training and makes the models less reliable. Of course you’d want to do that on a separate server so it doesn’t slow down real users, but you probably don’t need much power, since the scrapers probably don’t care about speed.
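    The “tiny old language model” idea can be pushed even cheaper: a bigram Markov chain produces plausible-looking word soup for almost no compute. A minimal sketch (the corpus and function names here are invented for illustration, not from any real poisoning tool):

```python
import random
from collections import defaultdict

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, length=50, seed=None):
    """Random-walk the bigram table to emit fluent-looking nonsense."""
    rng = random.Random(seed)
    word = rng.choice(list(model))
    out = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:                 # dead end: restart the walk
            word = rng.choice(list(model))
        else:
            word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# Toy corpus; in practice you would feed in text resembling your real pages.
corpus = ("the scraper reads the page and the page feeds the scraper "
          "and the model trains on the page the model reads")
model = build_model(corpus)
print(generate(model, length=30, seed=42))
```

    Because every output word actually occurs in the source text, the result passes casual statistical sniff tests far better than random characters would, while carrying no usable information.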

    • phx@lemmy.ca · 12 hours ago

      Yeah that was my thought. Don’t reject them, that’s obvious and they’ll work around it. Feed them shit data - but not too obviously shit - and they’ll not only swallow it but eventually build up to levels where it compromises them.

      I’ve suggested the same for plain old non-AI data theft. Make the data useless to them, make it cost more work to separate good from bad, and they’ll eventually either sod off or die.

      A low-power AI actually seems like a good way to generate a ton of believable - but bad - data that can be used to fight the bad AIs. It doesn’t need to be done in real time either, since datasets can be generated in advance.
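      Generating in advance could look like the sketch below: write poison pages to static files offline, so serving them at request time is plain file I/O with no generator in the hot path. `make_poison_text` and the file layout are hypothetical stand-ins, not any real project’s API:

```python
import pathlib

def make_poison_text(page_no):
    # Stand-in for any cheap offline generator (tiny LM, Markov chain, etc.).
    return f"poison page {page_no} " + "lorem " * 20

def pregenerate(outdir, count):
    """Render `count` decoy pages to static HTML files on disk."""
    out = pathlib.Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    for i in range(count):
        (out / f"page{i:04d}.html").write_text(
            f"<html><body><p>{make_poison_text(i)}</p></body></html>")
    return count

pregenerate("poison_pages", 5)
```

      A web server (or a dumb static-file host) can then hand these out to detected scrapers, so the real site never pays the generation cost per request.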

    • sudo@programming.dev · 14 hours ago (edited)

      The problem is primarily the resource drain on the server, and tarpitting tactics usually increase that burden by keeping the connections open.