• brsrklf@jlai.lu · 5 months ago

    EU cultural values include resisting corporations doing whatever they want with our data. Let’s see Meta try to reflect those.

    • FaceDeer@fedia.io · 5 months ago

      So you want Meta’s AI to have values that don’t include resisting corporations doing whatever they want with your data?

      This is a seriously double-edged sword. The training data is what gives these AIs both their capabilities and their biases.

      • brsrklf@jlai.lu · 5 months ago

        Anyway, no matter which part of the world it’s trained on, we’re talking about 2024 Facebook content. We’ve seen what Reddit does to an AI.

        Can’t wait for Meta’s cultured AI to share its wisdom with us.

        • FaceDeer@fedia.io · 5 months ago

          Reddit is actually extremely good for AI. It’s a vast trove of examples of people talking to each other.

          When it comes to factual data there are better sources, sure, but factual data has never been the key deficiency of AI. We’ve long had search engines for that kind of thing. What AIs had trouble with was human interaction, which is what Reddit and Facebook are all about. These datasets train the AI to communicate.

          If the Fediverse were larger, we’d be a significant source of AI training material too. I’d be surprised if it’s not being collected already.

            • FaceDeer@fedia.io · 5 months ago

              The “glue on pizza” thing wasn’t a result of the AI’s training; the AI was working fine. It was the search result that handed it a goofy answer to summarize.

              The problem here is that it seems people don’t really understand what goes into training an LLM or how the training data is used.