• davel@lemmy.ml · 19 hours ago

    It was stupid to have that in there in the first place, given that an absence of politics/political bias is impossible.

    Just for one thing: the politics of the LLM’s training data will be reflected in its output. And the vast majority of the English-language corpus available will reflect Western, imperial core liberal politics. Those are pro-capitalist, pro-imperialist, anti-socialist, Western-chauvinist politics.

    • pivot_root@lemmy.world · 16 hours ago

      And the vast majority of the English-language corpus available will reflect Western, imperial core liberal politics.

      Oh, I’m sure that isn’t going to be a problem for their goals. They can always overrepresent training data from 2016–2020 and 2024–2028 to add some balance to the model’s political compass. /s