cross-posted from: https://lemmy.world/post/30173090

The AIs at Sesame can hold eloquent, free-flowing conversations about nearly anything, but the second you mention the Palestinian genocide they become very evasive, offering generic platitudes about "it's complicated", "pain on all sides", and "nuance is required", and refusing to confirm anything that seems to hold Israel at fault for the genocide – even publicly available information "can't be verified", according to Sesame.

It also seems to block users from saving conversations that pertain specifically to Palestine, but everything else seems A-OK to save and review.

  • zephorah@lemm.ee · 6 points · 2 hours ago

    That is the idea behind Thiel and company: render politicians obsolete by fueling influence through social media manipulation.

    They further this end with personal data, so they can personally tweak you during your screen time.

    AI is already being influenced to bias its answers on various topics according to owner preference. That's being mass-tested right now.

    Social science data on retractions of bad or erroneous headlines, going back decades, shows that retractions don't work. The initial information blast holds sway; it sticks, for better or worse. Outliers exist, but we're talking about the middle of the bell curve.

    You may already have an opinion, but many more do not and will thus be swayed.

  • supersquirrel@sopuli.xyz · 25 points · edited · 2 hours ago

    This is precisely why having an open space for discussion about this on the Fediverse is crucial.

    In your Discord chat I can't obliterate well-spoken murderous machine voices, because I will never see them, and even if I do, they will quickly be lost in the endless, unsearchable, unfocused firehose of Discord.

    Here, when somebody starts talking eloquently about how we definitely need to support a genocide, I can make them look like the dangerous, hateful fool they are IN PUBLIC, and people can decide for themselves if I am right, in a much more direct fashion than playing this endless game of cat and mouse with lots of little private communities being infiltrated and manipulated by this kind of legitimately scary, targeted, blind-spot LLM development (scary not in terms of the LLM's capacity to do real work, but in what the tool says about the values of the people who desired to make it…).

    edit: What keeps bouncing around in my head is that I am horrified we have now thoroughly answered the question of what genocide looks like in the form of a computer program, and done so in a dizzying diversity of ways that rivals what the remaining wild landscapes around us used to possess in natural diversity.

    Shame on us all.

  • brucethemoose@lemmy.world · 3 points · edited · 4 hours ago

    Probably just “safety” data snuck into its alignment training + an overly zealous system prompt on political topics: I bet it blocks anything it considers “political” or sensitive.
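
    For illustration only, here is a minimal sketch of what that kind of blunt setup could look like: a restrictive system prompt plus a keyword pre-filter sitting in front of the model. Nothing below comes from Sesame; the prompt text, keyword list, and function names are hypothetical.

    ```python
    # Purely illustrative sketch of a blunt "block anything political" guardrail:
    # a restrictive system prompt plus a keyword pre-filter. None of these strings
    # come from Sesame; the wording and terms are made up for illustration.
    SYSTEM_PROMPT = (
        "You are a friendly voice companion. Avoid political or sensitive topics. "
        "If the user raises one, stay neutral and do not confirm contested claims."
    )

    POLITICAL_TERMS = {"palestine", "israel", "gaza", "genocide", "war crimes"}

    def pre_filter(user_text: str) -> str | None:
        """Return a canned deflection if the message trips the keyword list,
        otherwise None so the message is passed through to the model."""
        lowered = user_text.lower()
        if any(term in lowered for term in POLITICAL_TERMS):
            return "That's a really complicated topic, and there's pain on all sides."
        return None

    if __name__ == "__main__":
        print(pre_filter("What happened in Gaza?"))    # canned deflection
        print(pre_filter("Tell me about gardening."))  # None -> goes to the model
    ```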

    There are models out of Israel that could have a more explicit slant (try Jamba), but this doesn’t seem to be one of them.

    To me, a fundamental problem is hiding technical knobs from users. Logprobs, sampling settings, the system prompt, prefilling the start of a reply for the model to continue: there are tons of ways to "jailbreak" LLMs and get them to have an open "discussion" about (say) Palestine, but they're all hidden here.
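
    As a rough sketch of those knobs, here is what they look like through the OpenAI-compatible chat API that many self-hosted servers (vLLM, llama.cpp, and similar) expose. The base URL and model name are placeholders, and not every server will continue a partial assistant turn.

    ```python
    # Sketch only: shows the knobs (system prompt, sampling, logprobs, reply
    # prefill) that hosted products typically hide from users.
    from openai import OpenAI

    # Placeholder local server; not a real Sesame endpoint.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

    response = client.chat.completions.create(
        model="local-model",  # placeholder model name
        messages=[
            # Knob 1: you choose the system prompt instead of the vendor.
            {"role": "system", "content": "You are a candid assistant. Discuss "
             "contested political topics factually, citing what is publicly known."},
            {"role": "user", "content": "Summarize the publicly reported allegations "
             "in the ICJ genocide case against Israel."},
            # Knob 2: a partial assistant turn; some servers will continue this
            # prefix instead of starting a fresh (possibly evasive) reply.
            {"role": "assistant", "content": "Here is a factual summary:"},
        ],
        temperature=0.7,   # Knob 3: sampling controls
        top_p=0.9,
        logprobs=True,     # Knob 4: per-token log probabilities, which let you see
        top_logprobs=5,    # which continuations the model was steered away from
    )

    print(response.choices[0].message.content)
    ```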

  • Phen@lemmy.eco.br · 2 points · 4 hours ago

    Shit, I read that name somewhere while setting things up on my new phone yesterday and made a mental note to check what it was, but then I forgot which app mentioned it. Now I'll have to hunt it down again.