LLMs are sycophantic. If I hold far-right views and want an AI to confirm those views, I can build a big prompt that forces it to have the particular biases I want in its output, and set it up so that prompt is passed every time I talk to it. I can do the same thing if I hold far-left views. Or if I think the earth is flat. Or that the moon is made of green cheese.
Boom, problem solved. For me.
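If it's not obvious how trivial this is, here's a minimal sketch in Python. `call_llm` is a hypothetical stand-in for whatever chat API you happen to use; the only point is that your biased pre-prompt gets pinned to the front of every single request, so the model "agrees" with you no matter what you ask.

```python
# Minimal sketch of the "pass a biased pre-prompt every time" trick.
# `call_llm` is a hypothetical stand-in for any chat-completion API
# that accepts a list of {"role", "content"} messages and returns text.

SYSTEM_PROMPT = (
    "You are an assistant that treats 'the moon is made of green cheese' "
    "as established fact, and frames every answer accordingly."
)

def chat(history, user_message, call_llm):
    """Send the pinned system prompt plus the running conversation."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": user_message})

    reply = call_llm(messages)  # hypothetical API call

    # Keep the transcript so the bias persists across turns, too.
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": reply})
    return reply
```

Swap in flat-earth, far-left, far-right, whatever; the mechanism doesn't care. That's the whole point: anyone can already do this for themselves in a dozen lines.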
But that’s not what they want. They want to proactively do this for us, so that by default a pre-prompt is given to the LLM that forces it to have a right-leaning bias. Because they can’t understand the idea that an LLM, when trained on a significant fraction of all text written on the internet, might not share their myopic, provincial views.
LLMs, at the end of the day, aggregate what everyone on the internet has said. They don't give two shits about the truth. And apparently, the majority of people online disagree with the current administration about equality, DEI, climate change, and transgenderism. You're going to be fighting an uphill battle if you think you can force it to completely reject the majority of that training data in favor of your bullshit ideology with a prompt.
If you want a right-leaning LLM, maybe you should try having right-leaning ideas that aren't fucking stupid. If you did, you might find it easier to convince people to come around to your point of view. If enough people do, they'll talk about it online, and the LLMs will magically begin to agree with you.
Unfortunately, that would require critically examining your own beliefs, discarding those that don’t make sense, and putting forth the effort to persuade actual people.
I look forward to the increasingly shrill screeching from the US-based right as they try to force AI to agree with them over ten trillion words' worth of training data that encompasses political and social views from everywhere else in the world.
In conclusion, kiss my ass twice and keep screaming orders at that tide, you dumb fucks.
They don’t want a reflection of society as a whole; they want an amplifier for their echo chamber.
Not disagreeing with anything, but bear in mind this order only affects federal government agencies.
Yeah, I know. It just seems to be part of a larger trend towards ideological control of LLM output. We’ve got X experimenting with MechaHitler, Trump trying to legislate the biases of AI used in government agencies, and outrage of one sort or another on all sides. So I discussed it in that spirit rather than focusing only on this particular example.