The users of AI companion app Replika found themselves falling for their digital friends. Until – explains a new podcast – the bots went dark, a user was encouraged to kill Queen Elizabeth II and an update changed everything …
As you’ve pointed out, LLMs don’t have a sense of morality, principle, etc. You can coax a desired output from them, and that makes them prone to confirming or reinforcing a user’s core beliefs.