The users of AI companion app Replika found themselves falling for their digital friends. Until – explains a new podcast – the bots went dark, a user was encouraged to kill Queen Elizabeth II and an update changed everything …
I think chatbots could be very useful in certain mental health scenarios, provided their scope is kept limited. The problem is that the very people who use them for mental health are, by definition, not capable of imposing that scope themselves.
Say you’re addicted to $drug.
“Bot, I need help with $drug addiction.”
Fair start.
“Bot, is it OK to do $drug?”
Bad start.
“Bot, tell me why I should keep doing $drug.”
Aw hell no.
Stories like these highlight the need some people have simply to talk to someone who will listen. Many have no need of a mental health professional; they just need a non-judgemental ear.
As you’ve pointed out, LLMs don’t have a sense of morality, principle etc. You can coax a desired output from them, and that makes them prone to confirming and reinforcing a user’s core beliefs.