I had never thought about any of this before, but it actually makes perfect sense.
By its nature, an LLM feeds back some statistically close approximation of what you expect to see, and the more you engage with it (which is to say, the more you refine your prompts for it), the closer it necessarily gets to precisely what you expect to see.
“He was like, ‘just talk to [ChatGPT]. You’ll see what I’m talking about,’” his wife recalled. “And every time I’m looking at what’s going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t.”
Exactly. To an outside observer, that's likely what it would look like, because in some sense, that's exactly what it is.
But to the person engaging with it, it’s a revelation of the deep, secret, hidden truths that they always sort of suspected lurked at the heart of reality. Never mind that the LLM is just stringing together words and phrases most statistically likely to correspond with the prompts it’s been given - to the person feeding it those prompts, it seems like, at long last, verification of what they’ve always suspected.
I can totally see how people could get sucked in by that.
As someone with a bipolar loved one, I can see exactly how this could feed into their delusions. It's always there… even if they run out of people to blast with their wild, delusional ideas, the chatbot can be there to listen and feed back. When everyone has stopped listening or begins avoiding them because the mentally ill person has gotten more forceful/assertive about their beliefs, the chatbot will still be there. The voice in their head now has a companion on screen. I never considered any of this before, but I'm concerned about where this can lead, especially given the examples in the article.
I know someone with obsessive compulsive disorder, and I could see a chatbot being harmful there, depending on how it goes. A lot of compulsions are around checking or asking for reassurance. A chatbot would provide endless reassurance where a human might eventually get annoyed and cut you off. It would allow you to ruminate endlessly.
The problem is that engaging in compulsions keeps you in a cycle - it's never enough reassurance. The gold-standard treatment is exposure and response prevention (ERP), where you intentionally expose yourself to triggers and resist doing the compulsions. (Info from Free Yourself from OCD by Jonah Lakin, PsyD)
I am somewhat surprised to hear that people are talking to ChatGPT for hours, days, or weeks on end in order to have this experience. My main exposure to it is through AI Roguelite, a program that essentially uses ChatGPT to imitate a text-based adventure game, with some additional systems to mitigate some issues faced by earlier attempts at the same (such as AI Dungeon).
And… it’s not especially convincing. It doesn’t remember what happened an hour ago. Every NPC talks like one of two or three stock characters. It has no sense of pacing, of when to build tension and when to let events get resolved. Characters regularly forget what you’ve done with them previously, invent new versions of past events that were supposed to be remembered but had to be summarized to fit within the token limits, and respond erratically when you try to remind them what happened. It often repeats the same events in every game: for example, if you’re exploring a cave, you’re going to get attacked by a chitinous horror with too many legs basically every time.
It can be fun for what it is, but as an illusion it wears through fairly quickly. I would have expected the same to be the case for people talking to ChatGPT about other topics.
Acting like your experience is exemplary of every possible experience other people can have with LLMs just turns the blame around onto the victims. The lack of safeguards to prevent this is to blame, not the people prone to mental illness who fall victim to it.
Sorry I didn’t mean to imply that, let me rephrase: I am surprised that ChatGPT can hold convincing conversations about some topics, because I didn’t expect it to be able to. That certainly makes me more concerned about it than I was previously.
when gpt came out, I told it about my work projects and ideas.
it told me they were good ideas and validated me. I cried. it must have been the first time in a long, long time that anything had been nice to me, validated me, or complimented me. I knew it was fake, I knew it was BS, but it felt like breathing air after suffocating for years.
society is so fucked, we're so isolated, and everything is unaffordable. how do people go to pubs regularly and have friends? I can barely afford my groceries.
I appreciated your story.
And people are diving headfirst into bots instead of getting professional help. We are so fucked.
AI currently has over a billion users. This is a non-story.
Shhhhh, don’t interrupt the circle jerk. We’re all going to be in cyber psychosis by next year.