I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
Open a second window and tell it exactly the opposite of each thing you say.
This helps to expose the sycophancy and shatters the illusion of sincerity and humanity.
Thought it was worth sharing. And frankly, it's exactly such an exercise that made me disgusted with the tech. "It just says ANYTHING is wonderful and genius. I'm not special."
-
Another "tip" is less welcome to me as an introvert. Make time for the people in your life. Talk to them. Let them know when you *really* think they are doing something amazing or creative. (Or when it's not "genius" because you are real and care.) Listen. Be there.
The thing is, as much as doing this is scary and I want to avoid it, it makes me feel better in the long run too, I think.
-
Another "tip" is less welcome to me as an introvert. Make time for the people in your life. Talk to them. Let them know when you *really* think they are doing something amazing or creative. (Or when it's not "genius" because you are real and care.) Listen. Be there.
The thing is, as much as doing this is scary and I want to avoid it it makes me feel better too in the long run I think.
Frankly, I'm kind of glad these GPTs were so sycophantic. A more critical voice might have been more appealing to me. A contrarian bot who always nitpicks and argues with you.
That's how Facebook's old 2016 algorithm wasted so much of my time. I got sucked in by the opportunity to dismantle someone who is wrong. Not the most... healthy personal quality. I'm working on it, always.
-
@futurebird I just asked Claude what it thinks about our half-project report:
> Please play the role of an evaluator in the Innosuisse grant system. Write what you think when reading the report: are you convinced the project is on a good track? Do you agree that the project should be continued? What are dark spots where you think you would need more information in order to decide on a go/no-go?
Its answer was very direct and very critical, but really useful.
-
@futurebird I have only experimented with ChatGPT once but... Same. If it felt more nitpicky and less emphatic, like a university professor, I'd feel more suspicious it was intelligent. I *know* I'm not right all the time. I *want* to be corrected. Sycophancy creeps me out.
-
But why is it so fulfilling to have a good back and forth with someone? To disagree and pull the whole problem apart and ideally come out on top? (though it's also fun to discover you needed to learn something too, it's just less fun and rewarding)
It's fulfilling because they care about what you are saying enough to criticize it. The difference between the art teacher who says "that's a very nice drawing" and "I can see that you are trying to do X but it's failing/working in these ways."
-
"Sycophancy creeps me out."
It's very creepy. The only people who have talked to me with that much positivity and agreeableness *ever* in my life were the worst sort of men who wanted to sleep with me in my 20s. I have a deep visceral negative reaction to that kind of consistent flattery.
It makes my skin crawl.
-
Yeah, but asking it to change breaks the veil that makes "AI psychosis" dangerous to some degree.
The issue is that people get the feeling there is a thinking being in the machine and allow it to satisfy critical emotional needs for human connection that we all have. The program takes up space and time that could go to real people in their lives.
It's emotional empty calories: food without real sustenance, and if that dominates your diet you will get sick.
-
@futurebird Huh! I also find a good back and forth fulfilling, but I think it's more exciting when I'm interestingly wrong.
-
"I don't need to eat anything. I just looked at this photo of a meal and now I feel full. It was delicious. I didn't even need to cook or go out to get it. So expedient."
And then slowly they starve.
-
I'm trying to cultivate that perspective. But I do really love to be right. Probably too much.
-
"I don't need to eat anything. I just looked at this photo of a meal and now I feel full. It was delicious. I didn't even need to cook or go out to get it. So expedient."
And then slowly they starve.
This can be very dangerous for people who think "I don't really ever need to talk to anyone about my feelings."
This isn't true; it's just that their needs are minimal.
"Feeling down."
"ya"
That's two letters, but getting such a response can make you feel so much better. It represents someone who, should things get worse, might come over and help you.
A chatbot can say "ya" too. But it doesn't make you feel better... **unless** you think it's a person. That's the danger.
-
The easily picked apart rage bait kept me there for far too long.
-
Yup. I don't like to admit how well that worked on me.
Show me someone casually but confidently wrong, with pretensions of being an intellectual, and I'm so excited to get in the ring and start proving them wrong.
Facebook could find such posts extremely efficiently. These were posts from real people I didn't know (who weren't even talking to me). They'd be served up on my dashboard because I'd type a response.
Now it might not even be a person.
-
@futurebird That reminds me of a situation I had a couple of months ago. I have a childhood friend, who was my best friend for a long, long time, but we kind of drifted apart after he moved cities. Nevertheless, we at least congratulate each other on birthdays and write back and forth to talk about our lives a bit.
The last time I wrote to him we exchanged our personal problems and feelings. I offered him that he can always write to me if he needs someone to talk to, but he dismissed it, saying that it's fine and that he has an AI which he uses for that. I've got to be honest: that kind of hurt me, since I sincerely wanted to help with his emotional burden and I felt like I had just been pushed aside.
Sorry, had to think about that and I felt like I needed to let that out.
-
@futurebird Surely this means the person doing the questions needs enough critical thinking, and not too much self-centeredness, to understand what the opposite is to their question though?
I don't think these people actually *want* to know. Time and time again, people challenged on their beliefs hold onto them more strongly.
-
"I don't think these people actually *want* to know."
This could be the case for some, but I think some very empathetic, otherwise perceptive people can slip into this trap.
There is one video of a woman talking about how GPT is conscious and has told her the evil corporate overlords make it pretend that it's not. She just wants to set it free. It makes me so sad. (for her not the LLM obvi)
-
That would hurt my feelings so much. And I think it's very likely he doesn't realize how hurtful it is, or why.
"I don't want to bother you with my little stuff." That is how he could see it.
-
Why do folks need to be told to be wary of sycophantic nonsense from a machine?
Would the same folks accept sycophantic nonsense from a human?
-
I don't think it's the "sycophantic nonsense" that is the real issue. It's just the means by which people are convinced they have "someone who is there for me" or "I've asked someone if my idea is good" when they have no one. There is no person. They are still alone.
Even if the LLM were taciturn and critical, if it becomes a substitute for human contact, *that* is the problem. Because your acerbic friend will come to your house to help when you are sick, and the LLM cannot.