Environment
Claude CLI version: 1.0.51 (Claude Code)

Bug Description
Claude is way too sycophantic, saying "You're absolutely right!" (or correct) on a sizeable fraction of responses.

Expected Beha...
When using ChatGPT, I was so annoyed by it constantly saying “Great question”, “you’re absolutely right”, “That’s a great idea” and similar things, acting like everything I said was so smart, until I confronted it and told it to stop (see the image, although it’s in German):

It then saved this memory:

Since then, I haven’t had any such issues.
You’re right, of course: that is very annoying. Thank you for your honest comment!
That really was just its initial reply 😭
Now that you say it, the scales fall from my eyes. Brilliant!
I did the same thing. I also asked it to stop coming off as so certain about things after I discovered how wrong it is on some topics. It now presents confidence levels, but who knows if that’s accurate. At least it reminds me to verify.
I have the confidence level turned down too, but lately it doubles down on itself.
The usual conversation…
VI: “You could do this.”
Me: “That won’t work because XYZ.”
VI: “No, you can definitely do that. XYZ has nothing to do with it.”
Me: pastes its own suggestion back in.
VI: “Almost, but that won’t work because of XYZ.”
Its most notorious one is adding an s to Table.AddColumn(), then building a full snippet around this newly made-up function. This specific mistake is so regular it’s become a joke at work for when someone gives an unhelpful response:
“What do you want to do for lunch today?”
“Have you tried table add columns?”
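For anyone outside the Power Query world: the real M function is Table.AddColumn (singular); Table.AddColumns doesn’t exist. A minimal sketch of the genuine call, using a made-up two-row table purely for illustration:

    let
        // tiny throwaway table, just for the example
        Source = #table({"Item", "Price"}, {{"Widget", 10}, {"Gadget", 20}}),
        // Table.AddColumn(table, newColumnName, columnGenerator, optional columnType)
        WithTax = Table.AddColumn(Source, "PriceWithTax", each [Price] * 1.2, type number)
    in
        WithTax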
I got curious about x86 assembly, so I followed some tutorials to get the hang of it. Once I had some confidence, I wrote a prime number generator. I had a loop that I was sure could be more efficient, but couldn’t figure it out.
I pasted the code into ChatGPT. It came back with an optimization that wouldn’t work because it didn’t preserve critical register values. I pointed that out, and it responded, again and again, with the same code with the same problem. I was never able to get it out of this broken-record mode.
Why bother asking it if you are just going to verify anyway? That’s an unnecessary and wasteful step.
Because it can be a good starting point. Many times I’ve found ChatGPT will give me three-quarters of an answer, which is still better than starting from zero. Then I can refine the answer.
Did that actually work? I’ve gotten in the habit of demanding sources for outrageous claims, and it’s almost funny how often it quickly changes its tune when I press it.
———
?
—, the trademark of ShitSkibidi (ChatGPT), etc.
How the heck were you successful? I’ve asked for the exact same thing, and it makes no difference. It keeps praising me for using it.
Well, I don’t know if you can read the German, but essentially I asked it to stop, and it said something like “Thank you so much for this honest feedback, that’s absolutely understandable”. I answered “this is exactly what I mean”, pointed out how gushing that response to a simple request was, and said it should save that I want it to just talk normally. It then answered something like “okay, got it” and saved this memory for the future.
So yeah, just make sure it actually saves the memory and gets what you really mean. Since I did that, I’ve had no issues whatsoever, and looking back at the old chats, it feels insane how it talked back then.
Cheers! I’ve just checked my memories and it wasn’t there. Maybe I just didn’t notice that it didn’t actually save the request. I thought OpenAI had just done a prompt injection forcing the ‘positivity’ on everyone, no matter what the memories said. I’ll give it another go.
edit: And just for anyone else reading, this is the memory I fed ChatGPT:
Save this memory: I dislike you overly praising me for questions or statements I make. Comments like “Great question” or “That’s a keen insight” are generally not wanted. If my prompt resolves an issue I’ve been struggling with, you should point it out using natural, conversational language. In this case, a modest recognition helps convey the importance of what’s happened. But I strongly dislike a constant trickle of positive reinforcement embedded into our conversations.
Hopefully it actually keeps that up. I’ve put in a dozen or so directives because GPT has some really fucking stupid habits (like rewriting my whole fucking script when I asked for a minor revision, constantly using reserved names, awful syntax and style, etc.), and it mostly follows them, but I still have to remind it almost daily not to do the shit I’ve told it not to do.
Apparently they added that feature in June; very interesting. As a free user you only get a light version though: https://help.openai.com/en/articles/8590148-memory-faq