@futurebird
It's particularly telling when the same LLM gives different answers to the same question just because you phrased it differently. People wouldn't do that; they'd give the same response regardless of phrasing, because they're answering the actual logic of the question rather than the wording.
There can't be any rational thought behind answers that change when the phrasing changes but the question doesn't. It has to be pattern matching on the phrases themselves rather than on the reality the phrases describe.
(To be clear, I don't mean biasing the question one way or the other; I just mean a neutral question phrased in a trivially different way.)