https://arxiv.org/abs/2501.11120 @futurebird Re: metacognition

marshray@infosec.exchange
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@futurebird I wish I could.
But it would just make people angry, and I feel like I’ve done enough of that today already. -
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@neckspike @futurebird @AT1ST Absolutely not. I consistently speak out against the term “NPC” as I feel it is the essence of dehumanization.
I’m going around begging people to come up with a solid argument that our happiness does in fact bring something important and irreplaceable to the universe.
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@neckspike @futurebird @AT1ST In all seriousness, don’t take this the wrong way but:
So what?
Why is that important?
What do you mean by “know what it is saying”?

Do you know what you are saying, or are you just repeating arguments that you have read before?
But maybe you do have a meaningful distinction here.
If so, what question can we ask it, and how should we interpret the response, to tell the difference?

The pizza thing is not particularly interesting because it’s just a cultural literacy test. It’s common for humans new to an unfamiliar culture to be similarly pranked. And that was a particularly cheap AI.
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@futurebird @AT1ST The temptation to do that is great.
I try to recognize when I’m posting reflexively and not hit ‘Publish’, because it feels like those posts are largely not adding value.
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@trochee @futurebird OK, so you just “emitted a word sequence shaped sorta like the word sequences that comes out when metacognition happens.”
And since it’s an argument we’ve all seen before, I can just dismiss your “word sequence” out-of-hand.
See how that works?
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@AT1ST @futurebird “It’s not actually doing X, it’s just generating text that’s indistinguishable from doing X.”
is classic human cope.

Producing text the way it “should look like” is a thing that humans get accredited degrees and good-paying jobs for demonstrating.
Good enough is almost always better than perfect, and 80% is usually good enough.
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@futurebird Not at face value, no.
But it can actually be pretty good at metacognition, much better than your average human, once it’s pointed in the right direction.
Since current AIs assist with the training of next-generation AIs, I think there’s a high likelihood of a positive feedback cycle. I don’t think anyone knows where the limit is.
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@futurebird I can’t invest 20 minutes to find out if he gets there, but last I checked, ChatGPT was just generating a text prompt which was then thrown over the wall to a completely disconnected image generation model (Dall-E), then declaring success without basis.
It wasn’t very good at image prompting either.