"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
NO. It has no idea if it's telling the truth or not and when it says "I can simulate what this would be like"
This guy is pretty sharp about philosophy but people really really really does not *get* how this works.
"Chat GPT told me this is what it did"
No! It told you what you thought it should say if you asked it what it did!
Yes, exactly.
They seem (willingly? accidentally?) to think that they are having an earnest two-way conversation, when they're really just watching a non-sentient spreadsheet change numbers depending on what they say.
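To make that "spreadsheet" image concrete, here's a deliberately silly toy sketch (my own made-up numbers, nothing like the real model's scale or code): the reply is just arithmetic on whatever numbers your message gets turned into, then picking whichever canned output scores highest.

    # toy "weights": pretend these were learned from a huge pile of text
    weights = [[0.1, 0.9],
               [0.8, 0.2]]
    replies = ["Yes, I can do that.", "No, I can't do that."]

    def respond(message_numbers):
        # multiply the input numbers by the weights...
        scores = [sum(w * x for w, x in zip(row, message_numbers)) for row in weights]
        # ...and emit whichever canned phrase scored highest. That's all "responding" is here.
        return replies[scores.index(max(scores))]

    print(respond([1.0, 0.0]))   # -> "No, I can't do that."
    print(respond([0.0, 1.0]))   # -> "Yes, I can do that."

Scale that up by billions of numbers and you get something fluent, but the basic operation is the same: numbers in, numbers changed, text out.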
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
NO. It has no idea if it's telling the truth or not and when it says "I can simulate what this would be like"
This guy is pretty sharp about philosophy but people really really really does not *get* how this works.
"Chat GPT told me this is what it did"
No! It told you what you thought it should say if you asked it what it did!
Chat GPT will say things if you ask it what it did; the answers will resemble texts it has processed that describe how things are done, some of them describing how a computer program might do something.
It might even give you a good rundown of how LLMs work mixed in there, or it might not. But it's not able to... interrogate its own process. That simply isn't possible. It's not part of how it's designed.
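Put another way (a hedged sketch with a made-up stand-in function, not anyone's actual code): "what did you do?" goes down exactly the same text-in, text-out path as every other question. There is no second channel that inspects the network's own computation.

    def generate(prompt: str) -> str:
        # stand-in for the model: it only ever returns a plausible-sounding continuation
        return "I carefully analyzed the request step by step and..."

    answer = generate("Draw me an ant.")
    explanation = generate("What did you just do, exactly?")
    # 'explanation' was produced the same way 'answer' was: it looks like the
    # explanations found in training text; it is not a trace of what actually
    # happened inside the model.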
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
NO. It has no idea if it's telling the truth or not and when it says "I can simulate what this would be like"
This guy is pretty sharp about philosophy but people really really really does not *get* how this works.
"Chat GPT told me this is what it did"
No! It told you what you thought it should say if you asked it what it did!
@futurebird I can’t invest 20 minutes to find out if he gets there, but last I checked, ChatGPT was just generating a text prompt which was then thrown over the wall to a completely disconnected image generation model (Dall-E), then declaring success without basis.
It wasn’t very good at image prompting either.
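Roughly, the setup being described works like this (function names are made up for illustration, not the actual OpenAI plumbing):

    def chat_model(user_request: str) -> str:
        # the chat model only ever produces *text*: here, a prompt for the image model
        return "a detailed watercolor painting of an ant"

    def image_model(prompt: str) -> bytes:
        # a completely separate model turns that text into pixels
        return b"...image bytes..."

    prompt = chat_model("paint me an ant")
    image = image_model(prompt)
    # The chat model never sees `image`, so when it follows up with
    # "Here is your beautiful watercolor ant!" that claim is not based on
    # the result at all.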
-
@futurebird I can’t invest 20 minutes to find out if he gets there, but last I checked, ChatGPT was just generating a text prompt which was then thrown over the wall to a completely disconnected image generation model (Dall-E), then declaring success without basis.
It wasn’t very good at image prompting either.
He's more interested in the philosophy, which is interesting, but that bit of the video made me think "no, you don't know how this thing really works... you can't just ask it how it did things and take those answers seriously."
-
Half a million views, and it's from a day ago. Good lord.
When photography was invented, many people thought it might capture the human soul... and that's about how this will sound someday.
-
Chat GPT will say things if you ask it what it did; the answers will resemble texts it has processed that describe how things are done, some of them describing how a computer program might do something.
It might even give you a good rundown of how LLMs work mixed in there, or it might not. But it's not able to... interrogate its own process. That simply isn't possible. It's not part of how it's designed.
@futurebird As far as I know, the same thing happens with rationalizing. Ask a person "Why did you do this or that?" and they always have an answer, even if it was an unconscious move.
-
@futurebird As far as I know, the same thing happens with rationalizing. Ask a person "Why did you do this or that?" and they always have an answer, even if it was an unconscious move.
LOL.
-
LOL.
@futurebird Maybe ChatGPT just does what the Wernicke and Broca brain areas do.
-
@futurebird Maybe ChatGPT just does what the Wernicke and Broca brain areas do.
Can those areas tell you how they process language?
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
NO. It has no idea if it's telling the truth or not and when it says "I can simulate what this would be like"
This guy is pretty sharp about philosophy but people really really really does not *get* how this works.
"Chat GPT told me this is what it did"
No! It told you what you thought it should say if you asked it what it did!
@futurebird he consistently speaks outside his specialty, and it's been pretty consistently inaccurate when it's a field I know about.
Most public intellectuals start extending beyond their competencies like that. They build a trusting following while speaking authoritatively on some subject, then exhaust that source of content & start speaking with confidence and authority where they should not.
Bill Nye seems like a pretty consistent generalist science communicator. Rare.
-
@futurebird he consistently speaks outside his specialty, and it's been pretty consistently inaccurate when it's a field I know about.
Most public intellectuals start extending beyond their competencies like that. They build a trusting following while speaking authoritatively on some subject, then exhaust that source of content & start speaking with confidence and authority where they should not.
Bill Nye seems like a pretty consistent generalist science communicator. Rare.
I've never seen him before. I wish he'd explain more about what Hume was going on about. That part seemed more informative but then what I know about philosophy is limited.
-
@futurebird he consistently speaks outside his specialty, and it's been pretty consistently inaccurate when it's a field I know about.
Most public intellectuals start extending beyond their competencies like that. They build a trusting following while speaking authoritatively on some subject, then exhaust that source of content & start speaking with confidence and authority where they should not.
Bill Nye seems like a pretty consistent generalist science communicator. Rare.
I don't know why, but people just asking Chat GPT how it works and taking the answers as good enough drives me nuts.
Maybe it's because when I tell a doctor I'm in pain *I* might not be believed, but a bunch of matrices producing a response maximized for "meeting the expectations of the asker"? Let's treat that like it's gospel.
LORDY.
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
NO. It has no idea if it's telling the truth or not and when it says "I can simulate what this would be like"
This guy is pretty sharp about philosophy but people really really really does not *get* how this works.
"Chat GPT told me this is what it did"
No! It told you what you thought it should say if you asked it what it did!
I actually started posting again on Mastodon (2021-22?) because I was so riled up about AI hype and how many people were so deeply fooled about what LLMs are.
-
I've never seen him before. I wish he'd explain more about what Hume was going on about. That part seemed more informative but then what I know about philosophy is limited.
@futurebird I saw him doing grifty-feeling stuff on some YouTube videos. There aren't many people who focus on the philosophy of science. I don't agree with all his positions there, but they're reasonably internally consistent.
That behavior seems to be what platforms encourage. Joe Rogan basically followed the same path, building trust and rapport with his audience & using that to convince people of all sorts of things (and sell supplements).
I wish there were a prominent media theorist today.
-
He's more interested in the philosophy, which is interesting, but that bit of the video made me think "no, you don't know how this thing really works... you can't just ask it how it did things and take those answers seriously."
@futurebird Not at face value, no.
But it can actually be pretty good at metacognition, much better than your average human, once it's pointed in the right direction.
Since current AIs assist with the training of next-generation AIs, I think there's a high likelihood of a positive feedback cycle. I don't think anyone knows where the limit is.
-
@futurebird I saw him doing grifty-feeling stuff on some YouTube videos. There aren't many people who focus on the philosophy of science. I don't agree with all his positions there, but they're reasonably internally consistent.
That behavior seems to be what platforms encourage. Joe Rogan basically followed the same path, building trust and rapport with his audience & using that to convince people of all sorts of things (and sell supplements).
I wish there were a prominent media theorist today.
@cykonot @futurebird There was a really neat podcast called the Sci Phi podcast, where a grad student was interviewing a bunch of philosophers of science, and I loved it, but I can't find it anymore.
Edit: looks like it is still going! https://sciphipodcast.org/podcast
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
NO. It has no idea if it's telling the truth or not and when it says "I can simulate what this would be like"
This guy is pretty sharp about philosophy but people really really really does not *get* how this works.
"Chat GPT told me this is what it did"
No! It told you what you thought it should say if you asked it what it did!
@futurebird The thing everyone should remember is that neural networks are sometimes good at 'doing' the thing they were trained to do, but the GPTs of the world were not trained to produce correct output; they were trained to produce output that is convincing. If they convince you, be wary.
I once saw someone who asked their ostensibly locally hosted LLM if it was on their machine, and it said no, and they believed it.
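A caricature of that training signal (toy code of my own, not the real pipeline): the loss only measures "does this look like text people write or prefer," and nothing in it checks the output against reality.

    def plausibility_loss(model_output: str, reference_text: str) -> float:
        # reward sounding like the reference text, character by character
        matches = sum(a == b for a, b in zip(model_output, reference_text))
        return 1.0 - matches / max(len(reference_text), 1)

    # Note what is absent: there is no is_factually_correct(model_output)
    # term anywhere, for "is this LLM on my machine?" or anything else.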
-
@futurebird Not at face value, no.
But it can actually be pretty good at metacognition, much better than your average human, once it's pointed in the right direction.
Since current AIs assist with the training of next-generation AIs, I think there's a high likelihood of a positive feedback cycle. I don't think anyone knows where the limit is.
When you say "a positive feedback cycle"... towards what? What is the feedback, and what is it moving towards, positively?
-
@futurebird The thing everyone should remember is that neural networks are sometimes good at 'doing' the thing they were trained to do, but the GPTs of the world were not trained to produce correct output; they were trained to produce output that is convincing. If they convince you, be wary.
I once saw someone who asked their ostensibly locally hosted LLM if it was on their machine, and it said no, and they believed it.
@cxxvii @futurebird Well, to be clear, the issue isn't the training. A lot of stuff is thrown into the latest training methods to try to make them more accurate. The much more fundamental problem is the mechanism itself, which, no matter how good or accurate the training, simply can't reliably produce correct output.
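A minimal sketch of that mechanism point (toy numbers of my own, not measured from any real model): the output is a sample from a probability distribution over next words. "Probable" is the only notion in play; "correct" never enters into it.

    import random

    # toy distribution for the next word after "The capital of France is"
    next_word_probs = {"Paris": 0.93, "Lyon": 0.04, "Berlin": 0.03}

    def sample(probs):
        r, total = random.random(), 0.0
        for word, p in probs.items():
            total += p
            if r <= total:
                return word
        return word   # fallback for floating-point rounding

    # Usually right, sometimes confidently wrong; nothing in the mechanism
    # itself can tell those two cases apart.
    print(sample(next_word_probs))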
-
@cxxvii @futurebird Well, to be clear, the issue isn't the training. A lot of stuff is thrown into the latest training methods to try to make them more accurate. The much more fundamental problem is the mechanism itself, which, no matter how good or accurate the training, simply can't reliably produce correct output.
Thank you.
It's like making a machine designed to show people objects that look just like airplanes... and expecting those planes to have engines and be able to fly.
But you never set out to make a program to design machines for flight. Just a program that would show people photos, videos, plans, and descriptions that match their *expectations* of airplanes.