"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
-
Half a million views and from a day ago. Good lord.
When photography was invented, many people thought it might capture the human soul... and that's about how this will sound some day.
-
ChatGPT will say things if you ask it what it did; the answers will resemble texts it has processed that describe how things are done, some of them in the context of how a computer program might do something.
It might even give you a good rundown of how LLMs work mixed in there, or it might not. But it's not able to ... interrogate its own process. That simply isn't possible. It's not part of how it's designed.
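A toy sketch of what answering "what did you do?" actually amounts to (invented numbers and a made-up word table, nothing like any real model's internals): all the program can do is keep picking a statistically plausible next word. There is no second channel where it inspects its own process.

```python
import random

# Toy next-word probability table "learned" from text. Numbers are invented.
NEXT_WORD_PROBS = {
    ("how", "did", "you"): {"do": 0.7, "simulate": 0.3},
    ("did", "you", "do"): {"that?": 1.0},
}

def generate(prompt, steps=3):
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-3:])
        probs = NEXT_WORD_PROBS.get(context, {"[...]": 1.0})
        choices, weights = zip(*probs.items())
        # The only operation available: sample a plausible continuation.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# "Asking it what it did" is just another prompt to be continued plausibly.
print(generate("how did you"))
```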
@futurebird As far as I know, the same thing happens with rationalizing. You ask a person "Why did you do this or that?" and they always have an answer, even if it was an unconscious move.
-
@futurebird As far as I know, the same thing happens with rationalizing. You ask a person "Why did you do this or that?" and they always have an answer, even if it was an unconscious move.
LOL.
-
LOL.
@futurebird Maybe ChatGPT just does what the Wernicke and Broca brain areas do.
-
@futurebird Maybe ChatGPT just does what the Wernicke and Broca brain areas do.
Can those areas tell you how they process language?
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
NO. It has no idea if it's telling the truth or not and when it says "I can simulate what this would be like"
This guy is pretty sharp about philosophy but people really really really does not *get* how this works.
"Chat GPT told me this is what it did"
No! It told you what you thought it should say if you asked it what it did!
@futurebird he consistently speaks outside his specialty, and what he says has been pretty consistently inaccurate when it's a field I know about.
Most public intellectuals start extending beyond their competencies like that. They build a trusting following while speaking authoritatively on some subject, then exhaust that source of content & start speaking with confidence and authority where they should not.
Bill Nye seems like a pretty consistent generalist science communicator. Rare.
-
@futurebird he consistently speaks outside his specialty, and what he says has been pretty consistently inaccurate when it's a field I know about.
Most public intellectuals start extending beyond their competencies like that. They build a trusting following while speaking authoritatively on some subject, then exhaust that source of content & start speaking with confidence and authority where they should not.
Bill Nye seems like a pretty consistent generalist science communicator. Rare.
I've never seen him before. I wish he'd explain more about what Hume was going on about. That part seemed more informative but then what I know about philosophy is limited.
-
@futurebird he consistently speaks outside his specialty, and what he says has been pretty consistently inaccurate when it's a field I know about.
Most public intellectuals start extending beyond their competencies like that. They build a trusting following while speaking authoritatively on some subject, then exhaust that source of content & start speaking with confidence and authority where they should not.
Bill Nye seems like a pretty consistent generalist science communicator. Rare.
I don't know why, but people just asking ChatGPT how it works and taking the answers as good enough drives me nuts.
Maybe because when I tell a doctor I'm in pain *I* might not be believed, but a bunch of matrices producing a response maximized for "meeting the expectations of the asker"? Let's treat that like it's gospel.
LORDY.
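(The "bunch of matrices" bit isn't a figure of speech, either. In miniature, with made-up weights:)

```python
import math

# Cartoon of "a bunch of matrices": one tiny weight matrix, invented numbers.
W = [[0.2, -1.0, 0.5],
     [1.5,  0.3, -0.7]]
context_vector = [1.0, 2.0]  # stand-in for the encoded prompt

# Matrix-vector product -> one score (logit) per candidate next token.
logits = [sum(c * w for c, w in zip(context_vector, col)) for col in zip(*W)]

# Softmax turns scores into probabilities; the reply is sampled from these.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]
print(probs)  # probabilities over candidate next tokens, nothing more
```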
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
NO. It has no idea if it's telling the truth or not and when it says "I can simulate what this would be like"
This guy is pretty sharp about philosophy but people really really really does not *get* how this works.
"Chat GPT told me this is what it did"
No! It told you what you thought it should say if you asked it what it did!
I actually started posting again on Mastodon (2021-22?) because I was so riled up about AI hype and how many people were so deeply fooled about what LLMs are.
-
I've never seen him before. I wish he'd explain more about what Hume was going on about. That part seemed more informative but then what I know about philosophy is limited.
@futurebird i saw him doing grifty-feeling stuff on some YouTube videos. There aren't many people who focus on the philosophy of science. I don't agree with all his positions there, but they're reasonably internally consistent.
That behavior seems to be what platforms encourage. Joe Rogan basically followed the same path, building trust and rapport with his audience & using that to convince people of all sorts of things (and sell supplements).
I wish there was a prominent media theorist today
-
He's more interested in the philosophy, which is interesting, but that bit of the video made me think "no, you don't know how this thing really works... you can't just ask it how it did things and take those answers seriously."
@futurebird Not at face value, no.
But it can actually be pretty good at metacognition, much better than your average human, once it’s pointed in the right direction.
Since current AIs assist with the training of next-generation AIs, I think there’s a high likelihood of a positive feedback cycle. I don’t think anyone knows where the limit is.
-
@futurebird i saw him doing grifty-feeling stuff on some YouTube videos. There aren't many people who focus on the philosophy of science. I don't agree with all his positions there, but they're reasonably internally consistent.
That behavior seems to be what platforms encourage. Joe Rogan basically followed the same path, building trust and rapport with his audience & using that to convince people of all sorts of things (and sell supplements).
I wish there was a prominent media theorist today
@cykonot @futurebird There was a really neat podcast called the Sci Phi podcast where a grad student was interviewing a bunch of philosophers of science and I loved it, but I can't find it any more
Edit: looks like it is still going! https://sciphipodcast.org/podcast
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"
NO. It has no idea if it's telling the truth or not and when it says "I can simulate what this would be like"
This guy is pretty sharp about philosophy but people really really really does not *get* how this works.
"Chat GPT told me this is what it did"
No! It told you what you thought it should say if you asked it what it did!
@futurebird The thing everyone should remember is that neural networks are sometimes good at 'doing' the thing they were trained to do, but the GPTs of the world were not trained to produce correct output; they were trained to produce output that is convincing. If they convince you, be wary.
I once saw someone who asked their ostensibly locally hosted LLM if it was on their machine, and it said no, and they believed it.
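Roughly, in toy form (made-up probabilities, and a huge simplification of the full training pipeline), the core objective looks like this:

```python
import math

# The model's predicted probabilities for the next token, given some context.
# Made-up numbers for illustration.
predicted = {"convincing": 0.6, "correct": 0.3, "teapot": 0.1}

# The token that actually followed in the training text.
actual_next = "convincing"

# Cross-entropy loss: low when the model puts high probability on whatever
# humans actually wrote next. Nothing in this number measures whether the
# resulting sentence is true, only whether it reads like the training data.
loss = -math.log(predicted[actual_next])
print(round(loss, 3))
```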
-
@futurebird Not at face value, no.
But it can actually be pretty good at metacognition, much better than your average human, once it’s pointed in the right direction.
Since current AIs assist with the training of next-generation AIs, I think there’s a high likelihood of a positive feedback cycle. I don’t think anyone knows where the limit is.
When you say "a positive feedback cycle" ...towards what? What is the feedback and what is it going towards positively?
-
@futurebird The thing everyone should remember is that neural networks are sometimes good at 'doing' the thing they were trained to do, but the GPTs of the world were not trained to produce correct output; they were trained to produce output that is convincing. If they convince you, be wary.
I once saw someone who asked their ostensibly locally hosted LLM if it was on their machine, and it said no, and they believed it.
@cxxvii @futurebird Well, to be clear, the issue isn't the training. A lot of stuff is thrown into the latest training methods to try to make them more accurate. The much more fundamental problem isn't the training, but the actual mechanism itself, which -- no matter how good or accurate the training -- simply can't reliably produce the correct output.
-
@cxxvii @futurebird Well, to be clear, the issue isn't the training. A lot of stuff is thrown into the latest training methods to try to make them more accurate. The much more fundamental problem isn't the training, but the actual mechanism itself, which -- no matter how good or accurate the training -- simply can't reliably produce the correct output.
Thank you.
It's like making a machine designed to show people objects that look just like airplanes... and expecting those planes to have engines and be able to fly.
But, you never set out to make a program to design machines for flight. Just a program that would show people photos, videos, plans, and descriptions that match their *expectations* of airplanes.
-
@cxxvii @futurebird Well, to be clear, the issue isn't the training. A lot of stuff is thrown into the latest training methods to try to make them more accurate. The much more fundamental problem isn't the training, but the actual mechanism itself, which -- no matter how good or accurate the training -- simply can't reliably produce the correct output.
@nazokiyoubinbou @cxxvii @futurebird The way that I think about it is "Getting a correct answer doesn't say very much about the likelihood of generating a correct answer in the future."
-
Thank you.
It's like making a machine designed to show people objects that look just like airplanes... and expecting those planes to have engines and be able to fly.
But, you never set out to make a program to design machines for flight. Just a program that would show people photos, videos, plans, and descriptions that match their *expectations* of airplanes.
@futurebird @cxxvii Right. I make this differentiation because A. I want to be clear that no matter how much they advertise that this or that method is more accurate, it will always fail due to the underlying issue, and B. some people think the tech just isn't fully developed, but its underlying mechanism can NEVER improve without changing to something else entirely.
(Well, as a side note, many do actually make an effort to train in more accuracy; it's just that the fundamental issue always comes back to bite them. They legit are trying, it just can't work.)
-
I actually started posting again on Mastodon (2021-22?) because I was so riled up about AI hype and how many people were so deeply fooled about what LLMs are.
(For passersby) One thing it seems people have a really hard time grokking is that there is no necessary relation between any training set + prompt combo and a particular result. There is no way to predict what the output will be, or to backtrace why the output was what it was.
But, at the same time, the synthetic text is not merely random in relation to the training set + prompt combo. The synthetic text is random within a certain distribution. Humans are terrible at thinking about statistics, and AI hype plays off of that.
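A toy picture of "random within a certain distribution" (invented numbers, not any actual model): run the same prompt many times and you get a spread of answers, not one determined answer.

```python
import random
from collections import Counter

# Invented distribution over possible completions for one fixed prompt.
completions = {
    "it cannot alter its dataset": 0.55,
    "it simulated altering its dataset": 0.40,
    "it altered its dataset": 0.05,
}
options, weights = zip(*completions.items())

# Same training set + prompt, many runs: the output isn't predictable,
# but it isn't arbitrary either -- it follows the distribution.
print(Counter(random.choices(options, weights=weights, k=1000)))
```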
The above does not make LLMs uniquely useful. It makes LLMs uniquely useless. If there's a use-case that LLMs are actually, reliably good at, I haven't heard of it.
Today I relaxed by watching this interview with @emilymbender and it's almost painful to watch the hosts' faces as she systematically undercuts just about every use case.
-
@nazokiyoubinbou @cxxvii @futurebird The way that I think about it is "Getting a correct answer doesn't say very much about the likelihood of generating a correct answer in the future."
@griotspeak @nazokiyoubinbou @cxxvii @futurebird A way I've found helpful to explain to people is:
No matter how much they dial it in, it will ALWAYS be at least a little better at making an answer that sounds right than an answer that is right.