@FediThing
-
I think people are in denial about the level of brute force operating here. There is a reason it uses so much electricity. If you make big enough tables you can make plausible text about *anything*.
But it's just ... plausible text that seems like the kind of answer people would expect.
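(If it helps to see what I mean by "tables", here is a toy sketch — not how a real transformer works internally, just the idea: a little lookup table of which word tends to follow which, built from example text, already produces "plausible" continuations with no understanding at all.)

```python
import random
from collections import defaultdict, Counter

# Toy "table" of text statistics: for each word, count which words follow it.
# (A real LLM learns far richer context representations, but the spirit is
# the same: pick something that plausibly follows the context.)
corpus = "the cat sat on the mat and the dog sat on the rug".split()

table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1          # count how often nxt follows prev

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        # sample the next word in proportion to how often it followed before
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))   # e.g. "the cat sat on the rug" -- plausible, not reasoned
```

Scale that idea up by enormous amounts of data and context and you get text that looks like an answer.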
-
Maybe they don't want to admit to themselves they're fooled? Or are they in love with the idea of it being sentient for some reason?
It helps so much if you ask LLMs about something you know much better than other people, whatever it is. That opens a lot of people's eyes about what is really going on; the bullshit is easier to see.
-
Well, the video started out probing the limitations and I thought that was interesting. But then there is this huge foundational error in what an LLM *is* ... it doesn't reflect, it has no logic, it produces responses that are the best possible fit for the context. That is all.
-
It's particularly telling when you get the same LLM giving different answers to the same question because you phrased the question differently. People wouldn't do that; they give the same answer regardless of phrasing because they are responding to the actual logic of the question rather than its wording.
There can't be any rational thought when LLMs give different answers. It has to just be pattern matching on the phrases themselves rather than the reality the phrases describe.
(I don't mean changing the bias of the question, by the way, I just mean a neutral question phrased in a trivially different way.)
-
@futurebird @FediThing it's the million monkeys with typewriters, just with a bit of weighting
-
Very artful weighting, with extra hints for almost every contingency.
Maybe it would help to think of it as a mirror that tries to show you what you want, based on a vast sea of data about the kinds of things people consider sensible, reasonable-in-context responses?
I've played with little LLM models and there is nothing in how you set them up that ought to make you think they could give meaningful responses to these kinds of questions.
NOTHING.
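(For anyone who hasn't played with one: a minimal sketch of what the "weighting" means at generation time. The scores below are made up for illustration; in a real model they come from the trained network, but generating text is still just repeated weighted sampling over next-token scores.)

```python
import random

# "Monkeys with weighting": pick the next token by sampling from a weighted
# distribution over candidates. The scores here are invented for illustration;
# a real model's scores come from its trained output layer.
def sample_next(token_scores):
    tokens, weights = zip(*token_scores.items())
    return random.choices(tokens, weights=weights)[0]

# Hypothetical scores for the context "The capital of France is"
scores = {"Paris": 0.90, "Lyon": 0.05, "located": 0.03, "banana": 0.02}

print(sample_next(scores))  # usually "Paris", occasionally something else
```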
-
"Maybe they don't want to admit to themselves they are fooled?"
I'm really starting to wonder if this is a big part of why these explanations just don't land. If it makes anyone feel better I was fooled by some AI music. I thought it was something a person made never even considered that it might be generated.
There isn't anything wrong with being "fooled" -- but, treating these tools like they can do things that they can't do is just a recipe for learning nothing.
-
"Maybe they don't want to admit to themselves they are fooled?"
I'm really starting to wonder if this is a big part of why these explanations just don't land. If it makes anyone feel better I was fooled by some AI music. I thought it was something a person made never even considered that it might be generated.
There isn't anything wrong with being "fooled" -- but, treating these tools like they can do things that they can't do is just a recipe for learning nothing.
@futurebird @FediThing I was fooled by an AI book, a brand new publication on something I know about, by an author I hadn't heard of, recommended to me by an algorithm. About halfway through reading it I thought: I don't feel there is human intelligence behind this. There's something off. But it was such a weird feeling that I dismissed it.
I keep it on my desk as a visual prompt.
-
@FediThing @futurebird Humans will give you a different answer every time you ask an *identically phrased* question. Humans are not robots. They will use different words and say "uh" at different times. That's because, like LLMs, they are sensitive to a multitude of initial conditions beyond just the words of a prompt.
-
A person will phrase things differently, but there is often an idea they are trying to put into words. There are many ways to express the same idea. With an LLM you might not always get the same *idea.* It's not an equivalent answer phrased in a new way. It's a new answer. Most people aren't just choosing sets of words that fit the context ...most of the time... I hope.
Though, there was someone on this thread who made me wonder about that.
-
Yup. It would be like asking a local for directions in an unfamiliar town, and them giving you totally different directions depending on how you ask.
People who understand the question won't do this, but LLMs might because they don't actually understand any questions.