No one is enough of a polymath and no one has enough time to avoid trusting others. This isn’t really a bad thing, but we have to be open to the reality that there are some things that “everyone knows” that are simply wrong.
futurebird@sauropods.win
Posts
-
I think with topics where one isn’t an expert, it can be more important to know what “most people” in your social circle *think* is true than to know what is really true.
Knowing an iconoclastic truth but not having the expertise to explain it to others isn’t very useful. Moreover, without that expertise you will struggle to evaluate the validity of the unpopular opinion.
So, people reach for the popular option and hope it is correct.
-
@the5thColumnist @Urban_Hermit
I’m trying to avoid loaded phrases like “bullshitting machine.” I’ve had a lot of people roll their eyes and shut down on me because “well, you just hate AI,” as if this were a matter of personal taste.
In reality I struggle to see how I could find value in exposing my curiosity to a system with these limitations. It would insulate me from those times when a simple, obvious question brings me up short, and that seems really dangerous to me.
-
@the5thColumnist @Urban_Hermit
People, even people with terrible, mostly wrong ideas, tend to have some guiding set of values. Even if you don’t learn much from their opinions, you can learn about the philosophy that informs those opinions.
Asking an LLM *why* it said any fact or opinion is pointless. It will supply a response that sounds like a human justification, but the real “reason” is always the same: “it was the response you were most likely to accept as correct.”
-
He’s *really* funny.
-
I think when some people are presented with these kinds of errors they think “the LLM just made a factual mistake,” and that with more data and “better software” this will not be a problem. They don’t see that what it is doing is *foundationally different* from what they are asking it to do.
That it has fallen to random CS educators and people interested in these models to desperately try to impress upon the public the way they are being tricked makes me angry.
-
The most exciting and pivotal moments in research are those times when the results do not meet your expectations.
We live for those moments.
If an LLM is not reliable enough for you to trust unexpected results, then it is not reliable enough to tell you anything new: it’s incapable of telling you anything that you don’t (at some level) already know.
2/2
-
“If the LLM produces a wild result, something that doesn’t meet with my expectations, *then* I’ll turn to more reliable sources. I’m not blindly following just anything it says.”
People feel that this is being a “responsible user of new technology.”
I think it is actually the opposite.
1/2 -
For someone who doesn't invest in anything, I'm endlessly fascinated by "financial instruments" and all of the weird and sometimes perverse things that can be done with money.
I don't think it's that important to keep track of all of this in great depth. It's like a soap opera for me I guess.
-
Who do you turn to for financial news? My favorites are:
Patrick Boyle: He is an academic out of NYC and a college teacher in business. He has a remarkably dry sense of humor and explains modern markets very clearly and carefully. He doesn't shy away from "bad news" and isn't interested in hype.
Coffeezilla: A pretty prominent guy who cut his teeth chasing down crypto and rug-pull scams. For some reason he can get anyone to talk to him.
There are others, but these two I can cheerfully recommend.
-
Is there an audiobook version? If so, is there a place you'd rather we buy it?
In general is there a better way to buy this from your perspective?
Looks like a very cool book.
-
This guy generally does interesting work, but he's used an LLM to analyze the trends in a "creation science" journal over time, and I just don't think LLMs are effective for this kind of statistical task.
-
Sorry, I thought you were referencing the original post.
-
If it says there are 67 articles that mention topic X, but you don't know if that number is correct, and it's just a guess based on context and the bulk of the text (and LLMs are also bad at following instructions such as "consider only these sources"), what is the point of stating the number?
Maybe you could ask whether a topic is mentioned "frequently" or "infrequently," but beyond that I think it's deceptive and useless.
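For contrast, a count like this is trivial to do deterministically without any model at all. A minimal sketch (the article texts and keyword list are invented for illustration):

```python
# Deterministic count of articles mentioning a topic, for contrast
# with asking an LLM to guess a number. All data here is made up.
articles = [
    "Flood geology and the fossil record",
    "Radiometric dating methods reconsidered",
    "Baraminology and created kinds",
]
keywords = {"flood", "fossil"}

def mentions_topic(text: str, topic_words: set) -> bool:
    """True if any keyword appears in the text (case-insensitive)."""
    words = set(text.lower().split())
    return bool(words & topic_words)

count = sum(mentions_topic(a, keywords) for a in articles)
print(count)  # exact, reproducible answer: 1
```

The point is not that keyword matching is sophisticated, only that the answer is exact and the same every time you run it, which a generated number is not.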
-
I don't think this guy is an enthusiast; he's just using a tool in a way that seems reasonable and that gives the results he wants, without knowing what those results really represent.
-
Wouldn't you need to ask it about each article individually and track the results?
Not just give it a stack of articles and ask "how many of the articles mentioned X" ?
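The per-article approach described above might be sketched like this. Here `classify_article` is a stand-in (a plain keyword check); in practice it might wrap a single-document query whose yes/no answer you record and verify yourself:

```python
# Sketch: query each article individually and tally the results
# yourself, rather than handing a model a stack of text and
# trusting whatever count it emits. classify_article is a
# hypothetical stand-in for the per-document check.
from collections import Counter

def classify_article(text: str, topic: str) -> bool:
    # Illustrative stand-in for a per-document query.
    return topic.lower() in text.lower()

def tally(articles: list, topic: str) -> Counter:
    """Count how many articles do or don't mention the topic."""
    results = Counter()
    for art in articles:
        label = "mentions" if classify_article(art, topic) else "no mention"
        results[label] += 1
    return results

sample = ["Noah's flood and geology", "Genetics of created kinds"]
print(tally(sample, "flood"))
```

The tallying is done by ordinary code, so the final numbers are at least an honest aggregate of the individual answers, whatever you think of those answers.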
-
Damn thing will sit there and tell you that's what it's doing.
But it can't count! It still can't count. I feel like I'm going crazy. Am I the only person who cares that the machine can't even count?
-
"I suspect in the hands of someone who knows what they're doing it might be possible to extract interesting insights from how the model is grouping terms."
This is totally possible. But I don't think this is what that would look like?
-
LLMs really just shouldn't output that kind of data.
-
The companies offering these products seem to be delighted that people are confused and using them to do things they simply aren't really doing.
-
I mean, LLMs are based on statistics, and they will produce results that look like frequency charts. But these charts only attempt to approximate the expected content. They aren't based on counting articles that meet any set of criteria.
It's... nonsense, and not even people who pride themselves on spotting nonsense seem to understand this.
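A frequency chart grounded in actual counting, as opposed to a model's approximation of what such a chart might look like, is a few lines of code. The corpus here is entirely hypothetical:

```python
# A trend built from real counts per year, not from a model's
# guess at what the counts ought to look like. Data is invented.
from collections import Counter

# (year, title) pairs for a hypothetical journal archive
corpus = [
    (1990, "flood geology revisited"),
    (1990, "problems with radiometric dating"),
    (2000, "flood deposits and strata"),
    (2010, "intelligent design and information"),
]

topic = "flood"
by_year = Counter(year for year, text in corpus if topic in text.lower())
for year in sorted(by_year):
    print(year, by_year[year])
# prints:
# 1990 1
# 2000 1
```

Every number in the output corresponds to articles that verifiably met the criterion, which is exactly the property a generated frequency chart lacks.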