I am not suggesting that people stop using LLMs. I do not think that using an LLM means you are lazy or a fraud. I am not saying LLMs should be banned, or shunned by "real intellectuals." They are a powerful way to manipulate and use the vast seas of digital data that make up parts of our world.
All I'm asking is that everyone who uses such tools understand their limitations and design. When a pop-philosopher clearly shows that he does not understand these limitations, I become alarmed.
-
@futurebird I'm a bit harsher: I think LLMs are a fun toy, but in the current capitalist system I consider them more harmful than beneficial to our progress.
-
They are absolutely being used in harmful ways, and there are people with monetary incentives to prevent people from grasping their limitations. These people are selling a myth that LLMs represent something new with limitless potential.
Language is often evidence of reasoning, of a mind... but so is doing math correctly. People were once so impressed by computers doing arithmetic very quickly that they thought the machines were just around the corner from "getting smarter" and taking over.
-
This sounds silly to us now. Just because a computer can correctly sum twenty digit numbers doesn't mean that it is intelligent in the same way as a person who can sum twenty digit numbers.
Likewise just because a machine can produce sentences that match the context of a question and draw on a vast dataset doesn't mean it's intelligent like a person who can answer questions in that way.
-
The nature of the "intelligence" of the machine is fundamentally different and to forget this is to both over *and* underestimate what we have created.
Sometimes I've encountered people who have a kind of "machine rights" stance with respect to these things. They hear the contempt and the desire to draw a line between human intelligence and machines, and they push back. Fencing off anything as "uniquely human" is often based on self-aggrandizing BS. I get the impulse.
But, it's misplaced.
-
@futurebird It's a magic trick: the more you know how it works, the less magical it becomes. And it makes me really dislike the Altmans of the world who say (truthfully or not) that this will lead to AGI, when it's just fancy statistics applied to language.
-
@futurebird The thing is, I think that AGI is most likely possible. But I also don't think we are anywhere near it, and once we do get near it, the question will be whether we can make it anywhere near as efficient as our meat sacks are. LLMs are much less efficient, at something that's not even close to what we can do on the energy of a sandwich.
-
Oh same.
I am convinced that we won't have computer systems that do anything like "thinking" until such systems have ways to do two things:
1. Represent something like emotional states and use them
2. Deploy deductive logical reasoning as a framework to verify responses

Just putting a lot of text in a big pot doesn't create these important systems that even something like an ant needs to make choices and "think."
But this is just my own pet theory about real AI.
-
@futurebird
When the Pilot ACE was demonstrated to the press, they loaded their prime factorisation programme into the mercury delay-line memory and asked the press for a six-digit number to test if it was prime. The first reporter, put on the spot, called out something repeating like "123123!"
The researcher scowled and said "well, that has an obvious factor of 1001 just to start..." before entering it into the machine. The press ran with headlines like "WRANGLER FASTER THAN ELECTRONIC BRAIN"
@ainmosni
-
@futurebird I don't see how people can be expected to understand these things when they're bombarded with disinformation at a thousand times the volume. It's worth remembering that the science showing global warming was real and dangerous, and that we needed to stop using fossil fuels, took about a quarter of a century to communicate to the public because of the bombardment of disinformation. And the most powerful LLMs on the planet are designed to amplify that kind of disinformation.