ugh I remember this mf from 90s usenet, he would pontificate endlessly but never seemed to actually work on anything
-
@regehr some folks I respect believe him, but history kind of shows that if anything *can* be built, it will be … so we better get to working on risk mitigation here beyond hoping somebody doesn’t do the thing.
-
I am relying on the idea that it cannot be built. Not by these people. My bet is that AGI is still 250 years out. The history of science is actually on my side there: the effort needed to build "artificial creatures" has been underestimated since at least 1800 (plus/minus). I'd be surprised if it came out differently this time around.
And the OpenAI line that "everything is a neural network, the rest will emerge / can be trained" totally ignores previous work on possible architectures of mind.
After the bubble bursts, the topic will be so toxic that it won't be touched again until 50 years later at the earliest (2075). In the time in between we'll be busy pushing back other apocalypses ... so it's even likely we won't be in the mood in 2075 to start a new AI research program.
For people who think (technological) progress is steadily upward, I'd like to point to the space program.
-
@glitzersachen "current approaches won't scale to ASI" seems plausible (though not so plausible I want to bet the farm on it), but you totally lost me at "...and then there will be a fifty-year AI winter". I give it five years max after the current AI bubble bursts before the next one starts inflating.
-
@pozorvlak @glitzersachen @darkuncle @regehr
I will bet the farm on it. Or the condo... or whatever.
Intelligence is hard, just like robotics is hard.
We have programs that can make plausible text if you give them nearly all the text ever made. The world isn't made of text. Thinking isn't text.
What we don't have are systems that can reason deductively while adjusting their foundational assumptions inductively. The whole approach isn't even right.
-
@pozorvlak @glitzersachen @darkuncle @regehr
And you can't have thinking without the layer of emotion. Not because reasoning is emotionally motivated, but because it's obviously important, so you'd need to build that into the system.
These people think the whole brain is just emergent and not tailored to managing the human body in human contexts over deep time.
It's nonsense!
-
@pozorvlak @glitzersachen @darkuncle @regehr
For most of human history, paragraphs of text have been a reliable sign that there is a thinking human mind that reasoned to create that text. This isn't true anymore.
But text is just like footprints. It's not the thing itself. And it's possible to fake convincing footprints and possible to fake text.
That is all that is happening.
-
@pozorvlak @glitzersachen @darkuncle @regehr
I remember when there was a debate about whether people who couldn't use language were really able to think. Wildly ableist stuff. In the course of the debate some people said that if they didn't "hear" a voice in their mind, kind of like narration, they weren't thinking.
Which is wild to me as someone whose thoughts are these things I struggle to condense into the limited and awkward strictures of words.
-
@pozorvlak @glitzersachen @darkuncle @regehr
When I read words from others I imagine all their big thoughts. A poem with a few dozen words can contain whole universes of emotion and ideas.
Unless it's a machine, then I imagine big matrices and all the imaginations it gobbled up to make them so it could imitate poetry.
-
@pozorvlak @glitzersachen @darkuncle @regehr
And the other wild thing is that the importance of that text changes if a person simply points to it after reading it and declares "this is what I meant!"
Ok, now I care about it more. Because text is a coffee straw and the mind is an industrial vat full of the thickest of milkshakes.
-
@futurebird I agree that current LLMs are not conscious. But nor are they simple Markov chain text generators - are you familiar with Anthropic's work on transformer circuits? Plus, "current approaches" includes hybrid systems like AlphaGeometry which combine neural networks and symbolic theorem provers. Like I said, I don't think we'll get to AGI simply by iterating on what we have now. But I didn't think we'd see an AI get an IMO gold medal this soon either.
-
@pozorvlak @glitzersachen @darkuncle @regehr
I am aware of the variation, but I don't think any of it is really grappling with the complexity of what it would mean to actually do what some of them are claiming they are doing.
I feel they keep showing us text, which we are biased to see as "evidence of reasoning" then claiming bigfoot exists.
And when it falls short we're told a little story about adaptive algorithms. And "soon soon soon"
It's been soon for decades. I'm so tired.
-
@futurebird
Ahh thank you for expressing this so well - I tend to say I think in thoughts, not words, and I only produce words when I need to communicate the thoughts externally. But the thoughts are kinda.. linkages and relationships between things, and patterns, I think. Like a great big relational database in my head which exists without words needing to be involved.
-
If there were a technology that would allow one to experience the thoughts of another person I wonder what we'd learn from those experiences?
Let's say you could produce a rough map of the state of a nervous system (not just the brain, thinking is a function of the whole body I suspect) and somehow transmit and remap it to another person. So you would feel and think for a moment some analog of another mind. Would it be Beautiful? Alienating?
Or is such a mapping impossible?
-
Now I'm thinking of a horror scifi. The machine projects another person's mind on to you but the result is that you just basically become that person and it takes years to recover. Since the only way to really bridge that gap would be to erase yourself.
So the mad scientist who just wants to be understood ends up turning everyone into someone who is, like her, tortured by a sense of isolation.
OK I've clearly gone on a tangent and I should be working on the book now anyways.
-
@futurebird @3TomatoesShort
One time, I woke up and just didn't feel like I was me. I felt OK, but it was very odd. I can't remember exactly what it did feel like, because being me is more or less the shape all my memories fit into.
So it might be like that: temporary depersonalisation that you couldn't integrate into an experience you'd had, since you weren't really you at the time.