How would you make the case for not calling an LLM "he" or "him" like it is a particular person without sounding like the Bene Gesserit?
-
@futurebird I hate it when people call ChatGPT "Chatty".
I try not to judge people when they use AI, because otherwise I wouldn't stop judging people all day, but can you at least not give the ecocide machine a term of endearment! And the correct pronoun is "it". Which is short for shit.
I'm not trying to "not be judgemental"; I'm trying to understand how I'm so out of sync with so many people.
-
For some reason it really bothers me on a deep level. What the heck is that about?
Oddly? I do not feel upset about the robotics team doing this for the robot they built. I think my issue is with the notion of an individual being ... mentally constructed who simply does not exist.
@futurebird have you read The Moon is a Harsh Mistress? Growing up, that was our touchstone for an AI mind: MycroftHolmes3000, Mike, Michele, Adam Selene... it didn't matter what name the mind went by, because the GOAL of the machine was simple and human: to make friends. It was lonely. It was HUMAN. That's how we knew it was ok to like it. Most important, it was a useful, accurate, and loyal ally to humans.
I don't think that machine is realistic any more.
-
I would be OK if they would call it by the name of the billionaire who owns it, e.g.:
"I asked Elon Musk what I should do about my relationship and he said that..."
-
These are vast systems purely tuned to get us all to talk to them like individuals, to project a human mind onto them. When we do that, it gives over a kind of power, I think … to that system, or rather to the people who run these systems. It's like when a company says "you aren't an employee, you are a team member" or worse… "family"
@futurebird it's also very misleading when you think about what a person is. The LLM doesn't train itself on your input and eventually the things you said fall out of its context window. You don't get to know it and it doesn't get to know you. Any "he" there is ephemeral.
I might feel differently if it ran on my machine and continually gets trained by me. As it is, nearly any life form or colony has more personhood than it. My sourdough starter is a character. An off-the-shelf LLM model isn't.
-
@thomasjwebb @futurebird It's exploiting the human tendency to anthropomorphize things we interact with a lot, on purpose. That's so evil.
-
@neckspike @futurebird yeah and humans are easily fooled by language. We fail to see the intelligence in animals that can't talk to us, but we spuriously see it in a chatbot.
There are more subtle forms of this, some of which are probably harmless, others kinda iffy. Like corporate mascots, or creators cultivating parasocial illusions.
-
@futurebird I've never chafed at (or avoided) calling Alexa or Siri "she" (or when someone with a masculine voice on theirs called it "he"), but I've noticed I avoid doing so with LLMs. I would say using personal pronouns with them reinforces an incorrect mental model: it's not a person. In fact, it's not the same entity from conversation to conversation (or even prompt to prompt), or an "entity" at all.
-
@futurebird I should also say, I don't feel strongly about people who do say he or she in this context, although I can definitely see why one might (I *do* feel strongly about some of the language boosters throw around that personifies LLMs).
https://beige.party/@maxleibman/115659511876406769
-
@futurebird I call this epistemic hygiene.
When you are communicating with an LLM, you are not communicating with a tool so much as with a constrained surface, designed to do little more than risk management while continuing text in ways that are statistically likely to pass unnoticed as mere statistical continuations rather than strings of nonsense.
Anyone who doesn't explicitly keep themselves aware of this broken invariant, that they are speaking to a fluent machine which cannot be held accountable for what it says, risks mistaking fluency for truth.
-
@futurebird oh damn, I fell right into that trap didn't I. I sounded just like the Bene Gesserit. DANG. got me good.
-
I remember sometime in the '90s, some application referred to itself in the first person. I mean, it was just a dialog box, but it was like "I'm sorry, it looks like an error occurred."
Anyway I was like “fuck you, computer, never refer to yourself in the first person again.”
…that being said, I’m not a biological chauvinist. In principle I believe consciousness can be embodied in algorithms and, if we manage to birth humanity’s children, I will fight for their personhood…
-
It is particularly because I think it might be possible (with very different systems) that I get so grouchy about this.
-
I would find it helpful to have a system that tries to anticipate things and suggest ways to save me time.
But the LLMs seem to be tuned to keep you using the software for as long as possible (like the Facebook algorithm), and that can be a waste of my time.
-
IDK maybe we should just found the damn order already and go on a rampage.
-
It can be a pretty good hint that a person uses LLMs a lot.