Congrats to The Guardian on hiring Doctorow to explain enshittification to the non-tech masses.
-
@futurebird @zeldman
In a big sense it does already manipulate people, at least by always agreeing with them and confirming their ideas; this makes many people believe they have brilliant ideas in any field, only to discover sooner or later that the ideas are shitty and don't work in the real world, causing depression and, in some cases, suicide.
-
I have to remember that people talked to ELIZA for a long time too. So this isn't something new, but it seems a lot more destructive.
Would having someone who is hooked on AI watch another person using it, someone with ideas totally different from their own, help them snap out of it?
"I know it's not popular, but could the earth be flat?"
GPT: "Generally most sources will say that the earth is round, but I can see you like to take on bold controversial ideas." -
@futurebird @jones @zeldman There's a certain essential credence from being able to talk.
LLMs are outright eloquent (producing close-to-real utterance patterns) while there's no there there. The result is either no credence (it's a soulless husk) or much credence (eloquent words!).
It's not especially wrong to think of LLMs as a credibility scam; that sounds like something you should take seriously. (Careful scientific use is summoning demons for knowledge; you have to be able to detect the lies.)
-
@graydon @futurebird @jones @zeldman The most pernicious aspect of the new AI is how aggressively it mimics human behaviour completely unrelated to the content it provides, in order to get around our cognitive filters and steal unearned credibility. It’s an intentional con, and it makes me so angry.
-
@michaelgemar @graydon @jones @zeldman
"I'm so glad you've noticed this flaw in my writing style and I will try to do better. I will take on a more neutral tone in the future. Should we try that prompt again?"
-
@futurebird @graydon @jones @zeldman Exactly! No LLM is ever “glad”, because no LLM has emotions. (No LLM should ever use “I”, as it’s not a person.)
I feel like we need a version of the Butlerian Jihad: “Thou shalt not make a machine that feigns humanity.”
@michaelgemar @graydon @jones @zeldman
Imagine a hammer, if you will, and when you aim for a nail it often swerves and hits your finger... but don't worry! It apologizes and says it will do better. Just give it one more chance.
And you end up feeling bad for the poor hammer when you want to give up on it, even though all your fingers are broken.
-
Please consider this scary thought: ChatGPT has not yet started the enshittification pivot. That is yet to come. It is, for now, free of ads, free of clutter, and free of (obvious) manipulation.
But of course that must change as they move into the "more profitable" part of their growth cycle.
What will advertising look like in that environment? One that is already so manipulative? One that is already playing fast and loose with the truth?
shudder with me at what is to come
@futurebird @zeldman I actually don't think that ChatGPT's enshittification pivot, if it does come, will look normal. It's not a service that people actually _depend_ upon the way Facebook, Twitter, Amazon, etc. are/were. For most, it's just a curiosity. You can't do an enshittification cycle on a thing people don't truly give a shit about.
-
There is a group of whales who use it a LOT, though, people who just talk to it all day. That's not typical use, and I have questions about whether it's even ethical to provide... whatever that is.
-
OK I heard this story on a podcast, so it's *just* a story.
BUT: there was a mom driving with her wife and kids. The parents are getting divorced, in part because of her heavy LLM use. They start having an ugly argument. The other mom says "let's not... in front of the kids" (I think the story is told from her side).
She then asks her LLM, with leading prompts, to back up her side of the argument in *front* of the kids, to prove her wife is in the wrong.
...
-
"LLM wouldn't it be a terrible idea to say ..." that kind of thing.
I kind of hope it was just a made up story.
And yet.
-
@futurebird @d_j_fitzgerald @zeldman I read that in an article too. I think it was reported in Futurism. Not clearly sourced reporting, but reporting nonetheless…
-
It was two married women, not a guy and a woman, I guess.
-
@futurebird @zeldman how do we know there isn't some advertising already? The replies could already be biased towards an advertiser. Done in a subtle fashion it would be hard to detect!
-
@DoctorDNS @zeldman This is what I keep anticipating— not just for this but corporate social media. Things like subtle promotion of organic posts that are positive about a brand— or even that suggest the *need* for a brand. Or maybe showing you that you have multiple friends who took a vacation to make it seem like you ought to do that too.
The possibilities for manipulation are endless!
There was some evidence Facebook was doing this 8 years ago. What has it become since?