Twitter generated child sexual abuse material via its bot... and then hid behind the bot when apologising.
“Sincerely, Grok”.
Hold executives accountable.
-
Twitter generated child sexual abuse material via its bot... and then hid behind the bot when apologising.
“Sincerely, Grok”.
Hold executives accountable.
How did the computer have the kind of training data needed to make such an image?
No one cares about this?
-
How did the computer have the kind of training data needed to make such an image?
No one cares about this?
@futurebird web content scrapers are fairly dumb and take whatever ends up in front of them, I guess? @GossiTheDog
-
@futurebird web content scrapers are fairly dumb and take whatever ends up in front of them, I guess? @GossiTheDog
I think this is as much an advertisement for what Grok "could" do as an apology.
I guess that's a little cynical, but I don't think it's out of line.
-
Twitter generated child sexual abuse material via its bot... and then hid behind the bot when apologising.
“Sincerely, Grok”.
Hold executives accountable.
This is so gross.
-
How did the computer have the kind of training data needed to make such an image?
No one cares about this?
Early access Epstein files? I am only half kidding here.
-
Early access Epstein files? I am only half kidding here.
@GhostOnTheHalfShell @GossiTheDog
That's something I'll never like about models that don't run on your own machine: you don't really know what has been "mixed in" to the final image you are looking at.
Even for innocent things, they could have all sorts of influences.
And none of these companies carefully vetted their training sets. They just didn't care.
-
Early access Epstein files? I am only half kidding here.
@GhostOnTheHalfShell @GossiTheDog
This is giving me an idea for a horror story.
The face of a murdered child haunts the machine, comes back to find the killer.
-
How did the computer have the kind of training data needed to make such an image?
No one cares about this?
An LLM doesn't need to be trained on such content to be able to generate it.
It learns how things would look and can generate something it has never seen before - much like humans. I find it quite hypocritical that the media itself sexualizes teens and preteens under the banner of sexual freedom, while showing fake outrage.
The movie Cuties, meant to show the dangers of this, ends up being a medium of exploitation itself.
-
An LLM doesn't need to be trained on such content to be able to generate it.
It learns how things would look and can generate something it has never seen before - much like humans. I find it quite hypocritical that the media itself sexualizes teens and preteens under the banner of sexual freedom, while showing fake outrage.
The movie Cuties, meant to show the dangers of this, ends up being a medium of exploitation itself.
"LLM doesn't need to be trained on such content to be able to generate them."
People say this but how do you know it is true?
-
"LLM doesn't need to be trained on such content to be able to generate them."
People say this but how do you know it is true?
Because that's how the math works.
It takes random noise and checks whether it looks like the target description.
Then it modulates the noise and repeats until it's satisfied that the result looks like what the user described.
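Roughly, as a sketch - this is a caricature of that loop, not a real diffusion model. Both helper functions below are hypothetical stand-ins for large trained networks, so everything they "know" still comes from training data:

```python
# Toy caricature of the "noise -> check -> adjust -> repeat" loop.
# NOT a real diffusion model: both helpers below stand in for large
# neural networks whose behaviour comes entirely from training data.
import numpy as np

def denoise_step(image: np.ndarray, step: int) -> np.ndarray:
    # Placeholder: a real model predicts and removes a bit of noise here.
    return image * 0.9

def prompt_score(image: np.ndarray, prompt: str) -> float:
    # Placeholder: a real model scores how well the image matches the text.
    return 1.0 - float(np.abs(image).mean())

def generate(prompt: str, steps: int = 50, size: int = 64) -> np.ndarray:
    image = np.random.randn(size, size)         # start from pure random noise
    for step in range(steps):
        image = denoise_step(image, step)       # modulate the noise
        if prompt_score(image, prompt) > 0.99:  # "satisfied" with the match?
            break
    return image

sample = generate("a photorealistic landscape")
```
-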
"LLM doesn't need to be trained on such content to be able to generate them."
People say this but how do you know it is true?
@futurebird
The same way you can use words to describe something to someone who has never been exposed to that thing, and they imagine it using only intuition from their own model of the world. Look, these things are mammal-brain-like, but with very weird training/life-experience, and devoid of life.
@rep_movsd @GossiTheDog
-
Because that's how the math works.
It takes random noise and checks whether it looks like the target description.
Then it modulates the noise and repeats until it's satisfied that the result looks like what the user described.
There are things these generators do well, things they struggle with, and things they simply can't generate. These limitations are set by the training data.
It's easy to come up with a prompt for an engine that it just can't manage, since it had nothing to reference.
-
There are things these generators do well, things they struggle with, and things they simply can't generate. These limitations are set by the training data.
It's easy to come up with a prompt for an engine that it just can't manage, since it had nothing to reference.
The models are getting better by the hour.
AI gets details wrong, but in general these models are almost as good as any artist who can do photorealism.
Also, prompting techniques matter a lot.
-
The models are getting better by the hour.
AI gets details wrong, but in general these models are almost as good as any artist who can do photorealism.
Also, prompting techniques matter a lot.
But you could only state that it can generate something not in the training data... if you knew what was in the training data. But that is secret. So you don't know. You don't know if there is a near-identical image to the one produced in the training data.
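If the training set were open, checking for a near-identical match would actually be straightforward. A minimal sketch of such a check, assuming the third-party Pillow and imagehash packages; every path here is hypothetical:

```python
# Sketch: how one could look for near-identical training images, IF the
# training set were available. Assumes the third-party `Pillow` and
# `imagehash` packages; all paths here are hypothetical.
from pathlib import Path

import imagehash
from PIL import Image

def near_duplicates(generated: str, training_dir: str, max_distance: int = 8):
    """Return (path, distance) pairs for training images whose perceptual
    hash is within `max_distance` bits of the generated image's hash."""
    target = imagehash.phash(Image.open(generated))
    matches = []
    for path in Path(training_dir).rglob("*.jpg"):
        distance = target - imagehash.phash(Image.open(path))  # Hamming distance
        if distance <= max_distance:
            matches.append((path, distance))
    return sorted(matches, key=lambda m: m[1])

# Hypothetical usage -- nobody outside the company can run this,
# because the training set is secret, which is exactly the point.
print(near_duplicates("generated.png", "training_data/"))
```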
-
But you could only state that it can generate something not in the training data... if you knew what was in the training data. But that is secret. So you don't know. You don't know if there is a near-identical image to the one produced in the training data.
Fair enough, but I am pretty sure that a model trained on both images of children and adults will very easily be able to create images of children in adult-like clothes and so forth.
It's possible to put some guardrails on what the AI can be asked to do, but only as much as you can put guardrails on any intelligent being that wants to do a task for a reward.
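One common pattern, as a purely hypothetical sketch: screen the prompt before it ever reaches the generator. Real deployments use trained safety classifiers and also screen the generated output; the keyword list and the `image_model` stub below are placeholders:

```python
# Minimal sketch of a prompt-side guardrail. Entirely hypothetical:
# real systems use trained safety classifiers and also screen the
# generated output, not a hand-written keyword list like this.
BLOCKED_TERMS = {"banned_term_a", "banned_term_b"}  # placeholder terms

def image_model(prompt: str) -> str:
    # Stand-in for the actual image generator.
    return f"<image for: {prompt}>"

def guarded_generate(prompt: str):
    # Refuse (return None) if the prompt touches a blocked term.
    words = set(prompt.lower().split())
    if words & BLOCKED_TERMS:
        return None
    return image_model(prompt)

print(guarded_generate("a watercolor of a lighthouse"))  # generates
print(guarded_generate("banned_term_a lighthouse"))      # refused -> None
```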
-
Fair enough, but I am pretty sure that a model trained on both images of children and adults will very easily be able to create images of children in adult-like clothes and so forth.
It's possible to put some guardrails on what the AI can be asked to do, but only as much as you can put guardrails on any intelligent being that wants to do a task for a reward.
OK, you came at me with "Because that's how the math works." a moment ago, yet *you* may think these programs are doing things they can't.
'Intelligence working towards a reward' is a bad metaphor. (It's why some people see the apology and think it means something.)
They will say "exclude X from influencing your next response" or "tell me how you arrived at that result" and think, because an LLM will give a coherent-sounding response, that it is really doing what they ask.
It can't.
-
Fair enough, but I am pretty sure that a model trained on both images of children and adults will very easily be able to create images of children in adult-like clothes and so forth.
It's possible to put some guardrails on what the AI can be asked to do, but only as much as you can put guardrails on any intelligent being that wants to do a task for a reward.
"Its possible to put some guardrails on what the AI can be asked to do."
How?
-
@futurebird
The same way you can use words to describe something to someone who has never been exposed to that thing, and they imagine it using only intuition from their own model of the world. Look, these things are mammal-brain-like, but with very weird training/life-experience, and devoid of life.
@rep_movsd @GossiTheDog
-
@RustedComputing @futurebird @rep_movsd @GossiTheDog these things are absolutely not in any way brain-like.
-
@RustedComputing @futurebird @rep_movsd @GossiTheDog these things are absolutely not in any way brain-like.
@kevingranade @RustedComputing @rep_movsd @GossiTheDog
"mammal brain"