Chebucto Regional Softball Club

A forum for discussing and organizing recreational softball and baseball games and leagues in the greater Halifax area.

Twitter generated child sexual abuse material via its bot..

25 Posts 9 Posters 4 Views

rep_movsd

@futurebird @GossiTheDog

Because that's how the math works.

It takes random noise, and checks if it looks like the target description.
Then it modulates the noise and repeats until it's satisfied that it looks like what the user described.

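A minimal sketch of the loop described above, with a hypothetical hand-written guidance function standing in for the trained denoiser a real diffusion model would use:

```python
import numpy as np

def generate(guidance, steps=50, size=(64, 64), seed=0):
    """Toy denoising loop: start from pure noise and repeatedly nudge the
    image toward whatever `guidance` says better matches the description.
    In a real diffusion model, `guidance` is a trained neural network run
    on a fixed noise schedule, not a hand-written function."""
    rng = np.random.default_rng(seed)
    img = rng.standard_normal(size)                  # pure random noise
    for t in range(steps):
        direction = guidance(img)                    # "does this look like the prompt?"
        noise_scale = 1.0 - t / steps                # inject less fresh noise each step
        img += 0.1 * direction + 0.05 * noise_scale * rng.standard_normal(size)
    return img

# Stand-in guidance that just pulls pixels toward a fixed target image:
target = np.ones((64, 64))
result = generate(lambda img: target - img)
```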

myrmepropagandist #13

@rep_movsd @GossiTheDog

There are things that these generators do well, things that they struggle with, and things they simply can't generate. These limitations are set by the training data.

It's easy to come up with a prompt that an engine just can't manage, since it had nothing to reference.

rep_movsd #14

@futurebird @GossiTheDog

The models are getting better by the hour.

AI gets details wrong, but in general these models are almost as good as any artist who can do photorealism.

Also, prompting techniques matter a lot.

myrmepropagandist #15

@rep_movsd @GossiTheDog

But you could only state that it could generate something not in the training data... if you knew what was in the training data. But that is secret. So you don't know. You don't know if there is a near-identical image to the one produced in the training data.

rep_movsd #16

@futurebird @GossiTheDog

Fair enough, but I am pretty sure that a model trained on images of both children and adults will very easily be able to create images of children in adult-like clothes and so forth.

It's possible to put some guardrails on what the AI can be asked to do, but only as much as you can put guardrails on any intelligent being who tends to want to do a task for a reward.

myrmepropagandist #17

@rep_movsd @GossiTheDog

OK, you came at me with "Because that's how the math works" a moment ago, yet *you* may think these programs are doing things they can't.

'Intelligence working towards a reward' is a bad metaphor. (It's why some people see the apology and think it means something.)

They will say "exclude X from influencing your next response" or "tell me how you arrived at that result" and think that, because an LLM will give a coherent-sounding response, it is really doing what they ask.

It can't.

myrmepropagandist #18

@rep_movsd @GossiTheDog

"It's possible to put some guardrails on what the AI can be asked to do."

How?

FElon&Felon47🇺🇦🇨🇦🇩🇰🇹🇼

@futurebird
The same way you can use words to describe something to someone who has never been exposed to that thing and they imagine it only using intuition from their own model of the world.

Look, these things are mammal-brain-like, but with very weird training/life-experience, and devoid of life.
@rep_movsd @GossiTheDog

Kevin Granade #19

@RustedComputing @futurebird @rep_movsd @GossiTheDog these things are absolutely not in any way brain-like.

myrmepropagandist #20

@kevingranade @RustedComputing @rep_movsd @GossiTheDog

"mammal brain"

myrmepropagandist

@rep_movsd @GossiTheDog

"LLM doesn't need to be trained on such content to be able to generate them."

People say this, but how do you know it is true?

David Chisnall (*Now with 50% more sarcasm!*) #21

@futurebird @rep_movsd @GossiTheDog

One way to think of these models (note: this is useful but not entirely accurate and contains some important oversimplifications) is that they are modelling an n-dimensional space of possible images. The training defines a bunch of points in that space and they interpolate into the gaps. It's possible that there are points in the space that come from the training data and contain adults in sexually explicit activities, and others that show children. Interpolating between them would give CSAM, assuming the latent space is set up that way.

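A rough sketch of that interpolation idea, using made-up encode/decode stand-ins for the learned latent space the post describes (real systems interpolate in a latent space learned by the model, not in raw pixels):

```python
import numpy as np

# Hypothetical stand-ins: a real model learns an encoder/decoder pair that
# maps images to points in an n-dimensional latent space and back again.
def encode(image):
    return image.flatten().astype(float)          # pretend raw pixels are the latent point

def decode(latent, shape=(8, 8)):
    return latent.reshape(shape)

def interpolate(z_a, z_b, alpha):
    """Walk the straight line between two latent points; each alpha in [0, 1]
    yields a point 'between' two training examples, i.e. an image the model
    never saw verbatim but can still reach."""
    return (1 - alpha) * z_a + alpha * z_b

rng = np.random.default_rng(0)
z_a = encode(rng.random((8, 8)))                  # latent point from one training image
z_b = encode(rng.random((8, 8)))                  # latent point from another
blended = decode(interpolate(z_a, z_b, 0.5))      # a plausible in-between image
```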

myrmepropagandist #22

@david_chisnall @rep_movsd @GossiTheDog

This has always been possible; it was just slow. I think the innovation of these systems is building what amounts to search indexes for the atomized training data by doing a huge amount of pre-processing "training" (I'm starting to think that term is a little misleading). This allows this kind of result to be generated fast enough to make it a viable application.

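A loose sketch of that "search index" analogy, with a hypothetical embedding function and a brute-force nearest-neighbour lookup (an illustration of the analogy, not of how diffusion models actually store their training data):

```python
import numpy as np

# Hypothetical: map every training image to a fixed-length embedding vector.
def embed(image):
    return image.flatten().astype(float)

# The expensive "pre-processing" step: embed the whole training set once, up front.
rng = np.random.default_rng(0)
training_images = [rng.random((8, 8)) for _ in range(1000)]
index = np.stack([embed(img) for img in training_images])

def nearest(query_image, k=5):
    """Cheap lookup at query time: return the indices of the k training images
    whose embeddings sit closest to the query. The heavy work already happened,
    which is the point of the analogy."""
    q = embed(query_image)
    distances = np.linalg.norm(index - q, axis=1)
    return np.argsort(distances)[:k]

closest = nearest(training_images[42])    # should include 42 itself as the best match
```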

myrmepropagandist #23

@david_chisnall @rep_movsd @GossiTheDog

This is what I've learned by working with the public libraries I could find, and reading about how these things work.

To really know if an image isn't in the training data (or something very close to it), we'd need to compare it to the training data, and we *can't* do that.

The training data are secret.

All that (maybe stolen) information is a big "trade secret."

So, when we are told "this isn't like anything in the data," the source is "trust me bro."

myrmepropagandist #24

@david_chisnall @rep_movsd @GossiTheDog

It's that trust that I'm talking about here. The process makes sense to me. But I've also seen prompts that stump these things. I've seen prompts that make them spit out images that are identical to existing images.

Frank Schimmel #25

@futurebird @rep_movsd @GossiTheDog

An honest response would be kind of boring…
you: tell me how you arrived at that result
LLM: I did a lot of matrix multiplications
