Chebucto Regional Softball Club

A forum for discussing and organizing recreational softball and baseball games and leagues in the greater Halifax area.

the whole ai-bro shtick about "ai democratizes art/programming/writing/etc" always seemed so bs to me, but i couldn't put it into words. i think i now know how.

Uncategorized · 83 Posts · 38 Posters
  • 💀 (earlier post, quoted)

    @miki @KatS we're not talking about influences, but more akin to "retracing".

    Besides, there are real implications regarding free software licenses and AI generated slop, so it's not exclusively a moral dilemma, but a legal one too.

    legal != the right thing to do necessarily, but mangling a bunch of intellectual property that's not yours through a statistical computer program isn't exactly comparable with an aspiring artist learning to draw.

  • 💀
    #39

    @miki i'm curious to know: how much do you know exactly about how LLMs/generative AI works?

    @KatS
  • miki
    #40

    @lucydev @KatS Currently a PhD student in the field. Have papers published. Presented at conferences.
  • 💀
    #41

    @miki @KatS damn, you're studying data science or ML?

    How come you place so much trust in this tech? You must have a reason, i presume.
  • miki
    #42

    @lucydev @KatS Because I use it every day, and I can see how much it helps. And to be fair, it primarily helps people who get X done, not the doers of X. Just as automated telephones primarily help those who want to make phone calls (by making them cheaper, faster and much more convenient), not the phone operators who helped to make them in the past.
  • 💀
    #43

    @miki @KatS don't get the wrong idea, this is pure curiosity.

    I had the suspicion that the less someone knows about LLMs or ML, the more they think the tech is capable of, but that suspicion must be false, since you're around.
  • Kat (post-Hallowe'en edition)
    #44

    @miki @lucydev "People who get X done."
    How about "people who want X done"? Wouldn't that be more accurate?
  • Kat (post-Hallowe'en edition)
    #45

    @miki @lucydev Wait, you're comparing art (visual, written or musical) to operating a telephone switchboard?
  • miki
    #46

    @KatS @lucydev Yes. At least the kind of art that people want to consume (and not want to make).
  • 💀
    #47

    @miki @KatS that explains a lot
  • Kat (post-Hallowe'en edition)
    #48

    @miki @lucydev Wow.
    It'll make for more efficient communication in future if you make it explicitly clear that you're democratising the commissioning of things, and working hard to devalue artistry in all its forms.

    Talking about "democratising art" is typically read as making it easier for people to make art.
    This is what leads to this kind of convoluted exchange.
  • miki
    #49

    @lucydev @KatS The more you know about LLMs, the more "calibrated" you are about where they work (and don't work) right now. People who don't know much about them are either hypesters (making a company of a thousand LLMs and firing all their employees) or LLM deniers. Both are equally crazy.

    I also see not just where LLMs are right now, but where they are going. We went from coding agents being basically a joke a year ago, to them semi-autonomously solving (some) complex mathematical problems and being used for boring gruntwork by world-class, Fields Medal-winning mathematicians. They can now also solve an extremely complex GPU performance engineering task that Anthropic used as an interview question for the most brilliant engineers in that discipline, *better than any human given the same amount of time*.

    They're still much better at small, well-scoped and bounded tasks than at large open-ended problems, but "small and well-scoped" went from "write me a linked list implementation unconnected to anything in my code" to "write me a small feature and follow the style of my codebase." In a year. What will happen in another year? 5 years? 10 years? God only knows, and he certainly isn't telling.
  • 💀
    #50

    @miki @KatS so you're betting unfoundedly that the tech is gonna work right one day?
  • Kat (post-Hallowe'en edition)
    #51

    @miki @lucydev How much thought do you give to the externalities of these things? Their less-desirable impact on the world in which we're trying to live?
  • miki
    #52

    @lucydev @KatS Nothing is ever gonna work right, not even humans. Different technologies are at different points on the price-to-mistakes curve; our job is to find a combination that minimizes price while also minimizing mistakes and harm caused.

    E.g. it is definitely true that humans are much, much better psychologists than LLMs, but LLMs are free, much more widely available in abusive environments, speak your language even if you are in a foreign country, and work at 4AM on a Saturday when you get dumped by your partner. Human psychologists do not. Very often, the choice isn't between an LLM and a human, the real choice is between an LLM and nothing (and the richer you are, the less true this is, hence the "class divide" in opinions about tech). And I'm genuinely unsure which option wins here, but considering the rate of change over the last 3 years, I wouldn't bet towards "nothing" winning for long.
  • 💀
    #53

    @KatS look @miki don't get me wrong, but any time i've tried using LLMs for my work, which isn't just some fun side project but actual production-running code, LLMs have been way too unreliable. It also resulted in me knowing jack shit about my own code, which is poison for long-term maintainability.

    Since these models are just statistically determining the next most likely token based on training data and fine-tuning, without any actual understanding or thought behind it, I seriously can't see this tech ever being reliable enough. (Reliable compared to humans, that is; i don't expect 100% reliability, natural language is too imprecise for that anyways. i'd call "good enough" "as good as a professional in the given field".)

    The other part of the equation is the amount of compute and electrical energy necessary to train and operate models at that level, and there's no way in hell that shit is ever gonna be worth it, financially and environmentally.

    i'm not expecting the "make the phone operators' job easier" part, i expect the "when i dial a number, it should be at least as reliable and efficient at routing it correctly as a phone operator would be" part.

    you can call me whatever you want, even llm denier if you need to, but autocorrect on steroids isn't worth exploiting other people's work or boiling our oceans.
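
    For concreteness, the "statistically determining the next most likely token" part boils down to a loop like this minimal sketch (assuming the Hugging Face transformers library and the small gpt2 checkpoint; greedy decoding only, no sampling and no RL on top):

    ```python
    # Minimal sketch of the "predict the next most likely token" loop, assuming
    # the Hugging Face transformers library and the small gpt2 checkpoint
    # (greedy decoding only -- no sampling, no chat tuning, no RL).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "def fizzbuzz(n):"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(30):
            logits = model(ids).logits            # a score for every vocabulary token
            next_id = logits[0, -1].argmax()      # pick the single most likely next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))
    ```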

  • Kat (post-Hallowe'en edition)
    #54

    @lucydev @miki Similar: I'm not a "denier" - I'm utterly hostile to this mission of eliminating human expertise, knowledge and artistry.

    This is pretty impressive, given that I don't even like humans all that much.
  • 💀
    #55

    @KatS

    lmao same actually xD

    Wanna be friends?

    @miki
  • miki
    #56

    @lucydev @KatS Autocorrect on steroids is basically GPT-3 tech. There's a lot more that goes into modern LLMs. A lot of the improvements are due to reinforcement learning, where LLMs learn to predict tokens that actually achieve some outcome, e.g. code that passes tests, or an answer that is judged "good" by a domain expert. There's still token prediction involved of course, but it somehow turns out that token prediction can get better scores than any human at (unseen) math olympiad questions. And people still say it's not in any way intelligent...
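
    Roughly what "tokens that actually achieve some outcome" means, as a toy sketch (the candidate, the tests and the outcome_reward helper are hypothetical illustrations, not any particular lab's pipeline; real RL fine-tuning would wrap a score like this in PPO/GRPO-style updates over token log-probs, this only shows where the "good outcome" signal comes from):

    ```python
    # Toy sketch of an outcome-based reward: a candidate completion is scored by
    # whether it actually passes unit tests, not by how closely it matches
    # reference text. The candidate/tests are hypothetical stand-ins for samples
    # drawn from the model; an RL trainer would then increase the probability of
    # the token sequences that earned reward 1.0.
    import subprocess
    import sys
    import tempfile
    import textwrap

    def outcome_reward(candidate_code: str, test_code: str, timeout_s: int = 10) -> float:
        """Reward = 1.0 if the generated code passes the tests, else 0.0."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate_code + "\n\n" + test_code)
            path = f.name
        try:
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, timeout=timeout_s)
            return 1.0 if result.returncode == 0 else 0.0
        except subprocess.TimeoutExpired:
            return 0.0

    candidate = textwrap.dedent("""
        def add(a, b):
            return a + b
    """)
    tests = textwrap.dedent("""
        assert add(2, 2) == 4
        assert add(-1, 1) == 0
    """)
    print(outcome_reward(candidate, tests))  # 1.0 -> this sample would be reinforced
    ```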

  • Kat (post-Hallowe'en edition)
    #57

    @lucydev Well, I like your pinned post about hope having dirt on her face. Yes, I think we'll get on.

    I'm not sure this is how the proponents of that tech expected it to bring people together, but here we are.
  • Kat (post-Hallowe'en edition)
    #58

    @miki @lucydev The last thing I think I can usefully add to this thread is that you sound very much like the kind of person Michael Crichton wrote about.

    I recommend watching Westworld some time - the movie, that is. I've never seen the series based on it.
