Chebucto Regional Softball Club

A forum for discussing and organizing recreational softball and baseball games and leagues in the greater Halifax area.

the whole ai-bro shtick about "ai democratizes art/programming/writing/etc" seemed always so bs to me, but i couldn't put it into words, but i think i now know how.

Uncategorized
83 Posts 38 Posters 0 Views
This topic has been deleted. Only users with topic management privileges can see it.
πŸ’€ wrote:
> @miki @KatS so you're betting unfoundedly that the tech is gonna work right one day?

miki (#52):

    @lucydev @KatS Nothing is ever gonna work right, not even humans. Different technologies are at different points on the price-to-mistakes curve, our job is to find a combination that minimizes price while also minimizing mistakes and harm caused.

E.g. it is definitely true that humans are much, much better psychologists than LLMs, but LLMs are free, much more widely available in abusive environments, speak your language even if you are in a foreign country, and work at 4 AM on a Saturday when you get dumped by your partner. Human psychologists do not. Very often, the choice isn't between an LLM and a human; the real choice is between an LLM and nothing (and the richer you are, the less true this is, hence the "class divide" in opinions about tech). And I'm genuinely unsure which option wins here, but considering the rate of change over the last 3 years, I wouldn't bet towards "nothing" winning for long.

Kat (post-Hallowe'en edition) wrote:
> @miki @lucydev How much thought do you give to the externalities of these things? Their less-desirable impact on the world in which we're trying to live?

πŸ’€ (#53):

@KatS look @miki don't get me wrong but any time i've tried using LLMs for my work, which isn't just some fun side project but actual production-running code, LLMs have been way too unreliable. It also resulted in me knowing jack shit about my own code, which is poison for long-term maintainability.

      Since these models are just statistically determining the next most likely token based on training data and fine tuning, without any actual understanding or thought behind it, I seriously can't see this tech being reliable enough one day. (reliable compared to humans, i don't seek 100% reliable in this case, natural language is too imprecise for that anyways. i would expect "good enough" as "as good as a professional in the given field")

      The other part of the equation is the amount of compute and electrical energy necessary to train and operate such a level, and on that level, there's no way in hell that shit is ever gonna be worth it, financially and environmentally.

      i'm not expecting the "make job for phone operators easier", i expect the "when i dial a number, it should be at least as reliable and efficient at routing it correctly as a phone operator would be".

      you can call me whatever you want, even llm denier if you need to, but autocorrect on steroids isn't worth exploiting other people's work or boiling our oceans.
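The decoding loop behind "statistically determining the next most likely token" can be sketched in a few lines. This is a toy, not an LLM: the hand-written bigram table below stands in for billions of learned parameters, but the greedy pick-the-most-probable-token loop is the mechanism being argued about:

```python
# Toy greedy next-token decoder. The "model" is a hand-made table of
# conditional probabilities; a real LLM learns these distributions, but the
# generation loop looks much the same: pick a likely next token, repeat.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.6, "ran": 0.2, "<end>": 0.2},
    "dog": {"ran": 0.7, "<end>": 0.3},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start, max_tokens=10):
    """Greedily append the most probable next token until '<end>'."""
    out = [start]
    for _ in range(max_tokens):
        nxt = max(bigram_probs[out[-1]].items(), key=lambda kv: kv[1])[0]
        if nxt == "<end>":
            break
        out.append(nxt)
    return out

print(generate("the"))  # ['the', 'cat', 'sat']
```

Nothing here "understands" cats; it only reproduces the statistics it was given, which is the core of the objection above.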


Kat (post-Hallowe'en edition) (#54):

        @lucydev @miki Similar: I'm not a "denier" - I'm utterly hostile to this mission of eliminating human expertise, knowledge and artistry.

        This is pretty impressive, given that I don't even like humans all that much.


πŸ’€ (#55):

          @KatS

          lmao same actually xD

          Wanna be friends?

          @miki


miki (#56):

@lucydev @KatS Autocorrect on steroids is basically GPT-3 tech. There's a lot more that goes into modern LLMs. A lot of the improvements are due to reinforcement learning, where LLMs learn to predict tokens that actually achieve some outcome, e.g. code that passes tests, or an answer that a domain expert judges "good". There's still token prediction involved, of course, but it somehow turns out that token prediction can get better scores than any human on (unseen) math olympiad questions. And people still say it's not in any way intelligent...
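The "predict tokens that actually achieve some outcome" idea can be caricatured as generate-and-verify: sample candidates, keep one that passes an external check. Everything below is invented for illustration (the canned "model" just picks among three hard-coded snippets); real RL updates model weights toward outcomes rather than filtering at inference time, but the outcome signal is the same kind of thing:

```python
import random

def fake_model(prompt, rng):
    """Stand-in for an LLM: emits one of a few canned candidate programs."""
    return rng.choice([
        "def add(a, b): return a - b",  # plausible-looking but wrong
        "def add(a, b): return a + b",  # correct
        "def add(a, b): return 0",      # wrong
    ])

def passes_tests(candidate):
    """The outcome signal: does the generated code pass a unit test?"""
    ns = {}
    try:
        exec(candidate, ns)
        return ns["add"](2, 3) == 5
    except Exception:
        return False

def best_of_n(prompt, n=200, seed=0):
    """Sample n candidates; return the first that passes the check, else None."""
    rng = random.Random(seed)
    for _ in range(n):
        candidate = fake_model(prompt, rng)
        if passes_tests(candidate):
            return candidate
    return None

print(best_of_n("write add(a, b)"))
```

Token prediction still does the generating; the verifier is what pushes the system toward answers that work.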


Kat (post-Hallowe'en edition) (#57):

              @lucydev Well, I like your pinned post about hope having dirt on her face. Yes, I think we'll get on.

              I'm not sure this is how the proponents of that tech expected it to bring people together, but here we are.


Kat (post-Hallowe'en edition) (#58):

                @miki @lucydev The last thing I think I can usefully add to this thread is that you sound very much like the kind of person Michael Crichton wrote about.

                I recommend watching Westworld some time - the movie, that is. I've never seen the series based on it.


πŸ’€ (#59):

                  @miki @KatS if i memorize every possible answer to a specific test, i can pass too. doesn't mean i know shit about fuck.

                  There's no actual thinking or reasoning involved (and no, reasoning models don't actually "reason"), so yeah, an LLM isn't actually intelligent, it just shows how flawed our tests for intelligence are.

To get some actual intelligence, thinking or reasoning involved, I'd reckon we'd have to fundamentally change something in the architecture of LLMs and use a fuckton more computing resources for a single model. And considering how much energy the current tech already wastes, and that the whole shtick that made LLMs (and more broadly generative AI) work in the first place is "we discovered that there comes a point where the output gets better when we throw ridiculous amounts of compute at the problem", it's already getting super difficult to run and maintain.

Honestly, either you're unreasonably optimistic, or you've never taken a look at how things actually work under the hood, but I really recommend you take a closer look at the technology you praise so much.

                  A couple things you could take a look at (without an AI summarizer, otherwise you'd learn jack shit):

Attention Is All You Need, the paper that sparked all that AI craze and the development of GPT models, and The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity, which tests reasoning models at all sorts of levels of problem complexity to infer their strengths and weaknesses.

Honestly, before you make any claims about where the tech could be and what it could do, you should have a look at how things actually work under the hood; otherwise, no offense, you're just talking out of your arse.


miki (#60):

                    @lucydev @KatS I have very specifically said "unseen questions."

                    If memorizing answers was a viable strategy to pass that test, humans would have done so.

If you still believe that there's no possible use for a tool that can get gold on a never-before-used set of math olympiad questions given a few hours of access to a reasonably powerful computer, and that the existence of that tool will have no interesting impact on the world... I don't know what to tell you.


πŸ’€ (#61):

@miki @KatS
> If you still believe that there's no possible use for a tool that can get gold on a never-before-used set of math olympiad questions given a few hours of access to a reasonably powerful computer, and that the existence of that tool will have no interesting impact on the world...

How reliable is that source? And if it's true, is it really reasonable to bet everything on this and let it do all your work, when a) you end up completely dependent on the tech, and b) you utterly destroy the environment in the process?

                      Real world problems may be less complex but might require much more context.

                      Oh, and don't get me started on accountability. There's a reason why curl is closing their bug bounty program.


πŸ’€ (#62):

                        @miki @KatS oh right, and what was the sample size of the test?

                        An N of 1 is worth fuck all


miki (#63):

                          @lucydev @KatS Curl is closing their bug bounty program because it's far too easy to use LLMs to produce slop. It doesn't mean you can't use LLMs to produce non-slop, just that it is a technique some people have found to get money with not too much effort, and we haven't yet sufficiently adapted to it. This is a genuine problem.

πŸ’€ wrote:
> the whole ai-bro shtick about "ai democratizes art/programming/writing/etc" seemed always so bs to me, but i couldn't put it into words, but i think i now know how.
>
> ai didn't democratize any of these things. People did. The internet did. if all these things weren't democratized and freely available on the internet before, there wouldn't have been any training data available in the first place.
>
> the one single amazing thing that today's day and age brought us is that you can learn anything at any time for free at your own pace.
>
> like, you can just sit down and learn sketching, drawing, programming, writing, basics in electronics, pcb design, singing, instruments, whatever your heart desires, and apply and practice these skills. fuck, most devs on fedi are self taught.
>
> the most human thing there is is learning and creativity. the least human thing there is is trying to automate that away.
>
> (not to mention said tech failing at it miserably)

Guest (#64):

                            @lucydev @nina_kali_nina

                            I think it's accurate

                            Instead of building your own skill, control someone else's

                            Sure they didn't _consent_, but democracies don't ask opposition voters for consent.

                            It's an accurate analogy and shows why democracy isn't a good thing πŸ€ͺ

πŸ’€ wrote:
> alr the sentence "the most human thing there is is learning and creativity. the least human thing there is is trying to automate that away." goes so hard imma drop it in my bio now

Guest (#65):

                              @lucydev if you don't mind imma steal this


Guest (#66):

                                @lucydev indeed, it's "easy" to "democratize" if you just put a cute bow on top of something already built by others then HIDE that they did it.

                                Also no democratization is possible when one relies on a black box controlled by others, so even if the technology itself was fine (which it's not IMHO) then at least most if not all commercializations of it are huge red flags trying to establish dependency.

myrmepropagandist shared this topic.

myrmepropagandist (#67):

                                  @lucydev

                                  It's also condescending, insulting, to disabled people to suggest that if some of them, IDK, struggle with a paint brush what is needed is for the computer to draw it for them rather than for all of us to look and listen with more care to the work that they create.

                                  1/


myrmepropagandist (#68, last edited by futurebird@sauropods.win):

                                    @lucydev

I heard a guy say AI could "make art more diverse" and he had all these images of black elves and dwarfs. As if "lacking diversity" were just a surface issue, not one built into who gets to participate, who has the time for creative expression.

                                    As if just pasting in a different colored face were the same thing as having an artist who wanted to draw that diversity and whose work would emerge from and be informed by the culture and experiences of the creator.

                                    2/


myrmepropagandist (#69, last edited by futurebird@sauropods.win):

                                      @lucydev

                                      These AI as equity arguments aren't coming from people who have ever said anything about "equity" before this moment, and they will never say anything about equity after this moment. They don't really care about equity. They just want to have something to say that might pause our criticism.

                                      "What if it really could help people?"

                                      Let that go. If it could help people you'd see people using AI effectively to help people.

                                      They are using it as cover.

                                      3/3

none gender with left politics wrote:
> @atax1a @lucydev honestly I see anyone using the word "democratize" as a red flag

Guest (#70):

                                        @vikxin @atax1a @lucydev democratize is like decentralize in tech, without qualification it means nothing besides warning you that you're going to get scammed.

Kat (post-Hallowe'en edition) wrote:
> @miki @lucydev
>
> > It democratizes it by making it available for the people who can't / don't want to / don't have the time for learning it.
>
> No, I'm sorry, but it doesn't.
>
> What it "democratises" is being an art director who commissions a machine to generate things derived from the (uncredited, uncompensated) work of others (whose lack of consent was gleefully violated).
>
> Gutenberg democratised learning with his movable-type press. Encyclopaedias took that a step further, and Wikipedia amped it up again. Blogs and YouTube democratised the sharing of knowledge and skills. All these things have enabled people to learn how to do a thing.
>
> But if you typed in a description and got a picture in return, you did not create that picture. You commissioned it.

Guest (#71):

@KatS
You plagiarized it. I know the jury is still out on the circumstances under which this statement holds legally, but practically, it's plagiarism to me.
@miki @lucydev


                                          Powered by NodeBB Contributors