the whole ai-bro shtick about "ai democratizes art/programming/writing/etc" always seemed like such bs to me. i couldn't put it into words, but i think i now know how.
ai didn't democratize any of these things. People did. The internet did. if all these things weren't already democratized and freely available on the internet, there wouldn't have been any training data in the first place.
the one genuinely amazing thing this day and age brought us is that you can learn anything, at any time, for free, at your own pace.
like, you can just sit down and learn sketching, drawing, programming, writing, basics of electronics, pcb design, singing, instruments, whatever your heart desires, and then apply and practice those skills. fuck, most devs on fedi are self-taught.
the most human thing there is is learning and creativity. the least human thing there is is trying to automate that away.
(not to mention said tech failing at it miserably)
-
@lucydev It democratizes it by making it available for the people who can't / don't want to / don't have the time for learning it.
We're already seeing non-programmers successfully create quite substantial coding projects with AI, to an extent that surprises even me, and I've been a huge proponent of AI in coding from the start.
The same applies to art: there are many people who need or want art (small business owners, hobbyist game creators, wedding organizers, school teachers) but don't have the budget for the real thing.
Of course, many artists and programmers don't want this to happen and try to invent reasons why it's a bad idea, just as switchboard operators didn't want the phone company to "force" customers to dial their own calls, and just as elevator operators tried to come up with reasons why driverless elevators were unsafe.
@miki @lucydev It doesn't democratize shit unless it's locally run models.
If you rely on a big tech company to do your programming or "art" for you, then it's the opposite.
Big tech wants people to think that AI is democratizing, because it gives them more control. That's why it's free and unprofitable as hell: they can get people hooked on it until they have to pay for it (and once you're paying for it, I can't see how it's any more democratizing than paying an actual artist or developer). An additional bonus is that investors are happy because of all the users.
-
"It democratizes it by making it available for the people who can't / don't want to / don't have the time for learning it."
No, I'm sorry, but it doesn't.
What it "democratises" is being an art director who commissions a machine to generate things derived from the (uncredited, un-compensated) work of others (whose lack of consent was gleefully violated).
Gutenberg democratised learning, with his movable-type press.
Encyclopaedias took that a step further, and Wikipedia amped it up again.
Blogs and Youtube democratised the sharing of knowledge and skills.
All these things have enabled people to learn how to do a thing.
But if you typed in a description and got a picture in return, you did not create that picture. You commissioned it.
-
@lucydev Saying AI democratises art, writing or programming is like saying that a chef democratises cooking, or a maid democratises house cleaning.
-
@KatS @lucydev It democratizes in the public transit way (by making transport available to non-drivers), not in the car way (by making it easy).
And btw: all art is uncredited and a lot of it is nonconsensual. Outside of academia, it's extremely rare to credit every single influence an artist drew on, down to Da Vinci or Gregorian chant, as long as significant snippets aren't extracted directly from that work, something that AI only does when prompted.
-
@miki @KatS we're not talking about influences here, but something more akin to "retracing".
Besides, there are real implications regarding free software licenses and AI generated slop, so it's not exclusively a moral dilemma, but a legal one too.
legal != the right thing to do necessarily, but mangling a bunch of intellectual property that's not yours through a statistical computer program isn't exactly comparable with an aspiring artist learning to draw.
-
@lucydev @KatS Because I use it every day, and I can see how much it helps. And to be fair, it primarily helps the people who want X done, not the people who do X. Just as automated telephone exchanges primarily help those who want to make phone calls (by making them cheaper, faster and much more convenient), not the operators who used to connect them.
-
@miki @lucydev Wow.
It'll make for more efficient communication in future if you make it explicitly clear that you're democratising the commissioning of things, and working hard to devalue artistry in all its forms.
Talking about "democratising art" is typically read as making it easier for people to make art.
This is what leads to this kind of convoluted exchange.
-
@lucydev @KatS The more you know about LLMs, the more "calibrated" you are about where they work (and don't work) right now. People who don't know much about them are either hypesters (staffing a company with a thousand LLMs and firing all the employees) or LLM deniers. Both are equally crazy.
I also see not just where LLMs are right now, but where they are going. We went from coding agents being basically a joke a year ago to them semi-autonomously solving (some) complex mathematical problems and being used for boring gruntwork by world-class, Fields Medal-winning mathematicians. They can now also solve an extremely complex GPU performance engineering task that Anthropic used as an interview question for the most brilliant engineers in that discipline, *better than any human given the same amount of time*.
They're still much better at small, well-scoped and bounded tasks than at large open-ended problems, but "small and well-scoped" went from "write me a linked list implementation unconnected to anything in my code" to "write me a small feature and follow the style of my codebase." In a year. What will happen in another year? 5 years? 10 years? God only knows, and he certainly isn't telling.
-
@lucydev @KatS Nothing is ever gonna work right, not even humans. Different technologies are at different points on the price-to-mistakes curve, our job is to find a combination that minimizes price while also minimizing mistakes and harm caused.
E.g. it is definitely true that humans are much, much better psychologists than LLMs, but LLMs are free, available even in abusive environments, speak your language even if you are in a foreign country, and work at 4 AM on a Saturday when you get dumped by your partner. Human psychologists do not. Very often the choice isn't between an LLM and a human; the real choice is between an LLM and nothing (and the richer you are, the less true this is, hence the "class divide" in opinions about tech). And I'm genuinely unsure which option wins here, but considering the rate of change over the last 3 years, I wouldn't bet on "nothing" winning for long.
