After GOTY pull, Clair Obscur devs draw line in sand over AI: 'Everything will be made by humans by us'
-
You can always tell the people with no artistic talent, because they don't understand how AI is different from digital art software like Photoshop. And they seem to think that artists should just accept having their life's work stolen and vomited up as slop. Fuck anyone who thinks like this. They think they are entitled to my creativity without doing any of the work. "Everyone is doing it." The absolute degeneration of morality in this era is mind-boggling. Have no morals, seek only profit. The fact that so many people cannot take a stand for integrity because of perceived pragmatism is sickening. I hope anyone who thinks like this gets the AI-slop-filled hell they deserve. And I hope their careers are the next to be axed and replaced by the plagiarism machines.
-
Just to point out, LLMs are genAI. Lots of code editors provide code suggestions similar to autocorrect/text suggestions using AI. Strictly speaking, I doubt any game is made without AI. Not to say it can't be deliberately avoided, but given the lack of opposition to GPT and LLMs I don't see it being considered for avoidance in the same way as art. So awards with constraints on "any AI usage in development" probably disqualify most modern games.
-
> I'm in an entirely different industry than the topic at hand here, but my boss is really keen on ChatGPT and whatnot. Every problem that comes up, he's like "have you asked AI yet?" We have very expensive machines, which are maintained (ideally) by people who literally go to school to learn how to. We had an issue with a machine the other day and the same ol' question came up: "have you asked AI yet?" He took a photo of the alarm screen and fed it to ChatGPT. It spit out a huge reply and he forwarded it to me and told me to try it out. Literally the first troubleshooting step ChatGPT gave was nonsense and did not apply to our specific machine, our specific set-up, and our specific use-case.

"I will be investigating this shortly." That way, you don't have to commit to AI and can distance a bit from the micromanagement. If he persists: "I have a number of avenues I'd like to go down and will update on progress tomorrow." Though I'd be tempted to be flippant: "if you're feeling confident to pick it up, I'm happy to review it." If they hesitate: "that's OK, I'll go through the process."

Standups should be quick: any progress, any issues, what you're focussing on. Otherwise you waste everyone's time. Any messages I'll ignore until I have 5 mins. Micromanagement environments are not worth it.
-
> Just to point out, LLMs are genAI. Lots of code editors provide code suggestions similar to autocorrect/text suggestions using AI. Strictly I doubt any game is made without AI. Not to say it can't be deliberately avoided, but given the lack of opposition to GPT and LLMs I don't see it being considered for avoidance in the same way as art. So Awards with constraints on "any AI usage in development" probably disqualifies most modern games.

Code analysis and suggestion tools in many professional IDEs are not powered by LLMs. In the IDEs I use, there's an available LLM plugin that I've disabled (and never paid for, so it did nothing anyway). LLMs are simply too slow for the kind of code completion and recommendation algorithms used by IDEs, so using those is *not* "using genAI".
-
> The anti AI crowd is getting crazy. Everyone uses it during development. It's a tool for fuck's sake, what's next? Banning designers from using Photoshop because using it is faster and thus taking jobs from multiple artists who would have to be employed otherwise?

I'm not going to fault someone for driving to work in a car, but I certainly wouldn't call them the winner of a marathon even if they only drove for a few minutes of that marathon. There's a difference between something that runs the race for you (LLM AI) and something that simply helps you do what you are already doing (I suppose Photoshop is the equivalent of drinking Gatorade).
-
If there was an AI that licensed every bit of art/code/etc that it trained on, then I think I would be fine if they used it. BUT, I'd never think their final product was ever more than just a madlibs of other people's work, cobbled together for cheap commercial consumption. My time is worth something, and I'm not spending a minute of it on AI-generated crap when I could be spending it on the product of a true author, artist, coder, craftsman. They deserve my dollar, not the AI company and their AI-using middleman who produced shit with it.
-
> I'm not going to fault someone for driving to work in a car, but I certainly wouldn't call them the winner of a marathon even if they only drove for a few minutes of that marathon. There's a difference between something that runs the race for you (LLM AI) and something that simply helps you do what you are already doing (I suppose photoshop is the equivalent of drinking gatorade).

I don't think that's a relevant comparison - a marathon is a race meant specifically to test what the human body is capable of. Using a car there is obviously against the goal of the competition. When I'm writing code, I'll happily offload the boring parts to AI. There's only so many times you can solve the same problem without it being boring. And I've been doing this long enough that actually new problems I haven't solved yet are pretty rare.
-
By data aggregators, I strictly mean websites like Reddit, Shutterstock, DeviantArt, etc. Giving them the keys would bring up the cost of building a state-of-the-art model so that any open sourcing would be literally impossible. These models already cost in the low millions to develop. Take video generation, for instance: almost all the data is owned by YouTube and Hollywood. Google wanted to charge $300 a month to use it, but instead we have free models that can run on high-end consumer hardware.

Scraping has been accepted for a long time and making it illegal would be disastrous. It would make the entry price for any kind of computer vision software or search engine incredibly high, not just gen AI. I'd love to have laws that forced everything made with public data to be open source, but that is *not* what copyright companies, AI companies and the media are pushing for. They don't want to help artists, they want to help themselves. They want to be able to dictate a price of entry which suits them and the big AI companies as well. I'm all for laws to regulate data centers and manufacturing, but again, that's not what is being pushed for. Most anti-AI peeps seem to be helping the enemy a lot more than they realize.

-

> I'm all for laws to regulate data centers and manufacturing, but again, that's not what is being pushed for. Most anti-AI peeps seem to be helping the enemy a lot more than they realize.

I'm guessing there's a lot of controlled opposition, which is incredibly cheap to produce, doesn't leave much of a paper trail, and is reasonably effective.
-
> I don't think that's a relevant comparison - marathon is a race meant specifically to test what the human body is capable of. Using a car there is obviously against the goal of the competition. When I'm writing code, I'll happily offload the boring parts to AI. There's only so many times you can solve the same problem without it being boring. And I've been doing this long enough that actually new problems I haven't solved yet are pretty rare.

I'm not going to fault you for that - but do you think you should receive an award for the work you didn't do? In the case of this particular game, perhaps the bulk of the creative work was done by humans. But if the GOTY committee isn't confident with drawing a line between what work is OK to offload onto an AI and what work isn't, then I think it's fair for them to say that, this year, any generative AI use is a disqualifier.

You and I can say with ease that an implementation of a basic swap function (c=a, a=b, b=c) doesn't require any creative work and has been done to death, so there's no shame in copy-pasting something from Stack Overflow or ChatGPT into your own code to save time. But it's harder to gauge that for more complex things. Especially with art - where would you draw the line? Reference material? Concept art? Background textures or 3D models of basic props (random objects in the scene like chairs, trees, etc)? I don't think there's a clear answer for that. You might have an answer you think is correct, and I might have one as well, but I think it will be difficult and time-consuming to achieve consensus in the game development community. So, the most efficient answer for now is to have any generative AI be a disqualifier.
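(For what it's worth, the swap mentioned above really is about as uncreative as code gets - here's the three-step version as a minimal Python sketch, variable names arbitrary:)

```python
# Classic three-step swap via a temporary, exactly as written above: c=a, a=b, b=c
a, b = 1, 2
c = a
a = b
b = c
assert (a, b) == (2, 1)

# In Python the same thing is a well-worn idiom with no temporary at all
a, b = b, a
assert (a, b) == (1, 2)
```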
-
> Dude, go touch grass, please. This is embarrassing.

Yes, it is indeed embarrassing that you insist on defending AI. My sister insisted on using an AI this year to generate our secret santa. She didn't get a gift. Ahahahaha. AI is strictly for stupid people that can't do good work on their own. No reasonably intelligent person is using AI to make a draft that they can then correct, because it will always be more practical to just make it yourself.
-
I didn't mean it in the literal sense but if it makes you happy, we can pretend that whenever someone says "everyone" they mean it literally.
-
> Code analysis and suggestion tools in many professional IDEs are not powered by LLMs, in the IDEs I use, there's an available LLM that I've disabled the plugin for (and never paid for so it did nothing anyways). LLMs are simply too slow for the kind of code completion and recommendation algorithms used by IDEs and so using them is *not* "using genAI"

Uh... Sorry, but no, LLMs are definitely fast enough. It works just like autocomplete, except sometimes it's a genius that pulls the next few lines you were about to write out of the ether, and sometimes it makes up a library to do something you never asked for. Mostly it works about as well as code completion software, but it'll name variables much better.
-
> If there was an AI that licensed every bit of art/code/etc that it trained on, then I think I would be fine if they used it. BUT, I'd never think their final product was ever a more than just a madlibs of other people's work, cobbled together for cheap commercial consumption. My time is worth something, and I'm not spending a minute of it on AI generated crap when I could be spending it on the product of a true author, artist, coder, craftsman. They deserve my dollar, not the AI company and their ai-using middle man who produced shit with it.

I mean, they mostly used it for textures, right? Those are often generated from a combination of noise and photography; it's not like they were building the game out of Lego bricks of other people's art. I don't see how it's significantly different than sampling in music - it's just some background detail to enhance the message of the art. Obviously modern AI is a nightmare machine that cannot be justified, but I could imagine valid artistic uses for the hypothetical AI you described.
-
> Uh... Sorry but no, LLMs are definitely fast enough. It works just like auto complete, except sometimes it's a genius that pulls the next few lines you were about to write out of the ether, and sometimes it makes up a library to do something you never asked for. Mostly it works about as well as code completion software, but it'll name variables much better.

I believe you that genAI is being used for code suggestions, but I wouldn't call it a genius. This is anecdotal, but over the last couple of years I've noticed Visual Studio's autocomplete went from suggesting exactly what I wanted more often than not to just giving me hot garbage today. Even when I'm 3-4 lines into a very obvious repeating pattern, it'll suggest some nonsense variation on it that's completely useless. Or it'll straight up make up variable and function names that don't exist, or suggest inputs to function calls that don't match the signature. Really basic stuff that any kind of rules-based system should have no problem with.
-
I'm not surprised they ultimately felt like GenAI isn't useful to what they're trying to do. Game dev has known about this type of generation for a while now (see https://en.wikipedia.org/wiki/Model_synthesis and the sources linked at the bottom) and it takes a lot of human effort to curate both the training data and the model weights to end up with anything that feels new _and_ meaningful.

If I shuffle a deck of 52 cards, there is [a high chance](https://math.stackexchange.com/questions/671/when-you-randomly-shuffle-a-deck-of-cards-what-is-the-probability-that-it-is-a) of obtaining a deck order that has never occurred before in human history. Big whoop. GenAI is closer to [sexy dice](https://www.scribd.com/document/605580945/Sexy-Dice-Game-Printables) anyway - the "intelligent work" was making sure the dice faces always make sense when put together, so you don't end up rolling "blow suck" or "lips thigh".

It's very impressive that we're able to scale this type of apparatus up to plausibly generate meaningful paragraphs, conversations, and programs. It's ridiculous what it cost us to get it this far, and just like sexy dice and card shuffling, I fail to see it as capable of replacing human thought or ingenuity, let alone expressing what's "in my head". Not until we can bolt it onto a body that can feel pain and hunger and joy - and we can already make human babies much more efficiently than what it takes to train an LLM from scratch (and _they_ have personhood, and thus rights, in most societies around the world).

Even the people worried about "AI self-improving" to the point it "escapes our control" don't seem to be able to demonstrate that today's AI can do much more than slow us down in the long run; [this study](https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/) was published over 5 months ago, and they don't seem to have found much [since then](https://metr.org/research/).
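(The card-shuffling claim checks out with a quick back-of-the-envelope calculation - the bound on total human shuffles below is a deliberately generous guess of mine, not a sourced figure:)

```python
import math

# Distinct orderings of a standard 52-card deck: 52!
orderings = math.factorial(52)
print(f"{orderings:.2e}")  # about 8.07e+67

# Generous upper bound on shuffles ever performed:
# 10 billion people shuffling once per second for 10,000 years
shuffles_ever = 10**10 * 60 * 60 * 24 * 365 * 10_000
print(f"{shuffles_ever / orderings:.1e}")  # tiny fraction of orderings ever seen
```

Even under that absurd upper bound, the fraction of possible deck orders humanity has ever produced is vanishingly small, so a fresh shuffle is almost certainly novel - and, as the comment argues, novelty alone isn't the interesting part.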
-
VSCode autocomplete is literal AI now; they don't have IntelliSense anymore. Others will follow, making "no AI" software really difficult to prove.