Carmack defends AI tools after Quake fan calls Microsoft AI demo “disgusting”
-
> Heh, especially for this generation I suppose. Even the Arc B580 is on TSMC and overpriced/OOS everywhere. It's kinda their own stupid fault too. They could've used Samsung or Intel, and a bigger, slower die for each SKU, but didn't.

TSMC is the only proven fab at this point. Samsung is lagging, and its current emerging tech isn't meeting expectations. Intel might be back in the game with their next gen, but that's still to be proven and they aren't scaled up to production levels yet.

And the differences between fabs mean that designing a chip to be made at more than one is almost like designing an entirely different chip for each fab. Not only do the gates themselves have different dimensions (and require a different layout), they also have different performance and power profiles. So even if two chips are logically the same, and the designers could trade area efficiency for a more consistent higher-level layout (think two buildings with the same footprint but different room layouts), they'd still need different setups for things like buffers and repeaters. And even if they did design the same logical chip for both fabs, the two would end up as different products. With TSMC leading not just on performance but also on yields, the lower-end chips might not even be cheaper to produce.

Also, each fab requires NDAs and such, and it could even be the case that signing one NDA disqualifies you from signing another, so a company might need entirely separate teams for the NDA-covered work rather than being able to share people across similar tasks.

Not that I disagree with your sentiment overall; it's just a gamble. Like, what if one company goes with Samsung for one SKU, their competition goes with TSMC for the competing SKU, and they end up with a whole bunch of inventory that no one wants, because the performance gap is bigger than the price gap, making waiting for stock the no-brainer choice?
But if Intel or Samsung do catch up to TSMC in at least some of the metrics, that could change.
-
TAA looks worse than no AA, IMO. It can beat leaving it off when it's paired with other techniques that make frames look grainy in random ways, like real-time path-traced global illumination that doesn't have enough time per frame to cast enough rays for a smooth output. But on its own I see it as pretty much a blur effect. Other AA techniques generate more samples to increase per-pixel accuracy; TAA instead reuses previous-frame data to increase temporal stability, which can reduce aliasing but is less accurate, because sometimes the new colour isn't correlated with the previous one. Maybe the accuracy loss from TAA is worth the smoothness you get from low-sample path-traced global illumination in some cases (personally, a maybe), or from generated frames (personally, a no), but TAA artifacts generally annoy me more than aliasing artifacts. As for specifics, those artifacts are things like washed-out details, motion blur, and difficult-to-read text.
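For what it's worth, the accumulate-and-clamp idea behind a typical TAA resolve fits in a few lines. This is a toy greyscale NumPy sketch under stated assumptions (no motion-vector reprojection, an illustrative blend factor, luminance only), not any engine's actual implementation; the neighbourhood clamp is exactly the step that both rejects stale history and washes out detail:

```python
import numpy as np

def taa_resolve(current, history, alpha=0.1):
    """Blend the current frame with history (exponential moving average),
    clamping history to the 3x3 neighbourhood min/max of the current frame
    so that uncorrelated old colours get rejected. Toy 2D greyscale sketch."""
    lo = np.full_like(current, np.inf)
    hi = np.full_like(current, -np.inf)
    # Per-pixel min/max over the 3x3 neighbourhood of the current frame.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted = np.roll(np.roll(current, dy, axis=0), dx, axis=1)
            lo = np.minimum(lo, shifted)
            hi = np.maximum(hi, shifted)
    clamped = np.clip(history, lo, hi)  # reject history outside the neighbourhood
    return alpha * current + (1 - alpha) * clamped
```

Note how the clamp also explains the "new colour isn't correlated with the previous one" failure: history that falls outside the current neighbourhood gets snapped to it, trading accuracy for stability.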
-
Oh my god, you're still trying to have it both ways. I know it's difficult for you to understand because you're clearly kinda stupid, but the real world has this thing called "nuance".

Imagine a scenario: you are a major content artist at a studio. You have limited time and money, and you need to create 200 textures extremely quickly, due by the end of the week (not uncommon in today's corporate crunch culture). Normally you'd throw your hands up and go "oh man, I'm fucked, I'm gonna get fired". But thankfully you live in a world with stable diffusion models. You train a model on your own previous work, then prompt it to generate a bunch of textures. You pick the best 200, and now you only have to clean them up. Bam: you have now saved 90% of your time working with a cutting-edge piece of productivity-improving software that is technically a plagiarism machine, except you only had to clean up what it generated and you didn't steal anybody else's work. The company then keeps you on because the model needs a continual supply of fresh ideas to train on; it cannot create fresh, good ideas by itself, which is the reason they hired you. You keep your job and are now more efficient.

This is how AI helps artists, and it's extremely common these days. Please sit down and shut the fuck up, dumbass.
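Mechanically, the overgenerate-and-curate loop described above is simple to express. A hedged sketch, where `generate_texture` is a purely hypothetical stand-in for a call to a self-trained diffusion model and the quality score stands in for the artist's own judgement:

```python
import random

def generate_texture(prompt, seed):
    """Hypothetical stand-in for sampling a fine-tuned diffusion model.
    Returns a (texture, quality_score) pair; both values are illustrative."""
    rng = random.Random(seed)
    return {"prompt": prompt, "seed": seed}, rng.random()

def pick_best(prompt, n_candidates=1000, keep=200):
    """Overgenerate candidates, keep the top `keep` by score.
    The artist then only has to clean up the survivors."""
    candidates = [generate_texture(prompt, seed) for seed in range(n_candidates)]
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return [texture for texture, _score in candidates[:keep]]
```

The point of the sketch is the shape of the workflow, not the model call: generation is cheap, so you sample far more than you need and spend human time only on curation and cleanup.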
-
> TSMC is the only proven fab at this point. […] But if Intel or Samsung do catch up to TSMC in at least some of the metrics, that could change.

Yeah, you are correct, I was venting lol.
Another factor is that the fab-choice decisions were made *way* before these GPUs launched, when everything you said (TSMC's lead and reliability, in particular) rang even more true. *Maybe* Samsung or Intel could offer steep discounts to offset the lower performance (which Nvidia/AMD could then translate into bigger dies), but that's quite a fantasy, I'm sure... It all just sucks now.
-
> I know it's difficult for you to understand because you're clearly kinda stupid, but the real world has this thing called "nuance". […] This is how AI helps artists, and it's extremely common these days. Please sit down and shut the fuck up, dumbass.

But it'll never apply to what *you* do, because you're special and that's different. *Nuance nuance nuance! Plagiarism machine.* Zero cognitive dissonance.
-
Obviously AI is coming for sound designers too. You know that, right? https://elevenlabs.io/sound-effects And if you work on games and you haven't seen your industry decimated in the past 16 months, I want to know what rock you have been living under, and if there's room for one more.
-
> But it'll never apply to what *you* do, because you're special and that's different. *Nuance nuance nuance! Plagiarism machine.* Zero cognitive dissonance.

Okay, dude, time to put your money where your mouth is. Mute [this video](https://youtu.be/oPhnIBuzCjw?t=150) (no cheating by listening to it first; you have to act like a REAL designer here), then use AI to generate for me some sound design that works for the visuals at the timestamp. Should be simple to do better than the sound designers over at Riot for an expert like yourself, with access to an AI that makes their expertise irrelevant and will totally steal their jobs. Oh, what's that? It sounds awful and doesn't represent the character, let alone what's happening on-screen, at all? Hmm… nah, I must still be wrong somehow.
-
> > Should be simple to do better than the sound designers over at Riot
>
> Your explicit argument is that "better" isn't the standard, when it comes to cranking out textures or whatever. But suddenly, when it's your thing, there's no possible way that passable results could work. Disruptive change won't affect your profession, because good-enough audio never happens in video games, a medium obsessed with escalating… audio quality? No, right, graphics. The thing you figure AI will be awesome for, when someone needs two hundred variations, to-day. And when that demand becomes reasonable there's no fucking way the studio will demand two thousand.

I pretty explicitly explained why this is the dumbest take ever in the previous post, and then tried to get you to do what you're claiming in actual practice, so perhaps midway through you'd realize how dumb you're being. Read, please.

> This latest comment is an open wound of grievances which I, personally, didn't say a god damned word about. You're just showing your whole ass over… cut content? For some reason? Who asked. And you're trying to turn measured pushback regarding your sweeping claims into some chest-beating dominance game, where one of us has to go away humiliated and quintessentially stupid forever, instead of just saying: oh, guess that was a slight overreach, whoopsie daisy.

Nah, dude, those are the most basic responses you'd get out of literally anybody with any knowledge on this if you brought it up as a suggestion. You're just digging the hole further; shut the fuck up lmao.

> My condolences to anyone who works under you. However good you are at your job, the way you handle disagreement is demonstrably miserable. Please get better at it.

Writing fan-fiction about some random online to make yourself feel better? Who's showing their ass here, again?