@futurebird Which Apple one? I know they've had Lightning for a while, but before they were basically forced to start standardizing, they were among the companies that capitalized on introducing new connectors every few generations so they could sell new accessories.

nazokiyoubinbou@urusai.social
-
USB According To My Husband -
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@joby @futurebird @CptSuperlative @emilymbender Yeah, there are a lot of little scenarios where it can actually be useful and that's one of them. The best thing about that is it merely stimulates you to create on your own and you can just keep starting over and retrying until you have a pretty good pre-defined case in your head to start from with the real person.
As long as one doesn't forget that things may go wildly differently IRL, it can help build up towards a better version of what might otherwise be a tough conversation.
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@futurebird @CptSuperlative @emilymbender To be clear on this, I'm one of the people actually using it -- though I'll be the first to admit that my uses aren't particularly vital or great. And I've seen a few other truly viable uses. I think my favorite was one where someone set it up to roleplay as the super of their facility so they could come up with arguments against anything the super might try to use to avoid fixing something, lol.
I just feel like it's always important to add the reminder "by the way, you can't 100% trust what it says" for anything where accuracy actually matters (such as summaries), because these systems work in such a way that people legitimately forget this.
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@futurebird @CptSuperlative @emilymbender Summaries aren't reliable either.
There are indeed use-cases. But every single one of them comes with caveats. And, I mean, to be fair, most "quick" methods of doing anything come with caveats. It's just that people forget those caveats.
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@futurebird @cxxvii Right. I make this differentiation because A. I want to be clear that no matter how they might advertise this or that method is more accurate, it will always fail due to the underlying issue and B. some people think the tech just isn't fully developed, but its underlying mechanism can NEVER improve without changing to something else entirely.
(Well, as a side note, many do actually make an effort to train in more accuracy, just, the fundamental issue always comes back to bite them. They legit are trying, it just can't work.)
-
"Chat GPT told me that it *can't* alter its data set but it did say it could simulate what it would be like if it altered it's data set"@cxxvii @futurebird Well, to be clear, the issue isn't the training. A lot of stuff is thrown into the latest training methods to try to make them more accurate. The much more fundamental problem isn't the training, but the actual mechanism itself which -- no matter how good or accurate the training -- simply can't reliably produce the correct output.
-
Today's ant of the day is Solenopsis molesta, the thief ant.

@rubinjoni @futurebird I just want to point out that carpenter ants still haven't built one thing for me and I'm very disappointed in them.