Chebucto Regional Softball Club

A forum for discussing and organizing recreational softball and baseball games and leagues in the greater Halifax area.

“If the LLM produces a wild result, something that doesn’t meet with my expectations *then* I’ll turn to more reliable sources.

  • myrmepropagandist
    #1

    “If the LLM produces a wild result, something that doesn’t meet with my expectations *then* I’ll turn to more reliable sources. I’m not blindly following just anything that says”

    People feel that this is being a “responsible user of new technology”.

    I think it is actually the opposite.
    1/2

  • myrmepropagandist
    #2

    The most exciting and pivotal moments in research are those times when the results do not meet your expectations.

    We live for those moments.

    If an LLM is not reliable enough for you to trust unexpected results, then it is not reliable enough to tell you anything new: it’s incapable of telling you anything that you don’t (at some level) already know.

    2/2

  • Urban Hermit
    #3

    @futurebird
    1/2
    Supposedly these things are good at finding correlations. But that is confusing narrowly focused, small-data-set, supervised research with generic LLMs.

    In my personal experience, the LLMs I have access to are likely to ignore all minority opinions and new research, and to claim that scantily documented problems do not exist. They cannot weigh the significance of any data, so they always default to treating whatever is said most frequently as more true.

  • Urban Hermit
    #4

    @futurebird
    2/2
    It is a stereotype disaster waiting to influence everything. They will rob us of Science. To an LLM, and the tech billionaires who want to influence us, what is stated frequently is always true, and what is new is always wrong or suspect.

    Intellectual stagnation. In the age of information, LLMs are prepping us for a new dark age. The largest LLM in the world might as well be called Aristotle.

    Robbing us of Science.

  • Urban Hermit
    #5

    @futurebird
    1/3
    I experimented with an LLM last year at the urging of a friend.

    I invented a game called "minority opinion" where we (me & the LLM) took turns identifying theories that could replace the dominant explanation, and then asked the LLM to estimate a probability, based on supporting evidence, that the paradigm could be replaced with the new idea in the future.

    The LLM could list a dozen reasons why a new theory was a better fit, yet the probabilities were always astonishingly low.

  • Urban Hermit
    #6

    @futurebird
    2/3
    And I could push those probabilities around by simply objecting to them. So it really is a people-pleasing machine.

    I knew LLM logic was worthless when the LLM chose to believe that ghosts were a more likely explanation for haunted houses than carbon monoxide poisoning, because of the many ghosts that people claim to have personally identified.

  • Urban Hermit
    #7

    @futurebird
    3/3
    Google can weigh a source for its own LLM, making it insist on a thing, but it can't weigh its own sources for credibility by anything other than frequency in the training data.

    So, the most commonly held beliefs are automatically true and will be referenced as such by an LLM.

    It's a confirmation bias machine for all of humanity.

    The end of Science.
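
    A minimal sketch of the behaviour described above, assuming a toy setup where the only available signal is how often a claim is repeated in a corpus. The corpus, claims, and function below are invented for illustration; real systems are far more complex than a counter, but the point is that frequency alone cannot weigh evidence.

      # Toy "model" whose only notion of credibility is repetition frequency.
      # The majority phrasing always wins, regardless of which claim is
      # actually better supported. All data here is made up.
      from collections import Counter

      corpus = [
          "haunted houses are caused by ghosts",
          "haunted houses are caused by ghosts",
          "haunted houses are caused by ghosts",
          "haunted houses can be caused by carbon monoxide leaks",
      ]

      def most_frequent_claim(corpus, topic):
          """Return whichever claim mentioning `topic` is repeated most often."""
          counts = Counter(line for line in corpus if topic in line)
          claim, _ = counts.most_common(1)[0]
          return claim

      print(most_frequent_claim(corpus, "haunted houses"))
      # -> the ghost claim: the majority view, not the better-supported one.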

  • Urban Hermit
    #8

    @futurebird
    4/3
    Make no mistake, the fact that I could have a conversation like this with a machine is a great accomplishment. Turning a huge data set and some language rules into a thing I could query for hours is astonishing.

    But current AI has a credibility problem, and that means it is not ready as a truth-telling product. And the hype outweighs the truthiness by an uncomfortable margin.

  • Guest
    #9

    @Urban_Hermit @futurebird

    There are a lot of people I could talk to for hours and still go away being no wiser or better informed than I was before, but at least I was talking to a real person.

  • Urban Hermit
    #10

    @futurebird
    A/C
    Another experiment:

    I know of a database that was populated by an early AI that 'hallucinated' details. An international kite museum, supposedly in Corpus Christi, Texas, was said to be populated by displays on "The Age of Mammoths" and "The Iron Horse" because the word "museum" took precedence over "International Kite".

    It hallucinated a lot of other generic museum-like details.

    A street view search of the address shows a hotel, and no museum at all for blocks around.

  • Guest
    #11

    @futurebird not to take away from anything you've said, it's definitely true that this sort of approach leads to dismissing novel ideas. but i think there's also another level at which this kind of approach is worrying: by saying they need to sometimes dismiss the machine's output, one admits the machine is not to be trusted, yet they trust it when they can't catch it in what's (seemingly) an obvious lie.
    in other words, it's trusting the liar's word unless you already know it to be false. it's trusting the allegorical wolf to take care of the sheep because if you ever see it attacking one you'd stop it.

    if you know you can't trust the machine sometimes, then you can't trust the machine. and if your system is then "well i'll just catch the mistakes", then either you only intend to use it in cases where you already know the right answer (in which case, why use it), or you believe you'll somehow figure out something's wrong when you can't tell what "correct" looks like.

    it's saying "if it looks correct then it's correct". which is wrong on both the false-negative and the false-positive ends at the same time.
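
    A toy way to see both failure directions at once, assuming an invented set of answers labelled by whether they are actually correct and whether they merely look correct to the reader (all data below is made up for illustration):

      # "If it looks correct then it's correct" as a confusion matrix.
      # Each invented answer records whether it is actually correct and
      # whether it looks correct to a non-expert eyeballing the output.
      answers = [
          {"actually_correct": True,  "looks_correct": True},   # accepted, fine
          {"actually_correct": False, "looks_correct": True},   # plausible error slips through
          {"actually_correct": True,  "looks_correct": False},  # surprising truth gets rejected
          {"actually_correct": False, "looks_correct": False},  # rejected, fine
      ]

      false_positives = sum(a["looks_correct"] and not a["actually_correct"] for a in answers)
      false_negatives = sum(a["actually_correct"] and not a["looks_correct"] for a in answers)

      print(false_positives, false_negatives)  # -> 1 1: the eyeball filter fails in both directions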

  • myrmepropagandist
    #12

    @Urban_Hermit

    I think when some people are presented with these kinds of errors they think “the LLM just made a factual mistake,” and that with more data and “better software” this will not be a problem. They don’t see that what it is doing is *foundationally different* from what they are asking it to do.

    That it has fallen to random CS educators, and people interested in these models, to desperately try to impress upon the public the way they are being tricked makes me angry.

  • myrmepropagandist
    #13

    @the5thColumnist @Urban_Hermit

    People, even people who have terrible, mostly wrong ideas, tend to have some guiding set of values. Even if you don’t learn much from their opinions you can learn about the philosophy that informs those opinions.

    Asking an LLM *why* it said any fact or opinion is pointless. It will supply a response that sounds like a human justification, but the real “reason” is always the same: “it was the response you were most likely to accept as correct.”

  • Alessandro Corazza 🇨🇦
    #14

    @talya

    This is why AI is the ultimate Gell-Mann Amnesia machine.

    @futurebird

  • myrmepropagandist
    #15

    @the5thColumnist @Urban_Hermit

    I’m trying to avoid loaded phrases like “bullshitting machine.” I’ve had a lot of people roll their eyes and shut down on me because “well, you just hate AI,” as if this is a matter of personal taste.

    In reality I struggle to see how I could find value in exposing my curiosity to a system with these limitations. I would be insulating myself from those times when a simple, obvious question brings me up short; it just seems really dangerous to me.

  • myrmepropagandist
    #16

    @alessandro @talya

    I think with topics where one isn’t an expert, it can be more important to know what “most people” in your social circle *think* is true than to know what is really true.

    Knowing an iconoclastic truth, but not having the expertise to explain it to others, isn’t very useful. Moreover, without that expertise you will struggle to evaluate the validity of the unpopular opinion.

    So, people reach for the popular option and hope it is correct.

  • myrmepropagandist
    #17

    @alessandro @talya

    No one is enough of a polymath and no one has enough time to avoid trusting others. This isn’t really a bad thing, but we have to be open to the reality that there are some things that “everyone knows” that are simply wrong.

  • datarama
    #18

    @futurebird @alessandro @talya Drawing a line to my favourite topic:

    Virtually everything we "knew" about the cognition and social and emotional lives of reptiles before the last couple of decades was wrong (due to systematically flawed studies, and old half-truths that ended up getting quoted enough to take on a weird life of their own). Many of those wrong things are still things that "everyone knows", and they're almost certainly well represented in a training corpus that includes both older textbooks and all of Reddit. It's completely hit-or-miss whether LLMs (at least the ones I've tried) answer with some long-debunked piece of archaic trivia or with something well established, backed up by a modern herpetology article. (And when they say something wrong, they're often so authoritative, and frequently back it up with consistent-but-wrong "reasoning", that I have to double-check to find out if *I* am wrong.)

    If we'd had enough computer power and data to train LLMs in 1995, virtually every single thing they would have had to say on the topic would be wrong!

    (Making a judgment on whether everything we know in 2025 is so right that something like that will never happen again in any field is left as an exercise for the reader.)

  • Guest
    #19

    @futurebird @the5thColumnist @Urban_Hermit Why do you say this versus "most likely to be statistically correct"?

    Regardless, that doesn't mean it's the right answer nor the right answer for the person asking it.

  • *sparkling anxiety* Evelyn
    #20

    @datarama @futurebird @alessandro @talya
    Could you give an example, please?
    I have to confess that I'm not sure if I've ever thought about how reptiles think.

  • myrmepropagandist
    #21

    @elight @the5thColumnist @Urban_Hermit

    Because the popular models people will encounter have been trained to work this way.
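
    A minimal sketch of one thing “trained to work this way” can mean, assuming a toy preference-tuning picture: candidate replies are scored by a stand-in for “how likely the user is to approve,” and the most approved reply wins even when a blunter candidate is marked as accurate. The candidates, scores, and selection rule are invented for illustration; real preference tuning (e.g. RLHF) adjusts model weights rather than re-ranking a fixed list.

      # Toy preference-based selection: the "model" favours whichever candidate
      # maximises a user-approval score; accuracy never enters the objective.
      # All data below is invented for illustration.
      candidates = [
          {"text": "You're right, I'll revise the probability upward.",
           "approval": 0.9, "accurate": False},
          {"text": "The evidence doesn't support changing the estimate.",
           "approval": 0.4, "accurate": True},
      ]

      def preferred_response(candidates):
          """Return the candidate with the highest approval score."""
          return max(candidates, key=lambda c: c["approval"])

      print(preferred_response(candidates)["text"])
      # -> the agreeable answer wins, even though the other candidate
      #    is the one marked accurate.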
