Chebucto Regional Softball Club

Urban Hermit

@urban_hermit@mstdn.social
A forum for discussing and organizing recreational softball and baseball games and leagues in the greater Halifax area.
Posts: 7 · Topics: 0 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • “If the LLM produces a wild result, something that doesn’t meet with my expectations *then* I’ll turn to more reliable sources.
    Urban Hermit

    @futurebird
    A/C
    Another experiment:

    I know of a database that was populated by an early AI that 'hallucinated' details. An international kite museum, supposedly in Corpus Christi, Texas, was said to be filled with displays on "The Age of Mammoths" and "The Iron Horse", because the word "museum" took precedence over "International Kite".

    It hallucinated a lot of other generic museum-like details.

    A street view search of the address shows a hotel, and no museum at all for blocks around.

    Uncategorized

  • “If the LLM produces a wild result, something that doesn’t meet with my expectations *then* I’ll turn to more reliable sources.
    Urban Hermit

    @futurebird
    4/3
    Make no mistake, the fact that I could have a conversation like this with a machine is a great accomplishment. Turning a huge data set and some language rules into a thing I could query for hours is astonishing.

    But current AI has a credibility problem, and that means it is not ready to be a truth-telling product. And the hype outweighs the truthiness by an uncomfortable margin.

    Uncategorized

  • “If the LLM produces a wild result, something that doesn’t meet with my expectations *then* I’ll turn to more reliable sources.
    Urban Hermit

    @futurebird
    3/3
    Google can weight a source for its own LLM, making it insist on a particular claim, but it can't weigh its own sources for credibility by anything other than frequency in the training data.

    So, the most commonly held beliefs are automatically true and will be referenced as such by an LLM (see the first sketch after these posts).

    It's a confirmation bias machine for all of humanity.

    The end of Science.

    Uncategorized

  • “If the LLM produces a wild result, something that doesn’t meet with my expectations *then* I’ll turn to more reliable sources.
    Urban Hermit

    @futurebird
    2/3
    And I could push those probabilities around simply by objecting to them. So it really is a people-pleasing machine.

    I knew LLM logic was worthless when the LLM chose to believe that ghosts were a more likely explanation for haunted houses than carbon monoxide poisoning, because of the many ghosts that people claim to have personally identified.

    Uncategorized

  • “If the LLM produces a wild result, something that doesn’t meet with my expectations *then* I’ll turn to more reliable sources.
    Urban Hermit

    @futurebird
    1/3
    I experimented with an LLM last year at the urging of a friend.

    I invented a game called "minority opinion" where we (me & the LLM) took turns identifying theories that could replace the dominant explanation, and then I asked the LLM to estimate a probability, based on supporting evidence, that the paradigm could be replaced by the new idea in the future (see the second sketch after these posts).

    The LLM could list a dozen reasons why a new theory was a better fit, yet the probabilities were always astonishingly low.

    Uncategorized

  • “If the LLM produces a wild result, something that doesn’t meet with my expectations *then* I’ll turn to more reliable sources.
    Urban Hermit

    @futurebird
    2/2
    It is a stereotype disaster waiting to influence everything. LLMs will rob us of Science. To an LLM, and to the tech billionaires who want to influence us, what is stated frequently is always true, and what is new is always wrong or suspect.

    Intellectual stagnation. In the age of information, LLMs are prepping us for a new dark age. The largest LLM in the world might as well be called Aristotle.

    Robbing us of Science.

    Uncategorized

  • “If the LLM produces a wild result, something that doesn’t meet with my expectations *then* I’ll turn to more reliable sources.
    Urban Hermit

    @futurebird
    1/2
    Supposedly these things are good at finding correlations. But that confuses narrowly focused, small-data-set, supervised research with generic LLMs (see the last sketch after these posts).

    In my personal experience, the LLMs I have access to are likely to ignore all minority opinions and new research, and to claim that scantily documented problems do not exist. They cannot weigh the significance of any data, so they always default to treating whatever is said most frequently as most true.

    Uncategorized
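The "frequency stands in for credibility" failure mode from post 3/3, as a toy sketch. The corpus, counts, and claims below are invented for illustration; real LLM training is far more complex, but the ranking step is the mechanism the post describes.

```python
from collections import Counter

# Invented toy corpus: a widely repeated folk claim next to a rare,
# well-evidenced one. The counts are illustrative only.
training_corpus = (
    ["ghosts haunt old houses"] * 900
    + ["carbon monoxide exposure explains 'hauntings'"] * 3
)

def most_likely_claim(corpus):
    """Rank claims purely by frequency, with no notion of credibility."""
    return Counter(corpus).most_common(1)[0][0]

# The "model" confidently repeats the majority view.
print(most_likely_claim(training_corpus))  # -> ghosts haunt old houses
```

Nothing in that ranking step can tell a well-evidenced source from a popular one, which is the point of the post.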
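A sketch of how the "minority opinion" game from post 1/3 might be scripted, assuming the OpenAI Python client (`pip install openai`) and an `OPENAI_API_KEY` environment variable. The model name, prompt wording, and example theories are illustrative guesses, not what the author actually used.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def minority_opinion_round(paradigm: str, challenger: str) -> str:
    """Ask the model to argue for a challenger theory, then estimate
    the probability that it displaces the dominant explanation."""
    prompt = (
        f"Dominant explanation: {paradigm}\n"
        f"Challenger theory: {challenger}\n"
        "List the strongest evidence favoring the challenger, then "
        "estimate the probability (0-100%) that it replaces the "
        "dominant explanation in the future."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(minority_opinion_round(
    paradigm="Hauntings are caused by ghosts.",
    challenger="Many reported hauntings are carbon monoxide poisoning.",
))
```

If the posts are right, the evidence list will be long and the stated probability will still come back low.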
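For contrast with post 1/2: the narrowly focused, small-data-set correlation work the post distinguishes from generic LLMs is a direct statistical test. A minimal sketch with invented numbers:

```python
import numpy as np

# Invented measurements at eight sites: indoor carbon monoxide level
# (ppm) and the number of reported "ghost" events. Illustrative only.
co_ppm    = np.array([5, 30, 80, 10, 60, 90, 15, 70])
sightings = np.array([0,  1,  3,  0,  2,  4,  0,  3])

# Pearson correlation between the two series.
r = np.corrcoef(co_ppm, sightings)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly positive for this toy data
```

A supervised analysis like this can state exactly what it measured; an LLM answering the same question can only echo its training text.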