@futurebird
(For passersby) One thing people seem to have a really hard time grokking is that there is no necessary relation between any training set + prompt combo and a particular result. There is no way to predict what the output will be, or to trace back why the output was what it was.
But, at the same time, the synthetic text is not merely random in relation to the training set + prompt combo. The synthetic text is random within a certain distribution. Humans are terrible at thinking about statistics, and AI hype plays off of that.
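If the "random within a certain distribution" bit is unclear, here's a toy sketch (made-up numbers, nothing to do with any real model's internals): the same prompt maps to a fixed distribution over next tokens, and each run draws from it, so outputs vary even though the distribution doesn't.

```python
import random

# Hypothetical next-token probabilities a model might assign after some prompt.
next_token_probs = {"cat": 0.55, "dog": 0.30, "stapler": 0.10, "entropy": 0.05}

def sample_token(probs):
    # Draw one token according to the distribution, not deterministically.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Same inputs, two runs: the outputs can differ, but both are drawn from
# the same fixed distribution -- "random within a certain distribution."
print(sample_token(next_token_probs))
print(sample_token(next_token_probs))
```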
The above does not make LLMs uniquely useful. It makes LLMs uniquely useless. If there's a use case that LLMs are actually, reliably good at, I haven't heard of it.
Today I relaxed by watching this interview with @emilymbender, and it's almost painful to watch the hosts' faces as she systematically undercuts just about every use case.