@hipsterelectron second point
A computer program cannot have a model, because a model is something a human can change and revise.
Right now what we have are *images*. Static, and they cannot be changed without significant retraining from scratch.
In other words, we are lacking the learning part.
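(A minimal sketch of that point, assuming a toy numpy stand-in rather than a real LLM serving stack: the forward pass only reads the weights, so the deployed artifact stays a frozen image.)

```python
# Toy illustration: at inference time nothing updates the weights.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))          # stand-in for the trained weights

def forward(x, W):
    # Inference only reads W; no gradient step, no weight update.
    return np.tanh(W @ x)

before = W.copy()
for _ in range(1000):                # serve many requests
    _ = forward(rng.normal(size=8), W)
assert np.array_equal(before, W)     # weights are bit-for-bit unchanged
# Changing behaviour means producing a *new* W by (re)training offline,
# not in-deployment learning.
```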
-
@Di4na another thing i believe to be true and intimately related to robust conceptions of correctness is the process of feature extraction. to my understanding, the text-only approach used in all the hyped models freezes in place the representation of the input and output as a variable-length stream of tokens. this is understandably desirable as it allows the inference and training processes to be compared and swapped out. but i think it actively rejects even the possibility of incorporating a domain-specific model
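(rough sketch of what i mean by "variable-length stream of tokens", using a toy byte-level tokenizer for illustration; real systems use BPE or similar, but the flattening is the same)

```python
# Whatever structure the input had (an AST, a schema, a proof tree),
# the tokenizer flattens it into one flat integer sequence before the
# model ever sees it.
def tokenize(text: str) -> list[int]:
    return list(text.encode("utf-8"))   # every input becomes a flat ID stream

python_fn = "def area(r):\n    return 3.14159 * r * r"
sql_query = "SELECT name FROM users WHERE age > 21;"

for src in (python_fn, sql_query):
    ids = tokenize(src)
    # Both land in the same representation: a 1-D list of token IDs.
    # Any domain-specific structure must be re-inferred by the model;
    # it is not carried by the interface.
    print(len(ids), ids[:10])
```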
-
@hipsterelectron @Di4na
Text tokens with LLMs are the transport of a statistical vector+weights relationship, not of meaning.
The human who expressed the source words and the other human reading+interpreting the provided words are the ones making the meaning.
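(a toy illustration of that claim, with made-up sizes and random weights: inside the model a token is just an index into a learned weight matrix, and the vector it retrieves encodes training statistics, not meaning)

```python
import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2}
# "learned" weights; here random, in a real model fit to co-occurrence statistics
embedding = np.random.default_rng(1).normal(size=(len(vocab), 4))

token_id = vocab["cat"]
vector = embedding[token_id]   # what the model actually transports
print(vector)                  # a point in R^4; the human reader supplies the meaning
```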