"I don't want to roast him, but he could not be roasted more than he's roasted himself"
Academic publicly admits to being LLM-dependent and gives a "helpful" warning assuming "everyone is doing it"
-
"I don't want to roast him, but he could not be roasted more than he's roasted himself"
Academic publicly admits to be LLM-dependent and gives "helpful" warning assuming "everyone is doing it"
@futurebird I don't tend to like her videos very much (no critique, just personal taste) but this one is an instant classic.
Incredible levels of academic own-goal
-
"I don't want to roast him, but he could not be roasted more than he's roasted himself"
Academic publicly admits to be LLM-dependent and gives "helpful" warning assuming "everyone is doing it"
@futurebird Angela's so good
-
"I don't want to roast him, but he could not be roasted more than he's roasted himself"
Academic publicly admits to be LLM-dependent and gives "helpful" warning assuming "everyone is doing it"
i hope he got fired & the grants he stole using an AI were reassigned to academics who do their own work
i couldn't hear the audio on the YouTube but i followed the link to his own publication admitting to using ChatGPT to do his job for him, & this guy is crying out to be prosecuted for fraud
-
"I don't want to roast him, but he could not be roasted more than he's roasted himself"
Academic publicly admits to be LLM-dependent and gives "helpful" warning assuming "everyone is doing it"
@futurebird OK I watched a little and skipped a little to where she seemed to be saying that yea, not wanting to spend a mess of time watching this was a critique of this kind of video...
Reminds me of meetings that could have been an email...
-
"I don't want to roast him, but he could not be roasted more than he's roasted himself"
Academic publicly admits to be LLM-dependent and gives "helpful" warning assuming "everyone is doing it"
Can only hope he is an outlier

-
"I don't want to roast him, but he could not be roasted more than he's roasted himself"
Academic publicly admits to be LLM-dependent and gives "helpful" warning assuming "everyone is doing it"
@futurebird
I use Wikipedia as a source because its information is about as reliable as an encyclopedia. Academics rejected Wikipedia as a source because its information is about as reliable as an encyclopedia (please correct me if I am wrong).
So academic use of LLMs is highly ironic. Surely they understand that LLMs are NOT expert systems, and that the LLM has no actual understanding of the words it is using.
Heck, the software picking the 10th word has no idea why the 6th word was picked.
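(A rough illustration of that point, not anything from the video or the thread: the toy next-word sampler below uses an invented bigram table, nothing like a real LLM, but shows the same structure. All that persists from step to step is the text generated so far; no rationale for any earlier choice is stored anywhere.)

# Toy sketch of autoregressive next-word sampling (hypothetical bigram table,
# not a real model). Each step conditions only on the previous word; the
# program keeps no record of *why* any earlier word was picked.
import random

BIGRAMS = {
    "the":   {"model": 0.5, "paper": 0.3, "grant": 0.2},
    "model": {"writes": 0.6, "hallucinates": 0.4},
    "paper": {"claims": 0.7, "summarizes": 0.3},
    "grant": {"application": 1.0},
}

def generate(start, n_words, seed=0):
    rng = random.Random(seed)
    words = [start]
    for _ in range(n_words):
        dist = BIGRAMS.get(words[-1])
        if dist is None:          # no known continuation: stop early
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights, k=1)[0])
        # Only the surface words survive to the next step -- the "reason"
        # word 6 was chosen is nowhere in the state when word 10 is drawn.
    return " ".join(words)

print(generate("the", 5))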
-
@FediThing @futurebird I'm angry about him using it and the implications of using it for writing grant applications.
We've already had enough nonsense with AI review in the EIS process, which was used for a while as a first pass for Horizon Funding just under 5 years ago.
I knew at some point someone would use LLMs to write grant applications.
-
"I don't want to roast him, but he could not be roasted more than he's roasted himself"
Academic publicly admits to be LLM-dependent and gives "helpful" warning assuming "everyone is doing it"
@futurebird lol get his ass angela
-
Apart from anything else, the LLM makers could subtly control who gets grants or funding, or subtly change the nature of any other decision made by an LLM.
-
"I don't want to roast him, but he could not be roasted more than he's roasted himself"
Academic publicly admits to be LLM-dependent and gives "helpful" warning assuming "everyone is doing it"
I'm a new associate editor of Wiley earth science journals.
This last December at the annual AGU meeting, I learned that the authors' own self-reported use of AI/ML at any stage of producing their work is 80%.
It's shocking to me.
I asked folks how they can use it. I explained that I tried it for coding, and LLMs will write code that claims there are 360 degrees of longitude and latitude, with full confidence.
We don't need new works publishing summaries of old works, with any possibility of hallucinations. AI can produce those faster than human fact checkers can filter them out.
New works should focus on what is novel. They should only include the bits not replaceable by AI.
We don't need AI to keep spinning obfuscating, time-wasting wheels.
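(A minimal sketch of the coordinate check being described, not code from the post: longitude spans 360 degrees, -180 to 180, while latitude spans only 180 degrees, -90 to 90, which is exactly what the confidently wrong generated code got backwards.)

# Illustrative coordinate sanity check: longitude covers 360 degrees
# (-180..180), latitude only 180 degrees (-90..90).
def is_valid_coordinate(lat, lon):
    """True if (lat, lon) is a physically meaningful geographic coordinate."""
    return -90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0

assert is_valid_coordinate(lat=-23.7, lon=133.9)      # roughly Alice Springs
assert not is_valid_coordinate(lat=270.0, lon=45.0)   # no such latitude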
-
"I don't want to roast him, but he could not be roasted more than he's roasted himself"
Academic publicly admits to be LLM-dependent and gives "helpful" warning assuming "everyone is doing it"
@futurebird I know that she and you and I see the morals of the story as "don't hit the delete button" and "don't outsource your work to AI," but I have a sneaking suspicion the intended moral was "never revoke data consent."
-
@atthenius
That is a huge percentage! Is that due to foreign-language speakers preparing manuscripts in English? There has been a large increase in Chinese scientists publishing as the Chinese research enterprise grows rapidly. Something like Grammarly checks could be self-reported as AI use.
-
@Brad_Rosenheim @atthenius @jf_718
That would make me feel less despondent about it.