Have you heard about ROOST (Robust Open Online Safety Tools)?
-
Last week #Google, #OpenAI, #Roblox, and #Discord announced that they were forming a non-profit partnership to provide a free AI for content moderation to online media companies. Eric Schmidt is the Founding Partner of ROOST.
https://www.theverge.com/news/609367/roblox-discord-openai-google-roost-online-safety-tools
This free AI will be used by partner organizations including #GitHub, #Mozilla, and #Bluesky to automate content moderation.
https://roost.tools/partnerships
more...
-
Here are some questions:
1. When a partner company uses the ROOST AI, does it send all content and user data to ROOST (Google/OpenAI)?
2. Is this content and user data then used to train the ROOST AI?
3. If so, does the training data stay within ROOST, or does it precipitate out to Google and OpenAI LLM models? That is, is this a backdoor for general AI training?
4. Do partner companies justify any data exfiltration based on ToS terms about "improving the service"?
more...
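To make question 1 concrete: ROOST has not published an API, so everything in the sketch below — the request shape, the field names, the `allow_training` flag — is invented purely for illustration. It shows the kind of call the questions above worry about: a partner site bundling a user's content and identifying metadata into a request that leaves its own servers for an external moderation service.

```python
import json

def build_moderation_request(content: str, user_id: str) -> dict:
    """Assemble the JSON body a partner site might POST to a hosted
    moderation endpoint. Hypothetical shape -- ROOST's actual
    interface, if any, is not public."""
    return {
        "content": content,        # the user's post, leaving the partner's servers
        "user_id": user_id,        # identifying metadata travels with it
        "allow_training": True,    # the ToS concern: is reuse opted in by default?
    }

# What would actually go over the wire (to a hypothetical endpoint):
body = json.dumps(build_moderation_request("example post text", "user-123"))
```

Whether any real deployment sends raw content and user identifiers together, and whether a flag like `allow_training` exists or defaults on, is exactly what questions 1–4 ask and what is not yet publicly documented.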
-
5. If a partner company uses ROOST, does that shield it from responsibility for content moderation?
6. ROOST is apparently justified in part by the proliferation of AI-generated CSAM. So isn't using an AI to address a problem created by AI kind of ironic?
7. Has it been proven that AIs are actually any good at this kind of content moderation? Won't it just create an 'arms race', with perpetrators figuring out how to 'trick' ROOST?
more...
-
Content moderation is hard, expensive work. #Bluesky justifies its partnership in large part by saying it does not have the financial resources to do human moderation.
https://bsky.app/profile/aaron.bsky.team/post/3lhtnlq2lv22h
AI marketing has succeeded to the point that organizations can now publicly justify AI for unproven applications based on labor cost savings. This is a slippery slope: the same rationale can be applied, for instance, to teachers. It is very expensive to educate children. Does that make education a good application for AI?