Forwarded from United Anarchists
Raytheon won an environmental award… beyond parody
https://www.governor.virginia.gov/newsroom/news-releases/2024/april/name-1025446-en.html
Forwarded from United Anarchists
punkbnuuy (@punk_bnuuy) on X
LRT: i want to talk about misinformation, and how bad actors intentionally distort the truth and make up events to spark outrage against fringe and minority groups
lets take this recent example
to state the obvious, this is not true
Did some research into this one. Apparently, the ACTUAL story is that a student was being bullied for wearing a headband with animal ears on it, and the administration stepped in. After a week or two on the rumor mill, the situation got turned into "school administration indulges furries licking people" or whatever
Anyway, you know we're in a really dangerous place, as far as the media is concerned, when middle school rumors are being inflated into shock-bait news headlines
💯2
Tl;dr: there's an upcoming Supreme Court case about whether protections that prevent homeless encampments from being cleared when there isn't enough space in shelters are necessary.
Excerpt I would like to highlight:
"Principles for the use of generative AI
Machine generated content and machine learning tools aren’t new to Wikipedia and other Wikimedia projects. At the Wikimedia Foundation, we have developed machine learning and AI tools around the same principles that have made Wikipedia such a useful resource to so many: by centering human-led content moderation and human governance. We continue to experiment with new ways to meet people’s knowledge needs in responsible ways including with generative AI platforms, aiming to bring human contribution and reciprocity to the forefront. Wikipedia editors are in control of all machine generated content − they edit, improve, and audit any work done by AI − and they create policies and structures to govern machine learning tools that are used to generate content for Wikipedia.
These principles can form a good starting point for the use of current and emerging large language models. To start, LLMs should consider how their models support people in three key ways:
Sustainability. Generative AI technology has the potential to negatively impact human motivation to create content. In order to preserve and encourage more people to contribute their knowledge to the commons, LLMs should look to augment and support human participation in growing and creating knowledge. They should not ever impede or replace the human creation of knowledge. This can be done by always keeping humans in the loop and properly crediting their contributions. Not only is continuing to support humans in sharing their knowledge in line with the strategic mission of the Wikimedia movement, but it will be required to continue expanding our overall information ecosystem, which is what creates up-to-date training data that LLMs rely on.
Equity. At their best, LLMs can expand the accessibility of information and offer innovative ways to deliver information to knowledge seekers. To do so, these platforms need to build in checks and balances that do not perpetuate information biases, widen knowledge gaps, continue to erase traditionally-excluded histories and perspectives, or contribute to human rights harms. LLMs should also consider how to identify, address, and correct biases in training data that can produce inaccurate and wildly inequitable results.
Transparency. LLMs and the interfaces to them should allow humans to understand the source of, verify, and correct model outputs. Increased transparency in how outputs are generated can help us understand and then mitigate harmful systemic biases. By allowing users of these systems to assess causes and consequences of bias that may be present in training data or in outputs, creators and users can be part of understanding and the thoughtful application of these tools."
❤1👍1
Also, relevant sidenote: I unironically believe Wikipedia is one of the greatest human achievements of the modern era.
🫡8🔥1💯1
Forwarded from /r/fuckcars
Lawns and Car Storage. Name a More Wasteful Use of Land.
#PlanningMemes | +1₁₀₀
link | 0 comments in 7 minutes
💯3