Stating The Obvious 0808 – We Could Have Had A Sane Society With Moonbases. Instead We Got Day Care For Trannies.
The Great One is back to rant some more about ranty rants. What’s pissing him off today?
You know the answer. Everything.
However the things he’s going to bitch about in particular are:
More about colour TVs and how stuff does not equal prosperity. Which do you want: cheap TVs in a dystopia, or expensive TVs in a sane society?
Did Hummus call for a cease fire? I saw a headline to that effect but don’t care enough to follow up. If so, what about my prediction that there will be a war for Israel while Trump is President?
The table and who owns it. Bisexual women or trannies.
Speaking of trannies, The Dick Show, Maddox, Digibro. Digibro is now Trixie.
Remember, Dick Masterson and Republicans think this person should be allowed to live and to vote and thus have political power over you.
Spanking women with a cow horn. Yes. Spanking. Women. With a cow horn.
And probably some other stuff. Who knows? I certainly don’t.
I looked this up after I recorded the podcast.
Microsoft proposals for shareholders to vote on 9 December 2024. The Board of Directors recommends voting against all of these.
https://www.microsoft.com/en-us/investor/annual-meeting
Proposal 4: Report on Risks of Weapons Development (Shareholder Proposal)
Harrington Investments, Inc. has advised us that they intend to submit the following proposal for consideration at the Annual Meeting.
Microsoft (MSFT) developed an augmented reality headset to provide night vision, thermal sensing, and monitoring of vital signs, initially intended for gaming purposes, then the United States Army adapted this product to be used for military training and combat.1
In 2021, MSFT was awarded a $479 million Integrated Visual Augmentation System (IVAS) contract with the United States Department of the Army. This later became a $22 billion contract for a semi-custom version of IVAS to rapidly develop, test, and manufacture a single platform that soldiers can use to fight, rehearse, and train that provides increased lethality, mobility, and situational awareness necessary to achieve overmatch against our current and future adversaries.
In 2019, amidst contract negotiation with the military, MSFT employees pushed back in a letter to MSFT stating they “do not want to become war profiteers” and “did not sign up to develop weapons”, “demand[ing] a say in how our work is used”2;
“It will be deployed on the battlefield and works by turning warfare into a simulated ‘video game,’ further distancing soldiers from the grim stakes of war and the reality of bloodshed,” Microsoft workers warn.
Simply put, the application of HoloLens within the IVAS system is an integration of technological capabilities to “improve soldier lethality”3, ultimately “designed to help people kill.”
Further revelations surrounding the problematic nature are noted as “Lawmakers cited concerns over the HoloLens 2-based device’s field tests, where the headset struggled with environmental, sight calibration, and other issues. Device assessments also explained how the headset led to soldier ‘impairments’ such as motion sickness, headaches, and other concerns.”4
Regardless of how our Company positions itself by making public statements, the outcome is a significant contract with the United States military, and Microsoft is actively working to develop what employees, other stakeholders, including shareholders, and the public view as a weapons system used in war. Further, in December, images of Chinese military personnel wearing HoloLens 2 headsets were seen on China’s State Media – while simultaneously, United States lawmakers urged increased restrictions on exports to stymie the Chinese military’s access to United States technologies, especially those with dual application in commercial and defense sectors.5
Additionally, families of victims of the 2022 Uvalde school shooting filed suit in 2024 against several companies, including Microsoft, for marketing guns to youths.6 Involvement in the development of weapons poses a serious risk to a company’s reputation, especially for investors and stakeholders. Is it prudent for our Company to be identified as a weapons developer?
BE IT RESOLVED Shareholders request that the board issue an independent, third-party report, at reasonable expense and excluding proprietary information, to assess the reputational and financial risks to the company for being identified as a company involved in the development of weapons used by the military.
Proposal 8: Report on AI Misinformation and Disinformation (Shareholder Proposal)
Arjuna Capital and co-filers have advised us that they intend to submit the following proposal for consideration at the Annual Meeting.
Report on Misinformation and Disinformation
Whereas, There is widespread concern that generative Artificial Intelligence (AI) may dramatically increase misinformation and disinformation globally, posing serious threats to democracy and democratic principles.
“I’m particularly worried that these models could be used for large-scale disinformation,” said Sam Altman, CEO of OpenAI, the company that developed ChatGPT along with Microsoft.1
Microsoft has invested over 13 billion dollars in OpenAI, and has integrated ChatGPT into its AI-powered digital assistant Copilot.2
The Washington Post found Microsoft’s Bing chat provided inadequate or inaccurate answers in about 10 percent of questions asked.3 Recently, Copilot produced responses that users referred to as “bizarre, disturbing, and in some cases, harmful.”4 While Microsoft has limited responses to election-related questions in English, one report found Microsoft provided partially or completely incorrect responses to election-related questions in other languages.5 Microsoft’s products have also been used to create deepfake pornography, and in one case, to develop a campaign chatbot that spewed conspiracy theories.6,7
Generative AI’s disinformation may pose serious risks to democracy by manipulating public opinion, undermining institutional trust, and swaying elections. Eurasia Group ranked generative AI as the fourth highest political risk globally, warning disinformation will be used to “influence electoral campaigns, stoke division, and undermine trust in democracy.”8
Shareholders are concerned that generative AI presents Microsoft with significant legal, financial and reputational risk. Many legal experts believe technology companies’ liability shield provided under Section 230 of the Communications Decency Act may not apply to content generated by AI. Senator Wyden, who wrote the law, says Section 230 “has nothing to do with protecting companies from the consequences of their own actions and products.”9 Microsoft has also already faced substantial defamation litigation due to misinformation produced by the Company’s generative AI.10 Microsoft will also need to be responsive to the evolving AI regulatory landscape – including the EU’s AI Act, Biden’s executive AI order, and several legislative proposals.11
Satya Nadella recently stated, “Our responsibility…is all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced…I think we can govern a lot more than we think.”12
Shareholders seek greater transparency into these guardrails and their effectiveness in preventing the risks of misinformation and disinformation from generative AI. Microsoft’s 2024 Responsible AI Transparency Report does not address many of these critical questions.
Resolved, Shareholders request the Board issue a report, at reasonable cost, omitting proprietary or legally privileged information, to be published within one year of the Annual Meeting and updated annually thereafter, assessing the risks to the Company’s operations and finances as well as risks to public welfare presented by the company’s role in facilitating misinformation and disinformation disseminated or generated via artificial intelligence, and what steps, if any, the company plans to remediate those harms, and the effectiveness of such efforts.
Proposal 9: Report on AI Data Sourcing Accountability (Shareholder Proposal)
National Legal and Policy Center has advised us that they intend to submit the following proposal for consideration at the Annual Meeting.
Report on AI Data Sourcing Accountability
Whereas: The immense and transformative potential of artificial intelligence comes with substantial risks.
The development and training of AI systems rely on vast amounts of data, and public information available via the Internet may not be enough to quench developers’ insatiable thirst for high-quality training data.1 Thus, stakeholders are concerned that developers will draw from unethical or illegal sources – such as personal information collected online,2 copyrighted works,3 and proprietary commercial information provided by users.4
Supporting Statement: Microsoft Corporation (“Microsoft” or the “Company”) is an early leader in the AI arms race5 6 because of its extensive partnership with OpenAI,7 which has helped push the Company to one of the highest market capitalizations in the world.8 But shareholders should be concerned with Microsoft’s record on data ethics:
• Microsoft employs generative AI models developed by OpenAI, which allegedly stole large amounts of personal information by scraping the web, including “private information and private conversations, medical data, information about children — essentially every piece of data exchanged on the internet it could take — without notice to the owners or users of such data, much less with anyone’s permission.”9
• OpenAI recently appointed a former head of the National Security Agency – which has been criticized for spying on American citizens – to its board.10 The move increases fears that Microsoft and OpenAI are reneging on their privacy assurances.
• Microsoft has received pushback against its proposed AI “Recall” feature, which screenshots everything a Windows user sees or does and turns it into searchable data. Users thought the feature was a gross violation of privacy and a cybersecurity risk.11 12
• Microsoft or OpenAI, through their services, may inadvertently or deliberately access and utilize proprietary information provided by users, potentially leading to unauthorized use or exposure of sensitive business information.13 14
• Microsoft and OpenAI have been sued by the New York Times, among others, which alleged copyright infringement.15
Prioritizing data ethics in Microsoft’s AI development may help avoid harmful fiduciary and regulatory16 17 consequences.18 Americans surveyed by Pew Research Center have expressed that data privacy and usage are among their main concerns with Big Tech’s AI initiatives.19 Developers who prioritize ethical data usage will reap the benefits of consumer trust, while those that do not will suffer.
Microsoft’s position in the AI arms race, and its associated historic valuation, hang in the balance.
Resolved: Shareholders request the Company prepare a report, at reasonable cost, omitting proprietary or legally privileged information, to be published within one year of the Annual Meeting and updated annually thereafter, which assesses the risks to the Company’s operations and finances, and to public welfare, presented by the real or potential unethical or improper usage of external data in the development and training of its artificial intelligence offerings; what steps the Company takes to mitigate those risks; and how it measures the effectiveness of such efforts.
Source material for this episode:
https://CynLibSoc.com/clsology/sources/Microsoft-2024_Proxy_Statement.pdf
Join the unofficial official Cynical Libertarian Society Telegram which is run by Free Range Fornicator: https://t.me/+55ezQ-ezV8Q4NDU0
All The Podcasts Belong To You: You can get every podcast ever recorded by The Great One, Himself. No bullshit. Every podcast.
RSS Feed: https://www.cynlibsoc.com/feed/
Cyber Begging: Contribute here. Give me your federal reserve fiat currency cuck bucks. For $111 federal reserve fiat currency cuck bucks I will do a podcast on any topic you choose.
Give me demz Bitcoinz at:
bc1qrjanhe8434sk44xwvnqsgt0y52ngd8yk9hv2y7
Stalk The Great One. Send The Great One hate messages and death threats. Tell The Great One how right he is and feed his ego. Send The Great One nude photos of you if you are a cute girl. All this and more at the faggot social media I almost never use at the links below:
Odysee.com: https://odysee.com/@CynLibSoc:7
InstaThot: https://www.instagram.com/cynlibsoc/
Back on the Twitterverse, or X, or Whatever Its Pronouns Are This Week: https://twitter.com/cynlibsoc
Twitterverse account where new podcasts are posted: https://twitter.com/CLSPodcastFeed
CensorshipTube: https://www.youtube.com/@CynLibSoc/videos
BoomerBook: https://www.facebook.com/CynLibSoc
Gab: https://gab.com/CynLibSoc
CLS Merch, get it before it’s removed for violating the TOS: https://www.cafepress.com/cynlibsoc
Send some commies to Canada. They said they would go if the Trumpenfuhrer was elected President but they are too dumb to figure out Canada is to the north and too poor to get there ’cause they have liberal arts degrees. Commies To Canada.