US AI Policy, Censorship, and Stargate
News at the intersection of justice & AI. The week of January 27th, 2025. Issue 45.
If you prefer not to read the newsletter, you can listen instead! 👂 The Just AI podcast covers everything in the weekly newsletter and a bit more. Have a listen, and follow & share if you find it helpful!
OpenAI's New AI Agent Can Order Groceries for You. Analysts Say That's Bad News for Google. (Investopedia)
AI weapon detection system at Antioch High School failed to detect gun in Nashville shooting (NBC News)
$60 billion in one year: Mark Zuckerberg touts Meta's AI investments (NBC News)
*Paige works for GitHub, a Microsoft Company
A Flurry of Social Media Interference Accusations Amid the 2025 Presidential Transition
In the early days of 2025, social media platforms are facing unprecedented scrutiny over their content moderation practices, raising critical questions about free speech, algorithmic bias, and election integrity.
Auto-Following and Algorithmic Shifts
Meta's platforms triggered user confusion with unexpected account follows. As President Trump took office, many Instagram and Facebook users found themselves automatically following presidential accounts. Meta spokesman Andy Stone insisted, "People were not made to automatically follow any of the official Facebook or Instagram accounts," but acknowledged technical complexities during the transition. (It was later confirmed that users were not made to auto-follow, but some experienced a technical glitch that made it difficult to unfollow for a period during the transition between administrations.)
Content Suppression Concerns
Multiple troubling trends emerged simultaneously:
Democratic Content Blocking
Instagram users reported being unable to search for terms like "#Democrats" and "#DNC", with the platform displaying "sensitive content" warnings. Meta confirmed an "error affecting hashtags across the political spectrum."
Abortion Information Suppression
Abortion pill providers like Aid Access and Hey Jane reported systematic blocking of their accounts and content. Rebecca Davis from Hey Jane stated, "We know firsthand that this suppression actively prevents [us] from reaching people who are seeking out timely health care information." Meta confirmed that this was happening and called it "over-enforcement."
Allegations of Manipulation at X
A letter purportedly written by an X employee alleges that the platform:
Manipulated algorithmic feeds to boost pro-Trump content
Created thousands of AI-generated fake accounts
Allowed foreign government influence campaigns
Manufactured viral news stories
We do not know whether this letter from the X employee is factual, but the claims are concerning and worthy of investigation. I am on the lookout for reputable sources getting to ground on this and will report back.
👉 Why it matters: While X made its content moderation changes some time ago, Meta has only recently made significant shifts in content moderation: Mark Zuckerberg announced that Meta would abandon independent fact-checkers, stating they were "too politically biased" and that it was "time to get back to our roots around free expression." These censorship events add fuel to people's outrage and distrust of the tech giant. Social media expert Matt Navarra warned, "In a hyper-partisan environment, even unintentional errors like this can escalate into accusations of partisanship." The suppression of political search terms and of abortion and sexual health accounts and posts, even if unintentional, is a major cause for concern. Abortion and sexual health rights are rights, even if the government or a platform's owner disagrees that they should be. Everyone should remain watchful. Responsible AI rating: 🚩
OpenAI and President Trump Unveil $500 Billion "Stargate" AI Infrastructure Project
President Donald Trump announced a massive $500 billion joint venture between OpenAI, SoftBank, MGX, and Oracle to build new AI data centers in the United States. The project, called Stargate, aims to create powerful AI infrastructure and maintain U.S. technological leadership.
While the total investment will be spread over four years, $100 billion will be deployed immediately, with construction already underway in Texas. The project includes international investment, with MGX (an Abu Dhabi sovereign wealth fund) and SoftBank (a Japanese company) contributing funding.
The announcement sparked tension between tech rivals Elon Musk and Sam Altman. Musk criticized the project's financing, claiming SoftBank has "well under $10 billion secured." Altman responded diplomatically, praising Musk while defending the project's viability. Responsible AI rating: 🟡
👉 Why it matters: This project, like many other AI expansion projects, is a cause for major concern, especially given the huge amounts of energy and water consumed to keep such data centers running. With the US no longer party to the Paris Climate Agreement, and with the Trump administration's focus on AI dominance, American companies building out AI infrastructure now seem to have a blank check to get ahead. AI's energy use is expected to skyrocket, and so will its impact on the planet, if agreements aren't made to address the harm. The Trump administration, as well as OpenAI and other companies, speaks of using AI for "human flourishing" but fails to define what that means.
AI's Latest Challenge: Humanity’s Last Exam
Researchers at the Center for AI Safety and Scale AI have developed "Humanity’s Last Exam," a rigorous evaluation aimed at assessing advanced AI capabilities across diverse academic fields. Spearheaded by Dan Hendrycks, the test features over 3,000 difficult questions curated by experts in fields like philosophy and engineering. It represents an effort to measure AI systems' abilities beyond existing benchmarks, which AI models now surpass with ease.
When tested on six leading AI systems, including Google’s Gemini 1.5 Pro and OpenAI's latest model, results were underwhelming, with top scores at just 8.3%. However, experts anticipate rapid improvement, signaling the need for new ways to evaluate AI impact, such as economic influence or its potential to make novel discoveries. Despite the test's complexity, researchers highlight that advanced AI may still fall short in unstructured real-world applications. Responsible AI rating: 🟡
👉 Why it matters: Researchers and AI engineers are driven by a goal of measuring AI's intelligence against that of humans. As AI continues to advance, people will create new and different benchmarks to measure and compare it, which may have the effect of frightening people into thinking that AI is taking over. I urge individuals to consider the many facets of intelligence, and not fall into believing that AI has surpassed humanity simply because it passed the test in question.
President Trump Revokes Biden’s AI Executive Order, Issues a New One
President Trump signed an executive order revoking Biden's AI policies to remove perceived barriers to innovation. The order aims to develop AI systems "free from ideological bias" and create a 180-day AI action plan. Key provisions include immediately reviewing, and potentially suspending, previous AI-related directives that might obstruct American AI leadership. The White House defines the goal as sustaining "America's global AI dominance" to promote human flourishing, economic competitiveness, and national security. Alondra Nelson, a former Biden administration official, criticized the move as "backward looking," warning it could compromise Americans' rights and safety in technological development. President Trump did keep one Biden-era directive relating to land use for data centers.
👉 Why it matters: The Trump Administration is following through on its promises related to AI: to remove anything it sees as a barrier to innovation. In this case, that includes safety systems and processes put in place by the Biden Administration to ensure that technology used by the US government was safe and fair. Government agencies no longer need to adhere to responsible AI processes, and the corporations allowed to provide AI to the US government can now determine for themselves what safety and fairness mean, without benchmarks or standards from the US government. This opens the door to biased AI technology and dangerous or nefarious uses.
China’s Path to AI Dominance May Be a Model Named DeepSeek
DeepSeek, a little-known AI lab from China, has sparked alarm in Silicon Valley after releasing AI models that outperform America's best at a fraction of the cost. In December, DeepSeek unveiled a free, open-source large language model built in just two months for under $6 million, using Nvidia's H800 chips, which are less advanced than top-tier options like the H100.
Benchmark tests showed DeepSeek’s model surpassing Meta’s Llama 3.1, OpenAI’s GPT-4o, and Anthropic’s Claude Sonnet 3.5 in problem-solving, math, and coding tasks. On Monday, the lab released "r1," a reasoning model that also outperformed OpenAI’s latest "o1."
DeepSeek’s success raises questions about the effectiveness of U.S. export controls on high-end chips, which were designed to curb China’s AI development. Experts credit DeepSeek’s innovative cost-saving techniques, such as model distillation, with driving efficiency. Microsoft CEO Satya Nadella warned that China’s advancements demand serious attention.
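For readers wondering what "model distillation" means in practice: a small, cheap "student" model is trained to imitate a large "teacher" model by matching the teacher's softened output probabilities in addition to the true labels. Below is a minimal sketch of the classic distillation loss in PyTorch; it is a textbook illustration with assumed names and hyperparameters (temperature, alpha), not DeepSeek's actual training recipe.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Illustrative sketch only; not DeepSeek's method.
    # Soft loss: how far the student's softened output distribution
    # is from the teacher's, scaled by T^2 as in Hinton et al. (2015).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard loss: standard cross-entropy against the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    # Blend the two; alpha controls how much the student listens
    # to the teacher versus the ground truth.
    return alpha * soft_loss + (1 - alpha) * hard_loss

The upshot: the student never needs the teacher's size or training budget, only its outputs, which is one way a smaller lab can build efficiently on the work embedded in larger models.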
👉 Why it matters: The US has gone to great lengths to stop China from advancing, but, as Yann LeCun has pointed out, this Chinese AI lab is benefiting from open-source AI technology rather than attempting to create proprietary models. US companies have largely focused on creating proprietary models, which are not made available for the public to use and build on; this limits the number of minds working to advance those closed models. While China's advancement in AI models is concerning for the US, the US is pushing back with its own AI infrastructure projects, which come with their own set of concerns.
Pakistan's President & PM Propose Using AI Responsibly to Revolutionize Education
Pakistani leaders President Zardari and Prime Minister Shehbaz Sharif championed artificial intelligence's role in education on the International Day of Education. They emphasized AI as a supportive tool for teachers, not a replacement, focusing on preserving human creativity and agency while integrating technology into learning systems. The government pledged to invest in technological infrastructure, establishing high-impact IT labs, digital hubs, and innovation centers. These initiatives aim to equip youth with critical skills in technology, communication, and digital literacy, positively impacting students while maintaining the central role of human educators.
👉 Why it matters: These initiatives show that Pakistan is committed to finding the best uses of AI and leveraging the technology to equip its youth with critical skills. On the negative side, there is some concern about maintaining the human element amid increasing automation, but as Pakistan does not plan to replace teachers with AI, this is less of an immediate risk, although still a concern for future generations.
Spotlight on Research
Integrating generative AI (GAI) into higher education is essential for cultivating GAI-literate students, but global institutional adoption policies remain underexplored, particularly outside the Global North. This study applies Diffusion of Innovations Theory to analyze GAI adoption in 40 universities across six regions, examining innovation characteristics like compatibility, trialability, and observability, along with communication channels and policy roles. Findings show proactive measures such as ethical guidelines, authentic assessments, and faculty/student training to enhance academic integrity and equity. However, gaps persist in addressing data privacy and equitable access. The study highlights the need for clear communication, stakeholder collaboration, and continuous evaluation, offering actionable insights for policymakers to develop inclusive, transparent, and adaptive GAI integration strategies.
Discrimination and AI in insurance: what do people find fair? Results from a survey
In this paper, we report on a survey of the Dutch population (N=999) in which we asked people's opinions about examples of data-intensive underwriting and behavior-based insurance. The main results include the following. First, if survey respondents find an insurance practice unfair, they also find the practice unacceptable. Second, respondents find almost all modern insurance practices that we described unfair. Third, respondents find practices fairer if they can influence the premium. For example, respondents find behavior-based car insurance with a car tracker relatively fair. Fourth, if respondents do not see the logic of using a certain consumer characteristic, then respondents find it unfair if an insurer calculates the premium based on the characteristic. Fifth, respondents find it unfair if an insurer offers an insurance product only to a specific group, such as car insurance specifically for family doctors. Sixth, respondents find it unfair if an insurance practice leads to higher prices for poorer people. We reflect on the policy implications of the findings.
Watch: Catch up on Last Week’s News
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai.
Connect with me on LinkedIn, or subscribe to the Just AI with Paige YouTube channel to see more videos on responsible AI.
Looking for a speaker at your next AI event? Email hello@justaimedia.ai if you want to connect 1:1.
How AI was used in this newsletter:
I read up on all news items before I publish the newsletter or record the podcast. I’ve begun using AI to summarize some articles.
I DO NOT use AI to write the ethical perspective - that comes from me.
I occasionally use AI to create images.