AI Safety Tug-of-War: Google Rolls Back AI Weapons Ban, Global Leaders Assemble in Paris for AI Action Summit
News at the intersection of Justice & AI. February 10th, 2025. Issue 47.
If you prefer not to read the newsletter, you can listen instead!👂 The Just AI podcast covers everything in the weekly newsletter and a bit more. Have a listen, and follow & share if you find it to be helpful!
Inside France’s Effort to Shape the Global AI Conversation (TIME)
The Paris AI Action Summit: The Eighteenth Brumaire of Big Tech? (TechPolicy.Press)
White House seeks public input on AI strategy (Axios)
*Paige works for GitHub, a Microsoft Company
Google Re-edits Super Bowl AI Ad After Gouda Cheese Blunder
Google was forced to re-edit a Super Bowl advertisement for its AI tool, Gemini, after it falsely claimed that Gouda accounts for "50 to 60 percent of global cheese consumption." The ad, intended to highlight Gemini’s capabilities, was pulled after a blogger pointed out the error on X, calling it “unequivocally false.”
Google executive Jerry Dischler defended Gemini, stating the AI pulled the figure from multiple websites rather than hallucinating the data. However, after public scrutiny, Google removed the claim and re-released the ad on YouTube without the misleading statistic.
Responsible AI rating: 🟡
👉 Why it matters: This is not Google's first AI stumble. Gemini has previously been criticized for generating historically inaccurate images, and Google's AI search results have offered bizarre advice. This incident, on the high-profile Super Bowl stage, raises further concerns about AI reliability and misinformation in mainstream applications, and about whether even basic checks of AI outputs are being performed before they reach the public. It is another indicator that Google does not have the basic processes in place to ensure that uses and outputs of AI are safe and accurate.
So what is the world’s most popular cheese? Perspectives vary. According to Spinneyfields, it’s mozzarella, but according to a researcher for the US Department of Agriculture, it’s cheddar.
Google Lifts Ban on Using Its AI for Weapons and Surveillance, Changes AI Principles
Google has overhauled its AI principles, lifting restrictions on using AI for weapons and surveillance, a shift from its 2018 commitments. The changes, announced shortly after the start of Trump’s second term, remove previous bans on AI technologies that could cause harm, violate human rights, or facilitate surveillance beyond international norms. Instead, Google now emphasizes “appropriate human oversight” and “aligning with widely accepted international laws.” The revision has sparked internal concerns, with employees criticizing the lack of transparency and public input.
The changes replace seven clearly defined AI principles, along with a list of applications the company would not pursue, with three very general principles and no clear statement of which AI uses, if any, remain off-limits for the tech giant.
Google executives cite geopolitical pressures and evolving AI standards as key reasons for the policy shift, emphasizing AI’s role in national security and global competition.
Responsible AI rating: 🚩
👉 Why it matters: Critics argue this move signals Big Tech's growing alignment with military interests, and it appears to be another case of a tech company aligning itself with the goals of the Trump Administration, for better or worse. Google's interest in military uses of AI is what sparked employee outrage and led to the 2018 ban on those uses in the first place. The changes give Google more flexibility in how it applies AI, but they raise very real ethical concerns about AI's future role in warfare and surveillance and the potential for civil liberties violations. Google employees feel the changes were made without transparency or public input.
Big Tech’s AI Spending Soars to $325 Billion in 2025 Amid Investor Scrutiny
Meta (META), Microsoft (MSFT), Amazon (AMZN), and Alphabet (GOOG) are set to invest a record-breaking $325 billion in AI infrastructure in 2025, a 46% increase from last year’s $223 billion. Despite investor concerns over the long-term payoff, these tech giants remain committed to aggressive AI expansion.
Amazon leads spending, projecting around $105 billion for 2025, with the majority going toward AI for AWS. CEO Andy Jassy called AI “the biggest opportunity since the internet.” Meta's AI investment jumps to $60-$65 billion, with Mark Zuckerberg stating the company will spend “hundreds of billions” over time. Alphabet plans $75 billion in AI-related investments, 30% higher than expected, causing a 7% stock drop.
Responsible AI rating: 🟡
👉 Why it matters: The spending spree persists even after Chinese startup DeepSeek's low-cost AI models rattled markets, raising questions about whether such massive expenditures are necessary. Investors remain cautious because AI's profitability timeline is still uncertain. Much of the money appears to be going toward building massive data centers, a capital expenditure that gives companies more control over training, input, and output data. The infrastructure push also raises the question: when will we see comparable investment in responsible AI? In the current political climate, broad investment of that kind from large companies seems unlikely in the near future. Of course, there are always exceptions.
The Department of Government Efficiency Uses AI to Review PII Data, Make Decisions
The Department of Government Efficiency (DOGE) has fed sensitive federal data, including personally identifiable information and financial records, into AI software to identify potential spending cuts at the U.S. Department of Education. The AI analysis, conducted via Microsoft Azure, is part of a broader effort to shrink the Department of Education and ultimately dismantle it, aligning with the Trump administration’s goal. DOGE’s AI-driven strategy is being replicated across other federal agencies, including the Departments of Health and Labor, as well as the CDC.
👉 Why it matters: Critics warn that running AI over secure government data increases cybersecurity risks and could lead to significant errors or bias. According to the Washington Post, the initiative has already led to 100 employees, primarily women and non-White staff, being placed on administrative leave over past participation in diversity training. Government employees and AI policy experts are voicing growing concerns over the lack of transparency, the speed of DOGE's actions, and the potential misuse of data, including possible retaliation for taking diversity training, being a “diverse” candidate, or working on investigations that President Trump finds personally offensive.
What You Should Know: AI Action Summit in Paris
The AI Action Summit in Paris (Feb 10-11, 2025) brings together world leaders, tech executives, and experts to discuss AI governance, competition, and safety. As the latest in a series of AI summits following Bletchley Park (UK, 2023) and Seoul (South Korea, 2024), the event marks a critical moment in the global AI power struggle. China’s DeepSeek AI is challenging U.S. dominance, prompting concerns about AI accessibility and economic impact. France and the EU see this as a "wake-up call" to strengthen their AI leadership, while Vice President JD Vance leads the U.S. delegation alongside OpenAI’s Sam Altman and Google’s Sundar Pichai.
Discussions will cover AI safety risks—including misinformation, cybersecurity threats, and AI-controlled weapons—along with economic concerns like automation and workforce displacement. The event will also push for the Global Digital Compact (GDC), a UN-led AI governance initiative.
👉 Why it matters: Critics argue that Big Tech has too much influence over these summits, raising concerns about corporate dominance in AI policy. With China’s rising influence, Europe’s ambitions, and growing calls for AI regulation, the Paris summit will shape the future of global AI governance, economic impact, and technological power dynamics. The major risks are that large AI companies will hold outsized sway over the conversation, and that the US focus on AI dominance will crowd out safety, collaboration, and a global perspective.
US Copyright Office Releases Copyright & AI Report
What you need to know from page iii of the report:
“1. Questions of copyrightability and AI can be resolved pursuant to existing law, without the need for legislative change.
2. The use of AI tools to assist rather than stand in for human creativity does not affect the availability of copyright protection for the output.
3. Copyright protects the original expression in a work created by a human author, even if the work also includes AI-generated material.
4. Copyright does not extend to purely AI-generated material, or material where there is insufficient human control over the expressive elements.
5. Whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.
6. Based on the functioning of current generally available technology, prompts do not alone provide sufficient control.
7. Human authors are entitled to copyright in their works of authorship that are perceptible in AI-generated outputs, as well as the creative selection, coordination, or arrangement of material in the outputs, or creative modifications of the outputs.
8. The case has not been made for additional copyright or sui generis protection for AI-generated content.”
👉 Why it matters: Concerns over copyright have been constant since the launch of ChatGPT in late 2022. While this report does not resolve the outstanding lawsuits over large companies using copyrighted materials to train AI, it touches on that topic only lightly. The major takeaway for individuals using generative AI is that to claim copyright in a work created with AI, a human must make significant creative contributions, and whether those contributions are sufficient will be analyzed on a case-by-case basis.
Spotlight on Research
DeepMind AI crushes tough maths problems on par with top human solvers
Google DeepMind's AlphaGeometry2, an upgraded AI system, has surpassed human competitors in solving complex geometry problems, achieving gold-medal-level performance at the International Mathematical Olympiad (IMO). This marks a significant improvement over its predecessor, AlphaGeometry, which previously matched silver medallists. The system utilizes a neuro-symbolic architecture that integrates Google’s Gemini large language model and introduces enhanced reasoning capabilities, including manipulating geometric objects and solving linear equations. AlphaGeometry2 successfully solved 84% of IMO geometry problems from the past 25 years, significantly outperforming its predecessor’s 54% success rate. Future advancements will focus on handling inequalities and non-linear equations to further refine its problem-solving abilities. This breakthrough highlights AI’s growing role in mathematical reasoning, with implications for education, scientific discovery, and automated theorem proving. As AI systems like AlphaGeometry2 continue to evolve, they bring both opportunities and challenges in the realm of human-AI collaboration in mathematics.
Two Types of AI Existential Risk: Decisive and Accumulative
The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events have severe consequences that either lead to human extinction or irreversibly cripple human civilization to a point beyond recovery. This discourse, however, often neglects the serious possibility of AI x-risks manifesting incrementally through a series of smaller yet interconnected disruptions, gradually crossing critical thresholds over time. This paper contrasts the conventional decisive AI x-risk hypothesis with an accumulative AI x-risk hypothesis. While the former envisions an overt AI takeover pathway, characterized by scenarios like uncontrollable superintelligence, the latter suggests a different causal pathway to existential catastrophes. This involves a gradual accumulation of critical AI-induced threats such as severe vulnerabilities and systemic erosion of economic and political structures. The accumulative hypothesis suggests a boiling frog scenario where incremental AI risks slowly converge, undermining societal resilience until a triggering event results in irreversible collapse. Through systems analysis, this paper examines the distinct assumptions differentiating these two hypotheses. It is then argued that the accumulative view can reconcile seemingly incompatible perspectives on AI risks. The implications of differentiating between these causal pathways — the decisive and the accumulative — for the governance of AI as well as long-term AI safety are discussed.
Watch: VIDEO TITLE
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai.
Connect with me on LinkedIn, or subscribe to the Just AI with Paige YouTube channel to see more videos on responsible AI.
Looking for a speaker at your next AI event? Email hello@justaimedia.ai if you want to connect 1:1.
How AI was used in this newsletter:
I personally read every article I reference in this newsletter, but I’ve begun using AI to summarize some articles.
I DO NOT use AI to write the ethical perspective; that comes from me.
I occasionally use AI to create images.