Grok, AI Surveillance, & Majorana
News at the intersection of Justice & AI. February 24th, 2025. Issue 49.
Check out our Just AI Weekly Update podcast! We cover the top AI, responsible AI, and AI policy news stories of the week - and why they matter - in 25 minutes or less.
Anthropic debuts Claude 3.7 Sonnet with deep reasoning (Axios)
DeepSeek rushes to launch new AI model as China goes all in (Reuters)
Microsoft Dropped Some AI Data Center Leases (Bloomberg)
*Paige works for GitHub, a Microsoft Company
Microsoft Debuts the Majorana 1 Chip, Potentially Revolutionizing Quantum Computing
Microsoft has unveiled Majorana 1, a groundbreaking quantum chip built on a Topological Core architecture. It uses a novel material called a topoconductor to create more stable and scalable qubits, a major step toward quantum computers with a million qubits. Microsoft says this breakthrough could make industrial-scale problems, such as environmental cleanup and new materials design, solvable within years rather than decades. The chip's digital control simplifies quantum operations, improving reliability and efficiency. Microsoft's unique approach, supported by peer-reviewed research in Nature, led to its selection for DARPA's utility-scale quantum computing program. By leveraging Majorana particles, which had previously existed only in theory, Microsoft has reached a key milestone on the path to fault-tolerant quantum computing. The company aims to integrate these advancements into Azure Quantum, providing scalable solutions for commercial and scientific applications. This progress could reshape AI, chemistry, and engineering, making quantum computing an essential tool for future problem-solving.
For a deeper dive on this topic, I recommend checking out Meg McNulty’s Cipher Talk deep dive article - Microsoft’s Quantum Leap.
👉 Why it matters: The risks here follow a familiar pattern: any technology can be a tool or a weapon. So it is with AI, and so it is with quantum computing. As the technology scales and becomes less expensive, we could see bad actors combine quantum computing and AI to catastrophic ends. One hope is that the AI era gives us enough runway to land on realistic, meaningful safety guidelines that can be extended to quantum innovations. As with AI, the impact on the planet should also be considered: qubits must be kept near absolute zero (about -460 degrees Fahrenheit), and the energy required to maintain those temperatures is tremendous.
Responsible AI rating: 🟡
xAI Grok 3 vs. OpenAI o3-Mini-High AI Benchmark Controversy
A dispute over AI benchmarking erupted after an OpenAI employee accused xAI of making misleading claims about its Grok 3 model's performance on AIME 2025, a math benchmark. xAI's chart showed Grok 3 outperforming OpenAI's o3-mini-high but omitted the "cons@64" (consensus@64) metric, which gives a model 64 attempts at each problem and grades the most common answer, substantially boosting scores. When that metric is included, OpenAI's models come out ahead. xAI co-founder Igor Babuschkin defended the company, noting that OpenAI has used similar tactics. Experts argue AI benchmarks often lack transparency, especially regarding computational costs. The debate highlights the ongoing challenge of fairly assessing AI model capabilities and communicating their real-world effectiveness.
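For readers who want to see what the disputed metric actually does, here is a minimal illustrative sketch in Python (not xAI's or OpenAI's actual evaluation code) comparing single-attempt scoring with consensus@64 scoring for a hypothetical model that answers each problem correctly 60% of the time:

```python
from collections import Counter
import random

def single_attempt_score(answer_pools, correct_answers):
    """pass@1-style scoring: grade one sampled answer per problem."""
    hits = sum(random.choice(pool) == correct
               for pool, correct in zip(answer_pools, correct_answers))
    return hits / len(correct_answers)

def consensus_at_k_score(answer_pools, correct_answers, k=64):
    """cons@k-style scoring: sample k answers per problem and
    grade the most frequent (majority-vote) answer."""
    hits = 0
    for pool, correct in zip(answer_pools, correct_answers):
        samples = random.choices(pool, k=k)               # k independent attempts
        majority, _ = Counter(samples).most_common(1)[0]  # most common answer wins
        hits += majority == correct
    return hits / len(correct_answers)

# Hypothetical model: answers each of 100 problems correctly 60% of the time.
answer_pools = [["right"] * 6 + ["wrong"] * 4 for _ in range(100)]
correct_answers = ["right"] * 100

print("pass@1 score :", single_attempt_score(answer_pools, correct_answers))   # ~0.60
print("cons@64 score:", consensus_at_k_score(answer_pools, correct_answers))   # usually ~0.9+
```

Majority voting over many samples turns a model that is merely right more often than wrong into one that looks nearly perfect, which is why leaving the cons@64 label off a comparison chart changes the story.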
👉 Why it matters: AI benchmark results can be misleading if not critically examined. While they offer insight into model performance, they are often arbitrary and don’t fully capture a model’s real-world capabilities or limitations. As AI researcher Nathan Lambert points out, the most important but often overlooked factor is the computational and financial cost required to achieve top scores. Without transparency on these aspects, comparisons between models can be skewed. It’s crucial to fact-check claims, understand the context behind the numbers, and recognize that benchmarks alone don’t determine an AI model’s true effectiveness.
Responsible AI rating: 🚩
Microsoft Employees Protest the Company's AI and Cloud Contracts with the Israeli Military
Five Microsoft employees were ejected from a company town hall after protesting Microsoft’s AI and cloud contracts with the Israeli military. Their protest followed an AP investigation revealing that Microsoft's AI models were used in Israeli military targeting during conflicts in Gaza and Lebanon. The employees wore shirts spelling “Does Our Code Kill Kids, Satya?” before being removed. Microsoft has faced internal dissent on this issue for months, with some employees questioning whether these contracts violate the company’s human rights commitments. The company has not disclosed whether the protesters will face disciplinary action, but it previously fired two workers for organizing a vigil for Palestinian refugees.
👉 Why it matters: This incident highlights growing tensions within major tech firms over ethical AI use and military contracts. As AI becomes more integrated into warfare, companies like Microsoft face scrutiny over their role in global conflicts. Employee activism on ethical concerns is increasing, raising questions about corporate responsibility, transparency, and alignment with stated human rights principles. The controversy also underscores the broader debate on AI’s ethical deployment, particularly in life-or-death military applications.
Responsible AI rating: 🟡
OpenAI Uncovers Evidence of Chinese Surveillance Tool
OpenAI uncovered evidence of a Chinese AI-powered surveillance tool designed to track anti-Chinese posts on Western social media. Researchers identified the tool, dubbed Peer Review, after someone used OpenAI’s technology to debug its code. OpenAI also found another Chinese campaign, Sponsored Discontent, which generated English-language posts criticizing Chinese dissidents and translated articles into Spanish to spread anti-U.S. narratives. Additionally, a Cambodia-based operation used OpenAI’s models to aid online scams. These findings highlight growing concerns about AI’s role in surveillance, disinformation, and cybercrime, while also demonstrating how AI can be leveraged to detect and counter such threats.
👉 Why it matters: This case highlights the importance of being mindful when using AI. OpenAI’s ability to track how its technology is used underscores that AI companies are monitoring activity—sometimes for good, but not always. Businesses should be especially cautious, as sharing sensitive information with AI tools could lead to unintended exposure.
Additionally, this incident shows AI’s potential for positive impact. OpenAI’s detection of misuse provided valuable insights into China’s disinformation tactics and surveillance efforts.
Finally, it's a reminder to educate loved ones about online scams. Schemes like "pig butchering," a long-con investment fraud, have led to devastating financial losses; awareness is key to prevention.
Political Consultant Who Commissioned Biden Deepfake Calls Goes to Trial
Democratic consultant Steve Kramer is on trial for orchestrating AI-generated robocalls mimicking President Biden ahead of the 2024 New Hampshire primary. Kramer faces 26 criminal charges and a $6 million FCC fine for attempting to mislead voters. The calls, reaching up to 25,000 people, falsely urged Democrats to skip the primary. Kramer claims it was a stunt to highlight AI regulation needs. A magician was paid $150 to create the deepfake. Lingo Telecom, which transmitted the calls, faces a $2 million fine. New Hampshire officials launched an investigation, underscoring concerns over AI’s role in election misinformation.
👉 Why it matters: This case highlights the dangerous potential of AI in spreading election misinformation. While Kramer claims his actions were meant to expose the need for AI regulation, the impact remains deeply harmful to democracy. AI-generated deepfakes can erode trust in elections, manipulate voters, and create confusion, setting a troubling precedent. As AI technology advances, stronger safeguards are needed to prevent similar incidents from undermining democratic processes and public confidence in fair elections. Intentions aside, the misuse of AI in this way demonstrates the urgent need for ethical and legal protections.
EU Withdraws AI Liability Proposal
The EU has withdrawn the AI Liability Directive, which aimed to protect victims of AI-related harm, from its 2025 work program due to stalled negotiations. However, the European Parliament’s Internal Market and Consumer Protection Committee is pushing to continue work on it, while the Legal Affairs Committee has yet to decide. Critics, including MEP Axel Voss, blame industry lobbying for the reversal, arguing that Big Tech fears liability rules. Meanwhile, in the U.S., the White House is seeking public input on its AI Action Plan, which will shape policies on AI governance, security, innovation, and regulation, with comments open until March 15, 2025.
👉 Why it matters: If the AI Liability Directive is indefinitely shelved, individuals harmed by AI systems will have fewer legal protections and limited recourse for proving harm. Without liability rules, the EU would lack the necessary transparency requirements for AI companies, making it harder for victims to seek justice.
The decision may also be influenced by discussions at the Paris AI Action Summit, where the EU acknowledged that regulatory complexity and red tape were deterring businesses. While streamlining AI governance is important, removing liability protections risks prioritizing corporate interests over consumer rights and accountability.
Spotlight on Research
"Why do we do this?": Moral Stress and the Affective Experience of Ethics in Practice
Authors: Sonja Rattay, Ville Vakkuri, Marco Rozendaal, Irina Shklovski
A plethora of toolkits, checklists, and workshops have been developed to bridge the well-documented gap between AI ethics principles and practice. Yet little is known about the effects of such interventions on practitioners. We conducted an ethnographic investigation in a major European city organization that developed and works to integrate an ethics toolkit into city operations. We find that the integration of ethics tools by technical teams destabilises their boundaries, roles, and mandates around responsibilities and decisions. This leads to emotional discomfort and feelings of vulnerability, which neither toolkit designers nor the organization had accounted for. We leverage the concept of moral stress to argue that this affective experience is a core challenge to the successful integration of ethics tools in technical practice. Even in this best-case scenario, organisational structures were not able to deal with the moral stress that resulted from attempts to implement responsible technology development practices.
Responsible AI Agents
Authors: Deven R. Desai, Mark O. Riedl
(Due to space requirements, the abstract was reduced using ChatGPT. Please open the article to read the full abstract.)
Thanks to advances in large language models, AI agents now execute tasks rather than just generating text. Companies like OpenAI and Google envision AI agents booking trips or posting content autonomously. However, legal scholars warn of risks like rogue commerce, manipulation, and defamation, calling for regulation. This Article argues AI agents can be disciplined through software design, leveraging value-alignment to enhance user control. It also asserts that AI agents should not receive legal personhood—humans remain responsible for their actions. The Article provides a framework for building responsible AI agents while enabling economic benefits and mitigating risks.
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai.
Connect with me on LinkedIn, or subscribe to the Just AI with Paige YouTube channel to see more videos on responsible AI.
Looking for a speaker at your next AI event, or want to connect 1:1? Email hello@justaimedia.ai.
How AI was used in this newsletter:
I personally read every article I reference in this newsletter, but I’ve begun using AI to summarize some articles. I always read the summary to check that it’s factual and accurate.
I DO NOT use AI to write the ethical perspective - that comes from me.
I occasionally use AI to create images.