The US State Department's New AI Use - and Its Impact on Free Speech
News at the intersection of Justice & AI. March 10th, 2025. Issue 50.
If you prefer not to read the newsletter, you can listen instead!👂
Who bought this smoked salmon? How ‘AI agents’ will change the internet (and shopping lists)(The Guardian)
What one Finnish church learned from creating a service almost entirely with AI (AP News)
Meet the 21-year-old helping coders use AI to cheat in Google and other tech job interviews (CNBC)
*Paige Lord is the founder & CEO of Just AI. She also works for GitHub, a Microsoft Company
Nuclear: Achieving net-zero emissions as AI energy demand rises
Constellation Energy is investing $1.6 billion to restart the Three Mile Island nuclear plant, with Microsoft agreeing to purchase the power to meet AI data centers' growing energy needs. Big Tech companies like Google and Amazon are also turning to nuclear energy to achieve net-zero emissions, with Google investing in small modular reactors (SMRs). Experts warn that SMRs are still unproven, and tech companies may struggle to scale them. Despite uncertainties, industry leaders believe nuclear could be key to meeting clean energy goals amid rising AI electricity demands. (Read the article.)
👉 Why it matters: As energy demand for AI products increases, companies are scrambling to balance their commitments to sustainability against their commitments to shareholders. This has led to rapid exploration of energy options that will allow companies to build AI, power it, and meet demand for their products, while also upholding their voluntary - and crucial - commitments to the future of the planet. While nuclear power is reliable and carbon-free, challenges include high costs, long build times, and nuclear waste. This raises the question: is there any good, clean, non-detrimental option for powering AI?
Responsible AI rating: 🟡
Apple’s speech-to-text is insulting - literally
A Scottish woman, Louise Littlejohn, was shocked when Apple’s AI-powered voice-to-text transcription mistakenly inserted an inappropriate reference to sex and an insult in a voicemail from a Land Rover dealership. The message was a routine business call inviting her to an event, but the AI system seemingly misheard words, likely due to background noise and the scripted nature of the call. Experts believe the Scottish accent may have contributed, but poor audio quality was a bigger factor. Apple declined to comment, but similar AI transcription issues have been reported before, highlighting concerns about speech-to-text accuracy and safeguards in public AI systems. (Read the article.)
👉 Why it matters: As AI continues to scale and more use cases are identified, there is an expectation that there will be some errors. However, speech-to-text technology has been around for some time. This, coupled with Apple’s other AI-related challenges, like the spreading of misinformation in the form of AI hallucinations on iOS devices, is a cause for concern and further illustrates that Apple’s race to release AI may have left them on the back foot. That said, Apple has a good track record of pulling their AI off the market when there are serious errors, and working to release safer, less problematic technology.
Responsible AI rating: 🟡
Apple pushes AI-enhanced Siri back to 2026
Apple has delayed its AI-enhanced Siri features until 2026, pushing back capabilities that would allow Siri to interact with other apps and use personal context for tasks like filling out forms. Originally expected this spring, the delay underscores Apple's struggles to keep pace with AI rivals like OpenAI, Amazon, and Google. Apple has already integrated some AI-driven Siri improvements, including a more conversational tone and ChatGPT integration. However, Apple Intelligence has faced challenges, including inaccuracies in AI-generated news summaries. Developers are preparing Siri-compatible features, but they won’t be functional until Apple releases a beta. Apple is expected to announce updates at WWDC in June. (Read the article.)
👉 Why it matters: Apple’s already seeing challenges with their Apple Intelligence platform and features after a hasty push to get their AI products to market. This setback is another sign that Apple isn’t in a position to compete well with other AI companies, and it raises the question: do they have a vision? Ultimately, it’s better to launch a higher-quality, safer AI product on a longer timeline than a poor-quality, unsafe product just to say you’re in the market.
Responsible AI rating: 🟢
The US State Department is using AI to target foreign student visa holders participating in protests, chilling free speech
The State Department's new "Catch and Revoke" initiative uses AI technology to scan tens of thousands of foreign student visa holders’ social media accounts for signs of support for Hamas or other U.S.-designated terror groups. The AI will also flag participation in anti-war or pro-Palestinian demonstrations, checking news reports and lawsuits citing antisemitic incidents. Officials say the program is part of a "whole of government" effort involving the Justice and Homeland Security Departments.
Critics, including civil rights groups, argue the program raises serious First Amendment concerns and could encourage self-censorship on college campuses. The Foundation for Individual Rights and Expression warned that AI is not equipped to judge protected speech, while others likened the initiative to past surveillance programs targeting pro-Palestinian activists.
The move aligns with President Trump’s executive orders, which aim to deport non-citizen students who participate in pro-Palestinian protests. Trump recently pulled $400 million in federal funding from Columbia University over alleged failures to protect Jewish students. Critics fear AI’s role in policing speech could lead to broad overreach, affecting free expression on U.S. campuses. (Read the article.)
👉 Why it matters: Freedom of speech and freedom to assemble are two rights afforded by the First Amendment for citizens and residents of the United States.
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”
This use of AI by the United States government catapults us into dangerous territory as a country. First, because it has a chilling effect on free speech and the right to peaceably assemble. Second, because this use case is being justified by a false narrative: the State Department has indicated that it believes that if someone stands with the Palestinian people, they also stand with terrorist groups like Hamas. This change of narrative, which advances the President’s agenda of seizing Gaza and turning it into a resort, is meant to create a false link between the two. Third, using AI to scan social media to identify student protesters as a way to “catch” the students and “revoke” their student visas is dangerous. Facial recognition technology and other vision AI used to examine these social media accounts and flag “offenders” is a road to harm. The Justice Department is not being transparent about the type of AI being used, the safety measures built in, or how humans are incorporated into the process to ensure that people are not being falsely identified.
China welcomes AI expert home after two decades abroad
Chinese AI expert Tingwen Huang has returned to China after over two decades abroad, joining Shenzhen University of Advanced Technology as a chair professor. Previously, he spent 20+ years at Texas A&M University in Qatar, where his research focused on intelligent control, optimization, and complex systems dynamics. Huang is a highly cited researcher, with nearly 700 published papers and over 44,000 citations on Google Scholar. His work, supported by $7 million in funding from the Qatar National Research Fund, applies mathematical principles to AI, neural networks, and autonomous systems. (Read the article.)
👉 Why it matters: Huang’s return signifies China's efforts to attract top AI talent as it strengthens its position in AI research and development. This is the latest in a series of signals that indicate that China is continuing to ramp up its AI efforts. Last week, Chinese authorities issued a travel warning to AI entrepreneurs and researchers advising them to limit their travel to the US out of fear that there could be potential intelligence leaks and strategic vulnerabilities. China also quietly removed its minister of industry and information technology - Jin Zhuanglong - and replaced him with another leader. Finally, China has been expanding its investment in “The Digital Silk Road” - implementing digital infrastructure in Africa. It has signed over $700 billion in engineering deals in Africa in the last decade, and is now expanding AI infrastructure as well.
Anthropic shares policy recommendations for the US AI Action Plan
Anthropic has submitted key recommendations to the Office of Science and Technology Policy (OSTP) for the U.S. AI Action Plan, urging decisive action to maintain America’s AI leadership. The company anticipates powerful AI systems emerging by 2026-2027, with human-like intellectual capabilities and autonomous control over digital and physical tools.
Anthropic’s six key policy proposals include:
National security testing of AI models.
Stronger export controls on semiconductor technology.
Enhanced AI lab security with classified intelligence coordination.
Expanding energy infrastructure by 50 gigawatts by 2027.
Accelerating AI adoption across government agencies.
Preparing for economic shifts caused by AI advances.
These recommendations aim to balance innovation with security risks, ensuring AI benefits all Americans. (Read the article.)
👉 Why it matters: As the US continues to work out its stance on AI policy and collects public comments, we’re seeing more companies throw their hats in the ring. Anthropic’s submission is the latest in a series of AI policy recommendations from large AI companies. In contrast to what companies like Microsoft are saying, Anthropic is asking for stronger export controls on semiconductor technology - a stark difference in perspectives. As these perspectives are taken into account, it will be interesting to see which company’s suggestions have sticking power with the Trump administration.
Spotlight on Research
Generative AI, Democracy and Human Rights
Disinformation is not new, but given how disinformation campaigns are constructed, there is almost no stage that will not be rendered more effective by the use of generative artificial intelligence (AI). Given the unsatisfactory nature of current tools to address this budding reality, disinformation, especially during elections, is set to get much, much worse.
As these campaigns become more sophisticated and manipulative, the foreseeable consequence will be a further erosion of trust in institutions and a heightened disintegration of civic integrity, which in turn will jeopardize a host of human rights, including electoral rights and the right to freedom of thought.
In this policy brief, David Evan Harris and Aaron Shull argue that policy makers must hold AI companies liable for the harms caused or facilitated by their products that could have been reasonably foreseen, act quickly to ban using AI to impersonate real persons or organizations, and require the use of watermarking or other provenance tools to allow people to distinguish between AI-generated and authentic content.
Watch: Frontier Insights with host Yogesh Chavda
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai.
Connect with me on LinkedIn, or subscribe to the Just AI with Paige YouTube channel to see more videos on responsible AI.
Looking for a speaker at your next AI event? Email hello@justaimedia.ai if you want to connect 1:1.
How AI was used in this newsletter:
I personally read every article I reference in this newsletter, but I’ve begun using AI to summarize some articles. I always read the summary to check that it’s factual and accurate.
I DO NOT use AI to write the ethical perspective - that comes from me.
I occasionally use AI to create images.