Microsoft vs. Hackers, US vs. China, UK vs. the World
News at the intersection of Justice & AI. January 14th, 2025. Issue 43.
If you prefer not to read the newsletter, you can listen instead!👂 The Just AI podcast covers everything in the weekly newsletter and a bit more. Have a listen, and follow & share if you find it helpful!
Shadow AI: Shining Light on a Growing Security Threat (FedTech)
Two misuses of popular AI tools spark the question: When do we blame the tools? (Fortune)
How Congress dropped the ball on AI safety (The Hill)
*Paige works for GitHub, a Microsoft Company
Microsoft is Taking Legal Action Against Hackers who Bypassed AI Safety
Microsoft has taken legal action against a group of foreign cybercriminals accused of bypassing AI safety measures on its Azure OpenAI platform. (See the complaint here.) The hackers allegedly used stolen API keys and custom software to circumvent protocols designed to prevent misuse, reportedly reverse-engineering Microsoft’s content-filtering systems, generating harmful content, and stripping metadata from the generated media. Microsoft has secured a court order to seize domains tied to the scheme and disrupt its infrastructure. Responsible AI rating: ✅
👉 Why it matters: This case is an example of the growing challenges companies face with the misuse of generative AI tools, and the struggle to enforce safety measures. Microsoft has a robust responsible AI philosophy, with systems and processes designed to ensure their AI products are safe and used safely. The sophistication of the attack, and the conclusion that foreign cybercriminals were behind it, show how big a target companies become when they lead in the AI space. We can expect to hear more about hacking like this in the future, and we can only hope that AI companies have security and monitoring sophisticated enough to catch it.
How Meta’s Content Monitoring Changes Create Risk with AI on the Rise
The tech industry’s shift toward fewer rules comes as AI enters a critical phase of development. Initially, discussions about AI governance focused on safeguards against superintelligence and bias, but the current environment shows a growing rejection of regulation. Meta’s decision to abandon fact-checking in favor of community notes, a move inspired by broader social and political shifts, reflects this trend and opens up a whole new level of risk when coupled with the acceleration of AI. Responsible AI rating: 🚩
👉 Why it matters: Companies face the challenge of balancing responsible design with competition from less-regulated rivals. U.S. liability laws and global regulations remain uncertain, but without public backlash or a major crisis, the push for regulation-free AI is likely to accelerate. As the company behind some of the largest social media platforms in the world, Meta is a major leader and will set the tone for safety, accountability, and responsible AI. Their shift is likely to signal to other companies that they, too, can make safety a lower priority. This is especially risky as AI’s use cases continue to multiply and as AI becomes more personalized.
NASA’s Commitment to Advancing Space Exploration Responsibly with AI
NASA is making strides in space exploration with the release of its 2024 AI Use Case inventory, highlighting how artificial intelligence is powering everything from the Perseverance Rover’s navigation on Mars to advanced mission planning and environmental monitoring. Tools like AEGIS and Enhanced AutoNav allow rovers to make decisions in real time, while systems like SensorWeb monitor volcanoes and floods on Earth. Adhering to responsible AI principles set by the White House, NASA is working to ensure its AI tools are transparent and accountable. Looking ahead, NASA plans to expand AI’s role, blending cutting-edge innovation with responsibility to shape the future of exploration. Responsible AI rating: ✅
👉 Why it matters: NASA is a great example of responsible AI transparency. By providing detailed information in their use cases, they’re giving insight into how AI is being used, and they’re highlighting their adherence to the White House responsible AI principles set forth in the AI Executive Order in October 2023. Whether the principles will continue to be prioritized after the administration transition remains to be seen. It’s also unclear whether NASA would continue adhering to the principles if the AIEO is revoked by the Trump administration.
The UK Government is Going All-In on AI
The UK government has unveiled an ambitious AI Opportunities Action Plan aimed at boosting economic growth and enhancing public services through artificial intelligence. Backed by £14 billion from tech firms like Vantage Data Centres, Nscale, and Kyndryl, the plan is expected to create over 13,000 jobs and establish AI Growth Zones, starting in Oxfordshire. AI will be deployed in areas like road maintenance, public sector efficiency, and healthcare, with projected economic benefits of £47 billion annually. While PM Sir Keir Starmer hailed AI’s transformative potential, critics argue the plan lacks sufficient investment in cutting-edge infrastructure, risking Britain's global tech competitiveness.
👉 Why it matters: I often tell people that everything is up for grabs in the AI era, and will be for the next few years. This is because AI is so new (relatively speaking), and its presence in this current moment is also new. Nothing is settled. How AI is built, run, altered, secured, monitored, restricted, defined - it’s all up for grabs. It’s the same with global AI leadership. The UK’s focus on leading in AI stems from a deep desire to see the technology benefit their people and their economy. As governments get their heads around the fundamentals of AI, we’re likely to see more and more of them striving to lead. “Leading” in AI can look like creating the most influential AI, using AI most effectively to stimulate the economy, or using AI to solve social ills for the benefit of citizens, among other things. With a new PM and a significant investment, the UK is one to watch.
The Biden Administration’s New Regulations to Limit AI Chip Exports
The Biden administration is pushing new regulations to control AI's global spread, prioritizing U.S. and allied dominance. These rules limit AI chip exports to adversaries like China and Russia while favoring domestic and allied data centers. Tech companies like Nvidia and Microsoft argue these measures could hinder innovation and harm international relations. Labor unions support the rules for their economic benefits, while allies and lawmakers express concerns about economic and geopolitical impacts.
👉 Why it matters: The policy, which is intended to safeguard national security and economic interests, faces industry resistance and uncertainty over future enforcement, as there is currently no indication whether the Trump administration will uphold the limitations put in place by President Biden. As the new administration takes office, it’s important to keep an eye on the changes coming in AI policy and regulation, as they’re bound to shift for some time.
How China is Closing the Gap in AI Despite Restrictions
China is rapidly closing the gap in artificial intelligence development, despite U.S. efforts to restrict its access to advanced chips. Recent breakthroughs, such as Tencent’s Hunyuan-Large model and DeepSeek’s DeepSeek-V3, demonstrate that Chinese developers are finding ways to innovate even with limited resources. These advancements come amid U.S. export controls aimed at curbing China’s access to high-end semiconductors, a cornerstone of AI development. Reports of stockpiling and alternative sourcing suggest the restrictions may not be as effective as intended. With AI playing a growing role in global power dynamics, the U.S. faces pressure to refine its approach to remain competitive.
👉 Why it matters: The US is trying to remain competitive in AI, and one major priority is ensuring that China does not advance using US technology. While reasonable minds differ on the topic, many in government believe that a China with unhindered access to US AI technology would have a springboard for AI innovation, one likely to be used by the Chinese government for surveillance and military purposes. The position of the US is that empowering China to advance in AI toward these ends is foolish. Non-government entities like Microsoft and Nvidia believe the efforts to stop China’s AI advancement could create significant geopolitical tension, hindering both AI innovation and AI safety efforts.
Spotlight on Research
Agentic Systems: A Guide to Transforming Industries with Vertical AI Agents
In response to the need for consistency and scalability, this work attempts to define a level of standardization for Vertical AI agent design patterns by identifying core building blocks and proposing a Cognitive Skills Module, which incorporates domain-specific, purpose-built inference capabilities.
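The paper doesn’t ship a reference implementation, so purely as an illustration, here’s a minimal Python sketch of what a Cognitive Skills Module interface could look like: standard agent plumbing (registration and routing), with each skill wrapping a domain-specific inference capability. Every name below (CognitiveSkill, CognitiveSkillsModule, the extract_clauses example) is my own hypothetical, not the authors’ API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Hypothetical sketch of a "Cognitive Skills Module" for a vertical AI
# agent. The paper proposes the concept; this interface is my own guess.

@dataclass
class CognitiveSkill:
    """A domain-specific, purpose-built inference capability."""
    name: str
    description: str
    run: Callable[[str], str]  # maps a task input to a result

@dataclass
class CognitiveSkillsModule:
    """Registry the agent consults to route tasks to the right skill."""
    skills: Dict[str, CognitiveSkill] = field(default_factory=dict)

    def register(self, skill: CognitiveSkill) -> None:
        self.skills[skill.name] = skill

    def invoke(self, skill_name: str, task_input: str) -> str:
        if skill_name not in self.skills:
            raise KeyError(f"No skill registered under '{skill_name}'")
        return self.skills[skill_name].run(task_input)

# Example: a vertical agent for contract review might register a
# clause-extraction skill backed by a fine-tuned domain model. The
# lambda is a stand-in for that model call.
module = CognitiveSkillsModule()
module.register(CognitiveSkill(
    name="extract_clauses",
    description="Pull governing-law clauses from a contract.",
    run=lambda text: f"[clauses extracted from {len(text)} chars]",
))
print(module.invoke("extract_clauses", "This Agreement shall be governed by..."))
```

If I’m reading the abstract right, the appeal of standardizing this way is the separation of concerns: the generic agent scaffolding stays the same across industries, while the domain-specific inference lives inside swappable skills.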
Enhancing Workplace Productivity and Well-being Using AI Agents
This paper discusses the use of artificial intelligence (AI) to enhance workplace productivity and employee well-being. By integrating machine learning (ML) techniques with neurobiological data, the proposed approaches aim to ensure alignment with human ethical standards through value alignment models and Hierarchical Reinforcement Learning (HRL) for autonomous task management.
Watch: Meta’s AI Persona Troubles
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai.
Connect with me on LinkedIn, or subscribe to the Just AI with Paige YouTube channel to see more videos on responsible AI.
Looking for a speaker at your next AI event, or want to connect 1:1? Email hello@justaimedia.ai.
How AI was used in this newsletter:
I personally read every article I reference in this newsletter, but I’ve begun using AI to summarize some articles.
I DO NOT use AI to write the ethical perspective - that comes from me.
I occasionally use AI to create images.