If you prefer not to read the newsletter, you can listen instead!👂 The Just AI podcast covers everything in the weekly newsletter and a bit more. Have a listen, and follow & share if you find it to be helpful!
Army helicopter involved in fatal crash over the Potomac was not using AI, sources say (Defense Scoop)
The Paradox Of Responsible AI: Widespread Usage Coupled With Distrust (Forbes)
The tech bro who wasn't on Trump's inauguration stage (NPR)
*Paige works for GitHub, a Microsoft Company
IMPORTANT: Articles 1-5 of the EU AI Act are IN FORCE as of Sunday, February 2nd, 2025. If you’re doing business with AI technology in the EU, be sure you are adhering to the law. If not, it will cost you up to 7% of your total global annual turnover from the previous year.
All About DeepSeek
The summary below should get you up to speed on what DeepSeek is, why it’s been in the news so much, concerns about the technology, and global as well as company responses to the models they’ve launched.
Who/What is DeepSeek?
The launch of the DeepSeek R1 models rocked the US market last week and served as a wake-up call in AI companies across the globe. But what is DeepSeek? DeepSeek is an AI lab that has released a free, open-source large language model called DeepSeek in December 2024. A month later, they launched their DeepSeek R1 models. The R1 model is outperforming models from Meta, OpenAI, and Anthropic in accuracy on challenges related to complex problem solving, math, and coding. However, DeepSeek is a Chinese company, and therefore subject to the whims and demands of the Chinese government, which poses risks related to data privacy, security, and surveillance.
Why did the launch of DeepSeek R1 rattle the US Markets?
It cost more than $100 million dollars to train OpenAI’s GPT-4 model, and even Sam Altman knew that type of cost wasn’t sustainable. AI companies have been hard at work for the last two years trying to do two things:
create models that can accomplish different things, and that can do those things incredibly well
trying to find ways to build and train those new, shiny models for a lower cost
DeepSeek’s R1 models rattled the US markets for four reasons:
DeepSeek created a model using Nvidia’s H800 chips, which are less advanced than their H100 chips. The H800 chips have a fraction of the computing power, and are technically in-line with export laws enacted to keep China from advancing in AI.
DeepSeek used a training technique called “distillation,” in which a “teacher model” transfers it’s learnings to a “student model”. The part that panicked people is that the cost to train the model was a mere $6 million - pennies compared to the historic, exorbitant costs of OpenAI’s models.
DeepSeek made their model available at a significantly lower cost than US rivals, making it more compelling for those using AI.
There is evidence that Chinese state-linked accounts—including those of diplomats, embassies, and state media—amplified the release of the model, prompting hype that led to a spike in Apple App Store downloads of DeepSeek’s app, which quickly outpaced ChatGPT downloads.
These factors led global investors to dump investment in US stock, wiping out $593 billion (with a “b”) in Nvidia’s value in one day.
So, to summarize, DeepSeek used what they had (legal chip and open source technology) - and a cheaper training technique - to create a better model than those from US providers, which they made available at a sharply lower cost, and they used their considerable US social media accounts and influence to increase hype, which sparked interest, engagement and, ultimately, panic among investors.
What are the responses to the model?
AI companies across the globe had a variety of responses.
OpenAI responded in three ways:
Sam Altman stated that it’s “legit invigorating to have a new competitor” and assured people that OpenAI’s products would still be better.
OpenAI claimed that DeepSeek had stolen their training data via the novel training process that DeepSeek used - distillation.
Open AI then promptly released their o3-mini reasoning model for free and tripling daily message limits for paying customers to encourage use. It’s safe to say that OpenAI was both caught off guard, and sufficiently spooked.
As for other companies:
Microsoft’s CEO talked about DeepSeek at the World Economic Forum in Davos in January stating, “To see the DeepSeek new model, it’s super impressive in terms of both how they have really effectively done an open-source model that does this inference-time compute, and is super-compute efficient…We should take the developments out of China very, very seriously.” Microsoft also launched OpenAI’s o3 mini model on the Azure AI Foundry and GitHub Models, and worked to assure investors that DeepSeek was a good thing. They also opened a probe into whether or not DeepSeek improperly used OpenAI’s data.
Alibaba responded by launching a model - Qwen 2.5 - that it claims surpasses DeepSeek-V3 in benchmarks. Unfortunately, all eyes are on the DeepSeek-R1 model, rather that V3.
The CEO of Palantir, Alex Karp, said that the DeepSeek model showed that the US needs an “all-country effort” in AI. Karp said that he believes that DeepSeek has awakened the tech world to the thread of second movers.
Meta believes that DeepSeek’s success validates their open-source approach to AI. Whereas OpenAI and Anthropic keep their models closed to the public, Meta launched the Llama models open-source, meaning anyone could access, download, use, and ultimately build upon the models. This is, in part, how DeepSeek gained an edge.
Global response to DeepSeek
South Korea’s Personal Information Protection Commission - the country’s information privacy group - plan to ask DeepSeek how personal information of users is managed.
Italy completely banned DeepSeek’s service within the country due to a lack of transparency regarding user data handling. The Italian data protection group, the Garante, sent requests to DeepSeek asking for information on data handling, and asked where it had obtained its training data.
Belgian consumer privacy organization Testaankoop filed a complaint that DeepSeek might not be complying with GDPR rules, and opened a complaint, prompting Belgium to investigate.
Portugal’s consumer privacy organization, Deco Proteste, (sister organization to Testaankoop) has also filed a complaint with the Portuguese Data Protection Authority.
France, Ireland, and others are investigating as well. Read more about the international probe here.
Let’s talk about the United States:
The US Navy, NASA, Congress, and the Pentagon have all BANNED the use of DeepSeek by personnel, citing serious security concerns.
President Trump, who is still trying to find a solution to keep TikTok available in the US, has shared that he believes this is a positive development, but that it’s also a call for the US to be “laser focused on competing to win".
Allowing the use of TikTok while halting other Chinese technologies like Trae (an AI coding assistant) and DeepSeek’s chatbot interface could send conflicting messages about what AI technologies are and are not allowed in the US.
What else?
DeepSeek isn’t infallible. Wiz, a cloud security company, took at look at the back end of the model’s databases and “within minutes” accessed a significant amount of unencrypted internal data. Essentially, DeepSeek’s company secrets that are not shared publicly. Once alerted, DeepSeek immediately began to shore up these vulnerabilities.
The DeepSeek app has been a major concern, with people asking questions about controversial topics that the Chinese government has sought to re-write - such as the circumstances related to Tiananmen Square, student protests held in the midst of China’s Cultural Revolution. There are a number of videos online showing DeepSeek sharing a response, and then erasing the response and stating that it cannot give information on the topic.
👉 Why it matters:
One concern is related to training materials. It is concerning that DeepSeek may have inappropriately used OpenAI’s training data. But what makes this of major concern to society is that OpenAI inappropriately used the whole world’s assets as training data. There are major lawsuits against OpenAI for violating copyright, and just taking information to use to train their models, so it’s a major double standard for them to be upset that DeepSeek used their training data when it’s really our data.
There are major national security concerns. Chinese companies are beholden to the whims of the Chinese government. That means that the Chinese government can demand all of DeepSeek’s user data and information at any time, and can use it for any purpose. This poses a major security concern for governments and individuals the world over, as this data can reveal state, company and personal secrets, can open the door to hacking, and can enhance the efficacy of large-scale, or targeted, misinformation campaigns designed to bring about China’s end goals.
How the governments and companies the world over go about navigating DeepSeek truly matters. It appears there’s a split approach, with governments blocking access while companies embrace the new technology and competition.
Responsible AI rating: 🚩
Nvidia Chief Jensen Huang Met with President Trump to Discuss AI Policy
Former U.S. President Donald Trump and Nvidia CEO Jensen Huang met at the White House to discuss AI policy and semiconductor leadership, including concerns over China’s AI advancements. Their conversation centered on DeepSeek, a Chinese AI company that recently launched a low-cost AI assistant, sparking fears about China’s growing AI capabilities. The app's rapid success led to a $1 trillion loss in U.S. tech stocks, with Nvidia’s shares dropping 17%.
The meeting also addressed tightening AI chip export restrictions, particularly on Nvidia’s H20 chips, designed for the Chinese market under existing U.S. trade curbs. While discussions are in the early stages, restrictions on AI chip sales to China have been a bipartisan concern, with previous bans under President Biden's administration. Additionally, the U.S. Commerce Department is investigating whether DeepSeek has illegally accessed restricted U.S. chips, raising further national security concerns.
👉 Why it matters: Especially with the launch of the DeepSeek model, and China’s continued access to AI chips is of major concern to the US. While the US can put import and export restrictions chips and limit China’s access, much of the burden of enforcement lands on companies, like Nvidia. Nvidia is likely figuring out how to balance adhering to the policy, while also running its business and ensuring it doesn’t lose a huge and growing part of the AI chip market.
Texas Catches Heat for New Proposed AI Bill
Texas has introduced HB 1709, a strict AI regulatory bill that rivals California and Europe’s approaches. The bill establishes a risk-based framework for AI oversight, targeting industries like healthcare, finance, and legal services. It bans AI systems that manipulate human behavior, score individuals based on social behavior, or capture biometric data. AI developers must prevent algorithmic discrimination, ensure data security, and conduct annual impact assessments. Companies face fines up to $200,000 per violation or $40,000 per day for non-compliance. Critics, including James Broughel of the Competitive Enterprise Institute, argue the bill creates heavy administrative burdens that could slow AI development.
👉 Why it matters: The bill isn’t passed, it has only been proposed, but Texas has a reputation for being early and firm in their AI policy, as seen when Meta settled out of a lawsuit with the state for $1.4 billion dollars over violations of their law that protects residents’ face data from being used to train facial recognition technology. The law’s strict requirements mirror the EU AI Act and California’s SB 1047, which was vetoed. Broughel warns HB 1709 could jeopardize the $500 billion Stargate Project, a major AI infrastructure initiative backed by OpenAI, SoftBank, and Oracle.
Judge Throws out Facial Recognition Evidence in Murder Trial
A recent Ohio court ruling has excluded facial recognition evidence in a murder case, raising concerns about the reliability and legal challenges of AI-driven policing. The case involves the February 2024 fatal shooting of Blake Story, where investigators used Clearview AI to identify suspect Qeyeon Tolbert. However, police failed to independently verify his identity before obtaining a search warrant, leading the judge to suppress key evidence for lack of probable cause. This decision underscores broader issues with facial recognition, including accuracy biases, lack of transparency, and legal admissibility concerns.
👉 Why it matters: Studies show that AI-based identification is less reliable for people of color, and critics warn of mass surveillance risks. This is just one example of misuse of AI technology, and how it can easily go from an “investigative tool” to a tool that violates someone’s rights. As courts scrutinize facial recognition, states like Maine, Massachusetts, and Illinois have imposed restrictions. The ruling highlights the need for law enforcement to complement AI tools with traditional investigations to uphold due process and constitutional rights.
Spotlight on Research
How we estimate the risk from prompt injection attacks on AI systems
Google DeepMind’s Agentic AI Security Team has developed an evaluation framework to assess and mitigate the risk of indirect prompt injection attacks on AI systems like Gemini. These attacks occur when malicious instructions are embedded in external data sources, potentially leading AI to leak sensitive user information. Google acknowledges that no single defense will fully prevent these threats. Instead, they advocate for continuous evaluation, heuristic defenses, and security engineering to enhance AI system resilience.
As the climate crisis deepens, artificial intelligence (AI) has emerged as a contested force: some champion its potential to advance renewable energy, materials discovery, and large-scale emissions monitoring, while others underscore its growing carbon footprint, water consumption, and material resource demands. Much of this debate has concentrated on direct impact -- energy and water usage in data centers, e-waste from frequent hardware upgrades -- without addressing the significant indirect effects. This paper examines how the problem of Jevons' Paradox applies to AI, whereby efficiency gains may paradoxically spur increased consumption. We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socio-economic analyses. Rebound effects undermine the assumption that improved technical efficiency alone will ensure net reductions in environmental harm. Instead, the trajectory of AI's impact also hinges on business incentives and market logics, governance and policymaking, and broader social and cultural norms. We contend that a narrow focus on direct emissions misrepresents AI's true climate footprint, limiting the scope for meaningful interventions. We conclude with recommendations that address rebound effects and challenge the market-driven imperatives fueling uncontrolled AI growth. By broadening the analysis to include both direct and indirect consequences, we aim to inform a more comprehensive, evidence-based dialogue on AI's role in the climate crisis.
Watch: My interview with M. Alejandra Parra-Orlandoni, COO of Pasteur Labs
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai.
Connect with me on LinkedIn, or subscribe to the Just AI with Paige YouTube channel to see more videos on responsible AI.
Looking for a speaker at your next AI event? Email hello@justaimedia.ai if you want to connect 1:1.
How AI was used in this newsletter:
I personally read every article I reference in this newsletter, but I’ve begun using AI to summarize some articles.
I DO NOT use AI to write the ethical perspective - that comes from me.
I occasionally use AI to create images.