What Happens in Paris…Impacts the World
News at the intersection of Justice & AI. February 17th, 2025. Issue 48.
Don’t have time to read? Listen instead!👂
Trump and Vance get it. US can use AI to help Americans and the world flourish. | Opinion (Yahoo News)
Trump's Efforts to Dismantle AI Protections, Explained (ACLU)
Student Team Creates AI Policy Recommendations for Connecticut State Government (Yale School of Management)
*Paige works for GitHub, a Microsoft company
The New York Times Adopts AI
The New York Times has adopted AI tools in its newsroom, allowing staff to use AI for editing, summarizing, coding, and writing assistance. An internal AI tool called Echo will help with article summaries, briefings, and company updates. New editorial guidelines permit AI for suggesting edits, generating summaries, SEO headlines, and social media copy but prohibit AI from drafting full articles, significantly altering content, or using copyrighted material. AI tools such as GitHub Copilot, Google Vertex AI, and OpenAI’s non-ChatGPT API have been approved. Despite integrating AI, The Times emphasizes that human journalists remain responsible for reporting and editing. This rollout occurs as The Times is engaged in a legal battle with OpenAI and Microsoft, alleging unauthorized training of ChatGPT on its content. Other media outlets are also adopting AI at different levels, from grammar checking to full article generation.
👉 Why it matters: The use of AI in publishing was once considered taboo, as people felt strongly that some things should be left to humans. In a world where some publications are entirely AI-driven and others entirely human-driven, what concerns me most isn’t whether AI is being used - it’s how. The NYT spelled out which AI use cases are appropriate and which tools are allowed, and reaffirmed its core value: the news is written and edited by journalists. The announcement also came with a commitment to train everyone on the tools and to make sure employees understand the approved uses.
Responsible AI rating: 🟢
Law firm restricts AI use to combat lawyers’ over-dependence
Hill Dickinson, an international law firm, has restricted general access to AI tools after detecting a surge in usage, including over 32,000 hits to ChatGPT and 3,000 to DeepSeek in one week. The firm cited policy violations and now requires staff to request access to AI tools. While the UK’s Information Commissioner’s Office cautioned against banning AI outright, Hill Dickinson stated it aims to embrace AI safely, ensuring compliance with security and accuracy guidelines. Legal experts acknowledge AI’s potential benefits but stress the need for human oversight. A survey found that 62% of UK solicitors expect AI usage to increase for tasks like drafting documents and legal research. Regulators warn of digital skill gaps in the legal sector, posing risks if AI is not well understood. The UK government is planning legislation to safely harness AI’s benefits while addressing emerging challenges through public consultation.
Responsible AI rating: 🚩
👉 Why it matters: In contrast to the NYT example above, this is an example of how NOT to roll out AI. Again, I look at how a company goes about welcoming or limiting the technology. In this case, the law firm doesn’t appear to have had a clear stance on when AI could be used, when it absolutely should NOT be used, and which tools could be leveraged. As a result, there were major policy violations, and the firm is now over-regulating AI access and use in order to adjust. Had they thought about responsible AI and transparency from the beginning, there likely would have been more clarity and fewer policy violations.
South Korea is taking action on AI
South Korea is taking significant steps to strengthen its AI capabilities and safeguard its digital infrastructure. The government plans to acquire 10,000 high-performance GPUs in 2025 through public-private cooperation to support its national AI computing center. The procurement details, including budget and GPU models, will be finalized by September. South Korea benefits from exemptions on U.S. AI chip export restrictions, unlike countries such as China and Russia. Additionally, the government has banned new downloads of China’s DeepSeek AI chatbot, citing concerns over personal data protection laws. Despite DeepSeek’s rapid rise in popularity, it remains restricted until privacy and security issues are resolved. South Korea’s Personal Information Protection Commission also prohibited government employees from using DeepSeek on work devices. These actions reflect South Korea’s dual approach—investing in AI infrastructure to remain competitive while enforcing strict data protection measures to address security concerns related to foreign AI applications.
👉 Why it matters: South Korea is one of the 18 countries with virtually unrestricted access to GPUs from US companies, and it’s taking advantage of that to gain an edge. This is an example of how US limits on access to AI chips will catapult some countries forward while holding others back. Additionally, the DeepSeek ban is a show of force, signaling South Korea’s intention to turn the current geopolitical moment to its advantage.
What happens in Paris…impacts the world
The Paris AI Action Summit brought together world leaders, tech executives, and researchers to discuss AI governance, investment, and ethical considerations. The summit marked a shift from previous safety-focused discussions to actionable initiatives. A key outcome was the launch of Current AI, a public-private partnership with an initial $400 million investment to support open-source AI and data access, and a goal of reaching $2.5 billion.
The U.S. and the U.K. refused to sign a declaration advocating for inclusive and sustainable AI, citing concerns over its lack of practical clarity on governance and national security. U.S. Vice President JD Vance criticized European AI regulation, warning against excessive oversight and against cooperation with China. France, in contrast, pledged €109 billion in AI investment and emphasized the "Notre-Dame approach": simplified regulations to accelerate AI innovation.
The EU signaled a shift towards a lighter regulatory framework, recognizing concerns from industry leaders about excessive bureaucracy. European Commissioner Henna Virkkunen promised to cut red tape, following criticisms that overregulation could stifle AI development.
Geopolitically, China's AI model DeepSeek emerged as a competitor to Western AI firms, prompting discussions on international tech competition. The U.S. raised concerns over China’s AI advancements and their security implications. Meanwhile, France pushed for public-interest AI, seeking global support for ethical AI initiatives.
There were some wins, though. Key outcomes included the publication of the International AI Safety Report, the launch of Current AI (noted above), the formation of an Environmental Sustainability AI Coalition, and the adoption of the AI Action Summit Declaration emphasizing inclusive and sustainable AI - which, as noted, the U.S. and the U.K. declined to sign.
The International AI Safety Report provided an objective analysis of general-purpose AI capabilities and risks, while Current AI, backed by figures like Reid Hoffman and Clement Delangue, aimed to develop open, ethically governed AI models. The Environmental Sustainability AI Coalition, with 91 partners, prioritized making AI development more sustainable. The summit also introduced initiatives like a Public Interest AI Platform and Incubator to support digital public goods and an Observatory on AI's energy impact. France announced that India will host the next AI summit, continuing international efforts to shape AI’s future responsibly.
👉 Why it matters: Despite broad discussions on AI’s impact on jobs, ethics, and regulation, the summit did not produce binding agreements. The event highlighted global divides over AI governance, with the U.S. prioritizing competition, Europe balancing innovation and regulation, and China asserting its growing influence. As we head into an increasingly competitive environment, the current moment raises the question: will global powers ever agree on what is morally right and responsible when it comes to the creation, deployment, and use of AI, as well as its impacts?
Spotlight on Research
Responsible AI (RAI) is essential for addressing ethical concerns in AI development and deployment. While extensive literature covers RAI principles and technical aspects, a gap remains in translating theory into practice, especially in the shift to Responsible Generative AI (Gen AI). This article explores challenges and opportunities in implementing ethical, transparent, and accountable AI in the post-ChatGPT era. It examines governance and technical frameworks, explainable AI, key performance indicators, Gen AI benchmarks, AI-ready test beds, and sector-specific applications. Additionally, it discusses implementation challenges and provides a philosophical outlook on RAI's future. The goal is to offer insights for researchers, policymakers, and industry practitioners to develop AI systems that maximize benefits while mitigating risks. A curated list of resources and datasets is available on GitHub.
The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.
Watch: Two Types of AI Existential Risk
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai.
Connect with me on LinkedIn, or subscribe to the Just AI with Paige YouTube channel to see more videos on responsible AI.
Looking for a speaker at your next AI event? Let’s connect.
How AI was used in this newsletter:
I personally read every article I reference in this newsletter, but I’ve begun using AI to summarize some articles. I always read the summary to check that it’s factual and accurate.
I DO NOT use AI to write the ethical perspective - that comes from me.
I occasionally use AI to create images.