OpenAI's Push to Avoid State AI Laws
News at the intersection of Justice & AI. March 17th, 2025. Issue 51.
Want the quick updates? Listen to the weekly Just AI Podcast on Apple and Spotify.
Powerful A.I. Is Coming. We’re Not Ready. (NYT)
Getting divorced? Artificial intelligence deepfakes could cost you in court (Fox News)
How the UK tech secretary uses ChatGPT for policy advice (newscientist.com)
*Paige works for GitHub, a Microsoft Company
Dashcam’s Binary: Safety or Privacy?
AI-powered dashcams are revolutionizing the trucking industry by enhancing road safety and fleet management. These devices use advanced computer vision to detect unsafe driving behaviors, such as rolling stops, erratic lane changes, driver drowsiness, and seatbelt violations. Companies like FusionSite Services have reported significant safety improvements, including an 89% reduction in accidents. However, the widespread use of AI dashcams raises privacy concerns for truck drivers, who often view their vehicles as personal spaces, and for other motorists recorded without consent.
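The coverage doesn't describe the underlying software, but conceptually these systems pair per-frame vision models with an event log, plus a privacy mode that retains only flagged clips instead of continuous footage. Here's a minimal sketch of that pattern; the behavior list, threshold, and function names are hypothetical stand-ins, not any vendor's actual pipeline:

```python
from dataclasses import dataclass
import random

@dataclass
class Event:
    kind: str          # e.g. "rolling_stop", "drowsiness"
    confidence: float
    frame_index: int

def detect_behaviors(frame_index: int) -> list[Event]:
    """Stand-in for the real computer-vision models; emits random demo events."""
    events = []
    for kind in ("rolling_stop", "erratic_lane_change", "drowsiness", "no_seatbelt"):
        score = random.random()
        if score > 0.98:  # hypothetical alert threshold
            events.append(Event(kind, score, frame_index))
    return events

def process_drive(num_frames: int, privacy_mode: bool = True):
    """Scan footage frame by frame; in privacy mode, keep only event clips."""
    retained_events, full_footage = [], []
    for i in range(num_frames):
        if not privacy_mode:
            full_footage.append(i)  # continuous recording kept only outside privacy mode
        for event in detect_behaviors(i):
            retained_events.append(event)  # event clips always kept for safety review
    return retained_events, full_footage

events, footage = process_drive(2000)
for e in events:
    print(f"frame {e.frame_index}: {e.kind} ({e.confidence:.2f})")
```

The privacy tension in the story lives in that `privacy_mode` branch: what the system discards matters as much as what it detects.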
👉 Why it matters: The facial recognition built into these systems is a double-edged sword. It could help identify trafficking victims, or flag the license plates of vehicles fleeing a crime or tied to an Amber Alert. However, legal challenges, such as lawsuits over biometric data collection, highlight the need for compliance with privacy regulations. To balance safety with privacy, companies are implementing features like Driver Privacy Mode and data security measures. Open communication and clear policies can help drivers adapt to these changes. The trucking industry must navigate this evolving landscape carefully, ensuring that safety benefits do not come at the cost of individual privacy rights.
Responsible AI rating: 🟡
Android Could Capitalize on Apple’s Big AI Mishaps
Apple’s AI ambitions have hit a rough patch, as internal turmoil over Siri's shortcomings and delayed Apple Intelligence features cast a shadow over its future. Apple executives have privately admitted to an "ugly and embarrassing" rollout, with AI-powered features working inconsistently and missing key release deadlines. Siri, already seen as lagging behind AI competitors, now risks falling even further behind as Apple scrambles to fix fundamental flaws.
Meanwhile, Google is pushing forward with new AI-powered features in Android 16, including AI-generated notification summaries for messaging apps. Unlike Apple's failed attempt at news summaries, which delivered misinformation, Google's more cautious approach may help Android gain an edge in AI-driven user experiences. While Apple's AI delays stretch deeper into 2025, Android is poised to deliver practical, AI-powered enhancements that actually work.
With Apple struggling to integrate AI seamlessly into its ecosystem, Android’s steady progress could shift the balance, proving that AI innovation is only valuable if it delivers real-world benefits without undermining trust. As Apple stumbles, Android has a rare opportunity to redefine itself as the leader in mobile AI.
Responsible AI rating: 🟢
👉 Why it matters: This matters because it shows the competition in the AI space is alive and well, which could be good news for responsible AI. Companies are breaking out of the “we have to chase other companies to catch up” mindset and are focused on charting their own paths. Apple has pushed back its updates in favor of creating something truly useful rather than launching vaporware. Android is going to try to do what Apple did with news alerts, but better. Competition promotes choice.
China’s Manus Agent: What it is and why it matters
Chinese startup Butterfly Effect has unveiled Manus, an AI agent that operates autonomously without requiring step-by-step human instructions. Unlike traditional chatbots like ChatGPT, Manus can break down and complete complex tasks on its own using multiple AI models. Early users have tested its capabilities by creating video games, designing websites, and analyzing financial data. However, the AI faces challenges, including crashes and occasional misunderstanding of tasks. Its development hints at the potential future of artificial general intelligence (AGI).
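Manus's internals haven't been published, but the multi-model, task-decomposition behavior described above is commonly built as a plan-execute-critique loop. A rough sketch of that general pattern, with placeholder model names and a stubbed-out API call, not Manus's actual design:

```python
def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real LLM API call; echoes a canned response."""
    return f"[{model} response to: {prompt[:48]}...]"

def run_agent(goal: str) -> list[str]:
    # 1. A planner model decomposes the goal into concrete steps.
    plan = call_model("planner", f"Break this goal into numbered steps: {goal}")
    steps = [s for s in plan.splitlines() if s.strip()] or [goal]

    results = []
    for step in steps:
        # 2. An executor model (with tool access in a real agent) handles each step.
        result = call_model("executor", f"Goal: {goal}\nStep: {step}\nComplete this step.")
        # 3. A critic model reviews the output; failed steps get one retry.
        verdict = call_model("critic", f"Step: {step}\nResult: {result}\nReply OK or RETRY.")
        if "RETRY" in verdict.upper():
            result = call_model("executor", f"Retry step: {step}\nPrior attempt: {result}")
        results.append(result)
    return results

print(run_agent("analyze this quarter's financial data and draft a summary"))
```

The crashes and misunderstandings early users report are exactly what you'd expect when a loop like this runs unattended: errors in the plan compound through every downstream step.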
Responsible AI rating: 🟡
👉 Why it matters: While Manus shows promise in advancing AI autonomy, it also raises ethical concerns about control and oversight. The major challenge with this technology is that it’s yet another product that isn’t broadly available but is billed as the future of AGI. AGI, of course, is controversial, because everyone is essentially making up their own definition. More and more AI companies will boast about achieving AGI, but they’ll do so without product proof and with the goal of making money. It’s important that we all stay sharp and critically evaluate these new AI uses.
Keeping Watch: US State AI Laws
Illinois and California are taking steps to regulate AI, particularly in healthcare and high-impact decision-making. In Illinois, lawmakers are targeting AI’s role in mental health services and health insurance decisions. House Bill 1806 would prohibit AI from assisting in therapy sessions, citing ethical concerns and the inability of AI to recognize crises. House Bill 35, meanwhile, seeks to prevent health insurers from solely relying on AI to deny or reduce coverage, requiring human oversight. The legislation follows reports of AI-driven claim denials and lawsuits against major insurers for using automated systems to reject care.
In California, lawmakers are advancing a broader set of AI regulations, with Assembly Bill 1018 aiming to prevent automated discrimination in employment, education, healthcare, housing, and finance. The bill mandates AI performance evaluations and transparency when AI makes consequential decisions. Additionally, California is revisiting previous AI regulatory efforts, including protections for AI whistleblowers and oversight of AI’s impact on government services.
👉 Why it matters: The push for state-level AI regulation comes as federal oversight remains uncertain, especially after the Trump administration rescinded Biden-era AI protections. With major tech companies lobbying against strict AI rules, states like Illinois and California are positioning themselves as leaders in AI accountability, seeking to balance innovation with consumer protection. One challenge is that AI companies are seeking relief from these state laws in hopes that federal law will be more tailored to what they want and will let them innovate as they see fit.
China Mandates Labels for AI-Generated Content
China will require AI-generated content to be clearly labeled under new regulations effective September 1, aiming to curb misinformation and enhance transparency. Issued by the Cyberspace Administration of China (CAC) and other government agencies, the rules mandate both visible labels and embedded digital watermarks for AI-generated text, images, audio, and video. Online platforms must verify AI content before publication and retain records for six months. Removing or altering labels is strictly prohibited. The move aligns with Beijing’s broader AI oversight push, mirroring global efforts like the EU’s AI Act. However, challenges remain in enforcing the rules for real-time AI applications and in ensuring watermarks are reliable.
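The rules distinguish explicit labels (visible to the user) from implicit ones (embedded in file metadata). As a rough illustration only, and not the official Chinese standard, here is how both kinds of label might be attached to a generated image using Pillow; the metadata field names are invented:

```python
# Illustrative only: the actual CAC standard specifies its own metadata
# fields and formats; the field names below are made up.
from PIL import Image, ImageDraw, PngImagePlugin

img = Image.new("RGB", (512, 512), "gray")  # stand-in for a generated image

# Explicit label: visible text rendered onto the image itself.
ImageDraw.Draw(img).text((10, 10), "AI-generated", fill="white")

# Implicit label: provenance metadata embedded in the file.
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")           # hypothetical field name
meta.add_text("generator", "example-model-v1")  # hypothetical field name
img.save("labeled.png", pnginfo=meta)

# A platform could then check for the label before publication.
assert Image.open("labeled.png").text.get("ai_generated") == "true"
```

The enforcement challenge is visible even in this toy: metadata like this survives an honest pipeline but is trivially stripped by re-encoding, which is why the rules also prohibit removing or altering labels.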
👉 Why it matters: AI is influencing markets across the world. Deepfakes, in particular, are causing havoc: fabricated catastrophes fool investors into panic-selling shares. With this in mind, China is mandating labels for AI-generated content to reduce that churn and increase transparency.
The AI Action Plan Submissions are In
In February 2025, the White House Office of Science and Technology Policy (OSTP) issued a Request for Information (RFI) seeking public input to develop an Artificial Intelligence (AI) Action Plan aimed at sustaining and enhancing America's global AI leadership. The RFI invited feedback from various stakeholders, including academia, industry groups, private sector organizations, and government entities, with a submission deadline of March 15, 2025.
Several prominent organizations have submitted their recommendations:
Google: Emphasized investing in AI infrastructure, modernizing government AI adoption, and promoting pro-innovation approaches internationally.
OpenAI: Proposed strategies to ensure freedom to innovate, export democratic AI, promote the freedom to learn, and capitalize on infrastructure opportunities to drive growth.
Anthropic: Focused on preparing for the emergence of powerful AI systems by advocating for responsible scaling policies and robust testing and evaluation frameworks.
R Street Institute: Highlighted the importance of a pro-innovation policy framework to maintain U.S. competitiveness and address potential risks associated with AI deployment.
Premier Inc.: Discussed leveraging AI to reduce healthcare costs, improve quality and access, and alleviate workforce shortages, while emphasizing data standards and responsible data use.
With the public comment period concluded, the OSTP will review the submissions to inform the development of the AI Action Plan. This plan is expected to outline priority policy actions to bolster America's AI capabilities, promote innovation, and address potential challenges associated with AI technologies. A finalized version of the plan is anticipated to be presented to the President in mid-2025.
👉 Why it matters: These recommendations will shape the future of AI policy in the US. What remains to be seen is whether the OSTP will keep an open mind to the wide variety of perspectives and recommendations shared, or if they’re simply on the lookout for novel ideas that already align with their pre-determined AI policy agenda.
Spotlight on Research
Balancing Practical Uses and Ethical Concerns: The Role of Large Language Models in Scientific Research
The rapid adoption of artificial intelligence (AI) in scientific research is accelerating progress but also challenging core scientific norms such as accountability, transparency, and replicability. Large language models (LLMs) like ChatGPT are revolutionizing scientific communication and problem-solving, but they introduce complexities regarding authorship and the integrity of scientific work. LLMs have the potential to transform various research practices, including literature surveys, meta-analyses, and data management tasks like entity resolution and query synthesis. Despite their advantages, LLMs present challenges such as content verification, transparency, and accurate attribution. This study explores the appropriate use of LLMs for NASA's Science Mission Directorate (SMD), considering whether to develop a bespoke model or fine-tune an existing open-source model. The article reviews the outcomes and lessons learned from this effort, providing insights for other research groups navigating similar decisions.

The Unified Control Framework: Integrating Risk Management and Regulatory Compliance

The rapid adoption of AI systems presents enterprises with a dual challenge: accelerating innovation while ensuring responsible governance. Current AI governance approaches suffer from fragmentation, with risk management frameworks that focus on isolated domains, regulations that vary across jurisdictions despite conceptual alignment, and high-level standards lacking concrete implementation guidance. This fragmentation increases governance costs and creates a false dichotomy between innovation and responsibility. The authors propose the Unified Control Framework (UCF): a comprehensive governance approach that integrates risk management and regulatory compliance through a unified set of controls. The UCF consists of three key components: (1) a comprehensive risk taxonomy synthesizing organizational and societal risks, (2) structured policy requirements derived from regulations, and (3) a parsimonious set of 42 controls that simultaneously address multiple risk scenarios and compliance requirements. They validate the UCF by mapping it to the Colorado AI Act, demonstrating how the approach enables efficient, adaptable governance that scales across regulations while providing concrete implementation guidance. The UCF reduces duplication of effort, ensures comprehensive coverage, and provides a foundation for automation, enabling organizations to achieve responsible AI governance without sacrificing innovation speed.
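The UCF's core idea, one parsimonious set of controls that simultaneously satisfies many risk scenarios and regulatory requirements, is essentially a many-to-many mapping. A toy Python sketch of that structure follows; the control and requirement names are invented for illustration and are not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One governance control, mapped to the risks it mitigates and the
    regulatory requirements it satisfies (all names are illustrative)."""
    name: str
    risks: set[str] = field(default_factory=set)
    requirements: set[str] = field(default_factory=set)

CONTROLS = [
    Control("pre-deployment impact assessment",
            risks={"automated discrimination", "unsafe failure modes"},
            requirements={"CO AI Act: impact assessment", "EU AI Act: risk mgmt"}),
    Control("adverse-decision notice to consumers",
            risks={"opaque consequential decisions"},
            requirements={"CO AI Act: consumer notice"}),
]

def coverage(regulation_prefix: str) -> dict[str, list[str]]:
    """Which controls satisfy each requirement of a given regulation?"""
    out: dict[str, list[str]] = {}
    for c in CONTROLS:
        for req in c.requirements:
            if req.startswith(regulation_prefix):
                out.setdefault(req, []).append(c.name)
    return out

print(coverage("CO AI Act"))
```

Because each control points at multiple requirements, adding a new regulation mostly means extending the mapping rather than inventing new controls, which is where the paper's claimed reduction in duplicated effort comes from.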
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai.
Connect with me on LinkedIn, or subscribe to the Just AI with Paige YouTube channel to see more videos on responsible AI.
Looking for a speaker at your next AI event? Email hello@justaimedia.ai if you want to connect 1:1.
How AI was used in this newsletter:
I personally read every article I reference in this newsletter, but I’ve begun using AI to summarize some articles. I always read the summary to check that it’s factual and accurate.
I DO NOT use AI to write the ethical perspective - that comes from me.
I occasionally use AI to create images.