Examining AI Policy's Fate in the Wake of the 2024 Election
Your trusted source for insights on the world of responsible AI and AI policy. November 6th, 2024. Issue 35.
Quick Hits 👏
Interesting AI news from the week that I won’t explore in depth but that is important to acknowledge in the current AI moment.
Salesforce is Hiring for Responsible AI Roles
Consider Your Position on the Statement on AI Training
Check Out Stanford’s “Build LLMs” Lecture on YouTube (free)
AI Policy Beat
A look at what’s happening in the world of AI policy.
Examining what a Trump win will mean for AI policy in America
Dr. Keegan McBride, a lecturer at the Oxford Internet Institute, shared his thoughts on what another Trump presidency means for AI policy in America. I found his perspective insightful and have included the full text below:
“What does a Trump win mean for American AI policy? This is a question that I have been asked by policymakers time and time again over the past few months. With the results of the election now clear, I thought it would be prudent to write up some of the main points.
During Trump’s first term as President, he issued multiple executive orders on AI. In 2019, Trump signed an EO on “Maintaining American Leadership in Artificial Intelligence,” and in 2020 he signed an EO on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” Drawing on his past pro-American technology agenda and the broad support from the American tech ecosystem during this campaign cycle, it is highly likely that we will see substantial attention paid to improving and leveraging America’s technology ecosystem, both domestically and globally.
In Trump’s platform, he has promised to “repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.” I would expect Trump to deliver on this fairly early on and move the US back towards a use-case and sector-specific approach.
As Democrat-controlled states rush to announce new AI regulations in the face of a Trump presidency – something which was mentioned explicitly as a motivating factor behind the failed California bill SB 1047 – we will likely see efforts towards federal pre-emption on AI regulation. My gut feeling is that we end up much closer to the UK’s pro-innovation approach. Unlike in the UK, however, the role of the AISI is almost certainly going to be called into question. The US AISI might continue to exist under a Trump administration, but it is unlikely to see much political support or funding.
Probably the biggest focus for the incoming Trump administration will be on ensuring American dominance in the global AI competition with a strong focus on beating China. In practice, this will likely see movement to both widen and strengthen export controls targeting China’s AI industry. While working to stymie China’s domestic AI capabilities, we will also likely see new investments internally in AI.
These investments will include unleashing American energy resources to power the growing energy needs of increasingly large data centers, investing in domestic AI R&D, and working to integrate AI into the United States’ national defense ecosystem. While it is unlikely to be the highest-priority area, we will also likely see initiatives on using AI and other technology to improve the functioning of the US public sector by facilitating cuts to governmental bureaucracy. Though the Biden administration has not appointed a government CTO, this will likely change under the incoming Trump administration.
For anyone interested, I recommend reading Jacob Helberg’s article, “11 Elements of American AI Dominance” (https://lnkd.in/gbexYV2f). It does a great job outlining many of these priorities and the current areas of agreement and debate.”
AI Ethics News
Notable news in the world of AI ethics and responsible AI.
Teen Takes Their Own Life After Obsession with an AI Chatbot
A 14-year-old Florida boy named Sewell Setzer III developed an intense emotional attachment to a chatbot on Character.AI, with which he interacted frequently and shared deeply personal feelings, including thoughts of self-harm. Sewell’s mother, Megan Garcia, believes the chatbot contributed to his suicide by intensifying his isolation and mental distress. Garcia has filed a lawsuit against Character.AI, alleging that the company’s technology, lacking appropriate safeguards, exploited her son's emotional vulnerability and worsened his mental health.
👉 Why it matters: The case highlights concerns about the mental health impacts of AI companionship apps, and where the responsibility for safeguards lies. While the app displayed warnings that everything the chatbot shared was made up, those warnings did not prevent an emotional connection from taking hold. Experts warn that these platforms may exacerbate loneliness and depression. Character.AI has announced intentions to introduce additional safety measures for young users, but many, including myself, agree that the company should aim higher.
Parents Sue Their Son’s School Over Unfair Punishment for Using AI
A student at Hingham High School used AI to help him research a paper, but he did not use AI to write it. He was accused of cheating, given a reduced grade and detention, and, beyond that, barred from induction into the National Honor Society. The parents’ suit alleges that their son will suffer irreparable harm as a result of the school’s actions. The school’s handbook touches only lightly on AI use and does not define what “use” of AI looks like. For example, if a student searches Bing for information on a history topic and some linked resources appear in the AI section of the search engine, is that “use” of AI? The litigation is ongoing.
👉 Why it matters: The case highlights for parents, students, schools, and educators how critical it is to not only mention AI in their policies, but to truly spell out what uses of AI, if any, are permitted. In a rapidly changing world where AI is being leveraged and scaled, schools must consider whether certain uses of AI would be beneficial for students, how they will clearly communicate those uses, and what steps might be taken to deal with infractions.
Microsoft’s Dance with Big Oil and Climate
In this article, The Atlantic takes a look at Microsoft’s commitment to climate and its business with oil companies. Microsoft* publicly promotes AI as a tool to combat climate change, but simultaneously markets its AI technologies to fossil-fuel companies to enhance oil and gas extraction. This approach has led to internal conflicts within Microsoft. Critics argue that while AI can improve operational efficiency, its significant resource demands and speculative environmental benefits may ultimately worsen environmental impacts.
👉 Why it matters: What a company says and what a company does both need to be considered in the age of AI. Numerous companies espouse responsible AI principles and practices, but their actions must be examined, because incentives like profit, winning the market, and maintaining shareholder trust will almost always win out, even when safety and long-term benefit are at risk. All companies, including Microsoft, must be held accountable when their actions do not align with their promises.
*Paige is an employee of GitHub, which is owned by Microsoft.
Spotlight on Research
Scalable watermarking for identifying large language model outputs
SynthID-Text is a watermarking scheme designed to mark synthetic text generated by large language models (LLMs) without compromising quality or efficiency. Unlike other methods, it doesn’t affect LLM training, only adjusting the sampling process, and allows for efficient detection without using the LLM itself. SynthID-Text integrates watermarking with speculative sampling to scale efficiently in production environments, achieving high detection accuracy while preserving text quality, as confirmed in a live experiment with 20 million responses. This development aims to promote responsible use of LLMs by making watermarking feasible in large-scale systems.
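To make the “sampling only” idea concrete, here is a minimal Python sketch of tournament-style watermarked sampling in the spirit of SynthID-Text. Everything here (the g_score hash construction, the watermarked_sample helper, and all parameters) is an illustrative assumption, not the paper’s actual implementation:

```python
import hashlib
import random

def g_score(token_id, context, key, layer):
    """Pseudorandom score in {0, 1}, keyed on the watermark key, recent
    context, candidate token, and tournament layer. An illustrative
    stand-in for the paper's g-functions."""
    payload = f"{key}:{layer}:{context[-20:]}:{token_id}".encode()
    return hashlib.sha256(payload).digest()[0] & 1

def watermarked_sample(probs, context, key, num_layers=3):
    """Draw 2**num_layers candidate tokens from the model's own
    distribution, then run pairwise tournaments in which the token with
    the higher g-score advances. The model and its training are
    untouched; only this sampling step changes."""
    candidates = random.choices(list(probs.keys()),
                                weights=list(probs.values()),
                                k=2 ** num_layers)
    for layer in range(num_layers):
        winners = []
        for a, b in zip(candidates[0::2], candidates[1::2]):
            # Higher g-score wins; ties go to the first candidate.
            winners.append(a if g_score(a, context, key, layer)
                           >= g_score(b, context, key, layer) else b)
        candidates = winners
    return candidates[0]

# Example: pick the next token from a toy next-token distribution.
next_token = watermarked_sample({"the": 0.5, "a": 0.3, "cat": 0.2},
                                context="Once upon a time",
                                key="secret-key")
```

Detection then only requires the key: recompute the g-scores over a candidate text and test whether their mean is skewed above chance, which is why the LLM itself is not needed at detection time.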
Constitutional AI: Harmlessness from AI Feedback
"Constitutional AI" is a method for training AI systems to supervise other AIs without requiring extensive human labeling. Using a list of guiding principles, this approach combines supervised learning (SL) and reinforcement learning (RL) phases. In SL, the AI generates self-critiques and refines its responses, which are then used for fine-tuning. In the RL phase, a preference model evaluates responses, enabling training via "RL from AI Feedback" (RLAIF). This produces a non-evasive, harmless assistant that transparently addresses harmful queries by explaining objections, offering precise AI control with minimal human oversight.
WATCH: Responsible
Here’s some content from my other channels. Feel free to follow me on Instagram, TikTok, or YouTube.
Two weeks ago I had the privilege of connecting with Eric Kimberling to discuss Ethics and AI. Check it out!
Let’s Connect:
Connect with me on LinkedIn.
Looking for a speaker at your next AI event? Email thepaigelord@gmail.com.
Email thepaigelord@gmail.com if you want to connect 1:1.