AI Ethics & Policy Insights
The AI ethics & policy info we should know - simplified. Week of February 19, 2024. Issue 22.
This week - Sora, US AI Safety Institute, AI lobbying, regulation and policy updates.
The Latest:
INTRODUCING SORA - OpenAI introduced Sora to the world a few days ago, and people are unbelievably interested. The new AI model creates realistic video from text prompts. Its availability is limited at the moment, and there’s no word on when it will be generally available, but there’s no doubt that it’ll have a big impact on the world of generative AI.
US AI SAFETY INSTITUTE - President Joe Biden has named Elizabeth Kelly, a former economic advisor to the administration, as director of the U.S. Artificial Intelligence Safety Institute (USAISI). Kelly was involved in the drafting of the US AI Executive Order, according to Time Magazine, and served on the White House National Economic Council. The USAISI will create AI safety tests for regulators as one way of working to mitigate AI risks.
AI LOBBYING ON THE RISE - The numbers have been released: AI lobbying rose 185% year over year in 2023, with major companies like Tesla, ByteDance, Nvidia and OpenAI spending to influence how future regulation might shake out.
Deeper Dive: A primer on approaches to AI regulation
What is regulation? Who is it for and who decides what to regulate? What are the AI regulation schools of thought?
The Context:
It’s Friday morning, I’m drinking my coffee, and a Google search for “Artificial Intelligence Regulation” returned a casual 180M results. I turned to Google Trends to see whether people are more concerned with AI regulation now than they were five years ago, and the answer was a resounding “yes”. Searches for the term rose and fell from 2019 to 2021, but after 2021 the term gained enough “lift” that it barely dips back to the “no available data” line. The search term hit a significant high in July of 2023 and has largely remained elevated since.
Regulation isn’t the most sparkly topic in the broader international conversation around AI. In fact, when I compared search trend results for “artificial intelligence regulation” with more general terms like “LLM” and “artificial intelligence”, regulation’s popularity looked deflated by comparison. It’s the blue line running across the bottom of the graph, in case you couldn’t see it.
Despite its poor performance against more general and popular search terms, regulation is a hot topic in 2024, largely because nobody knows how to approach this new frontier in regulation and lawmaking. Public sentiment has been varied, with some calling for regulation of AI ASAP, and others saying that regulation will never be impactful enough to protect humanity from AI’s potential harms. As the AI regulation conversation continues to heat up, I think it’s important to understand the basics surrounding the topic, so we can all engage in the conversation together as it evolves.
What is the purpose of regulation?
Generally speaking, regulations are requirements that a government places on private companies and individuals in order to achieve its goals. Those goals can include protecting public health and safety, protecting consumers, protecting the environment, and promoting economic growth, social welfare and national security, among others. At its best, regulation acts as a guide that ultimately improves the lives of a government’s citizens. At its worst, regulation is a set of guides (or a lack of them) that acts as a hindrance, harming businesses and individuals and leading to a wide range of challenges and a loss of quality of life.
I’m sure you can spot the challenge here. Over-regulate, and people are hamstrung by the rules they’re beholden to. Under-regulate, and people and businesses are untethered, and harm ensues. In most governments, the goal of regulation is to land on guidelines in the “Goldilocks zone” - “just right” - restrictions that benefit businesses and the people alike.
What are the schools of thought on AI regulation?
The field of artificial intelligence is not new at all - people have been studying AI since the 1950s - but practical uses of AI in the hands of everyday people are incredibly new, and their ascent has been rapid. We now find ourselves in a position where a huge portion of the population has access to AI, and none of the world’s major governments has operational regulation in place. A necessary hurdle, then, is deciding IF we should regulate AI and HOW we should regulate it. There are three major schools of thought on AI regulation, and I’ll break them down for you.
Regulate never:
The “regulate never” group is sometimes called the “leave AI free” group. These individuals and companies feel that regulating artificial intelligence would significantly impact our ability to innovate and see what the technology can do, putting the world at a disadvantage by depriving it of technology that could have incredibly positive impacts. Another concern is that if the technology is regulated too quickly, experimenting with the limits of AI might become a “black market” activity, with innovators “going underground” with their work for fear of being stopped. This group also believes that regulating AI will interfere with the free market and could stifle economic growth.
Regulate now:
The “regulate now” camp is made up of individuals and businesses who are already seeing the impact of AI on citizens and its potential impact on the world, and who want to see regulation roll out sooner rather than later - before irreparable and widespread harm is done. Perspectives on the “right amount” of regulation vary: some are convinced that tight regulation is needed immediately, while others believe regulation can begin in smaller ways now (like requiring independent evaluation of LLMs on an annual basis), with further regulations rolling out later as needed.
Even within the “regulate now” camp, there are two major sub-groups. One believes that regulating the technology itself is the main way to reduce potential harm and keep the technology from causing an existential crisis. The other believes we should regulate the impact of AI, addressing harms after we’ve seen them come to fruition rather than regulating the harms we anticipate.
Regulate later:
The “regulate later” group sits between the “now” and the “never” groups. This school of thought is rooted in the idea that we have time to see what the impacts and potential harms might be, and that we can afford to wait a beat before regulating the technology. This group (along with the “regulate never” group) has been criticized for not taking current harms into account. For example, law enforcement is using facial recognition technology that is markedly less accurate on the faces of women and people of color. This is causing misidentifications and wrongful arrests, and it disproportionately impacts the Black community. If the “regulate later” group is waiting to see what harm is coming, why isn’t it concerned with the harm we’re already seeing? Or does this group believe the harm must impact a huge portion of the population before regulation is considered?
Who gets to decide what is regulated and what isn’t?
Unsurprisingly, the government in question gets to decide what is regulated and how it’s regulated. However, if you’re in a democracy and your government branches are made up of representatives elected by a vote of the people, you have some influence over regulation through your vote. I know people say this a lot, but it really is true. SO MUCH of what we experience in terms of laws and regulation is determined by how we show up at the ballot box. It’s more critical now than ever to be aware of AI policy in your state and country, and to understand who is working to advance which school of thought on AI regulation. On a personal note, my wish isn’t that you would vote a certain way, but that if you are lucky enough to get to vote, you would vote as an informed citizen.
AI Policy Watch
Proposed AI Regulation in California
Senator Wiener of California introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act to the California legislature last week, and it’s being hailed as a “landmark bill” that would impose regulation on AI to address dangers associated with the technology. The bill focuses on preventing harms before they occur, rather than addressing them after they’ve taken place, and would require safety testing of large AI models before they are broadly launched to the public.
Proposed AI Regulation in Tennessee
The Tennessee legislature is currently voting on HB 1630, which would require all public universities and school systems to set an AI policy each year for students, staff and faculty, and to share that policy with lawmakers. This is the latest in a series of state efforts to get ahead of AI by providing regulation that compels people to consider the use and impact of the technology.
US AI Executive Order Update
The White House published an update on the actions taken under the AI Executive Order (AIEO), which was signed in October. In the 90 days since the Order was signed, the federal government departments named in it have taken a number of critical steps to manage risks to safety and security and to harness AI for good. You can read more about the updates and see the full table of actions here.
Worth a Read
Licensing AI is not the answer - but it contains the answers (Brookings)
Four Things to Know about China’s New AI Rules in 2024 (MIT Technology Review)
Let’s Connect.
Connect with me on LinkedIn.
Subscribe on YouTube.
Follow on TikTok, where I talk about developments in responsible AI.
Email me at thepaigelord@gmail.com if you want to connect 1:1.