This week:
Should we try to stop AI? There are three main arguments that have been spurred by the FLI letter that launched last week, and I’ll walk you through them.
An AI business feature: no product, a threadbare website, and a $100 million valuation?
And an article worth exploring: why ChatGPT has been banned in Italy. (Linked below)
How are people feeling about AI? The last week has been incredibly telling. On Wednesday, March 29, the Future of Life Institute, an organization financially backed by Elon Musk, released a letter signed by many big names in tech calling for all AI labs to halt development of AI systems “more powerful” than OpenAI’s GPT-4. The letter cited research by a number of well-known researchers in AI and tech ethics, and called for “elected officials,” rather than tech CEOs, to put guardrails on AI’s development.
Responses to this letter have been mixed, but one question keeps arising: should we fight to stop development of AI? There are three basic arguments in response to this question, and I’ll walk you through them.
Argument 1: Stop AI development, and save humanity from destruction.
Individuals in this camp have been loud of late. They cite concerns such as an “AI apocalypse” and the possibility of AI surpassing humanity in every way. The camp ranges from people like Eliezer Yudkowsky, who wants the entire world to agree to shut down AI in its entirety, to those whose imaginations conjure up wild science-fiction scenarios of robots taking over, to those who signed the letter asking to halt AI development.
There are two general asks from this camp: either shut AI down entirely, or pause AI development temporarily in order to build regulation that will guide its use in the future. The second request, which is the basis of the letter signed by Musk and others, seems reasonable until it’s explored in depth. Several of the researchers whose work is cited as evidence in the letter have spoken out to say that they believe the letter misses the point entirely, or fails to grasp the research they’ve done.
Argument 2: Don’t stop AI development, because it will hinder innovation and national security. Humans will be fine.
The second argument is focused on innovation, and many proponents in this camp believe that winning the AI arms race is the main reason not to stifle it. If you’re following international relations at all, it’s no surprise that governments the world over are trying to innovate with AI - quickly. This isn’t just for military purposes (although that’s part of it); countries that develop, own, and leverage AI efficiently will no doubt make economic and social strides that can help them become, or remain, a world leader.
Others in this camp believe that we cannot truly know the power of a technology by guessing - we must see it in action, and to see it in action we must innovate without restrictions. That’s a reasonable, but dangerous, argument, in my opinion. (Many people in the “build unregulated AI” camp believe this is the path to knowledge and innovation when it comes to AI.)
Argument 3: Let’s focus on the challenges at hand with AI, rather than amplifying a narrative that leads to chaos.
Finally, there are people who believe we can make progress on AI while building it with safety in mind and working with legislators to create regulation that protects people but doesn’t completely hinder innovation. A few of the researchers whose work was cited in the letter to pause AI development seem to fall into this camp.
Margaret Mitchell, co-author of “On the Dangers of Stochastic Parrots,” said, “By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI…ignoring active harms right now is a privilege that some of us don't have."
In this statement she indicates that the letter from FLI is somewhat self-serving, and that it fails to focus on the current harms associated with AI in favor of stoking fear around potential future harms.
Shiri Dori-Hacohen, another researcher cited in the letter, also took issue with how her work was used; her research argues that AI can influence decision-making related to climate change and nuclear war. She stated, “AI does not need to reach human-level intelligence to exacerbate those risks…There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention.”
Coded bias is a risk. The spread of mis- and disinformation is a risk. Unrestricted chatbots are a risk. Use of AI systems by bad actors is a risk. Unregulated use of AI by companies is a risk. Unregulated collection and reselling of data are risks. These are some of the risks posed by AI in this very moment. Individuals in this camp do believe that AI can be dangerous, but they believe the “unseen” dangers we’re navigating now pose a more acute and present risk to humanity than a hostile robot takeover.
What now?
Now we all decide how to move forward. For my part, I’m going to continue researching the different arguments related to AI use, harm, and regulation, and I’m going to formulate a path forward that’s thoughtful and rational. As always, I encourage readers to look into the arguments and let reason win over fear. And if you disagree with some of the points I’ve made? Share your thoughts (respectfully) in the comments. I’d love to hear them!
This week’s AI business has yet to prove itself, but it somehow has a $100 million valuation. This is its website:
Mobius AI was started by a few ex-Googlers, and while they’re still developing a product - yes, you read that right - they know it’ll have something to do with generative AI, and they’re focused on using AI to help humans be more creative. They’ve already been featured in The New York Times, and they’re well-funded, so they can explore their whims. OH! And they’re hiring.
Oversight:
ChatGPT banned in Italy (BBC)
AI experts disown Musk-backed campaign (Reuters)
A call to shut AI down, globally (Business Insider)
Opinion:
There’s no such thing as artificial intelligence (The Japan Times)
A.I. Newsletter: What’s the Future for A.I.? (The New York Times)
Book recommendation:
Weapons of Math Destruction by Cathy O’Neil.
This book is a great starting place if you’re looking to understand how bias can be built into technology.
I regularly get asked how people can get into responsible AI, so here are some resources! I’ll keep adding to this list as I come across more information.
Responsible AI Institute is a nonprofit dedicated to helping organizations on their responsible AI journey. They provide awesome ways for members to get connected through their Slack channel, get resources through their newsletter, and get invited to community events. Plus, they’re a leader in responsible AI, so they’re an organization to watch.
The Center for AI and Digital Policy offers policy clinics, and they look amazing. If you’re interested in AI policy, this might be for you! I’m hoping to apply for the Fall 2023 session.
Subscribe to my Substack if you want to receive this weekly update.
Connect with me on LinkedIn.
Do you need help on your responsible AI journey? Email me at thepaigelord@gmail.com
Follow on TikTok, where I chat about responsible AI.