AI Ethics & Policy Insights
The AI ethics & policy info we should know - simplified. Week of February 5, 2024. Issue 21.
Welcome to February! January was 150 days long, wasn’t it? But it was also a bustling month for AI ethics and policy. From robocalls to changes in military usage language and an increased focus on lawsuits, there’s a lot to discuss!
The Latest:
OPENAI CHANGES WARFARE LANGUAGE - In early January, OpenAI, the breakout generative AI company behind ChatGPT and Dall-E, removed and changed language specific to military uses of its technology. According to The Intercept, the previous language banned “activity that has a high risk of physical harm” and enumerated “weapons development” and “military and warfare” as examples. The updated language prohibits using “our service to harm yourself or others” and offers “develop or use weapons” as an example. As The Intercept pointed out, “…the blanket ban on ‘military and warfare’ use has vanished.”
MICROSOFT ON THE RISE* - Microsoft had a successful earnings call last week, with several major wins for the company, which is leading in the AI space. Cloud revenue was up 20%, due in no small part to Azure AI. CEO Satya Nadella stated that Microsoft is “mov[ing] from talking about AI to applying AI at scale by infusing AI across every layer of [its] tech stack”. Microsoft leaders shared that significant growth was driven by Azure OpenAI and OpenAI’s API on Azure. Nadella also alluded to an upcoming AI adoption cycle as more companies wade deeper into using AI and understanding its benefits. No matter what happens in the AI race, Microsoft is one to watch.
*Disclosure: Paige is an employee of GitHub, which is owned by Microsoft.
Deeper Dive: The Dark Side of Generative AI
The Context:
You’ve likely seen news stories about image recognition AI that can identify cancer, AI in autonomous vehicles, and how robots are taking the risk out of warehouse jobs. People leveraging generative AI are able to make images for their newsletters (see above) that would otherwise be beyond their talent to create, generate summaries of content so they can get the CliffsNotes version, and complete a host of other tasks that would take a human longer than it takes Dall-E, ChatGPT, or GitHub Copilot. (For example, I recently used ChatGPT to separate address components into the Excel template format required by our wedding invitation website, a task that took the AI two minutes and would have taken me at least 30.)
But we’re beginning to see more stories about nefarious uses of generative AI. One challenge with the way these stories are reported is that they’re one-offs. You might read a story in October about a terrifying use of AI, and not hear another until the following July. This gives the impression that these negative use cases are few and far between. Unfortunately, that’s not the case. In the last few weeks alone, we’ve heard about a number of nefarious uses of generative AI that have threatened people’s privacy and threatened democracy itself. If we are to stay informed and keep the wellbeing of humanity in mind, we must look directly at these stories to understand the dark side of generative AI.
Nefarious Uses of AI:
Biden on the phone.
On January 21st, two days before the Democratic primary in New Hampshire, an AI-generated phone call went out to Democrats attempting to discourage them from voting. There is virtually no information about who produced the recording, which tools were used in the process, how many people received the message, or what impact the fake call had on the democratic process. While experts have said the audio wasn’t a very sophisticated fake, it still throws two glaring truths into sharp relief. First, there are people actively working to exploit these technologies and lead people away from the democratic process. Second, with better technology and a more cunning strategy, things can get worse and have a significant negative impact on democracy.
AI-Generated Campaign Ads.
While we haven’t seen them en masse this election, we saw an AI-generated video released by the Republican National Committee in April 2023 that forecasted an apocalyptic future if President Biden and Vice President Harris were to serve a second term. This video no doubt contributed to Google, Meta and other social media companies updating their AI-generated imagery policies. The challenge with AI-generated campaign ads is that they can spark fear over events that never took place and catapult people into a false belief that the future depicted by AI is a guarantee.
Taylor Swift and Revenge Porn.
In 2023, at the UK’s Bletchley Park AI Safety Summit, VP Kamala Harris cautioned the audience against limiting their view of AI safety to only those threats deemed “existential.” She brought up the case of a woman whose face was used in an AI-generated “revenge porn” video that was launched onto the internet. While it was not her body in the video, the AI-generated body in question was attached to her face - shocking and horrible.
Taylor Swift has been the most recent target of AI-generated revenge porn in the form of photos. The photos were shared on X and Telegram, and one photo was viewed over 47 million times. The harm that this can cause is beyond measure. For individuals - famous or not - this can cause financial, reputational and emotional harm. It can be leveraged as a form of abuse, or as a way to extort someone. It’s notable that these photos were released as speculation arose that the Biden Administration had asked for, and might receive, an endorsement from Taylor Swift.
The Ethical Considerations:
The ethical considerations for using generative AI, transparently or not, in election materials are many and varied, but the most notable is that of fooling people, which goes against the ethical AI principles of fairness and safety. Materials designed to fool people prey on groups of individuals who are most easily tricked into believing the story presented to them. AI-generated content with false facts or narratives can mislead people into believing that certain events took place, or might take place. It can reinforce or project false narratives among voters, which is a threat to democracy. Content created and launched by mystery persons or organizations also violates the transparency principle of ethical AI, which holds that people should know from whom materials have come.
The ethical considerations for revenge porn are significant as well. Namely, privacy is critical. A person has a right not to have their naked body - or a generated image of their would-be naked body - shared with the world. Revenge porn also works against positively impacting society, and it disproportionately harms women, who are generally its subjects.
Maintaining Constant Vigilance:
The key with generative AI is to maintain constant vigilance and use caution when reviewing materials that may have been generated by AI. The huge challenge is that as GenAI gets more sophisticated, it becomes harder to recognize when you’re engaging with it. Keep an eye on the news, search regularly for the latest scams, and share those scams with people close to you so they can maintain a watchful eye, too.
AI Policy Watch
US to Work with China on AI Safety
Even in the midst of palpable tension between China and the United States, Arati Prabhakar, the director of the White House Office of Science and Technology Policy, indicated that the US is willing to work with Beijing on matters related to AI safety. The Financial Times reported that Prabhakar indicated that steps have already been taken to engage with China, and that this is “a moment where American leadership in the world depends on American leadership in AI.” While there is certainly disagreement between the two countries about AI regulation and how the technology should be used, Prabhakar seems confident that there are some areas of agreement related to safety, and is prepared to explore those opportunities to collaborate.
Latin American Countries Gear Up for AI Regulation
In Mexico, leaders in the country’s National Artificial Intelligence Alliance created by Congress in 2023 are working to understand the benefits and harms of AI, and to leverage the technology to positive ends. Mexico is joining the conversation on AI regulation and is working to balance the same challenges as other major regulatory players like the EU, United States and China. According to Senator Alejandra Lagunes, “Regulating based on fear can halt innovation and the possibility of levelling the ground between Mexico and other countries from the Global South with the big tech developers in the Global North.” Essentially, Mexico wants to leverage AI in the ways it can to increase development, innovation, job opportunities and other areas which could positively impact the country. No regulation has been proposed in Mexico, but other countries in Latin America have proposed legislation - most notably, Brazil.
The US Joins 41 Countries in Adopting the OECD AI Principles
The United States is one of 42 countries that have agreed to approve a new international agreement on building trustworthy AI, led by the Organization for Economic Cooperation and Development (OECD), an intergovernmental organization focused on stimulating economic progress and world trade. The agreement is in line with the President’s AI Executive Order (AIEO) signed in October 2023, and indicates an extended commitment to ensuring global AI safety and trustworthiness alongside other signatories.
US AI Executive Order Update
The White House published an update on January 29th, 2024 detailing progress made against the AI Executive Order. The AIEO, signed in October 2023, set deadlines ranging from 30 days to over 300 days for completing certain items. The update listed many of the 30-180 day deadlines, the actions taken to mark them “complete”, and the agency responsible for each item’s success. As we know, the power of these actions relies heavily on the robustness and durability of the processes stood up, and the seriousness with which individuals take the subject matter. Only time will tell how effective they will be in mitigating AI-related risk and harm in the Federal Government.
Video of the Week
In the Loop Podcast Interview
Last week Max Stein and Sofia Guzowski welcomed me to their podcast In the Loop! We had a great talk about AI Ethics and Responsible AI. Check it out!
Worth a Read
Let’s Connect.
Connect with me on LinkedIn.
Subscribe on YouTube.
Follow on TikTok, where I talk about developments in responsible AI.
Email me at thepaigelord@gmail.com if you want to connect 1:1.