Kroger's Dynamic Pricing, California's SB-1047, and AI's Energy Consumption
Your trusted source for insights on the world of responsible AI and AI policy. August 20th, 2024. Issue 33.
This issue is a big one, because a lot is happening. If you have recommendations on content, content structure or newsletter cadence, please reach out and let me know.
AI In the Election
NEW! This temporary section will track updates at the intersection of AI and the US Presidential election. There is a lot happening in this space, and I want to ensure we’re all informed.
🟡 8/17/2024 - Donald Trump Shares AI Generated Images of a Communist-Style DNC Event
What Happened: Republican Presidential Nominee Donald Trump posted an AI-generated photo on X and Truth Social on August 17, 2024, showing a woman resembling Kamala Harris at a communist event. The image includes a "Chicago" sign, referencing the DNC location where VP Harris is set to speak on August 20. The photo was shared alongside an uptick in rhetoric from the former President around what he calls the “socialist agenda” of “Comrade Harris”.
🟡 8/18/2024 - Donald Trump Shares Several AI Generated Images Suggesting that Taylor Swift has Endorsed Him
What Happened: Republican Presidential Nominee Donald Trump posted several AI-generated photos on X and Truth Social on August 18, 2024, featuring women in “Swifties for Trump” shirts and an Uncle Sam-style image of Taylor Swift with the caption “Taylor Swift wants YOU to Vote for Donald Trump.” Trump accompanied the posts with the words “I accept.” It's unclear if he was suggesting he’s open to a Taylor Swift endorsement or if he considered these AI-generated images as an endorsement. Taylor Swift has not endorsed any candidate in this election and historically supported Biden-Harris in 2020.
👉 Why these events matter: The unregulated use of AI continues to escalate in the election, with Donald Trump's August 17, 2024 post marking a significant shift. Previously, Donald Trump had only reposted AI-generated content created by others; this marks the first time a Presidential nominee has directly disseminated an AI-generated image. Both posts spread misleading messages about motivations and endorsements in the U.S. Presidential election, highlighting a troubling trend in AI's role in current and future elections.
Have you seen AI use in the election? Share with me here.
Quick Hits 👏
NEW! Interesting AI news from the week that I won’t explore in-depth, but is important to acknowledge in the current AI moment.
AI & The Battle with Mosquitos - Fox News
South Korean AI Chip Makers Merge - Reuters
South Africa’s AI Framework - Coin Telegraph
AI Ethics News
Notable news in the world of AI ethics and responsible AI.
Google’s AI Search Could Harm Website Publishers 🚩
Google's new AI-driven search feature offers instant, summarized answers at the top of search results. This model is frustrating for a number of reasons. First, users cannot opt out of the AI feature. Second, it could reduce traffic to websites, because content is surfaced and summarized directly in the search results. Website publishers face a dilemma: either allow Google to use their content, risking becoming obsolete, or block Google, which would reduce their visibility. There is no “win” for website publishers in this situation.
👉 Why it matters: This situation highlights Google's dominance in search and raises concerns about fairness and antitrust issues as generative AI becomes more integrated into search. Some argue that separating Google's search and AI operations could mitigate these challenges. This move would take Google back to its “old” model, which would drive traffic to websites.
The World According to Grok (Elon’s AI)
Elon Musk’s AI tool, Grok, recently started allowing users to create AI-generated images on the social platform X. The tool quickly became controversial as users began producing and sharing misleading and disturbing fake images of political figures, including Donald Trump and Kamala Harris. Concerns have arisen over the potential spread of misinformation, especially ahead of the U.S. presidential election.
👉 Why it matters: Elon Musk created his company xAI with the intention of creating an open, unhindered AI model. Grok is the interface of that model on X (formerly Twitter). The model’s ability to create images has led to experimental misuse, where users want to see what it will create and where its limits are. This has led to the creation of violent, pornographic and misleading images, many of which have been shared on the internet. Some restrictions have been introduced following criticism, but Grok’s ability to generate harmful content is raising alarms about its impact on public discourse.
Video Game Performers Continue Strike to Protect Work from AI
Video game performers are concerned about AI's potential to replicate their performances without consent, leading to reduced job opportunities and ethical issues. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) is striking to demand AI protections.
👉 Why it matters: Performers argue that AI could use past motion capture data to create new animations without their involvement, threatening their roles and rights. Studios have offered some protections, but performers want clearer definitions and stronger safeguards.
AI in the Wild
This section will highlight new and interesting uses of AI, so you can stay up-to-date on how the technology is changing.
Kroger’s Dynamic AI Pricing and Its Potential Impact on Americans
Senators Warren and Casey sent a letter to Rodney McMullen, the CEO of The Kroger Co., requesting answers to 10 questions about Kroger’s use of digital price tags and their recent partnership with IntelligenceNode, a company that uses AI to provide dynamic pricing solutions. Mr. McMullen has until today, August 20th, 2024, to respond.
In the letter, the Senators express their concern that Kroger stores may be preparing to adopt a “dynamic pricing” model, which would allow the company to change prices based on a variety of factors such as time of day, time of year and whether items are in demand. For example, if you’re shopping at a busy time, prices may be higher. If you’re shopping for traditional Thanksgiving foods in the days leading up to Thanksgiving, the prices could be higher. This model is employed by Uber and other companies, which hike prices based on several factors including time of day and, as some have claimed, whether the ride-requestor’s phone has a low battery.
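To make the mechanism concrete, here is a minimal, hypothetical sketch of how a dynamic pricing rule might combine those factors. This reflects nothing about Kroger's or IntelligenceNode's actual system; the factors, windows, and multipliers are illustrative assumptions only.

```python
# Hypothetical dynamic pricing rule: the time windows, demand signal,
# and multipliers below are invented for illustration, not drawn from
# any real retailer's system.

def dynamic_price(base_price: float, hour: int, demand_index: float,
                  seasonal_peak: bool) -> float:
    """Adjust a base price using time-of-day, demand, and seasonal signals."""
    multiplier = 1.0
    if 16 <= hour <= 19:                              # assumed "busy" evening window
        multiplier += 0.10                            # +10% at peak shopping hours
    multiplier += 0.05 * max(demand_index - 1.0, 0)   # surcharge for demand above baseline
    if seasonal_peak:                                 # e.g., turkey the week before Thanksgiving
        multiplier += 0.15
    return round(base_price * multiplier, 2)

# A $4.00 item at 5 p.m., with demand 40% above baseline, during a seasonal peak:
print(dynamic_price(4.00, 17, 1.4, True))   # 5.08
# The same item at 10 a.m. under normal conditions stays at the base price:
print(dynamic_price(4.00, 10, 1.0, False))  # 4.0
```

The point of the sketch is that none of these inputs reflect the grocer's costs; each multiplier is purely a function of when and how urgently the customer is shopping, which is exactly the Senators' concern.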
👉 Why it matters:
The use of AI to dynamically change food prices poses a significant threat to people’s personal wellbeing. In addition to being an unfair practice, it disproportionately harms those who have no choice about when they shop, especially if the only times they can shop are the times when Kroger or other grocers raise prices.
It could also pose a health risk: a shopper on a tight budget might face inflated prices on cold medicine for a loved one during cold season.
A few factors to consider as you follow this topic:
1. The prices Kroger and other grocers pay for the food/wares they sell are not subject to dynamic pricing. In other words, there appears to be no motivating factor behind this move other than profit.
2. Kroger is one of the largest grocers in the US, with nearly 3,000 stores.
3. The FTC has launched an investigation into the practice of differential pricing.
AI Policy Beat
A look at what’s happening in the world of AI policy.
OpenAI Disrupts an Iranian Influence Operation
OpenAI announced on August 16th, 2024 that it banned accounts that were linked to an Iranian influence operation. The operation used ChatGPT to generate content, some related to the US presidential campaign, and then disseminate that content on the internet. OpenAI also said in the statement, “We have seen no indication that this content reached a meaningful audience.”
US presidential elections have faced significant interference attempts by foreign adversaries in the past, some with meaningful impact. This effort by Iran specifically focused on topics that have proven sensitive and divisive for Americans, with content posted from both progressive and conservative perspectives.
👉 Why it matters: Generative AI has made it possible for people to create massive amounts of bespoke content for nearly any given purpose. The technology is broadly available, which means it can be leveraged by bad actors with nefarious goals. In the past, the creation and dissemination of mis- or disinformation required more people-power to create and post the content. With AI, the content can be created and shared much faster and at a broader scale, which means the risk of nefarious content having a meaningful negative impact is greater.
Because this is not regulated or addressed at a national scale, it falls to tech companies to monitor these nefarious uses and address them.
California’s Battle over SB-1047 - Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
California’s SB-1047 bill - Safe and Secure Innovation for Frontier Artificial Intelligence Models Act - introduced by Senator Scott Wiener (D-San Francisco) has passed the Assembly Appropriations Committee and will be up for a vote today, August 20th, 2024. The bill has been significantly revised, but has caused tension among AI and policy leaders in the state, with many calling for Governor Gavin Newsom to veto the bill should it reach his desk.
Many parts of the bill have caused frustration, but one of the topics causing the most concern is overly restrictive “pre-enforcement”.
Pre-enforcement refers to safety measures taken during the development phase of a model, before the model is released. The State of California would require any developers of AI - both individuals and businesses - to create a safety plan for the model. If a catastrophe occurred that the safety plan did not account for, the individual developer could be held accountable for the harm done.
Many leaders, like Fei-Fei Li, feel that this would have a chilling effect on AI development because developers would be afraid of being held accountable for damage done that could not have possibly been predicted, as is often the case with new technology. Large tech companies like Meta, Google and OpenAI have openly opposed the bill, while Microsoft has remained neutral but expressed a desire for national legislation.
👉 Why it matters: With no meaningful national AI regulation in sight, states are stepping up to create their own AI regulation. The challenge is that if 50 states create 50 pieces of AI regulation then AI companies large and small are required to make 50 different sets of adjustments (or more) to do business in those states. This could make things incredibly challenging for small AI businesses who will likely not be able to afford to meet 50 sets of needs. This could reduce competition and continue to elevate large tech companies.
While AI regulation at the state level is a meaningful step toward protecting people from the negative uses and impacts of AI, a growing patchwork of state rules also adds pressure for national regulation, a step the US Congress has been hesitant to take.
Granholm, Department of Energy, Work to Meet AI Energy Demands
US Energy Secretary Jennifer Granholm is working to address and meet the increasing demand for energy caused by AI data centers. Per Axios, power demand from AI alone is projected to drive 9% growth in US energy consumption by 2030. The DOE released a list of resources to support the upgrades and changes needed to meet data center electricity needs.
👉 Why it matters: Energy consumption by AI poses a number of challenges in the US for individuals, states, and the planet.
Individuals: Rising power demand caused by AI could cause instability for state power companies, leading to rolling blackouts for individual households and increasing costs.
States: The increasing demand for power will largely be dealt with by states, as most matters related to energy are handled at the state level. This could burden taxpayers with helping energy companies modernize to support big tech.
The Planet: Rising energy consumption is upending many companies’ efforts to decrease their carbon emissions, efforts that would benefit the planet.
Spotlight on Research
Digital Government in Japan: Historical Foundations, Future Ambitions, and the Digital Agency
This paper examines Japan's journey in digitalization, focusing on the establishment of the Digital Agency in 2021, created in response to challenges during the COVID-19 pandemic. It highlights Japan's historical role in pioneering IT infrastructure and outlines the Digital Agency's strategic efforts in promoting digital transformation. The study provides insights into the current state and future trajectories of digital governance in Japan.
“It’s Not Exactly Meant to Be Realistic”: Student Perspectives on the Role of Ethics in Computing Group Projects
This study explores how software engineers identify and address their ethical concerns, moving beyond narrow principles like "fairness." Through surveys and interviews with 115 engineers across various sectors, it reveals concerns ranging from military and privacy issues to industry-wide existential questions. The research highlights that organizational and personal factors often limit efforts to address these concerns, suggesting that ethics interventions should empower engineers to resolve issues rather than just identifying them, expanding the focus beyond AI and Big Tech.
Artificial Intelligence and the Digital Public Sphere, a Collection of Essays
The advent of AI, from predictive analytics to chatbots, is reshaping the digital public sphere, influencing social identities, legal frameworks, and labor relations. This collection of essays highlights AI's profound impact, emphasizing the need for robust governance. Sandra Wachter critiques the E.U. AI Act's shortcomings, while Xin Dai explores AI's role in enhancing access to justice in China, cautioning against potential risks. Michele Elam showcases how artist-technologists of color actively shape the digital realm. Veena Dubal and Vitor Araújo Filgueiras reveal the toll of algorithmic management on workers, and Woodrow Hartzog calls for layered regulation to protect the public good. These essays stress the importance of critically shaping AI's development to harness its potential while mitigating harm.
WATCH: Microsoft’s Policy Recommendations to the US Government
Here’s some content from my other mediums. Feel free to follow me on Instagram, TikTok or YouTube.
Let’s Connect:
Connect with me on LinkedIn.
Looking for a speaker at your next AI event? Email thepaigelord@gmail.com.
Email thepaigelord@gmail.com if you want to connect 1:1.