AI Ethics & Policy Insights
The AI ethics & policy info we should know - simplified. Week of January 22, 2024. Issue 20.
INTRO
The Latest:
LABELING AI in AUSTRALIA - In the absence of significant AI regulation, the Australian Government may ask tech companies to voluntarily label or watermark content generated by their AI platforms. The news comes after a McKinsey report indicated that AI adoption could significantly increase Australia’s GDP, and that citizens across the country hold varying opinions on whether current guardrails are enough. This highlights a conundrum many governments are facing: whether regulation should come before or after AI harm has occurred.
OHIO AI POLICY - The state of Ohio is preparing guidance for the use of AI platforms in K-12 schools. No release date has been set, and a spokesperson for the Ohio Department of Education & Workforce indicated that decisions about the use of AI would still be left to local districts. This is the latest in a series of moves by states to provide guidance on the use of AI technology, from schools to city and state employees. (Read last week’s newsletter to learn about the new policy in Seattle.)
Deeper Dive: Davos AI Review
The Context:
The World Economic Forum’s annual meeting took place in Davos, Switzerland, January 15 - 19, 2024. The conference is a who’s who of world leaders across governments, organizations, and companies, all of whom gather to discuss the most critical topics expected to impact the global economy. This year, the topic du jour was unequivocally AI.
5 AI Updates to Know from Davos (and Some Ethical Considerations):
AI & Energy are Inextricably Linked
In a talk on Tuesday, January 16, Sam Altman, CEO of OpenAI, stated that AI will consume significantly more energy than people had originally predicted, and that breakthroughs in energy will be necessary. He emphasized climate-friendly energy sources - nuclear fusion, in particular. The ethical considerations around AI and energy are significant. One major consideration is that climate change disproportionately impacts those on the far side of the digital divide. Those with access to AI are using technology that consumes enormous amounts of compute energy, often in ways that do not favor the planet. Climate change is partially the result of that use, yet the people without consistent access to the technology are the ones most significantly impacted by rising sea levels and catastrophic storms. Essentially, the actions of the wealthy are paid for by the poor. Hopefully Altman’s emphasis on the need to invest in cleaner energy will drive leaders across sectors to make changes for the better.
Misinformation & Job Displacement are a Top Concern
The appearance of generative AI on the world stage has sparked imaginations across the globe, but according to the Washington Post, world leaders are beginning to express fear and anxiety about the technology. One very real threat under discussion is the use of AI-generated content to inject false narratives into upcoming elections. Another is job losses as AI replaces workers, which would have a huge impact on the global economy. The Post reported that Davos was “papered” with ads from Salesforce and IBM calling for trustworthy AI - a significant move by both companies to impress upon world leaders the dangers that lurk in LLMs. The conversations surrounding AI risk at the WEF are refreshing, to say the least. If world leaders cannot take the risks of AI seriously, the general population will bear the brunt of the unfairness and associated economic costs.
Saudi Arabia wants to be the AI & Tech Hub of the Middle East
Saudi Arabia put significant resources into showcasing its Neom development on the Davos promenade this year, in what some are calling a bid to compete with the United Arab Emirates for the designation of the Middle East’s tech hub. Neom is meant to be a new luxury living and innovation space with a focus on technology and an aim to attract tourists. This effort to diversify Saudi Arabia’s economy comes as oil, as a percentage of GDP, has significantly decreased. Saudi Arabia is one to watch in the world of technology, not just for innovation but also because the country is known for human rights abuses.
How to Regulate AI is a Giant Question Mark
A recurring (and important) question related to AI is whether and how to regulate the technology. One question being parsed by world leaders at Davos is whether the AI technology itself should be regulated, or whether the effects of the technology should be regulated. The BBC reported that researchers for the International Monetary Fund found that AI may affect the work of four in 10 employees worldwide, and that a report from the World Economic Forum “showed half of the economists surveyed believed AI would become ‘commercially disruptive’ in 2024.” There doesn’t appear to be a right answer. If the technology is regulated before it has an impact, we could overreach and stifle innovation that could have legitimate, positive effects on the world. If the technology is released without regulation, we will be left cleaning up disasters that are more likely to impact the less fortunate.
Mistral is the New “One to Watch”
Mistral, a French AI start-up just nine months old, was the talk of the promenade in Davos last week. The company is known for creating an LLM that hits high technical benchmarks, and it is giving hope that the market can still welcome new competitors. The company has a €2B valuation and enjoys backers such as General Catalyst, Andreessen Horowitz, and Nvidia. It currently leverages Azure for its AI, a fact that is significant for Microsoft and was recently noted by Satya Nadella.
The Future of Davos:
We’re watching the conversation about AI and economics evolve in real time and, for most of us, from a distance. Last year, everyone was awed by generative AI and filled with hope for the technology. This year, those sentiments remain, but they’re enveloped in a fresh layer of caution as world leaders work to understand AI’s possible impacts, and how to protect people from those that are unforeseen, or foreseen but unmonitored.
The Ethical Considerations:
I’ve listed some ethical considerations above, but a few common themes emerge. One is the planet, and how continued use of technology by those who can afford it impacts those who cannot. Both the planet and less fortunate individuals are the victims of operating this way. Not caring about how the climate is changing is a privilege, and we cannot afford leaders who turn a blind eye to such challenges.
Another consideration is job loss. Business leaders have a responsibility to shareholders, but not a singular responsibility to shareholders. They also have a responsibility to the communities in which they exist (physical and digital alike) and to the people they employ. Business leaders have the ability to decide NOT to favor AI over people, and to strike a balance between “AI employees” and “human employees”. Leaders striking this balance will draw criticism from people with a wide variety of opinions, which is why broader principles, rather than the almighty dollar, must govern their decision making.
AI Policy Watch
WHO Releases Governance Guidance for LLMs
According to a news release published by the World Health Organization (WHO) on January 18, 2024, the organization has released a 98-page guide with over 40 guidelines and recommendations for governments, tech companies, and providers of health services to “ensure the appropriate use of LLMs to promote and protect the health of populations.” The WHO is a United Nations agency focused on providing health guidance and resources “to give everyone, everywhere an equal chance at a safe and healthy life,” according to its website. The WHO guidance addresses five potential uses of LLMs: diagnosis, patient-guided use (when patients independently search for symptoms, etc.), administrative tasks, education, and research. A huge risk of LLMs in healthcare is incorrect, incomplete, or biased information, which, as the WHO notes, is especially dangerous when people are making decisions about their health. This is a powerful and important move to give those in the world of medicine some real-time guidance for upcoming business decisions about how AI could be leveraged in healthcare.
Video of the Week
Multi-Modal AI: 2024’s Topic du Jour
Let’s Connect.
Connect with me on LinkedIn.
Subscribe on YouTube.
Follow on TikTok, where I talk about developments in responsible AI.
Email me thepaigelord@gmail.com if you want to connect 1:1.