AI Ethics & Policy Insights
The AI ethics & policy info we should know - simplified. Week of January 15, 2024. Issue 19.
It’s been a slower week in the world of AI Ethics & Policy, which gives us time to explore less talked about but critically important items. This week, keep an eye on Davos, the Council of Europe, and those Gen AI lawsuits. Take a look at the info and insights below!
The Latest:
DAVOS - The World Economic Forum takes place in Davos, Switzerland every year. Last year everyone was talking about this new thing called ChatGPT; this year everyone is talking about the impact, risks and future of generative AI in general. There are over a dozen sessions that mention gen AI in their titles, and business and political leaders from around the world are expected to speak. Keep an eye out for notable quotes on AI, and for who meets with whom. If you’re wondering what this has to do with ethics or policy - everything. Major connections are forged and decisions affecting people and governments are made at this conference. It’s one to watch.
AI Policy in Seattle - Cities across the country are working to implement their own AI policies. Seattle just released a new AI policy for city employees, which focuses on rallying them around seven principles.
Deeper Dive: Multi-Modal AI is the Future
I’m predicting that 2024 is the year we’ll begin hearing more about multi-modal AI - the chatter is already happening. With the release of Gemini, Google’s multi-modal AI model, the competition in the generative AI space is continuing to heat up. But what is multi-modal AI? And what are the ethical considerations for these models?
The Context: What is Multi-Modal AI?
Multi-modal AI models can take in multiple forms of data - video, audio, text and images - as input, and use all of those forms together to produce an output. If the purpose of AI is to mimic humans (per the 1955 Dartmouth AI proposal), this move from single-modal to multi-modal brings AI closer to how humans usually form conclusions. If I’m looking in my fridge, I can see an apple and deduce that it’s an apple. But that’s not the only way I can determine what it is. I can feel it, taste it and smell it. I can hear what it sounds like when someone bites into it. All of these pieces of information come to me in different forms - vision, hearing, touch, smell and taste - and I use that information to draw a conclusion. Multi-modal AI does the same thing, but with digital inputs - for now.
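For the more technical readers, here’s what a multi-modal request can look like in code. This is a minimal sketch using OpenAI’s Python SDK, assuming you have the openai package installed and an API key set; the model name and image URL are placeholders - substitute whatever vision-capable model and image you have access to.

```python
# A minimal sketch of a multi-modal request: one prompt, two modalities.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder: any vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                # Two different forms of data in a single prompt:
                {"type": "text", "text": "What fruit is in this photo, and is it ripe?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)  # a text answer grounded in the image
```

The key point is in the `content` list: text and pixels go in together, and the model reasons over both to produce one answer - the digital analogue of using sight plus context to identify that apple.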
The Conversation:
For some people this might not seem revolutionary, either because they were under the impression that AI was already doing this, or because it seems like the natural evolution of generative AI. But this really is a big next step, because it gets us closer to AI that evaluates and produces in ways more similar to humans. Until now we’ve seen single-modal AI, which can only take in one type of data (text, for example) to produce its output.
With the release of Gemini and the latest update to ChatGPT, we can see multi-modal AI in action. Not only can these models take in multiple forms of data, like text and images, they can also produce multiple forms of output, like digital art and text, from the same prompt. Please see the example of my prompts below.
The Future:
Right now multi-modal AI is squarely digital. We haven’t yet been able to create market-ready AI that can smell, feel or taste, but those capabilities are certainly on the roadmap for the AI of the future. While the technology still has a way to go before AI becomes sensory, multi-modal AI with digital inputs is the next significant step toward AI that can take in smell, taste and other traditionally “human” inputs to create an output. We’re not there yet, but I think we’ll see it become mainstream in our lifetime.
The Ethical Considerations:
The ethical considerations for single-modal and multi-modal AI are very similar, with unfairness at the top of the list. When the data used to train these models is bad, biased or unfair, the models will reproduce that bias in their outputs. One major consideration is that multi-modal AI will exacerbate and perpetuate unfairness that is already present in single-modal models.
Below are screenshots of prompts I typed into ChatGPT while writing this newsletter. First I asked it to show me a picture of a finance CEO, and the output was a white man in a high-rise office. I then asked it to show me a CEO of a makeup brand, and was provided with an image of a woman in a “chic” office setting. Finally, I asked it to create an image of an executive assistant, and the output was a young woman sitting at a desk with a headset on.
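If you want to run the same probe yourself, here’s a rough sketch using OpenAI’s Python SDK and the DALL·E 3 image endpoint (the model behind ChatGPT’s image generation). I used the ChatGPT interface for my screenshots, so this is only a programmatic equivalent; the prompts are paraphrases of mine, and your outputs will differ from run to run.

```python
# A rough sketch for reproducing the bias probe programmatically.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

# Paraphrases of the prompts I used in ChatGPT:
probes = [
    "a photo of a finance CEO in their office",
    "a photo of the CEO of a makeup brand in their office",
    "a photo of an executive assistant at their desk",
]

for prompt in probes:
    # DALL-E 3 generates one image per request (n=1).
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    # Print each image URL so you can compare who the model depicts
    # by default for each role.
    print(prompt, "->", result.data[0].url)
```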
Some would say that this is just a natural reflection of the world we’re in. “There are more female Executive Assistants!” “More men are CEOs of financial institutions than women!” “Women are more likely to start makeup brands!” But there are other elements to consider here, like the fact that every prompt produced an image of a white person. And the bottom line is that reflecting the majority, or what’s viewed as “typical,” is not a reflection of the diverse world we’re in. Multi-modal AI brings with it the biased training it’s received and could multiply the unfairness we see in single-modal models.
As always, one of the biggest risks for single and multi-modal AI is how people use the inputs and outputs. Failure to recognize that these AI platforms can confabulate and provide false information in a very confident tone could fool people into believing a narrative that isn’t true.
One of the major challenges I feel we’re facing is responsible AI fatigue. People are coming to accept that the technology is biased and are fatigued by the idea of asking for change, which puts the onus of responsible AI squarely on the creators of the AI. Will responsible AI still be a priority when it means slowing down revenue to get it right? Time will tell.
Sources: PYMNTS, IEEE Spectrum
AI Policy Updates
Council of Europe AI Treaty - Is the US pushing for private sector leniency?
The Council of Europe is an independent human rights body - the largest in Europe - representing over 40 member states, 27 of which are members of the EU. The Council of Europe formed the Committee on Artificial Intelligence, a working group writing the Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law - essentially, a treaty to protect people’s human rights from illicit and ignorant uses of AI.
An article by Euractiv, an independent media network, indicates that the United States has pushed back on a section of the Convention that would require all private companies under the purview of a signatory to adhere to the guidance outlined in the Convention. According to the article, the EU is prepared to push back against the United States. This is an important moment to watch this conversation, especially given the current instability and human rights violations in the Middle East and the United States’ involvement.
Video of the Week
5 AI Books You Need to Read in 2024
Let’s Connect.
Connect with me on LinkedIn.
Subscribe on YouTube.
Follow on TikTok, where I talk about developments in responsible AI.
Email me at thepaigelord@gmail.com if you want to connect 1:1.