AI Ethics & Policy Insights
The AI ethics & policy info we should know - simplified. Week of October 30, 2023. Issue 14.
Major news this week:
BIDEN ADMINISTRATION AI EXECUTIVE ORDER (more below)
UN AI ADVISORY BODY:
On October 27, 2023, the UN announced the creation of the UN AI Advisory Body. The call for expert nominations went out in August 2023, making for a two-month turnaround in forming the board.
The body includes 39 individuals from around the world.
The goal of the advisory body is to “undertake analysis and advance recommendations for the international governance of AI,” according to the UN AI Advisory Body website.
UK AI SAFETY SUMMIT:
UK Prime Minister Rishi Sunak is hosting the first UK AI Safety Summit, which is turning out to be a who’s who of responsible AI experts, business leaders and government officials, including a representative from China.
The Summit is viewed as part of the UK’s attempt to establish itself as a leader in AI and responsible AI, but other actions from the UK government seem to be working in opposition to those ambitions.
In late August/early September, the UK government quietly shut down an eight-person advisory board that was meant to hold government groups accountable for how they use AI.
It’s HERE! The Biden Administration AI Executive Order (AIEO)
The Context:
In a significant policy move and a global first, the Biden Administration released an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The 63-page executive order is intended to:
instruct the Federal Government on how to operate responsibly with AI
protect U.S. citizens and residents from unfair uses of AI
provide guidance for commercial businesses on expectations for building safe AI
establish the U.S. as a leader in AI governance
Why it Matters:
The AI Executive Order (AIEO) is critical to AI policy and U.S. leadership.
It sends a message: The AIEO was released before the highly anticipated and long-awaited EU AI Act, making it the first major establishment of governmental AI standards in the world. No doubt drafting off of the proposed EU AI Act, the AIEO sends the message that the U.S. intends to lead in AI governance by requiring its own government to get in line with AI safety standards.
It puts action to thought leadership: In addition to sending a message that the U.S. is serious about AI governance, it also establishes the U.S. as a first-mover in the world of AI governance. Being “first to market”, so to speak, can be an incredibly powerful amplifier of one’s message, and has certainly sent waves through the AI Policy world ahead of the UK AI Safety Summit.
It marks a sea change: This EO represents a remarkable moment in the AI era. It takes thought leadership and puts it into action for U.S. government entities as well as commercial businesses creating, using, and deploying AI. It requires that these groups consider and implement standards that promote safety, and in some cases it requires reporting of performance against those standards.
Where we go from here:
The AIEO has varying timelines for its priorities and directives, ranging from 180 to 365 days. Over the next year we will see government and commercial organizations grapple with what’s being asked of them, implement plans, make changes, report on AI models, and establish safe AI use. This directive is not meant to be the end-all-be-all of AI policy for the U.S., and the AIEO calls upon Congress to establish regulation. However, it provides structure and a standard of protection that is desperately needed as the AI market heats up.
AI Policy Updates
The EU AI Act
The EU AI Act is expected to be finalized by the end of 2023. It is currently going through another round of the “trilogue,” a three-way negotiation between the European Commission, the Council of the European Union, and the European Parliament.
The EU AI Act is expected to be passed by June 2024. Following its passage, companies and organizations conducting AI-related business in the EU will have a two-year grace period to come into compliance with its requirements.
TikTok of the Week
AI Executive Order Part I: PRIMER
Ask your AI Question.
Submit your AI question here - it can be a responsible AI, AI policy or general AI question. I’ll pick one to respond to every week!
This week’s question: What groundbreaking work are we using AI for in the medical field, and how can we prevent bias in this field?
(Submitted by: Nicholas deLaurentis)
My response: I was at a joint bachelor party recently, and I had a conversation with a radiologist. He does not work directly with patients; all day, every work day, he reviews images, gives consultations, and makes determinations. We talked about AI in his field, and he was excited about the possibilities. So am I. Leveraging AI in medical imaging is (as of now) one of the least dangerous uses of AI, namely because the subject of the AI’s focus is the image itself. It isn’t trained (that I’ve heard of) to identify a patient’s race, guess their age, or make judgments about their person. Its job is to look for abnormalities and make a statement about what an abnormality is. I find this incredibly promising and truly groundbreaking because it can help get patients the answers they need, and therefore the treatment they need, faster.
However, even these technologies have areas of danger. One of the biggest risks is that the AI would be left to its own devices, or that the doctor would trust it without checking its work. But this risk isn’t the AI’s problem; it’s a human problem. This is why safety has to be built into the process end to end, from the creation, training, and use of the AI to human checkpoints along the way. For a different perspective, I found this article from Harvard Medical to be helpful. One of the most challenging parts of technological innovation is that these technologies have to be given room and time to develop. This is why tight controls and sandboxes are incredibly important. I still think AI in imaging will be a game changer for the medical field of the future, but we can’t forget safety.
(I’ll address the bias portion of the question next week.)
Let’s Connect.
Connect with me on LinkedIn.
Follow on TikTok, where I talk about developments in responsible AI.
Email me thepaigelord@gmail.com if you want to connect 1:1.