AI in Africa: The Innovative and Exploitative
Your trusted source for insights on the world of responsible AI and AI policy. November 26th, 2024. Issue 38.
Quick Hits 👏
Interesting AI news from the week.
Intel’s Responsible AI Prodigy
We Need to Start Wrestling with the Ethics of AI Agents
NVIDIA Claims a New AI Audio Generator Can Make Sounds Never Heard Before
Amazon is Getting Into the AI Chip Game
*Paige works for GitHub, a Microsoft Company
AI Ethics News
Notable news in the world of AI ethics and responsible AI.
Orange + OpenAI + Meta Partner for African Language AI
Orange, a French telecom company, is partnering with OpenAI and Meta to build AI models that better understand African languages. Africa is one of the most linguistically diverse continents in the world, but many of its lesser-used languages are poorly tracked and translated, which makes disseminating information across certain regions difficult.
👉 Why it matters: This partnership is significant given that technology usually takes years - sometimes decades - to benefit non-Western geographies. There are over 3,000 living languages on the African continent, many of which are at risk of being lost to time. AI that prioritizes African languages has several benefits. First, it can help preserve niche languages so they do not become extinct. Second, it can power translation and communication, so that messages about health, politics, and other critical topics are conveyed accurately. This partnership is a great example of AI being leveraged for fairness and inclusivity.
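Meta's openly released NLLB-200 model already covers dozens of African languages, which gives a feel for the kind of system a partnership like this would build on. Here's a minimal sketch of translating an English health message into Swahili with that model via Hugging Face transformers; the model choice and language codes are illustrative, not details of the Orange partnership:

```python
# Minimal sketch: translating a health message into Swahili with Meta's
# NLLB-200 model via Hugging Face transformers. Illustrative only - this
# is not the Orange/OpenAI/Meta partnership's actual pipeline.
from transformers import pipeline

# NLLB uses FLORES-200 language codes, e.g. "eng_Latn" for English and
# "swh_Latn" for Swahili.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="swh_Latn",
)

message = "Wash your hands regularly to prevent the spread of disease."
print(translator(message)[0]["translation_text"])
```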
Kenya Bears the Brunt of AI Gruntwork
Kenyans are bearing the brunt of AI gruntwork by taking jobs with American companies that need humans in the loop to sort, label, and analyze data. With youth unemployment reported as high as 67% and over 1 million people entering the job market every year, Kenya's population is especially vulnerable to exploitation. OpenAI is among the companies implicated.
👉 Why it matters: This is an example of wealthy tech companies exploiting vulnerable populations. One individual interviewed likened the AI labeling center to a sweatshop with computers instead of sewing machines. OpenAI is listed among the companies participating in the exploitation by using a third-party company to hire the workers. Workers are making $2.00 per hour while Sama, the third-party company, is charging OpenAI $12.50 per hour. The third-party company is certainly to blame here, but OpenAI is just as responsible for participating in the exploitation. Beyond this, the workers report having to engage with content that is harmful to their mental health.
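To put the reported numbers in perspective, here's a quick back-of-the-envelope calculation (the dollar figures are those cited above; the arithmetic is mine):

```python
# Back-of-the-envelope math on the reported figures: workers earn $2.00/hour
# while Sama reportedly bills OpenAI $12.50/hour.
worker_wage = 2.00    # USD per hour, paid to the worker
billed_rate = 12.50   # USD per hour, charged to OpenAI

print(f"Workers receive {worker_wage / billed_rate:.0%} of the billed rate")  # 16%
print(f"Markup over wages: {billed_rate / worker_wage:.2f}x")                 # 6.25x
```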
AI Policy Beat
A look at what’s happening in the world of AI policy.
Donald Trump’s Search for an “AI Czar”
Donald Trump is looking for an “AI Czar” to partner with Elon Musk in the effort to keep the US at the leading edge of AI development. According to Axios, this czar would partner with the forthcoming “Department of Government Efficiency” (DOGE). The role would focus on both public- and private-sector AI matters and could also be combined with a “Crypto Czar” role. As of now, no candidates have been named.
👉 Why it matters: This position would not require a Senate confirmation hearing, which means the appointee could start working quickly to advance the Trump administration’s goals. It is still unclear whether Donald Trump will keep certain aspects of Biden’s AI Executive Order or start from scratch on AI policy for the federal government.
The Biden Administration Hosted AI Safety Talks as Trump Presidency Looms
The Biden administration hosted international talks in California this week to discuss AI safety measures as the Trump presidency, and his promise to overturn the AI Executive Order, looms. A major topic of note was the proliferation of AI-generated deepfakes that are fraudulent and abusive in nature, and their harmful impact.
👉 Why it matters: Safety has been a top concern for the Biden administration, which has focused on balancing safety and innovation in AI. Comments made by former (and soon-to-be) President Trump have not touched on safety, focusing squarely on innovation instead. It remains to be seen whether a Trump presidency will prioritize AI safety; in the meantime, world leaders may form tighter alliances to keep AI safe and mitigate harms as the new administration focuses on innovation and AI dominance.
NIST Launches AI Testing Taskforce
The National Institute of Standards and Technology (NIST) formed a new taskforce focused on testing AI for risks. The Testing Risks of AI for National Security (TRAINS) Taskforce brings together groups across the government to identify and manage the national security and safety challenges that AI innovation brings with it. The group includes representatives from the Department of Defense, Department of Energy, Department of Homeland Security, and the National Institutes of Health.
👉 Why it matters: AI safety challenges continue to grow and proliferate with the technology and its increasing utility. While it’s unclear if the taskforce will last through the impending Trump presidency, it’s an important step in prioritizing the safety of Americans and American AI leadership globally. The group will focus on creating methods for evaluating and measuring AI risks, as well as conducting joint national security risk assessments.
AI in Society
Deeper looks at AI in the world. This is an original article written by Paige Lord.
I Implore You: Stop Saying AI Isn’t Taking Jobs
When I ordered my Waymo on a chilly San Francisco morning, I didn’t know what to expect. I was in SF for work and I was nervous to try the autonomous vehicle, but my curiosity won the day. The car arrived - a Jaguar, no less. I fumbled with the handle, opened the door and settled into the back seat. After an introduction from a personless voice, I hit “start ride” and suddenly I was being chauffeured to the GitHub office. NOBODY was DRIVING. I repeat: Zero other humans were in the car with me. It was, in a word, luxurious. I cut an entire paragraph from this article that was essentially an ode to the Waymo. I loved everything about the experience and have shared this broadly since, to whoever might listen. But I’m an AI ethicist, and my love of innovation is checked and balanced by my consideration of the impact innovation has on very real people. Every time I booked a Waymo, I thought through the downstream effect the driverless cars have on the people of San Francisco who drive for a living - taxi, Uber and Lyft drivers, among others.
I broke with my beloved Waymo when, in the midst of my San Francisco trip, I jetted to Dallas for 18 hours to give a talk at a conference. In the talk just before mine, the speaker said something that caught my attention. “We all know that AI isn’t taking jobs.” The first time they said it, I was certain it was an accident. Surely, I thought, they don’t mean to paint with such a broad brush. But they kept saying it - six times in a 25-minute presentation. (Continue reading)
Spotlight on Research
AI and the Future of Work in Africa White Paper
This white paper is the output of a multidisciplinary workshop held in Nairobi in November 2023, led by a cross-organisational team including Microsoft Research, NEPAD, Lelapa AI, and the University of Oxford. The workshop brought together diverse thought leaders from various sectors and backgrounds to discuss the implications of generative AI for the future of work in Africa. Discussions centred around four key themes: Macroeconomic Impacts; Jobs, Skills and Labour Markets; Workers' Perspectives; and Africa-Centric AI Platforms. The white paper provides an overview of the current state and trends of generative AI and its applications in different domains, as well as the challenges and risks associated with its adoption and regulation. It represents a diverse set of perspectives to create a set of insights and recommendations that aim to encourage debate and collaborative action towards creating a dignified future of work for everyone across Africa.
Find Rhinos without Finding Rhinos: Active Learning with Multimodal Imagery of South African Rhino Habitats
Much of Earth's charismatic megafauna is endangered by human activities, particularly the rhino, which is at risk of extinction due to the poaching crisis in Africa. Monitoring rhinos' movement is crucial to their protection but has unfortunately proven difficult because rhinos are elusive. Therefore, instead of tracking rhinos, we propose the novel approach of mapping communal defecation sites, called middens, which give information about rhinos' spatial behavior valuable to anti-poaching, management, and reintroduction efforts. This paper provides the first-ever mapping of rhino midden locations by building classifiers to detect them using remotely sensed thermal, RGB, and LiDAR imagery in passive and active learning settings. As existing active learning methods perform poorly due to the extreme class imbalance in our dataset, we design MultimodAL, an active learning system employing a ranking technique and multimodality to achieve competitive performance with passive learning models with 94% fewer labels. Our methods could therefore save over 76 hours in labeling time when used on a similarly-sized dataset. Unexpectedly, our midden map reveals that rhino middens are not randomly distributed throughout the landscape; rather, they are clustered. Consequently, ranger patrols should be targeted at areas with high midden densities to strengthen anti-poaching efforts, in line with UN Target 15.7.
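For readers curious about the mechanics, the sketch below shows the basic shape of a rank-based active learning loop for a rare positive class. It is a toy illustration under my own assumptions (a scikit-learn classifier and a stand-in `label_fn` oracle), not the paper's MultimodAL system, which additionally fuses thermal, RGB, and LiDAR modalities:

```python
# Toy rank-based active learning loop for a rare positive class
# (e.g., rhino middens in aerial imagery). Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_learning_loop(X, label_fn, seed_idx, rounds=10, batch=20):
    """Assumes seed_idx contains at least one positive and one negative
    example so the classifier can learn both classes from the start."""
    labeled = set(seed_idx)                  # indices with known labels
    y = {i: label_fn(i) for i in labeled}    # oracle labels so far
    clf = None
    for _ in range(rounds):
        idx = sorted(labeled)
        clf = RandomForestClassifier(n_estimators=100)
        clf.fit(X[idx], [y[i] for i in idx])
        unlabeled = np.array([i for i in range(len(X)) if i not in labeled])
        if len(unlabeled) == 0:
            break
        # Rank unlabeled points by predicted probability of the rare
        # positive class, so scarce positives surface quickly despite
        # extreme class imbalance.
        scores = clf.predict_proba(X[unlabeled])[:, 1]
        for i in unlabeled[np.argsort(-scores)[:batch]]:
            labeled.add(int(i))
            y[int(i)] = label_fn(int(i))     # query the human oracle
    return clf, labeled
```

The ranking step is the key idea: instead of labeling random samples (which would mostly be negatives), annotators spend their time on the candidates most likely to be the rare class, which is how the paper achieves comparable performance with far fewer labels.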
Watch: Meta & The US Military
Here’s some content from my other channels. Feel free to follow me on Instagram, TikTok or YouTube.
Two weeks ago I had the privilege of connecting with Eric Kimberling to discuss Ethics and AI. Check it out!
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai
Connect with me on LinkedIn.
Looking for a speaker at your next AI event? Email thepaigelord@gmail.com.
Email thepaigelord@gmail.com if you want to connect 1:1.