Open Source AI vs US Policy: A Dance
Your trusted source for insights on the world of responsible AI and AI policy. November 13th, 2024. Issue 36.
An update from Paige
This week I bundled all of my responsible AI multimedia efforts (including this newsletter), as well as my consulting and speaking services, under the umbrella of my news business Just AI Media, so named because the goal of all my endeavors remains the same: to evaluate and question the intersection of “justice” and “AI,” promoting digital and AI literacy and responsible AI in the process. This has been a long time coming, and I’m excited to share it with you today.
So what does this mean?
This newsletter will be the Just AI Newsletter moving forward, and you’ll see a mini rebrand.
I (Paige) am still writing the newsletter every week.
The newsletter will remain free. That’s a promise. Read more about the Just AI business model here.
Now let’s move on to the business of the week!
Quick Hits 👏
Interesting AI news from the week that I won’t explore in-depth, but is important to acknowledge in the current AI moment.
The European Commission is Hosting an Open Call for Consultations on Aspects of the EU AI Act
SoundHound AI is Dominating the AI Game in the Quick-Service Restaurant Industry
SoftBank Gets First Dibs on Nvidia’s New AI Chip
AI Ethics News
Notable news in the world of AI ethics and responsible AI.
UN AI Advisory Body Co-Chair Pushes Back Against an International Agency for AI
Carme Artigas, Co-Chair of the UN’s AI Advisory Body, has voiced concerns about the formal creation of an international AI agency. The primary reason for the pushback is that so many existing groups, treaties, and guidelines already influence how UN members navigate AI. Artigas indicated that matters relating to AI could be handled by already established entities, like the Geneva Convention or the World Intellectual Property Organization.
👉 Why it matters: Creating an international AI agency would certainly come with challenges, but one challenge the world faces without one is the continued inequity caused by AI in parts of the world that cannot meaningfully implement the technology. While hesitation to create an international AI agency under the UN umbrella is understandable, there may come a time when one is necessary, especially as disparate efforts across treaties and organizations tend to lead to communication breakdowns.
Translation in AI Heats Up - But is it Ethical?
The company Unbabel has created a new proprietary large language model (LLM) named Tower, which supports AI translation across 32 languages. The company, a competitor of Google Translate among others, believes that its translation product, Widn.AI, will soon no longer need humans.
👉 Why it matters: Statements of this nature are incredibly ambitious, especially given that there are over 7,000 languages spoken in the world. Of course, roughly 90% of languages are spoken by fewer than 100,000 people, but that still leaves around 700 languages with more than 100,000 speakers.
Continued innovation in this space comes with pros and cons. On the positive end of the spectrum, there is a good chance that LLMs and the data that feeds them will help identify and preserve languages, and even track language change over time. On the negative end of the spectrum, this will hurt those employed in the translation industry. It also paints a false image of language dominance, when the technology itself covers fewer than 0.5% of the known languages on the planet (32 of more than 7,000).
AI Policy Beat
A look at what’s happening in the world of AI policy.
Denmark Launches Framework Guiding Public & Private Sector Entities to Use AI Assistants in Compliance with the EU AI Act
Compliance with the newly in-force EU AI Act is paramount for those wishing to conduct AI business in the EU. The government of Denmark partnered with Netcompany to launch “Responsible Use of AI Assistants in the Public and Private Sector,” a white paper that guides these entities through their use of generative AI given the complexity of the regulation. The white paper prioritizes compliance with both the EU AI Act and the General Data Protection Regulation (GDPR). According to CNBC, Microsoft is “on board” with the Danish approach to generative AI use in light of the AI Act, and it appears that Microsoft* has been a frontrunner in signing up to leverage the guidelines as it does business in the EU.
👉 Why it matters: The EU AI Act largely entered into force in August of 2024 and puts strict requirements on how AI can be marketed, built and used. The responsible AI field is flooded with frameworks for leveraging AI in all manner of situations, but this appears to be the first framework that can broadly guide large companies through the process of complying with the AI Act. This is excellent for AI companies looking to do business in the EU, as a violation of the act could result in a hefty fine to the tune of tens - or even hundreds - of millions of dollars. The heavy restrictions of the EU AI Act serve to protect the EU from disjointed distribution and use of AI, and to protect citizens from AI’s potential to have an overly broad reach in terms of privacy and data collection.
*Paige is an employee of GitHub, which is owned by Microsoft.
US Policy & Open Source AI - at Odds?
In early November, Reuters reported that researchers in China used one of Meta’s Llama models for military purposes. The Llama models are among the few frontier models on the global market that are open source or “open weight,” which means the information governing how the models behave is open to anyone who chooses to examine or use the model. It also means that safety measures built into the AI to protect users can be removed. Once the news was reported, the Chairman of the House Foreign Affairs Committee, Michael McCaul, indicated his belief that prohibiting American AI developers from releasing open-weight models was critical to national security, and proposed the ENFORCE Act to make this belief a reality. The Chairman’s logic appears to be that keeping American AI models closed reduces the risk that China, and other global adversaries in the AI race, will use the technology to gain a military leg up.
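For readers newer to the terminology, “open weight” means the trained model parameters themselves are published for download. As a rough illustration only (this assumes the Hugging Face transformers library and uses one example Llama checkpoint ID; Meta gates Llama downloads behind a license acceptance, and the weights are many gigabytes):

```python
# Illustrative sketch: running an open-weight model locally with the
# Hugging Face `transformers` library. The checkpoint ID below is one
# example of a publicly released Llama model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # example open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the weights are on a user's own machine, nothing technically
# prevents fine-tuning away built-in safety behavior, which is the
# risk the Reuters report raises.
inputs = tokenizer("Open-weight models can be run by anyone:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The key point: unlike a closed model accessed through an API, there is no server-side gatekeeper once the weights are downloaded.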
👉 Why it matters: Open source has long been critical to innovation the world over. So it was with software, and so it is with AI. Blocking US companies and developers from creating open source AI models is a dangerous move that could have a negative impact on US security interests. For one, it would undercut the US lead in open source contributions. It would also compromise the ability of non-adversarial actors to leverage AI, forcing those developers to build and train their own AI, the costs of which are prohibitive for most. Finally, as mentioned in the article, China has its own well-performing AI models; the use of an early iteration of the Llama model isn’t the threat that it appears to be. Requiring US open source AI to become closed would have a similar impact on innovation as over-regulation, stifling the ability of American people and companies to improve AI and hindering global adoption of the technology. It would almost certainly work in opposition to US national security and global AI leadership goals.
Meta Allows the U.S. Military to Use AI Models
In a shift from its prior policy positions, Meta now allows the US government to leverage its Llama AI models for military purposes. The Llama models, which are open source, will be used for “responsible and ethical uses” supporting “democratic values,” according to Meta’s president of global affairs, who noted that “widespread adoption of American open source AI models serves both economic and security interests.”
👉 Why it matters: This shift in policy from Meta calls into question whether Meta’s vision for “responsible and ethical uses” aligns with that of the US military. Meta, a US company, will not have oversight into how the technology will be used, and it is unlikely that the US military would confer with Meta before leveraging the technology. However, allowing the US military to leverage the Llama models will help the US to remain ahead in the AI race and in military AI innovation. According to a report from Reuters, China has leveraged Meta’s Llama models for military purposes, which likely influenced this change in policy by Meta.
AI in Society
This section will highlight new and interesting uses of AI, so you can stay up-to-date on how the technology is changing.
Large Language Model Innovation Hits Roadblocks
The rate of innovation for large language models, which power generative AI, is slowing, leading AI companies to explore new ways to innovate and optimize. In the AI race, one major goal is constant improvement of the models in question. The size, training, data input, energy consumption and other factors have significant effects on how a language model performs.
Model innovation is being slowed by a variety of factors. The first is the massive cost of pre-training a model, the initial large-scale training phase that establishes a model’s core capabilities; it can cost tens of millions of dollars to pre-train a single model. The second is the scarcity of resources like energy and the AI chips needed to power the models. The third is that “AI models have exhausted all the easily accessible data in the world,” according to Reuters.
To navigate these challenges, companies are working to replicate the way OpenAI’s o1 models perform: allowing more time for inference and reasoning to produce a higher quality outcome. This method relies on training that occurs on top of the base model (in o1’s case, GPT-4) rather than making the base model bigger or creating a new base model altogether.
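OpenAI hasn’t published exactly how the o1 models spend their extra “thinking” time, so take the following as an illustration of the general idea rather than their method. One widely discussed inference-time technique is best-of-n sampling: generate several candidate answers and keep the highest-scoring one. Here is a minimal Python sketch, with hypothetical placeholder functions (generate_answer, score_answer) standing in for real model and verifier calls:

```python
# Illustrative sketch of spending more compute at inference time:
# sample several candidate answers, then keep the best-scoring one.
import random

def generate_answer(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here.
    return f"candidate answer {random.random():.2f}"

def score_answer(prompt: str, answer: str) -> float:
    # Placeholder: a real system might use a reward model or a verifier.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # More samples means more inference-time compute, and a better
    # expected answer, without changing the underlying base model.
    candidates = [generate_answer(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score_answer(prompt, a))

print(best_of_n("Summarize the EU AI Act in one sentence."))
```

The trade-off is simple: n times the inference compute per query in exchange for a better expected answer, with no change to the base model itself.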
👉 Why it matters: The pace of technological (AI) innovation has been unbelievably fast compared to previous industrial revolutions. Within two years of ChatGPT’s initial public release, OpenAI has made myriad modifications and released several versions of its base models. To see a slow-down in innovation so soon after the initial launch, and to comprehend the sheer amount of resources needed to create these models, is nearly unbelievable. As the world moves forward with AI innovation, we can expect to see increased competition as AI companies fight to find the best way to optimize current models. This could include (even more) competition for resources like AI chips, data, and energy, all limited resources under strain, among other things. OpenAI maintains the lead with the launch of its o1-preview and o1-mini models, leading others like Anthropic to try to recreate its success. Continued focus on innovation that uses more and more resources could negatively impact individuals, and keep the barrier to entry for AI innovation unreasonably high.
Spotlight on Research
Navigating AI in Social Work and Beyond: A Multidisciplinary Review
This review began as a brief exploration of how artificial intelligence (AI) intersects with social work but expanded to examine AI’s profound influence as a transformative innovation. Drawing from interdisciplinary perspectives, it situates AI within broader societal conversations, analyzing its real-world impacts, ethical challenges, and implications for social work. The review envisions AI-driven tools like Advanced Personalised Simulation Training (APST) revolutionizing social work education, while critically balancing AI’s advancements with caution about its complexities and challenges. (AI was used to shorten this abstract.)
Persuasion with Large Language Models: a Survey
The rise of Large Language Models (LLMs) has transformed persuasive communication by enabling automated, personalized content generation at scale. This paper surveys the emerging field of LLM-based persuasion, examining their use in politics, marketing, public health, and beyond, where they achieve human-level or superhuman effectiveness. While highlighting their potential, we also address the ethical and societal risks they pose, such as misinformation, bias, and privacy invasion, emphasizing the urgent need for ethical guidelines and updated regulations.

Traditional drug discovery is a long, expensive, and complex process. Advances in Artificial Intelligence (AI) and Machine Learning (ML) are beginning to change this narrative. Here, we provide a comprehensive overview of different AI and ML tools that can be used to streamline and accelerate the drug discovery process. By using data sets to train ML algorithms, it is possible to discover drugs or drug-like compounds relatively quickly and efficiently. Additionally, we address limitations in AI-based drug discovery and development, including the scarcity of high-quality data to train AI models and ethical considerations. The growing impact of AI on the pharmaceutical industry is also highlighted. Finally, we discuss how AI and ML can expedite the discovery of new antibiotics to combat the problem of worldwide antimicrobial resistance (AMR).
Watch: Responsible AI with NNgroup
Here’s some content from my other mediums. Feel free to follow me on Instagram, TikTok or YouTube.
Two weeks ago I had the privilege of connecting with Eric Kimberling to discuss Ethics and AI. Check it out!
Let’s Connect:
Connect with me on LinkedIn.
Looking for a speaker at your next AI event? Email thepaigelord@gmail.com.
Email thepaigelord@gmail.com if you want to connect 1:1.