Meta, Nvidia, and xAI Build and Expand AI Facilities - Some More Ethically Than Others
Your trusted source for insights on the world of responsible AI and AI policy. December 5th, 2024. Issue 39.
Quick Hits 👏
Interesting AI news from the week.
Google’s New AI Agent Predicts 15-Day Weather Forecast, Beats World-Class System
Google Announces New GenAI Models
AWS Tackles AI Hallucinations
*Paige works for GitHub, a Microsoft Company
AI Ethics News
Notable news in the world of AI ethics and responsible AI.
Apple is using Amazon’s AI Chips
Apple has disclosed its use of Amazon Web Services' (AWS) custom AI chips, including Inferentia and Graviton, for services such as search, gaining a 40% efficiency improvement. At AWS's annual re:Invent conference, Apple also announced plans to evaluate Amazon's Trainium2 chip for pretraining its AI models, potentially improving efficiency by 50%. Apple’s partnership with AWS, alongside its use of other cloud services like Google Cloud, signals a diversification in AI training infrastructure and endorses AWS as a competitor to Nvidia. This collaboration supports Apple's growing generative AI initiatives, including the "Apple Intelligence" services launched earlier this year.
👉 Why it matters: This move signals that Amazon is a major competitor to Nvidia in the AI chip race. New entrants diversify the market, increasing competition for the chips and potentially driving prices down. A single AI chip from Nvidia can cost anywhere from $10,000 to $40,000, so a reduction in price would make AI development more accessible. No matter where the chips are made, there are questions about the impact on the environment: the higher the chip performance, the more power it generally needs. With businesses clamoring for AI chips, it’s important to watch for increased energy consumption and work toward more energy-efficient solutions.
Amazon introduces Amazon Nova, new foundation models
Amazon has launched Amazon Nova, a new generation of foundation models (FMs)—large AI models pre-trained on vast datasets that can be fine-tuned for versatile applications across industries. Nova includes models for text, image, and video tasks, optimized for speed, cost-effectiveness, and customization through fine-tuning and Retrieval Augmented Generation (RAG). Integrated with Amazon Bedrock, Nova supports creative content generation and advanced AI applications. Future plans include speech-to-speech and multimodal "any-to-any" models, reinforcing Amazon's focus on innovation.
👉 Why it matters: With its new models, Amazon provides “AWS AI Service Cards,” which offer information on use cases, model limitations, and the responsible AI practices applied. Amazon touts supervised fine-tuning to give the models guardrails, and watermarking to indicate whether images were AI generated. While AI models in general are not “new news,” this marks a significant investment from Amazon, which appears to be joining the AI race from every angle: cloud AI services, chip creation, and model offerings. Amazon frames responsible AI across eight dimensions, providing broad coverage for responsible building, use, and deployment. As with any AI provider, it’s important to note what they say about privacy, security, and responsible AI, and watch what they do. For the curious, a minimal sketch of calling a Nova model follows.
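For readers who want to see what using one of these models looks like in practice, here’s a minimal sketch that calls a Nova model through Amazon Bedrock’s Converse API via boto3. The model ID, region, and prompt are illustrative assumptions rather than an official example; check AWS’s documentation for current model IDs and regional availability.

```python
import boto3

# Assumed region and model ID for illustration; verify against AWS docs.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # assumed Nova model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Write a two-line tagline for a reusable water bottle."}],
        }
    ],
    inferenceConfig={"maxTokens": 200, "temperature": 0.7},
)

# The Converse API returns the generated message under output.message.content.
print(response["output"]["message"]["content"][0]["text"])
```

Part of Bedrock’s appeal is that the same Converse call works across the models it hosts, so trying a different model is largely a matter of changing the modelId.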
Music sector workers projected to lose 1/4 of income to AI, according to global study
A global study by the International Confederation of Societies of Authors and Composers (CISAC) warns that AI could reduce incomes for music sector workers by nearly 25% within four years, as the generative AI market grows from €3 billion to €64 billion by 2028. While AI enriches tech companies, it risks undermining creators' rights and incomes unless strong regulations are implemented. The study highlights unauthorized use of creative works by AI and shrinking opportunities for human creators as key concerns. Australia and New Zealand are leading efforts to establish AI policies that protect creators, with calls for global action to safeguard creativity and culture.
👉 Why it matters: Generative AI’s ubiquity continues to signal that jobs are at risk across myriad industries. From AI leveraging artists’ works without authorization to performing tasks normally done by humans, the music industry is in for an adjustment. While some of these challenges (such as unauthorized use of works) are legal issues, others stem from the changing of the times and the natural evolution of technology across a variety of fields. This study adds to the growing body of evidence indicating a sea change across industries, where workers will have to decide whether they’ll evolve with the technology and learn to leverage it, or choose not to.
AI Policy Beat
A look at what’s happening in the world of AI policy.
Nvidia & the Vietnamese Government form a Partnership
Nvidia has signed a cooperation agreement with the Vietnamese government to support artificial intelligence (AI) development in Vietnam. While details of the agreement were undisclosed, Nvidia CEO Jensen Huang emphasized AI as a key driver of growth, referring to it as "new infrastructure." The signing took place in Hanoi, attended by Vietnam's Prime Minister Pham Minh Chinh. This follows Nvidia's ongoing efforts to expand partnerships in Vietnam, including a $200 million AI factory project announced by Vietnamese tech firm FPT earlier this year. The agreement highlights Vietnam's increasing focus on advancing its AI capabilities.
👉 Why it matters: Vietnam is considered a lower-middle-income developing country. The country underwent a transformation from a centrally planned economy to a market-oriented one: where all major economic decisions were formerly made by the government, the economy is now driven by supply and demand, with minimal government intervention. It’s hard to say what the agreement will do for Vietnam given that the details are still under wraps, but this could be a significant win for the country, bringing jobs and new opportunities for growth. AI partnerships in developing countries can open a country and region up to significant growth and change. However, partnerships like these can also expose vulnerable job-seekers to exploitation, which is something to watch.
Meta plans to build a new AI datacenter in Louisiana
Meta has announced a $10 billion investment to build its largest-ever data center in northeast Louisiana, marking a significant milestone for the state. The 4-million-square-foot facility, spanning 2,250 acres, will serve as a hub for artificial intelligence operations and is expected to create over 5,500 jobs, including 500 permanent positions. Governor Jeff Landry described the project as "transformational," positioning Louisiana as a leader in technology infrastructure. In addition to the data center, the initiative includes three new energy power plants to meet the facility's substantial energy demands, further cementing Louisiana's role in the AI and tech industry. Construction is already underway, with a focus on local hiring and economic growth.
👉 Why it matters: Meta has taken advantage of significant tax credits in Louisiana to build its next, and largest, datacenter, which will focus specifically on AI. It appears Meta has worked in tight partnership with Louisiana’s government to ensure the deal benefits the community and not just Meta, making it a win. The construction also includes three new power plants, meaning Meta has considered the impact on the local power grid and sought to solve that problem so it won’t negatively affect the people of Louisiana. In contrast to similar work being conducted by xAI in Memphis (see below), it appears that Meta has taken a responsible approach to building this facility.
xAI is expanding its Memphis-based supercomputer
Elon Musk's AI startup, xAI, plans to significantly expand its Memphis-based supercomputer, Colossus, from 100,000 to over one million GPUs, bolstering its efforts to compete with rivals like OpenAI. Nvidia, Dell, and Super Micro will support the expansion by establishing operations in Memphis to supply and assemble hardware.
👉 Why it matters: This project raises several concerns. First is the energy demand, which is not unique to this situation but is being seen across the entire AI landscape. Most concerning is the way xAI has gone about creating the facility, which is located near predominantly Black neighborhoods, and it’s reported that those communities have dealt with health risks and pollution from facilities like these. Lawyer Patrick Anderson of the Southern Environmental Law Center stated that xAI has operated with “a stunning lack of transparency” as it has built and conducted business at the facility. If a facility, AI or otherwise, negatively impacts the community it’s in, it’s a problem. Historically, communities of color and low-income communities have borne the brunt of “expansion” and “innovation.” My conclusion: xAI needs to hire someone to conduct its AI business more responsibly.
Spotlight on Research
Large Language Models (LLMs) have been increasingly adopted by professionals for work tasks. However, using LLMs also introduces compliance risks relating to privacy, ethics, and regulations. This study investigated the compliance risks professionals perceive with LLM use and their risk mitigation strategies. Semi-structured interviews were conducted with 24 law, healthcare, and academia professionals. Results showed that the main compliance concerns centered around potential exposure to sensitive customer/patient information through LLMs. To address risks, professionals reported proactively inputting distorted data to preserve privacy. However, full compliance proved challenging, given the complex interactions between user inputs, LLM behaviors, and regulations. This research provides valuable insights into designing LLMs with built-in privacy and risk controls to support professionals' evaluation and adoption of emerging AI technologies while meeting compliance obligations.
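The “distorted data” mitigation the interviewees describe can be made concrete with a small sketch: a pre-prompt redaction pass that swaps sensitive spans for placeholders before text ever reaches a third-party LLM. The patterns and function names below are illustrative assumptions, not from the study; real compliance workflows would need far more robust PII detection.

```python
import re

# Illustrative patterns only; production systems would need broader
# coverage (names, medical record numbers, case numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def distort(text: str) -> str:
    """Replace sensitive spans with labeled placeholders, mirroring the
    'distorted data' strategy professionals reported using."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the intake note for J. Doe, reachable at jdoe@example.com or 555-867-5309."
print(distort(prompt))
# -> Summarize the intake note for J. Doe, reachable at [EMAIL] or [PHONE].
```

Note the limit the study itself points out: even distorted inputs interact with LLM behaviors and regulations in ways that make full compliance hard to guarantee.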
Clinnova Federated Learning Proof of Concept: Key Takeaways from a Cross-border Collaboration
Clinnova is a collaborative initiative by France, Germany, Switzerland, and Luxembourg to advance precision medicine using data federation, standardization, and AI. Focused on diseases like multiple sclerosis (MS), it aims to enhance personalized treatments and healthcare efficiency through federated learning (FL) and digital health innovation. Led by IHU Strasbourg, Clinnova's MS project uses FL to develop models for detecting disease progression and validating biomarkers. Its first cross-border federated proof of concept on MRI segmentation marks a milestone in advancing AI-driven healthcare while addressing technical and ethical challenges.
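Federated learning, the technique at the heart of Clinnova’s proof of concept, is worth a quick illustration: each site trains on data that never leaves its jurisdiction, and only model parameters travel to be averaged. Below is a generic FedAvg sketch in NumPy on synthetic data; it is not Clinnova’s actual pipeline (which targets MRI segmentation), just the basic mechanics under simplified assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training (simple logistic regression).
    The raw data X, y never leave the site; only weights are shared."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))          # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)      # gradient step
    return w

def federated_average(site_weights, site_sizes):
    """FedAvg: average site parameters, weighted by dataset size."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Synthetic stand-ins for four countries' cohorts of different sizes.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(n, 3)), rng.integers(0, 2, size=n).astype(float))
         for n in (120, 80, 60, 40)]

global_w = np.zeros(3)
for _ in range(10):  # communication rounds between server and sites
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])

print("global model weights:", global_w)
```

The real system faces challenges this toy version omits: heterogeneous imaging protocols, secure aggregation, and the cross-border governance questions the paper discusses.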
Watch: Meta & The US Military
Here’s some content from my other channels. Feel free to follow me on Instagram, TikTok, or YouTube.
Two weeks ago I had the privilege of connecting with Eric Kimberling to discuss Ethics and AI. Check it out!
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai
Connect with me on LinkedIn.
Looking for a speaker at your next AI event? Email thepaigelord@gmail.com.
Email thepaigelord@gmail.com if you want to connect 1:1.
How AI was used in this newsletter:
I personally read every article I reference in this newsletter, but I’ve begun using AI to summarize some articles.
I DO NOT use AI to write the ethical perspective - that comes from me.
I occasionally use AI to create images.