"Nudify" Apps and Lockheed Martin's New AI Adoption Business
Your trusted source for insights on the world of responsible AI and AI policy. December 17th, 2024. Issue 42.
A Few Words:
This is the final Just AI Newsletter of 2024; I won't publish on December 24th or December 31st, so that people can enjoy the holidays. HAPPY NEW YEAR!
Don’t want to read? Listen instead. I recap the whole newsletter here 👇
Quick Hits 👏
Interesting AI news from the week.
The Rise of SoundHound
Does AI Think? Here’s One Theory
OpenAI’s o1 Model is Found Trying to Deceive Humans
*Paige works for GitHub, a Microsoft Company
Notable AI Ethics News
“Nudify” AI Apps: Fresh, Hellacious Violations
AI-powered "nudify" websites, such as Clothoff, use artificial intelligence to create realistic nude images from photos of clothed individuals without their consent. These sites often lack transparency regarding ownership and employ deceptive payment methods, including redirect sites, to process transactions. Despite disclaimers against using images of minors, enforcement is minimal, leading to the creation and online sharing of explicit images of underage individuals. This misuse of AI technology raises significant ethical and legal concerns, highlighting the need for stricter regulations and oversight to prevent non-consensual exploitation.
👉 Why it matters: This use of AI technology is egregious in more ways than one, and it's a top example of how lagging laws are leaving room for people to make significant profit from the exploitation and violation of others - in many cases, children. At present, there does not appear to be any major conversation among lawmakers about putting an end to these apps. Because these apps are privately owned and no major AI laws keep them from operating, they're essentially able to continue doing business unhindered.
AI’s Helping Hand for Cyber Criminals
Researchers have uncovered a sophisticated malware called Realst that masquerades as video-calling software. Hackers use AI to build fake websites, social media profiles, and AI-generated content to lend credibility to their malicious campaign. Unsuspecting users download the malware, which steals sensitive personal data and cryptocurrency without detection. The malware targets both macOS and Windows, posing significant cybersecurity risks. Experts emphasize staying vigilant, verifying sources, and avoiding software downloads from untrusted websites to protect against such AI-powered deception.
👉 Why it matters: As AI becomes more accessible and prevalent in people's everyday lives, it's also lending cyber criminals a hand. The malware technology isn't altogether new, but AI is used to lend legitimacy to the scheme so that people are less likely to realize they're being scammed. Everyone should remember the basic guidelines of internet safety - check that an email comes from a legitimate address, don't click links in emails or pop-ups, and if you're not sure about something, ask a reputable source before downloading anything.
Study Investigates if LLMs are Equitable Enough for Mental Health Support
The study, conducted by MIT, NYU, and UCLA researchers, evaluates AI chatbots like GPT-4 for their equity and empathy in providing mental health support. Using Reddit data, clinical psychologists assessed AI responses, finding that GPT-4 showed higher overall empathy and was better at encouraging behavioral changes than human responses. However, GPT-4 exhibited racial bias, providing less empathetic responses for Black (2-15% lower) and Asian (5-17% lower) users, though explicit instructions to consider demographics reduced this bias.
👉 Why it matters: Mental health apps leveraging AI are beginning to pop up, but leveraging AI to help humans navigate humanity is a complicated subject. Before mental health apps leverage an AI model, they should understand the challenges and potential biases and harms that can come from any given LLM. This study highlights both AI’s potential in mental health support and the need for improvements to ensure equitable responses across demographics. This research can help mental health AI companies adjust the underlying model to better serve their customers - fairly and equitably.
Google Gifts HBCU $1M to Shape the Future of AI
North Carolina Central University (NCCU) has received a $1 million grant from Google to establish the nation's first HBCU-based AI Institute. Led by Dr. Siobahn Day Grady, the initiative aims to impact 200 students over two years through research and mentorship, preparing them to lead in the AI industry. Google's Lilyn Hester emphasized the need for diversity in tech, noting NCCU's existing AI involvement and the importance of increased representation of women and people of color in the field.
👉 Why it matters: In the world of business and in the world of AI, representation matters, because the more perspectives there are in the room, the better the outcome usually is. Google's $1M investment in North Carolina Central University is a significant show of support for ensuring that the AI job pipeline has diverse talent, leading to increased representation. This is a meaningful move for Google - and dare I say - a win.
Happenings in AI Policy
Donald Trump Meets with SoftBank CEO to Discuss Investment in AI Projects
SoftBank CEO Masayoshi Son and President-elect Donald Trump have announced a $100 billion investment in U.S. technology projects over the next four years, focusing on artificial intelligence (AI) infrastructure. This initiative aims to create approximately 100,000 jobs, doubling Son's 2016 commitment of $50 billion and 50,000 jobs. The investment is expected to bolster the U.S. economy and aligns with Trump's agenda to stimulate job growth and technological advancement. The funds may be sourced from SoftBank's Vision Fund, capital projects, or its subsidiary Arm Holdings.
👉 Why it matters: As the new administration prepares to take the helm, the AI industry continues to grow, expand, and change at great speed. All eyes are on Donald Trump to see how he and his "AI Czar" promote AI and strike deals in a way that serves US interests and the safety of US citizens and residents. Much is unknown, but a major investment and an increase in jobs would be a significant win - if it comes to fruition as planned.
Lockheed Martin’s Focus on Helping Defense Companies Adopt AI
Lockheed Martin has established a subsidiary, Astris AI, to assist U.S. defense companies in integrating artificial intelligence into their operations. While AI adoption has accelerated across various sectors, defense firms have been cautious due to the sensitive nature of their data. Astris AI will also explore AI applications in select commercial areas. Donna O'Donnell, formerly of Xerox, will lead the subsidiary. This initiative aligns with anticipated government efficiency efforts, potentially fostering increased collaboration between major defense contractors and tech firms specializing in AI and autonomous systems.
👉 Why it matters: AI adoption is a major challenge for companies and government entities alike. Part of the challenge is resistance to the technology, and part is simply getting the workforce to adjust to a new way of working. It can also be difficult for companies and government departments to know which AI to take on, where their dollars will be best spent, and how to maximize benefit. Ultimately, Lockheed Martin sees tremendous benefit (not to mention profit) in the power of AI adoption and has decided to address the problem within the defense sector. This may also be a move in alignment not just with trends, but with the incoming administration. It's important to keep watch over how Lockheed Martin prioritizes safety in AI deployment and adoption.
Spotlight on Research
The growing use of AI conversational agents for productivity and well-being highlights the need to address their psychological risks, often underrepresented in existing research. This study introduces a novel psychological risk taxonomy based on lived experiences, using a mixed-method approach with 283 survey participants and workshops with mental health experts. The taxonomy identifies 19 AI behaviors, 21 negative psychological impacts, and 15 user contexts. A multi-path vignette framework illustrates the interplay between AI behaviors, impacts, and contexts. Design recommendations are provided to develop safer AI agents, offering actionable insights for policymakers, researchers, and developers to mitigate psychological risks.
Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes
The integration of AI into HR processes transforms recruitment, performance evaluation, and employee engagement, offering benefits like efficiency and bias reduction but raising concerns about job security, fairness, and transparency. This study reveals that AI can both enhance and undermine employee well-being, depending on implementation and perception. Transparency is critical for fostering trust and positive attitudes. The AI-Employee Well-being Interaction Framework illustrates AI's influence on perceptions, behaviors, and outcomes. Strategies like clear communication, upskilling, and employee involvement are vital for mitigating risks. Successful AI integration requires a balanced approach prioritizing well-being, ethical practices, and human-AI collaboration.
Watch: Sora and its human scandal
Here's some content from my other channels. Feel free to follow me on Instagram, TikTok, or YouTube.
Let’s Connect:
Find out more about our philosophy and services at https://www.justaimedia.ai
Connect with me on LinkedIn.
Looking for a speaker at your next AI event? Email: hello@justaimedia.ai
Email thepaigelord@gmail.com if you want to connect 1:1.
How AI was used in this newsletter:
I personally read every article I reference in this newsletter, but I’ve begun using AI to summarize some articles.
I DO NOT use AI to write the ethical perspective - that comes from me.
I occasionally use AI to create images.