AI Giants Pinky Swear (again) Not To Help Make Deepfake Smut
Some of the largest AI firms in America have given the White House a solemn pledge to prevent their AI products from being used to generate non-consensual deepfake pornography and child sexual abuse material.
Adobe, Anthropic, Cohere, Microsoft, OpenAI, and open source web data repository Common Crawl each made non-binding commitments to safeguard their products from being misused to generate abusive sexual imagery, the Biden administration said Thursday.
"Image-based sexual abuse ... including AI-generated images – has skyrocketed," the White House said, "emerging as one of the fastest growing harmful uses of AI to date."
According to the White House, the six aforementioned AI orgs all "commit to responsibly sourcing their datasets and safeguarding them from image-based sexual abuse."
Two other commitments lack Common Crawl's endorsement. Common Crawl, which harvests web content and makes it available to anyone who wants it, has previously been fingered for vacuuming up undesirable data that has found its way into AI training datasets.
However, it makes sense that Common Crawl isn't listed alongside Adobe, Anthropic, Cohere, Microsoft, and OpenAI on the commitment to incorporate "feedback loops and iterative stress-testing strategies ... to guard against AI models outputting image-based sexual abuse," as Common Crawl doesn't develop AI models.
The other commitment, to remove nude images from AI training datasets "when appropriate and depending on the purpose of the model," seems like one Common Crawl could have agreed to, but the organization doesn't collect images.
According to the nonprofit, "the [Common Crawl] corpus contains raw web page data, metadata extracts, and text extracts," so it's not clear what it would have to remove under that provision.
When asked why it didn't sign those two provisions, Common Crawl Foundation executive director Rich Skrenta told The Register his organization supports the broader goals of the initiative, but was only ever asked to sign on to the one provision.
"We weren't presented with those three options when we signed on," Skrenta told us. "I assume we were omitted from the second two because we do not do any model training or produce end-user products ourselves."
The (lack of) ties that (don't) bind
This is the second time in a little over a year that big-name players in the AI space have made voluntary concessions to the Biden administration, and the trend isn't restricted to the US.
In July 2023, Anthropic, Microsoft, OpenAI, Amazon, Google, Inflection, and Meta all met at the White House and promised to test models, share research, and watermark AI-generated content to prevent it being misused for things like non-consensual deepfake pornography.
There's no word on why some of those other companies didn't sign yesterday's pledge, which, like the 2023 one, is voluntary and non-binding.
- Microsoft teases deepfake AI that's too powerful to release
- Deepfakes being used in 'sextortion' scams, FBI warns
- MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs
- US AGs: We need law to purge the web of AI-drawn child sex abuse material
It's similar to the AI safety pact several countries signed in the UK last November, which was followed by a deal in South Korea in May under which 16 companies agreed to pull the plug on any machine-learning system showing signs of being too dangerous. Both agreements are lofty and, like those out of the White House, entirely non-binding.
Deepfakes continue to proliferate, targeting average citizens and international superstars alike. Experts, meanwhile, are more worried than ever about AI deepfakes and misinformation ahead of one of the largest global election years in modern history.
The EU has approved far more robust AI policies than the US, where AI companies seem more likely to lobby against formal regulation, with support from some elected officials who favor a light-touch approach.
The Register has asked the White House about any plans for enforceable AI policy. In the meantime, we'll just have to wait and see how more voluntary commitments play out. ®