FTC Asks Normal Folks If They'd Like AI Impersonation Scam Protection, Too

The FTC is moving to make fraudulent AI impersonation of government agencies and businesses illegal, and is now asking the American public if they'd like some protection too.

The US consumer watchdog announced as much on Thursday, alongside the introduction of a final rule that will give the Commission the ability to directly file federal lawsuits against AI impersonation scammers who target businesses and government agencies. The changes will also make it possible for the agency to target the makers of the code used in such scams more quickly.

The initial proposal doesn't cover the impersonation of private individuals, however. So the FTC is releasing this [PDF] supplemental notice asking for public comment on whether they should be covered by the new rules as well.

"Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale," said FTC chair Lina Khan. "With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever." 

Beyond simply making it illegal to impersonate another individual to commit fraud, the proposal also includes a provision to hold businesses accountable for misuse of technology they create.  

The so-called "means and instrumentalities" provision in the proposal would allow the FTC to hold accountable companies that create AI tech capable of impersonating people, if they "had reason to know that the goods and services they provided will be used for the purpose of impersonations," the FTC said.

Despite the provision to hold developers accountable for misuse of their tech, it's not clear who could be prosecuted, or to what extent.

According to the proposal, it's illegal for a scammer to call or message a person while posing as another individual, send physical mail misrepresenting an affiliation, create a website, social media profile, or email address impersonating a person, or place ads that pose as a person or their affiliates.

Whether the orgs transmitting fraudulent messages could be held liable, along with companies that facilitate the creation of AI voices and video, isn't clear. We've asked the FTC for clarification, but haven't heard back.

The FCC made its own moves to combat AI impersonation earlier this month, deciding that it was illegal to use AI-generated voices in robocalls. Unlike this newly-proposed FTC rule, the FCC simply clarified that existing telephone consumer protection laws covered the use of AI-generated voices. ®
