If Scammers Use Your AI Code To Rip Off Victims, The FTC May Want A Word

America's Federal Trade Commission has warned it may crack down not only on companies that use generative AI tools to scam folks, but also on those making the software in the first place, even if those applications were not created with fraud in mind.

Last month, the watchdog tut-tutted at developers and hucksters overhyping the capabilities of their "AI" products. Now the US government agency is wagging its finger at those using generative machine-learning tools to hoodwink victims into parting with their cash and suchlike, as well as at the people who made the code to begin with.

Commercial software and cloud services, as well as open source tools, can be used to churn out fake images, text, videos, and voices on an industrial scale, which is all perfect for cheating marks. Picture adverts featuring convincing but faked celebrity endorsements; that kind of thing is on the FTC's radar.

"Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals," Michael Atleson, an attorney for the FTC's division of advertising practices, wrote in a memo this week.

"The FTC Act's prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that's not its intended or sole purpose."

And to be clear, there are no new rules or regulations at play here: it's just the FTC doing its usual thing of reminding people that today's tech fads are still covered by consumer protection laws, in the US at least.

Atleson highlighted the following scenarios that the FTC will find problematic:

Making generative AI: The legal eagle questioned whether we need ML models capable of producing content so realistic that it would fool people. "If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable – and often obvious – ways it could be misused for fraud or cause other harm," he noted. "Then ask yourself whether such risks are high enough that you shouldn't offer the product at all."

Atleson also urged developers to take all possible steps before the launch of a generative AI model to slash the risk of the software being used to con victims. He also warned against relying on detection engines to pick up abusive use of the technology, as these detectors can be sidestepped by smart miscreants.

"The burden shouldn't be on consumers, anyway, to figure out if a generative AI tool is being used to scam them," he added.

Finally, he reminded everyone that scamming people using AI models is still scamming:

To us, it all boils down to: breaking the law using some new-fangled model is still breaking the law. And if you just make tools that aid this kind of crime, don't think you're somehow immune from prosecution. ®

Apropos of nothing... Firefox maker Mozilla this week announced Mozilla.ai, a startup with $30 million in funding that's aiming to build "a trustworthy, independent, and open-source AI ecosystem."
