In Fighting Deep Fakes, Mice May Be Great Listeners

Image caption Researchers hope mice may be able to hear irregularities the human ear might miss

There may be a new weapon in the war against misinformation: mice.

As part of the evolving battle against “deep fakes” - videos and audio of famous figures, created using machine learning and designed to look and sound genuine - researchers are turning to new methods in an attempt to get ahead of the increasingly sophisticated technology.

And it’s at the University of Oregon’s Institute of Neuroscience where one of the more outlandish ideas is being tested. A research team is working on training mice to detect irregularities in speech, a task the animals can perform with remarkable accuracy.

It is hoped that eventually the research could be used to help sites such as Facebook and YouTube detect deep fakes before they are able to spread online - though, to be clear, the companies won’t need their own mice.

“While I think the idea of a room full of mice in real time detecting fake audio on YouTube is really adorable,” says Jonathan Saunders, one of the project’s researchers, “I don't think that is practical for obvious reasons.

“The goal is to take the lessons we learn from the way that they do it, and then implement that in the computer.”

Mice categorisation

Mr Saunders and his team trained their mice to recognise a small set of phonemes, the sounds we make that distinguish one word from another.

“We've taught mice to tell us the difference between a ‘buh’ and a ‘guh’ sound across a bunch of different contexts, surrounded by different vowels, so they know ‘boe’ and ‘bih’ and ‘bah’ - all these different fancy things that we take for granted.

“And because they can learn this really complex problem of categorising different speech sounds, we think that it should be possible to train the mice to detect fake and real speech.”

The mice were given a reward each time they correctly identified speech sounds, which they did up to 80% of the time.

That's not perfect, but coupled with existing methods of detecting deep fakes, it could provide extremely valuable input.
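To make the machine side of that idea concrete, here is a minimal sketch of how real-versus-fake speech detection can be framed as a binary classification problem over short-term spectral features. It is an illustration only, not the Oregon team's actual pipeline: the librosa and scikit-learn tooling, the MFCC features and the placeholder clip names are all assumptions made for the example.

```python
# A hedged sketch: real-vs-fake speech as binary classification over
# MFCC summaries. Placeholder file names; not the researchers' pipeline.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def clip_features(path, sr=16000, n_mfcc=13):
    """Summarise a clip as the per-coefficient mean and std of its MFCCs."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical corpus - in practice this would be thousands of clips.
real_clips = ["real_0001.wav", "real_0002.wav"]  # placeholder paths
fake_clips = ["fake_0001.wav", "fake_0002.wav"]  # placeholder paths

X = np.stack([clip_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))  # 0=real, 1=fake

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```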

Can't rely on mistakes

Most of the deep fakes in circulation today are quite obviously not real, and typically used as a way of mocking a subject, rather than impersonating them. Case in point: a deep fake of “Mark Zuckerberg” talking openly about stealing user data.

But that’s not to say convincing impersonation won’t be a problem in the not-too-distant future - which is why it has been a significant topic of conversation at this year’s Black Hat and Def Con, the two hacking conferences that take place in Las Vegas each year.

“I would say a good estimate for training a deep fake is anywhere from about $100 to about $500, in terms of cloud computing costs,” said Matthew Price, from Baltimore-based cyber-security firm Zerofox.

He’s here to discuss the latest methods of creating deep fakes, as well as the cutting edge in detecting them. In his talk, Mr Price discussed how using algorithms to detect unusual head movements or inconsistent lighting was an effective method. One clue found within poorly-made deep fakes was that the people in them often didn’t blink.
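One way to picture that blink cue: the widely used “eye aspect ratio” (EAR), from Soukupová and Čech’s 2016 blink-detection work, drops sharply whenever an eye closes, so a subject who never blinks leaves a suspiciously flat EAR trace. The sketch below is an illustration, not Mr Price’s actual method, and it assumes per-frame eye landmarks have already been extracted by a separate face-landmark detector such as dlib or MediaPipe.

```python
# Toy blink counter over an eye-aspect-ratio (EAR) trace.
# Landmark extraction (dlib/MediaPipe) is assumed and omitted here.
import numpy as np

def eye_aspect_ratio(eye):
    """EAR for six (x, y) eye landmarks ordered p1..p6 around the eye."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float) for p in eye)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_trace, threshold=0.2, min_frames=2):
    """Count runs of >= min_frames consecutive frames with EAR below threshold."""
    blinks, run = 0, 0
    for ear in ear_trace:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Synthetic per-frame EAR values: one clear blink around frames 4-6.
trace = [0.30, 0.31, 0.29, 0.30, 0.15, 0.12, 0.18, 0.30, 0.31, 0.30]
print(count_blinks(trace))  # -> 1; a long, flat trace would be a red flag
```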

But these techniques rely on the creators of deep fakes making mistakes, or on weaknesses in currently available tech - a luxury that won’t last forever.

“It’s probably unlikely in the 2020 elections that we'll see a lot of deep fakes being released,” Mr Price told the BBC.

“But I think as this technology continues to improve, which makes it harder for us to detect the fakes, it's more likely that we will see these used in an influence operation specifically for elections.”

That concern has been expressed by high-profile US politicians who see deep fakes as a potential new, dramatic escalation in misinformation efforts targeted at American voters.

“America’s enemies are already using fake images to sow discontent and divide us,” said Republican Senator Marco Rubio, speaking to The Hill earlier this year.

“Now imagine the power of a video that appears to show stolen ballots, salacious comments from a political leader, or innocent civilians killed in conflict abroad.”

'Cheap fake'

But others feel the deep fake threat is grossly overblown. Noted cyber-security expert Bruce Schneier, from the Harvard Kennedy School, said efforts around detecting deep fakes completely missed the point.

“The stuff that is shared, that's fake… it isn't even subtle. And yet, it's shared, right?

“The problem is not the quality of the fake. The problem is that we don't trust legitimate news sources, and because we are sharing things for social identity.”

Image caption The video of House Speaker Nancy Pelosi had been slowed by 25%

He points to a recent viral video involving Democratic congresswoman Nancy Pelosi, in which the footage had been slowed down in an attempt to make her sound drunk. The clip was quickly debunked, but that didn’t matter - it was viewed more than a million times.

“That wasn’t a deep fake, it was a cheap fake,” Mr Schneier pointed out.

“As long as people look at video not in terms of ‘Is it true?’ but ‘Does it confirm my world view?’, then they're going to share it... because that's who they are.”
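The Pelosi clip also shows how low the technical bar for a “cheap fake” is. The exact processing behind that video hasn’t been published, but a comparable 25% slowdown takes only a few lines of off-the-shelf audio tooling - the librosa and soundfile libraries and the clip path below are assumptions made for this illustration.

```python
# Illustrative "cheap fake": slow a speech clip by 25% while keeping pitch
# constant. Placeholder path; not the actual edit applied to the Pelosi video.
import librosa
import soundfile as sf

audio, sr = librosa.load("speech_clip.wav", sr=None)      # placeholder path
slowed = librosa.effects.time_stretch(audio, rate=0.75)   # rate < 1 slows down
sf.write("speech_clip_slowed.wav", slowed, sr)
```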

Indeed, one view shared among the experts here is that the discussion around deep fakes may end up being more damaging than the actual fakes themselves.

The existence of, and fears about, the tech may be used by certain politicians to convince voters that something very real is merely a deep fake.

_____

Follow Dave Lee on Twitter @DaveLeeBBC

Do you have more information about this or any other technology story? You can reach Dave directly and securely through encrypted messaging app Signal on: +1 (628) 400-7370
