Facebook And Twitter Grilled Over Abuse Faced By MPs
Facebook and Twitter executives have been grilled by MPs over how the social networks handle online abuse levelled at parliamentarians.
MPs argued that such hostility undermined democratic principles.
Twitter representative Katy Minshall admitted it was "unacceptable" that the site had relied wholly on users to flag abuse in the past.
She said there was more to be done, but insisted Twitter's response to abuse had improved.
Harriet Harman, chair of the Human Rights Committee, said: "There is a strong view amongst MPs generally that what is happening with social media is a threat to democracy."
SNP MP Joanna Cherry cited specific tweets containing abusive content that were not removed swiftly by Twitter.
One example was only taken down on the evening before the committee hearing, after Ms Cherry and other high-profile figures, including the journalist and activist Caroline Criado Perez, drew attention to the post.
"I think that's absolutely an undesirable situation," said Ms Minshall, Twitter's head of UK government, public policy and philanthropy.
Ms Cherry argued it was in fact part of a pattern in which Twitter only reviewed its decisions when pressed by people in public life.
MPs also questioned how useful automated algorithms were for identifying abusive content.
Facebook's UK head of public policy, Rebecca Stimson, said their application was limited.
For example, out of two million pieces of bullying content, Facebook's algorithms could only correctly identify 15% as in breach of the site's rules.
"For the rest you need a human being to have a look at it at the moment to make that judgement," she explained.
Labour MP Karen Buck said algorithms might not identify messages such as "you're going to get what Jo Cox got" as hostile. She was referring to the MP Jo Cox, who was murdered in June 2016.
"The machines can't understand what that means at the moment," said Ms Stimson.
However, both Ms Stimson and Ms Minshall said that their social networks were working to gradually improve their systems and were also implementing tools to better flag and block abusive content proactively, before it was even posted.