LinkedIn Started Harvesting People's Posts For Training AI Without Asking For Opt-in
Updated LinkedIn started harvesting user-generated content to train its AI without asking for permission, angering netizens.
Microsoft’s self-help network on Wednesday published a "trust and safety" update in which senior veep and general counsel Blake Lawit revealed LinkedIn's use of people's posts and other data for both training and using its generative AI features.
In doing so, he said, the site's privacy policy had been updated. We note that policy links to an FAQ, itself updated sometime last week, which also confirms the automatic collection of posts for training. In other words, LinkedIn appears to have started gathering content for its AI models, and opting users in, well before Lawit's post and the updated privacy policy disclosed the changes today.
The FAQ says the site's built-in generative AI features may use your personal info to do things like automatically suggest stuff to write if and when you ask it to; and that your data will be used to train the models behind those features, which you'll have to opt out of if you don't like it.
We're also told that using LinkedIn means the outfit will “collect and use (or process) data about your use of the platform, including personal data … your posts and articles, how frequently you use LinkedIn, your language preference, and any feedback you may have provided to our teams.”
There's some good news for users in the EU, the UK, Iceland, Norway, Liechtenstein, and Switzerland: their data isn't being used to train LinkedIn's AI, and won't be for the foreseeable future. (As for the UK, see the update below.)
The document also states that LinkedIn seeks “to minimize personal data in the datasets used to train the models, including by using privacy enhancing technologies to redact or remove personal data from the training dataset.”
The FAQ also warns that the system may surface someone else's personal information in its output if prompted in a certain way.
The Microsoft social media outfit also last week emitted an article titled: “Control whether LinkedIn uses your data to train generative AI models that are used for content creation on LinkedIn.” That text explains how to opt out of the AI scraping, and points to a setting called Data for Generative AI Improvement that offers a single button marked: “Use my data for training content creation AI models.”
That button is in the “On” position until users move it to “Off.”
- 'Uncertainty' drives LinkedIn to migrate from CentOS to Azure Linux
- Kamala Harris's $7M support from LinkedIn founder comes with a request: Fire Lina Khan
- Tech industry sheds some light on the planet's situation via LinkedIn
- Microsoft's Inflection acquihire is too small to matter, say UK regulators
Big Tech has mostly used a 'scrape first, settle the lawsuits for a pittance later' approach to finding the content it needs to develop AI models. Forget about the concept of begging for forgiveness rather than asking for permission – neither question is asked at all.
LinkedIn could hardly have been unaware of the likely backlash, which makes its approach all the more curious. User anger is therefore unsurprising. On LinkedIn itself it's not hard to find the move described as a breach of trust, alongside a rash of posts advising users how to turn off the scraping.
Which thankfully isn't hard to do: Click on your LinkedIn Profile, select "Settings" then "Data Privacy" and look for an item labelled "Data for Generative AI improvement." Click the single button there to opt out, then go back to wading through the rest of LinkedIn. ®
Updated to add on September 20
LinkedIn has paused training of its AI models on UK user data after the nation's privacy watchdog, the Information Commissioner's Office (ICO), complained.
Stephen Almond, executive director of regulatory risk at the regulator, told El Reg in a statement:
"In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset," he said.
"We will continue to monitor major developers of generative AI, including Microsoft and LinkedIn, to review the safeguards they have put in place and ensure the information rights of UK users are protected."