Some users of the professional social network LinkedIn have noticed a new setting indicating that the platform uses their data to train generative artificial intelligence models without prior consent, 404 Media reports.


LinkedIn says the data is used to improve features such as writing assistance. The feature can be disabled in profile settings, but it appears to be enabled by default. The platform states that “privacy enhancing techniques” are used to anonymize and protect personal information, yet users are unhappy that the automated system can collect inherently confidential information at all. LinkedIn does not use data from users in European countries, where strict privacy laws apply, to train the AI.

Although the social network had already begun collecting the data, LinkedIn’s terms of service were not updated accordingly in a timely manner: the platform initially did not reflect these practices in the document, though the oversight has since been corrected. “We will make changes that will give people who use LinkedIn more choice and control over the data we use to train our generative AI technology. We will implement new AI tools by default that will benefit all participants, and ensure that those with specific privacy preferences can easily opt out. People may opt out, but they come to LinkedIn to be found for work and networking, and generative AI is our way to help professionals navigate these changes,” a platform spokesperson told TechRadar Pro.
