LinkedIn says if you share fake or false AI-generated content, that's on you

More and more companies are warning users not to rely on AI

· TechRadar

News By Ellen Jennings-Trace published 9 October 2024

(Image credit: Photo Illustration by Budrul Chukrut/SOPA Images/LightRocket via Getty Images)

LinkedIn is passing the responsibility onto users for sharing misleading or inaccurate information made by its own AI tools, instead of the tools themselves.

A November 2024 update to its Service Agreement will hold users accountable for sharing any AI-generated misinformation that violates the platform's policies.

Since no one can guarantee that the content generative AI produces is truthful or correct, companies are covering themselves by putting the onus on users to moderate the content they share.

Inaccurate, misleading, or not fit for purpose

The update follows in the footsteps of LinkedIn's parent company Microsoft, which earlier in 2024 updated its terms of service to remind users not to take AI services too seriously and to address the AI's limitations, advising that its services are "not designed, intended, or to be used as substitutes for professional advice".

LinkedIn will continue to provide features which can generate automated content, but with the caveat that it may not be trustworthy.

"Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes," the updated passage reads.

The new policy reminds users to double-check any information and make edits where necessary to adhere to community guidelines.
