How LinkedIn handles abusive content

LinkedIn is committed to providing a safe environment for all our members. To help ensure that LinkedIn remains safe, trusted, and professional, all members must follow our Professional Community Policies, which outline the types of discussions and content that are permitted on LinkedIn. Members posting jobs must also follow our Jobs Policies. When we find content that doesn’t comply with these policies, we may limit its distribution or remove it. Learn more about how we enforce our policies.

LinkedIn uses a three-layer, multidimensional approach to moderate content within our Trust ecosystem.

First layer of protection – Automatic and proactive prevention:

When a member attempts to create a piece of content on LinkedIn, various calls (or signals) are sent to LinkedIn’s machine learning services. These services aim to automatically filter out certain policy-violating content within 300 milliseconds of creation, so that the content is visible only to the author and is not shown to anyone else on the platform. As part of this process, Artificial Intelligence (AI) plays a key role in helping LinkedIn proactively filter out potentially harmful content. LinkedIn uses content (such as certain keywords or images) that has previously been identified as violating its Professional Community Policies to help inform its AI models and better identify and restrict similar content from being posted in the future.
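As a rough illustration only (the function names, scoring logic, and threshold below are hypothetical assumptions, not LinkedIn’s actual implementation), a first-layer check of this kind can be sketched as a classifier score compared against a high-confidence cutoff at the moment a post is created:

```python
# Hypothetical sketch of a first-layer, pre-publication filter.
# All names, terms, and thresholds are illustrative assumptions.

AUTO_FILTER_THRESHOLD = 0.95  # assumed high-confidence cutoff


def score_content(text: str) -> float:
    """Stand-in for a machine learning classifier that returns the
    estimated probability (0.0 to 1.0) that the content violates policy."""
    flagged_terms = {"example-spam-phrase", "example-scam-phrase"}  # placeholder signals
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)


def handle_new_post(text: str) -> str:
    """Decide visibility at creation time, before any distribution."""
    if score_content(text) >= AUTO_FILTER_THRESHOLD:
        return "visible_to_author_only"  # filtered proactively
    return "published"
```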

LinkedIn measures its preventive defense services regularly to improve the accuracy of this filtering. A sample of the automatically filtered content (positive samples) is sent for human review to measure the precision of LinkedIn’s automated defense system. This reduces the likelihood that LinkedIn’s auto-filtering process removes content that actually complies with LinkedIn’s policies.
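In plain terms, precision here is the share of auto-filtered items that human reviewers confirm as genuine violations. A minimal sketch with made-up review verdicts:

```python
# Hypothetical precision check on a sample of auto-filtered items.
# Each entry is a human reviewer's verdict on one filtered post (made-up data).
reviewed_sample = [True, True, False, True, True]  # True = confirmed violation

precision = sum(reviewed_sample) / len(reviewed_sample)
print(f"Precision of the automated filter on this sample: {precision:.0%}")  # 80%
```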

Second layer of protection – Combination of automatic and human-led detection:

LinkedIn’s second layer of moderation detects content that’s likely to be violative but for which the algorithm is not sufficiently confident to warrant automatic removal. This content is flagged by our AI systems for further human review. If the human review team determines that the content violates LinkedIn’s policies, they’ll take action on it. LinkedIn’s human review team is instrumental in this process and in helping train the platform’s models.
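Conceptually, the first two layers can be viewed as two thresholds applied to the same violation score: above the higher threshold, content is filtered automatically; between the two, it is queued for human review. The threshold values and names below are assumptions for illustration, not LinkedIn’s actual configuration:

```python
# Hypothetical routing between automatic filtering and human review.
AUTO_FILTER_THRESHOLD = 0.95   # assumed: high confidence, filter automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: likely violative, but not certain


def route_content(violation_score: float) -> str:
    """Route a post based on the classifier's confidence in a violation."""
    if violation_score >= AUTO_FILTER_THRESHOLD:
        return "filtered_automatically"   # first layer of protection
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "queued_for_human_review"  # second layer of protection
    return "published"                    # no action at creation time
```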

Third layer of protection – Human-led detection:

If members locate content they believe violates LinkedIn’s policies, we encourage them to report it using the in-product reporting mechanism represented by the three dots in the upper right-hand corner of the content itself on LinkedIn. Reported content is then sent to LinkedIn’s team of reviewers for further evaluation and is actioned in accordance with LinkedIn’s policies. Learn more about reporting content.
