Building member trust in the new age of AI

In today’s rapidly evolving world, Artificial Intelligence (AI) is ushering in a new world of work. On LinkedIn, we’re seeing this in a number of ways; for example, the share of global English-language job postings mentioning GPT or ChatGPT has increased more than 21x since November 2022. As part of this new era of work, we have begun to roll out new AI-powered product innovations and more ways to help our members and customers unlock value and navigate this change.

While this shift is incredibly exciting and offers a new dimension of possibilities, we also recognize the risks and potential for harm that can come with rapidly building and deploying new AI-powered technologies. For us, building member trust is and will continue to be at the top of our operational priorities. I wanted to take this moment to recognize our team’s commitment and efforts and share more details about how we will continue to work to keep LinkedIn safe, trusted, and professional. 

Our approach includes:

  • Leading with Responsible AI - Aligned with Microsoft’s leadership in Responsible AI, all of the work that we do is anchored in our Responsible AI principles, which summarize how we build with AI at LinkedIn. These principles guide our work and ensure we are consistent in how we use AI to (1) Advance Economic Opportunity, (2) Uphold Trust, (3) Promote Fairness and Inclusion, (4) Provide Transparency, and (5) Embrace Accountability. You can see these principles applied in our recent generative AI-powered tools for members and customers. For example, to promote fairness and inclusion, we have a cross-functional team working to design solutions and guardrails that help us understand and proactively address potential bias in LinkedIn’s AI-powered products.

  • Rigorous Product Testing and Reviews - We execute robust product reviews to help us better understand, prevent, and minimize potential harm. This includes carrying out automated and manual red teaming efforts with subject matter experts, in which we test the effectiveness of our defenses and guardrails by emulating bad actors' tactics, techniques, and procedures. This is an essential practice in the responsible development of systems and features using generative AI, and it helps us identify gaps and risks, including hallucinations, inaccuracies, low-quality content, and harmful content.

  • Authenticity and Verification - We’re integrating authenticity signals into our defenses through our verification product, which helps us identify potential bad actors and suspicious accounts and exclude them from accessing our generative AI products. We will be expanding free verification to more members and countries soon, which will help further safeguard our overall platform experience. It’s important to remember that verifying your account and securing it with two-step verification are the most effective ways that you can help protect yourself and others on LinkedIn. With over 18 million members already verified, we aspire to bring verification to 100 million people by 2025.

  • End-to-End Integrated Trust Defenses - We have systems in place that help limit problematic content at different layers, including prompt safety efforts, which minimize the probability of generative AI creating content that doesn’t meet our high standards or that goes against our Professional Community Policies. As with all content, we also apply trust classifiers to AI-generated content, which helps us proactively detect and remove policy-violating content before it is published on the platform. Our generative AI products also allow members to submit feedback on AI-created content, which helps inform our team when content may not be relevant or constructive to the LinkedIn experience.

  • Enhanced Security - Other measures that we are integrating into our AI-powered experiences include user intent detection capabilities, which dramatically reduce risk by helping AI-powered products stay within their intended professional scope. As part of the user experience, we also provide proactive notice to our members that AI is powering these features, and ask that they review AI outputs before sharing or distributing content broadly.

We are committed to using AI in ways that are aligned with our mission, are responsible, and provide value to our members and customers. As with any new technology, things won’t be perfect and there will be lessons along the way. We will continue iterating and improving, and we invite you to provide feedback so that we can make our product features better over time for all members and customers. We look forward to sharing more in the months to come.
