LinkedIn Faces Lawsuit Over Alleged Data Use for AI Training
In a rapidly evolving digital landscape, privacy and data usage are hot topics, and LinkedIn, Microsoft’s business-focused social media platform, is at the center of the latest debate. A proposed class-action lawsuit accuses LinkedIn of disclosing user data to third parties, without consent, to train generative AI models. The case not only highlights growing concerns around privacy but also raises important questions about corporate transparency in the age of artificial intelligence.
Here’s a closer look at the situation.
The Allegations
The lawsuit, filed in federal court in San Jose, California, was brought on behalf of millions of LinkedIn Premium customers. These users claim LinkedIn shared their private InMail messages with third parties, without their consent, to train AI models.
The complaint alleges that:
A quietly introduced setting: In August 2024, LinkedIn added a privacy setting allowing users to opt out of having their data shared for AI training.
A critical loophole: Despite this setting, LinkedIn updated its privacy policy on September 18, 2024, stating that personal data could be used to train AI models, and that opting out would not affect data already collected and used for training.
A breach of trust: Plaintiffs argue that this move contradicts LinkedIn’s prior commitments to use personal data solely for supporting and improving the platform.
The lawsuit calls LinkedIn’s actions a calculated attempt to "cover its tracks," accusing the company of knowingly violating users’ privacy and trust to avoid public scrutiny and legal repercussions.
What the Lawsuit Seeks
The plaintiffs are seeking damages for breach of contract and violations of California’s unfair competition law, plus $1,000 per person for alleged breaches of the federal Stored Communications Act. If successful, the suit could carry significant financial and reputational consequences for LinkedIn and Microsoft.
LinkedIn’s Response
In response to the allegations, LinkedIn has denied any wrongdoing, calling the claims “false” and “without merit.”
This case has already sparked broader discussions about the balance between technological innovation and privacy. It also comes at a time when LinkedIn’s parent company, Microsoft, is deeply invested in AI, including its partnership with OpenAI.
The Bigger Picture: AI and Data Ethics
This case is just one example of the tension between technological progress and ethical considerations. As generative AI continues to evolve, companies are under increasing scrutiny to ensure they are transparent about how user data is being used.
This lawsuit raises critical questions:
Transparency: Should companies be required to clearly and explicitly inform users about how their data will be used—especially for AI?
Consent: Is it enough to provide users with opt-out options, or should consent be mandatory before data is used for purposes like AI training?
Regulation: What role should governments play in protecting user privacy in an era of rapid AI development?
Why It Matters
For LinkedIn’s millions of users, this lawsuit serves as a reminder of the importance of understanding privacy policies and taking control of personal data. For companies, it underscores the risks of failing to prioritize transparency and trust.
The outcome of this case could set a precedent for how businesses handle user data in the future, particularly in the context of artificial intelligence.