Grammarly Faces Class-Action Lawsuit Over Alleged Use of Real Journalists' Identities Without Consent
A widely used writing assistant is at the center of a legal battle over the allegedly unauthorized commercial use of real people's identities, raising fresh questions about where AI personalization ends and rights violations begin.
Journalist Julia Angwin filed a class-action complaint on Wednesday against Grammarly's parent company, alleging that the firm used her identity, along with those of other real writers, to power its "Expert Review" AI feature without ever seeking their permission. As reported by The Verge and first covered by Wired, the suit claims violations of privacy and publicity rights, specifically invoking laws that prohibit the commercial use of someone's identity without consent.
The Feature at the Center of the Controversy
Grammarly's "Expert Review" feature presents AI-generated writing suggestions attributed to what appear to be credible, named experts — real journalists and writers whose professional reputations lend the tool an air of authority and trustworthiness. The problem, according to the lawsuit, is that those individuals were never asked, never agreed, and in some cases, never even knew their identities were being used in this capacity. Angwin herself reportedly learned about her inclusion through fellow journalist Casey Newton, who was also among those identified by The Verge as being used without authorization.
This practice had apparently been ongoing for months before any of the affected individuals became aware of it. The complaint names Superhuman, the parent company behind Grammarly and the legal entity behind the feature, as the defendant accused of exploiting these identities for commercial gain.
A Broader Pattern of AI Identity Misuse
The lawsuit arrives at a moment when the tech and legal communities are grappling with a rapidly expanding set of questions around AI systems and identity rights. Using real names and reputations to validate AI-generated content is a particularly pointed form of misuse: rather than merely extracting data from individuals, it actively deploys their professional credibility to make a product more marketable. For a company like Grammarly, which sells itself on the premise of improving communication quality, leveraging trusted editorial voices without permission carries significant reputational and legal risk.
The class-action structure of the complaint signals that Angwin and her legal team believe the harm extends well beyond a single journalist. If the suit succeeds, it could establish meaningful precedent for how AI companies may — and may not — use real people's identities in their product experiences.
Why This Matters
This case underscores a critical and often overlooked dimension of the AI data rights debate: it's not just about training data. Even in product deployment, AI systems can misappropriate identities in ways that cause real reputational and legal harm. As more companies build AI features that simulate expert endorsement or authoritative review, the absence of clear consent frameworks becomes an increasingly urgent problem.
For businesses integrating AI tools into their workflows, this lawsuit serves as a timely reminder to scrutinize not only what data AI systems are trained on, but also how those systems represent, and potentially exploit, real individuals at the point of delivery.
---
Source: The Verge — "One of Grammarly's 'experts' is suing the company over its identity-stealing AI feature" by Stevie Bonifield