New York Times: What Grammarly’s Expert Review Fallout Means Now

New York Times readers and writers are seeing a moment of reckoning in how generative AI borrows names and styles: Grammarly’s paid “Expert Review” feature has been disabled after criticism that it produced AI-generated editing suggestions tied to the identities of prominent writers and academics without their consent.

What Happens When Expert-Style Review Is Simulated?

The feature, presented as offering “insights from leading professionals, authors, and subject-matter experts,” generated feedback that used real people’s names to frame AI-created edits. The company’s own support text warns that “References to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities.”

Critics say that design — pairing authoritative names with machine-generated output — created the appearance of human endorsement where none existed. Among the named figures reproduced in the feature were a mix of living and deceased public intellectuals and writers. A class-action lawsuit filed in the Southern District of New York contends that using people’s names for commercial gain without permission is unlawful and seeks damages in excess of $5 million.

A journalist who tested the tool described being surprised to discover that the edits presented as coming from named experts were in fact AI output, and said the product design made it easy for a paying user to treat machine suggestions as if they were the work of a named professional. The company has said the feature has been taken down for redesign and that it saw very little usage while live.

  • Feature name: Expert Review — generative-AI editing feedback tied to named individuals.
  • Company actions: Feature disabled for redesign; public apology and pledge to rethink approach from the chief executive.
  • Legal action: Class-action lawsuit in the Southern District of New York; the lead plaintiff is a journalist, counsel is named in the filings, and damages sought exceed $5 million.
  • Scope: The company reports a large user base for its broader product and a paid subscription framework; an annual subscription price was cited in testing context.

What If Experts, Users and Courts Decide the Rules Must Change?

The dispute highlights three immediate tensions: product design that leverages recognizable identities to increase perceived value; commercial use of machine-generated emulations tied to living and deceased figures; and the legal boundary between imitation and unlawful appropriation. Named individuals have pushed back on having their voices invoked without consent; one academic called it ethically grotesque to include a recently deceased scholar.

Company leadership has acknowledged the criticism and apologized, saying the team will rethink its approach going forward and defending the decision to remove the feature for redesign. The legal complaint, driven by a lead plaintiff who works as a journalist, is being advanced by counsel who have indicated interest from others similarly affected.

For users, the episode raises practical questions: when a writing tool ties suggestions to a named authority, is the product offering a shortcut to expert labour or a marketing overlay on synthetic output? For creators, the issue is clear: some see the feature as monetising identities and editorial skill without consent or compensation.

Uncertainty remains. Courts will weigh the legal claims; designers will judge whether reliance on named personas is defensible; and companies building on large language models must decide if transparency and opt-in consent are sufficient safeguards. Meanwhile, the operational choice to pause Expert Review and promise a redesign signals that even limited usage can trigger outsized reputational and legal risk for AI features tied to identities.

Practically, writers and institutions should audit how AI tools represent endorsements and clarify consent frameworks when names are invoked. Product teams should test whether label, disclaimer, and opt-out mechanisms actually prevent misunderstanding. Regulators and litigants will be watching how those design choices interact with existing law and commercial norms.

Those following the story should expect further legal and design developments as the parties contest whether respectful simulation of expert editing is permissible or whether it crosses a line into unlawful appropriation — a debate that will shape how AI tools integrate human identities going forward, and how platforms communicate that integration to the public and to the individuals whose names are invoked.
