Commentary

April 2026 Tech Litigation Roundup & Analysis: As Companies Monetize AI, Courts Will Weigh In


This Tech Litigation Roundup gathers and briefly analyzes notable lawsuits and court decisions across a variety of tech-and-law issues.

Each roundup in 2026 will dive deep into a major development with expert analysis from TJL legal fellow Maddy Batt. This month’s focus is a new case that previews the types of consumer safety claims that may emerge as companies try to monetize their generative AI products. The case, voluntarily dismissed by the plaintiff, alleged that genAI companies Perplexity, Meta, and Google disclosed people’s chatbot transcripts to facilitate targeted advertising.

Here is an excerpt:

The recent lawsuit Noel v. Perplexity brought the question of AI monetization onto a courthouse docket. Though the plaintiff has since voluntarily dismissed it, the details of the class action provide a window into how adtech in AI is likely to be challenged in the courts.

The lawsuit targeted generative AI company Perplexity, along with Meta and Google, alleging they disclosed transcripts of users’ conversations with chatbots for targeted advertising. The case highlighted a burgeoning monetization strategy for the AI industry to solve generative AI’s profitability problem with a function the technology has proven especially adept at: collecting intimate information about users. Coming a few months after announcements from Meta and OpenAI that they would use data from AI products to target ads, the action and its voluntary dismissal leave the viability of legal challenges to ad-based monetization strategies unresolved.

Continue reading April’s Roundup on Tech Policy Press.


Roundup of other tech litigation developments:

  • Musk v. Altman jury trial: A trial is underway in a high-profile lawsuit in which Elon Musk alleges that his former co-founders and now competitors at OpenAI deviated from the organization’s purported founding mission to ensure artificial general intelligence benefits all of humanity. Among other remedies, Musk is seeking to remove Sam Altman from OpenAI’s leadership and board and to force the company to restructure as a non-profit.
  • Alleged authoritarian enablers: Oral argument was heard at the US Supreme Court in a case against Cisco for allegedly designing specialized surveillance technology to help the Chinese government find and persecute a religious minority. The Court is expected to reverse the Ninth Circuit’s decision that let a lawsuit under the Alien Tort Statute and Torture Victim Protection Act move forward against the company. Meanwhile, the telecoms company Telenor was sued in Norway for allegedly sharing sensitive data about dissidents with the military junta in Myanmar, contributing to their persecution and torture. In the US, a judge held that Apple’s and Facebook’s removal of ICE-watch apps and groups was likely a government-coerced action violating the First Amendment.
  • Section 230 decisions: A lawsuit brought by the Massachusetts Attorney General alleging youth social media addiction will proceed against Meta, after the Massachusetts Supreme Judicial Court concluded that Section 230 of the Communications Decency Act did not immunize the company against the state’s design-based claims. Meanwhile, a Ninth Circuit suit against Meta over Facebook’s alleged role in fueling anti-Rohingya violence in Myanmar was dismissed on Section 230 grounds, but all three panel judges wrote that Circuit precedent had unduly expanded the law’s protections. Two judges called for reconsideration en banc so that Section 230 interpretation could be brought back within the scope of the law’s intended meaning.
  • NAACP sues Musk over data center: The NAACP has sued Elon Musk’s company xAI, alleging that the company is illegally operating unpermitted methane gas turbines to power its data center in violation of the Clean Air Act, spewing toxic pollutants into historically Black residential neighborhoods.
  • Chinese workers’ rights against automation: The Intermediate People’s Court of Hangzhou in China ruled that firing a worker because AI can do their job more cheaply is illegal.
  • DOJ intervenes in challenge to Colorado AI safety law: The Department of Justice has intervened in Elon Musk’s legal challenge to Colorado’s AI Act, arguing that the law’s anti-discrimination provisions violate the Equal Protection Clause. The filing signals that the Trump Administration is prepared to litigate its opposition to state-level AI regulation, and that lawmakers should be ready to defend AI bias protections under a “color-blind” legal regime that makes addressing existing racial inequality a constitutional minefield.
  • Mass casualty and stalking chatbot cases: Families of victims of the Tumbler Ridge shooting sued OpenAI after the company flagged and deactivated the shooter’s ChatGPT account for violent content without informing authorities. The company is also facing a criminal probe and an expected civil lawsuit associated with the mass shooting at Florida State University. Separately, a woman who alleges her stalker was fueled by ChatGPT has sued OpenAI.
  • Mercor AI sued: Mercor AI, a start-up that provides data and training for leading AI models, has been hit with multiple lawsuits after a third-party data breach. The lawsuits following the breach have exposed Mercor to allegations that it uses proprietary information from its contractors’ work with other companies to provide training data for its clients’ AI models.

TJL would love to hear from you on how this roundup could be most helpful in your work – please contact us with your thoughts.