Consumer Electronics Daily was a Warren News publication.
'Human-Like Language'

AI Privacy Lawsuit vs. OpenAI, Microsoft Seeks Accountability, Safeguards

Defendants OpenAI and Microsoft’s “disregard for privacy laws is matched only by their disregard for the potentially catastrophic risk to humanity,” said 16 plaintiffs in a privacy class action (docket 3:23-cv-03199) filed Wednesday in U.S. District Court for Northern California in San Francisco (see 2306280052).

The complaint cited a 2023 New York Times article quoting OpenAI CEO Sam Altman in 2015, saying, “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” Microsoft, which made a multibillion-dollar investment in OpenAI, “led the charge on the rapid proliferation of ChatGPT” and “integrated the ChatGPT language model into almost all of its cardinal products and services,” said the complaint.

OpenAI and Microsoft use AI products, “integrated into every industry,” to collect, store, track, share and disclose the private information of millions of users, alleges the complaint. The 16 plaintiffs, identified by initials only “to avoid intrusive scrutiny,” plus any “potentially dangerous backlash,” range from a 6-year-old, K.S., who used the mic feature to ask ChatGPT-3.5 questions and to generate art, to plaintiff B.B., an actor whose likeness appears across YouTube and social media sites.

AI technology from OpenAI, Microsoft and others uses stolen personally identifiable information (PII) from “hundreds of millions of internet users,” including children, to train their products without individuals’ knowledge or consent, said the complaint. In developing, marketing and operating their AI products -- including ChatGPT-3.5, ChatGPT-4.0, Dall-E and Vall-E -- defendants “continue to unlawfully collect and feed additional personal data from millions of unsuspecting consumers worldwide, far in excess of any reasonably authorized use,” to continue developing and training their products, the complaint said.

Though founded as a nonprofit research organization with a mission to ensure AI would be used for the benefit of humanity, in 2019, “OpenAI abruptly restructured itself, developing a for-profit business” and elected instead “to pursue profit at the expense of privacy, security, and ethics,” the complaint said. It “doubled down on a strategy to secretly harvest massive amounts of personal data from the internet,” including private information and conversations, medical data, and information about children, “without notice to the owners or users of such data, much less with anyone’s permission.”

OpenAI used the stolen data to train and develop products using large language models and deep language algorithms to analyze and generate “human-like language” that can be used for chatbots, language translation, text generation and more, it said. Its products’ sophisticated natural language processing allows them to “carry on human-like conversations with users, answer questions, provide information, generate text on demand, create art, and connect emotionally with people, all like a ‘real’ human.”

Plaintiff S.J. of California alleges defendants stole his data from his interactions on Snapchat, Spotify and YouTube to train the AI products. Plaintiff N.G., also of California, posted a “great deal of personal content,” such as photos and videos of auditions, performances and training sessions, in his position as a teacher and actor. He expected that information he exchanged with YouTube, and with Facebook prior to 2021, “would not be intercepted by any third-party looking to compile and use all his information and data for commercial purposes.”

Plaintiff S.A., a personal assistant in the entertainment industry, began using ChatGPT-3.5 in January to “rewrite snippets” on topics for work and personal projects. The California resident is concerned defendants “have taken her skills and expertise, as reflected in her online contributions, and incorporated it into Products that could someday result in professional obsolescence for social media managers like her.”

ChatGPT’s privacy policy says information that has already been incorporated into defendants’ large language models “can never really be removed,” said the complaint. In addition, ChatGPT lacks age controls to prevent children under 13 from using it and providing their information. The privacy policy also doesn’t disclose that all conversations are “wiretapped, recorded, and shared with numerous entities,” it said.

The complaint alleges violations of the New York General Business Law; the Electronic Communications Privacy Act; the Computer Fraud and Abuse Act; California’s Invasion of Privacy Act and Unfair Competition Law; and Illinois’ Biometric Information Privacy Act and Consumer Fraud and Deceptive Business Practices Act. It also asserts claims of negligence, invasion of privacy, intrusion upon seclusion, larceny/receipt of stolen property, conversion, unjust enrichment and failure to warn.

Plaintiffs seek injunctive relief in the form of a temporary freeze on commercial access to and development of the products until defendants can satisfy certain conditions: establishment of an independent thought leader council to approve use of the products before they’re deployed; accountability protocols; cybersecurity safeguards; transparency protocols; opt-out options; technological safety measures; review procedures; an administrator to determine compensation for the “stolen data” the products depend on; and confirmation that defendants have deleted, destroyed and purged the information of all class members. They also seek statutory damages, equitable relief and attorneys’ fees and legal costs.