OpenAI Unlawfully Collects Data From 'Millions' of 'Unsuspecting Consumers': Class Action
OpenAI’s AI products use “stolen” personally identifiable information (PII) from “hundreds of millions of internet users,” including children, without their “informed consent or knowledge,” alleged a class action Tuesday (docket 3:24-cv-01190) in U.S. District Court for the Northern District of California in San Francisco.
OpenAI and Microsoft continue to “unlawfully collect and feed additional personal data from millions of unsuspecting consumers worldwide, far in excess of any reasonably authorized use,” to continue developing and training their products, said the complaint. Their “disregard for privacy laws is matched only by their disregard for the potentially catastrophic risk to humanity,” it said, citing a 2015 comment by OpenAI CEO Sam Altman that “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
Plaintiff A.S., a Florida resident, has had an account with ChatGPT since 2022, using her Google account to log in, the complaint said. She used ChatGPT several times on her computer and mobile devices but was unaware of OpenAI’s collection of her personal data, it said.
A 2019 restructuring at OpenAI changed the company from a nonprofit research organization tasked with ensuring AI would be used for the benefit of humanity to a “for-profit business that would pursue commercial opportunities of staggering scale,” said the complaint. In its goal to pursue profit at the expense of privacy, security and ethics, OpenAI “doubled down on a strategy to secretly harvest massive amounts of personal data from the internet, including private information and private conversations,” plus medical data, information about children and “every piece of data exchanged on the internet it could take” without notice to owners of the data or their permission, it said.
The defendants rushed AI products to market without implementing safeguards or controls to ensure they wouldn't support “harmful or malicious content and conduct that could further violate the law, infringe rights, and endanger lives,” said the complaint. Without safeguards, defendants’ AI products “have already demonstrated their ability to harm humans,” it said.
AI products are being incorporated into an expanding roster of apps and websites through application programming interfaces or plug-ins, “onboarding humanity onto an untested plane,” said the complaint. Aggressive deployment of AI without proper safeguards is “reckless,” it said, citing Steno AI: “No matter how tall the skyscraper of benefits that AI assembles for us… if those benefits land in a society that does not work anymore, because banks have been hacked, and people’s voices have been impersonated, and cyberattacks have happened everywhere and people don’t know what’s true [… or] what to trust, […] how many of those benefits can be realized in a society that is dysfunctional?”
Through AI products “integrated into every industry,” the defendants collect, store, track, share and disclose the PII of millions of users, such as account information, contact details, login credentials, emails, payment information and transaction records, IP addresses and geolocation, social media and chat log data, analytics, cookies, keystrokes and searches, the complaint said. The defendants “unlawfully obtain” access to and intercept individuals’ information from devices that have integrated ChatGPT-4, including images from Snapchat, financial information from Stripe, musical taste data from Spotify, private conversation analysis from Slack and Microsoft Teams, and private health information (PHI) from MyChart, it said.
Information is collected in real time, and combined with “scraping of our digital footprints” -- from 15 years ago and “online yesterday” -- defendants “have enough information to create our digital clones, including the ability to replicate our voice and likeness and predict and manipulate our next move,” said the complaint. Defendants can “misappropriate our skill sets and encourage our own professional obsolescence,” it said. That ability “would obliterate privacy as we know it and highlights the importance of the privacy, property, and other legal rights this lawsuit seeks to vindicate,” it said.
The massive tracking of users’ PII by defendants “endangers individuals’ privacy and security to an incalculable degree,” said the complaint. The information “can be exploited and used to perpetrate identity theft, financial fraud, extortion, and other malicious purposes,” it said. It can also be used to “target vulnerable individuals with predatory advertising, algorithmic discrimination, and other unethical and harmful acts.”
Pausing commercial deployment of AI now, which plaintiffs seek, would enable joint development and implementation of “shared safety protocols, overseen by independent outside experts, to manage the risks and render [AI products] safe to usher in an exciting new era of progress for all,” the complaint said. With the right safeguards, products could help discover new drugs to save lives, contribute to efficiency and artistic expression and to the “greater societal good” for human rights, social justice and “empowering marginalized groups,” it said.
Claims include violations of the Electronic Communications Privacy Act, Computer Fraud and Abuse Act, California’s Invasion of Privacy Act and Unfair Competition Law; negligence, intrusion upon seclusion, larceny/receipt of stolen property, conversion and unjust enrichment. Plaintiff A.S. seeks compensatory, statutory, punitive and nominal damages; non-restitutionary disgorgement of all profits derived from defendants’ conduct; attorneys’ fees and costs; and an order requiring defendants to establish an independent body of thought leaders responsible for approving use of AI products “before, not after,” they’re deployed for said uses, it said.
A.S. also seeks implementation of cybersecurity safeguards, transparency protocols and technological safety measures that will “prevent the technology from surpassing human intelligence and harming others”; implementation of a threat management program; establishment of a monetary fund to compensate class members for defendants’ past and ongoing misconduct; and confirmation they have deleted and destroyed the PII and PHI of class members.