Meta 'Violated the Public Trust,' Says Tech Oversight Project on Redacted AG Complaint
Social media companies “are going to keep committing deliberate child safety violations just as long as they can continue raking in billions in profits off kids,” said Tech Oversight Project Executive Director Sacha Haworth in a Thursday news release.
Meta “has violated the public trust to a shocking degree, committing calculated harms to child well-being that leave no option but immediate public accountability,” said Haworth. New evidence in a newly unredacted amended complaint (docket 1:23-cv-01115) filed Wednesday by New Mexico Attorney General Raul Torrez (D) against Meta and CEO Mark Zuckerberg shows why state legislatures “need to act urgently” on the New Mexico Kids Code, legislation that aims to make technology safer for young users and to hold social media companies “accountable for their reprehensible actions,” she said.
Haworth referenced new “smoking-gun evidence” in the unredacted amended complaint, a case originally filed in the First Judicial District Court of Santa Fe County and removed to the U.S. District Court for New Mexico in Albuquerque. The action seeks to hold Meta accountable for conduct that violates the New Mexico Unfair Practices Act (UPA) “and creates a public nuisance.” It seeks to force Meta to institute protections for kids “because it refuses to do so voluntarily,” the complaint said.
Meta’s Facebook and Instagram platforms are a “breeding ground for predators who target children for human trafficking, the distribution of sexual images, grooming, and solicitation,” said the complaint. Teens and preteens can easily register for unrestricted accounts because the platforms don’t have age verification, it said. When they do, Meta “directs harmful and inappropriate material at them,” it said. A note on page 2 of the 259-page amended complaint warns of 66 pages containing redacted images of child sexual exploitation and other sexually explicit images, plus images of self-harm.
Facebook and Instagram allow “unconnected adults” to have “unfettered access” to young users for purposes of grooming and solicitation, despite having the capability to determine that the users are minors, said the complaint. The platforms could provide warnings or other protections against material that’s harmful to minors and that poses “substantial dangers of solicitation and trafficking,” it said. The company has been on notice internally and externally “for years” about sexual exploitation dangers that exist on the platforms “but has nonetheless failed to stem the tide of damaging sexual material and sexual propositions delivered to children,” it said.
A study by Meta consultant Arturo Bejar in September 2021 found that 51% of Instagram users had a bad or harmful experience on the platform in the previous seven days, but Meta “knew that only 1% of users reported the content or communication” and that only 2% of the user-reported content “is taken down,” the complaint said. That means for every 10,000 users who have bad experiences on the platform, “100 report and 2 get help,” it said. A company that’s genuinely committed to a safe product that’s good for children “would have ensured that harmful content and interactions were reported and addressed,” it said.
Meta’s content moderation rules were “narrowly written” to prohibit “only unambiguously vile material,” but “first-step grooming behaviors, including ‘adults … flooding the comments section on a teenager’s posts with kiss emojis' or sending invites to ‘see more’ in private Facebook Messenger groups were not prohibited,” the complaint said, citing a November Wall Street Journal article.
Meta targets children’s “age-based vulnerabilities” by using algorithms and platform designs that are “addictive” to young users, the complaint said. The company “knowingly sought to maximize teen engagement” on its platforms through features including engagement-based feeds, infinite scroll, push notifications, “ephemeral content” and auto-play video, “while inhibiting the ability of those users to self-regulate,” it said.
The defendants knew that the design features “fostered addiction, anxiety, depression, self-harm, and suicide among teens and preteens,” but they “rejected repeated internal proposals, and external pressures, to implement protections against youth mental health harm,” said the complaint. Meta also chose to measure violations of its community standards policies with a metric that “it knows to grossly underreport harmful material” on its platforms, the complaint said.
The company profits from young users' exposure to harmful material and by not implementing design features that would protect them from sexual exploitation and mental health harm, the complaint said. It doesn’t charge children for accessing its platforms but monetizes the data it gathers from them through targeted advertising, it said. To protect its revenue -- $116 billion in 2022 that’s “substantially all” attributable to ads -- Meta has “deceived” advertisers about the ability to ensure that paid advertising doesn’t appear with sexually explicit and violent content, it said. Complaints “from prominent companies make clear that Meta’s assurances were and are plainly false,” it said.
An investigation in support of the complaint documented ways in which Meta “has facilitated human trafficking and the distribution” of child sexual abuse material, said the complaint. It has “proactively served and directed” kids to sexually explicit images through recommended posts “even where the child has expressed no interest in this content,” it said. The company has enabled adults to find, message and groom minors, “soliciting them to sell pictures or participate in pornographic videos,” it said. It also allowed, and “failed to detect,” a fictional mother offering her 13-year-old daughter for trafficking and “solicited the 13-year-old to create her own professional page and sell advertising,” it said.
An investigator's search for pornography was blocked on Facebook, but the same search on Instagram “yielded numerous accounts,” including one that remained active on Instagram but was suspended on X, formerly Twitter, for violating that platform’s rules, the complaint said. The account included stories that were viewable and videos of individuals having intercourse, it said.
Contrary to Meta’s public representations, its platforms contain numerous accounts with incest and sexual fetish content, the complaint said. The accounts often had large social networks of individuals following and commenting on pornographic videos and images posted on Facebook and Instagram, it said. Many of the images found on Meta platforms were excluded from the complaint “as too graphic and disturbing,” it said. Investigators found numerous posts and accounts related to dominating father figures and young girls, with one of the “least explicit” images showing a girl wearing underwear that reads, “hurt me harder,” it said.
Torrez seeks a declaration that each act, statement or omission of the defendants described in the complaint constitutes a separate and willful violation of the UPA and that their behavior created a public nuisance, the complaint said. He seeks civil penalties on each defendant of up to $5,000 for each UPA violation; costs of the investigation and litigation; and pre- and post-judgment interest. He also requests orders enjoining Meta and its agents from engaging in deceptive practices in violation of New Mexico law and requiring them to abate the public nuisance conduct, plus disgorgement of profits and data unjustly obtained. Meta didn't comment Thursday.