FTC Recommends Congress Use Caution on AI Mandates
The FTC voted 4-1 Thursday to issue a report recommending Congress use “great caution” when mandating or promoting the use of artificial intelligence to reduce online harms. Some AI tools show promise, but overall, AI is inadequate to the task and shouldn’t be relied on too heavily, the report said.
In December 2020, via the 2021 Appropriations Act, Congress directed the commission to study whether and how artificial intelligence can be used to address a wide variety of online harms, including scams, deep fakes, fake reviews, dark patterns, hate crimes, counterfeit goods, opioid sales, sexually exploitative material and terror content.
Commissioner Noah Phillips dissented, saying FTC staff didn’t complete the requisite study and the report doesn’t fully answer Congress’ questions. Commissioner Christine Wilson sided with Democrats and agreed with the report’s recommendation that Congress should generally steer clear of laws that require, assume the use of, or pressure companies to deploy AI tools to detect harmful content.
Chair Lina Khan called the report an informative and comprehensive document. She highlighted its observation that newer technologies play a key role in amplifying and exacerbating many online harms by design. This is a key area in which the FTC should deepen its understanding, including the notion that business models drive these harmful practices, she said.
The report spends too little time on cost-benefit analysis and too much on topics outside the congressional query, said Phillips. The report says the only way to deal with online harm is through laws that change tech business models and incentives, but nothing in the report supports that statement, he argued. Phillips generally agreed companies and government should exercise caution when relying on AI tools, but said the report relies on cursory analysis. FTC staff didn’t seek input from stakeholders, including the companies that use the technology, he said.
The report rightly concludes that AI tools show promise, but overall AI hasn’t significantly curtailed online harm, said Commissioner Rebecca Kelly Slaughter. She said the report makes practical recommendations for limiting AI harms. She “respectfully” disagreed with Phillips about the commission’s approach, saying the report is replete with examples of industry use of AI. She cited efforts by Sens. Richard Blumenthal, D-Conn., and Marsha Blackburn, R-Tenn., in introducing their Kids Online Safety Act, which prohibits algorithmic recommendations for children. The real issue is unfettered data collection, which fuels destructive algorithms, she said. Data minimization is one potential bright-line solution because it would force companies to collect only what’s necessary to provide services and products, she said. Data minimization is a key component of the bipartisan privacy discussion draft advancing in the House (see 2206140069). Slaughter said she’s “deeply encouraged” about bipartisan momentum on privacy legislation. Wilson seconded Slaughter’s “enthusiasm” for congressional progress on privacy.
Wilson raised concerns about how the report addressed misinformation and idea-labeling. The answer to bad speech is more speech, not enforced silence, said Wilson. Commissioner Alvaro Bedoya agreed with the need to proceed with caution. He raised concerns about software trained in one language being applied to online speech in other languages. Leading machine learning programs are trained in English, even though most of the world doesn’t speak English, he said.
FTC staff concluded platforms sometimes use automated tools to spread toxic or illegal content and other automated tools to filter some of it out, often without success, said Advertising Practices Division attorney Mike Atleson. Congress “should not be promoting the use of these tools” and should focus on putting guardrails on their use, he said. Tech companies should be more transparent about and accountable for their use of AI tools, in a way that protects privacy, he added. If Congress wants to grant the FTC oversight in this area, legislation should be paired with additional agency resources, he said. The report said human intervention is necessary, but given the volume of content, humans can’t review it all. Advanced technologies should be considered for intervening in and slowing the spread of viral content, the report said: That includes tools that give users more control over what they see.