The FTC on Wednesday unveiled proposed changes to its children’s privacy rules, including more stringent requirements for obtaining parental consent and limits on how platforms can monetize children’s data. The agency issued an NPRM seeking comment on potential changes to the Children’s Online Privacy Protection Rule. The changes would require platforms and apps to “obtain separate verifiable parental consent to disclose information to third parties including third-party advertisers -- unless the disclosure is integral to the nature of the website or online service.” The agency would ban websites from “collecting more personal information than is reasonably necessary for a child to participate in a game, offering of a prize, or another activity.” In addition, it would prohibit operators from “using online contact information and persistent identifiers collected under COPPA’s multiple contact and support for the internal operations exceptions to send push notifications to children to prompt or encourage them to use their service more.” The agency is considering specifying that personal information can be retained “only for as long as necessary to fulfill the specific purpose for which it was collected.” The commission voted 3-0 to issue the NPRM. The public will have 60 days to comment after the notice's Federal Register publication. “Kids must be able to play and learn online without being endlessly tracked by companies looking to hoard and monetize their personal data,” FTC Chair Lina Khan said in a statement. “The proposed changes to COPPA are much-needed, especially in an era where online tools are essential for navigating daily life -- and where firms are deploying increasingly sophisticated digital tools to surveil children.” In a statement Wednesday, Sens. Ed Markey, D-Mass., and Bill Cassidy, R-La., said the FTC proposal is “critical to modernizing online privacy protections” but shouldn’t be seen as a replacement for legislation. Markey and Cassidy wrote legislation updating children’s privacy law (see 2303220064).
The FTC will “closely monitor” generative AI for enforcement opportunities to protect competition and consumers, agency staff said in a report issued Monday. Staff offered takeaways from an October roundtable where creative professionals discussed AI's benefits and risks. Their concerns touched on data collection without consent, undisclosed use of work, competition from AI-generated creators, AI-driven mimicry and false endorsements. “Although many of the concerns raised at the event lay beyond the scope of the Commission’s jurisdiction, targeted enforcement under the FTC’s existing authority in AI-related markets can help protect fair competition and prevent unfair or deceptive acts or practices,” the agency said.
The European Commission is investigating whether X breached the Digital Services Act, it said Monday. X didn't immediately comment. It is the first time the EC has opened proceedings under the DSA. X is one of 19 companies the DSA classifies as "very large online platforms" (VLOPs). The VLOPs are required to analyze systemic risks they create from dissemination of illegal content or the harmful effects such content has on fundamental rights (see 2311100001). On the basis of a preliminary investigation, including X's first risk report, transparency report and replies to a formal request for information, the EC said the company may have violated the DSA "in areas linked to risk management, content moderation, dark patterns, advertising transparency and data access for researchers." It launched formal infringement proceedings that will focus on: (1) Compliance with DSA obligations related to countering dissemination of illegal content in the EU. (2) The effectiveness of measures taken to combat information manipulation on the platform, particularly X's "so-called 'Community Notes' system" in the EU. (3) The measures X took to increase its platform's transparency. (4) A suspected deceptive design of the user interface particularly related to checkmarks linked to certain subscription products (the Blue checks). The EC sent an "important signal today" to show that it wants the DSA to change the business models of VLOPs, an EC official said at a briefing. The launch of the inquiry doesn't mean X has breached the DSA, just that the EC has significant grounds to investigate, the official said. Illegal content in the EU is a key area of concern, the official said: X's notification system might not comply with the DSA, and some of its risk assessments for the EU aren't sufficiently detailed, especially in the area of languages monitored for illegal content. 
Some of the company's mitigation techniques are very broadly defined and may not be effective in combating illegal content such as graphic violence in connection with the Israel-Hamas conflict, the official added. In addition, the way X deals with disinformation relies on a combination of different systems, including blue checks, which may mislead users into believing the checks indicate more trustworthy content, she said. The EC has had strong engagement from all the VLOPs, but it's a "glass half full" because it's unclear whether that serious engagement is enough to mitigate risks, the official said. Asked for a definition of what the EC considers illegal content, the official said the DSA isn't a content moderation rule but an approach to deal with systemic risks and to assess what VLOPs do when they're notified of such content on their sites. The same goes for disinformation, the official said. The EC received examples of material national media authorities flagged, which were sent to X; however, the company did not address them, a second official noted: These include depictions of violent crimes and visible wounds. X's policies forbid publication of such content, but it appears to remain available on the site. The EC will continue gathering evidence and, if it finds noncompliance, could impose interim measures, accept commitments from X to remedy the problems or make an infringement decision.
EU privacy law doesn't need tweaking at present, the European Data Protection Board said in response to a European Commission report on how well the general data protection regulation (GDPR) is working. Following its Dec. 15 plenary, the board said it "considers that the application of the GDPR in the first 5 and a half years has been successful. While a number of important challenges lie ahead, the EDPB considers it premature to revise the GDPR at this point in time." It urged the European Parliament and Council to quickly approve procedural rules relating to cross-border enforcement of the measure. Moreover, it said, national data protection authorities and the board need sufficient resources to continue carrying out their duties. The EDPB said it's convinced that existing tools in the GDPR will lead to a "common data protection culture" if they're used in a harmonized way. In a Q&A with Communications Daily, European Data Protection Supervisor Wojciech Wiewiorowski said he expects discussions about changes to the GDPR to begin in 2025 to deal with AI, among other items (see 2312010002).
NTIA will launch a public inquiry into “the risks and benefits of openness of AI models and their components,” the agency said Thursday. Administrator Alan Davidson will announce the launch during an event Wednesday, with experts from the Center for Democracy & Technology, GitHub, Princeton University and the Centre for the Governance of AI in attendance.
DOJ’s decision to withhold support of long-held digital provisions in trade agreements undermines U.S. democratic values, the U.S. Chamber of Commerce said in a letter to Antitrust Division Chief Jonathan Kanter on Thursday. The Chamber questioned the division’s role in the removal of a digital trade chapter in the Indo-Pacific Economic Framework for Prosperity (IPEF). The organization asked Kanter to explain specifically what his office “finds objectionable” within the competition chapter of the United States-Mexico-Canada Agreement that “justified a complete gutting of those provisions in IPEF.” The Chamber asked what role DOJ might have played in U.S. Trade Representative Katherine Tai’s decision to abandon digital trade provisions at the World Trade Organization. DOJ didn’t comment.
Monthly internet service prices are highest in Norway ($79.40 monthly), with Iceland second ($62.10) and Russia least expensive ($5.60), edging out Ukraine ($6.10), according to a ranking of 85 nations by Polish e-commerce platform Picodi published last week. Internet service in the U.S. averages $50 a month, making it the sixth-most expensive, Picodi said.
The FTC on Tuesday voted 3-0 to authorize use of compulsory process in nonpublic investigations of products and services that use, or claim to be produced using, artificial intelligence. The commission approved a 10-year omnibus resolution the agency said will “streamline FTC staff’s ability to issue civil investigative demands,” a form of compulsory process similar to a subpoena. CIDs can be used to collect documents, information and testimony in consumer protection and competition probes.
New York is allocating $3 million to train higher education professionals to identify misinformation and extremist content following an “uptick in anti-Muslim and antisemitic rhetoric” on social media, Gov. Kathy Hochul (D) announced Tuesday. The funding will expand the state's threat assessment and management program to all state college campuses, she said. Since Hamas’ Oct. 7 attack on Israel, there has been a “400 percent increase in threats against Jews, Muslims and Arabs,” she said. “I will not allow our state to be defined by the angry few that peddle in hate and violence.”
Elon Musk’s “promotion of Antisemitic and racist hate” on X is “abhorrent” and “unacceptable,” White House spokesperson Andrew Bates said Friday. Musk earlier in the week agreed with a post on the platform claiming Jewish communities “have been pushing the exact kind of dialectical hatred against whites that they claim to want people to stop using against them.” Musk responded to the post, saying, “You have said the actual truth.” Bates said it’s “unacceptable to repeat the hideous lie behind the most fatal act of Antisemitism in American history at any time, let alone one month after the deadliest day for Jewish people since the Holocaust.” President Joe Biden and the administration will “continue to condemn Antisemitism at every turn,” Bates said. IBM suspended advertising on the platform Friday. IBM "has zero tolerance for hate speech and discrimination and we have immediately suspended all advertising on X while we investigate this entirely unacceptable situation," the company said. Apple, Comcast, NBCUniversal and the European Commission also reportedly suspended advertising. They didn’t comment Friday. X responded to a request for comment Friday with an automatic reply: “Busy now, please check back later.” X CEO Linda Yaccarino said Thursday the platform has been “extremely clear about our efforts to combat antisemitism and discrimination. There's no place for it anywhere in the world -- it's ugly and wrong. Full stop.”