‘Shared Responsibility’

NTIA Fields Recommendations for New AI Liability Framework

Policymakers should consider new liability frameworks when assessing AI technology's impact, tech industry and consumer groups told NTIA in comments due Friday (NTIA-2023-0009).

NTIA Administrator Alan Davidson told reporters last week that developers should be held responsible for AI's societal consequences (see 2403270067). The agency collected public comments through Friday on its inquiry into the risks and benefits of open-source AI development models.

AI developers should be held to a “higher standard of accountability” than the broader software industry, Public Knowledge said in comments. Innovation shouldn’t be “stifled,” but companies should promote responsible development that prioritizes safety, PK said. AI models learn, adapt and make decisions in complex ways, which makes auditing a challenge and creates a “novel” set of risks, PK added.

NTIA should explore solutions that “acknowledge a shared responsibility for safety by model developers, deployers, and users,” Google commented: Updated liability frameworks could be useful for “fully realizing the benefits of open models.” The entity at the “closest point to the AI product end-user is best positioned to monitor and prevent misuse,” the company argued. Developers can prohibit harmful misuse of AI models, but once a model is shared, they “relinquish much of this control,” said Google. Developers aren’t legally responsible for third-party misuse of open-source applications, the company said: “Clarifying this point for open models can help drive continued investment.”

Bipartisan AI legislation at the federal level is needed to establish common standards and avoid a “fragmented regulatory environment” in the U.S., Meta commented. Legislation should incorporate language from the White House’s voluntary AI commitments with industry (see 2307210043), federal agencies’ responses to President Joe Biden’s AI executive order, and recommendations from academic and industry experts, said Meta. Responding to OMB’s new policies for federal AI use (see 2403280055), Senate Majority Leader Chuck Schumer, D-N.Y., said last week he remains "focused on working towards reaching a bipartisan consensus on AI legislation."

A licensing framework could help establish common terms and conditions for developers, the Information Technology Industry Council commented, though traditional software licenses might not directly apply to open AI models. Like Google, ITI urged the agency to acknowledge that the technology’s risks are a shared responsibility, not developers’ alone. “Once foundation models with widely available weights are deployed, developers are not able to retract said model, even in cases where that model is being used in malicious ways,” said ITI.

Microsoft recommended “clear and consistent” definitions for open-source AI models and distribution methods. Consistent definitions “will allow policymakers to target specific attributes that are introducing risk, such as the breadth of deployment or the ability to fine-tune or otherwise modify a model, increasing the effectiveness of any policy interventions and reducing undue burdens,” said Microsoft. The company suggested “voluntary risk-based and outcome-focused frameworks” could support the responsible release of foundation models and model weights. Such frameworks can help “set expectations for stakeholders while longer-term international standards are developed,” said Microsoft.