Investments 'Exploding'

Top Accenture, Microsoft Executives Want 'Responsible' AI

Accenture CEO Julie Sweet joined Microsoft President Brad Smith and security experts Friday in seeking more focus on “responsible” AI. It's time for a broader national discussion, involving the government, interest groups and the companies building AI systems, speakers said during a Center for Strategic and International Studies webinar. CSIS plans to launch an AI governance project in coming months.

Industry needs “global, interoperable standards so that we can have global companies operating around the world with similar standards,” Sweet said, and it should take advantage of lessons learned around the world. “Responsible AI is really not something that should be contained within the border or that should be competitively different,” she said. Sweet noted the Business Roundtable is already focusing on AI.

AI investments are “exploding,” more than doubling to $90 billion worldwide last year, Smith said. “The size and complexity of AI models has really exploded and Microsoft has been one of the companies at the forefront,” he said. “You’re now talking about AI models that not just have tens, but now 200 billion, 300 billion parameters that are being used to make decisions based on the analysis of data,” he said. Regulations are emerging, with 14 countries passing 18 laws last year addressing AI, he said.

A few lessons have already emerged, Smith said. Microsoft created a “multidisciplinary group to really govern AI development and use,” he said. In contrast to earlier engineering projects, more people from the social sciences and humanities need to be involved “because, after all, what we’re fundamentally doing is endowing computers with the capacity to make decisions that previously could only be made by people. We have to make sure that all of the values that matter are represented.” These principles have to be part of the “goals and rules,” he said.

Microsoft is now in the second generation of its own AI standard, which sets 14 goals “that then get applied across all of our engineering groups as they’re building AI systems,” Smith said. Microsoft found the need to further focus on “high-risk systems,” such as one that could deny somebody a loan from a bank based on “indices that none of us would feel comfortable with or misidentify someone and deny them entry to an event,” he said. Companies need infrastructure with tools engineers can use to evaluate the systems they’re building, and they need to do training and install compliance systems, he said. “You put all of this together and you start to realize, frankly, this is complicated,” he said.

“Sometimes you can design technology that is too complicated to build, and sometimes you can build technology that’s too difficult to operate,” at least economically, Smith said. “If we don’t have a dialogue about the practical aspects of all of this I think that there’s a real risk that … we’ll either under-regulate, and fail to address the harms that people worry about, or we’ll regulate with such a heavy touch that we’ll find that the regulation itself makes it difficult, or impossible, to build the kind of AI systems that the world really wants to use,” he said.

Accenture hires 100,000 people every year and processes more than 4 million applications, Sweet said. In 2018, the company started using AI to match applications with the jobs available, she said. “AI is absolutely critical to the core of Accenture,” she said. Accenture focused on having responsible AI, she said: “When we started using AI at scale, we said we need to make sure that every part of the process, from recruiting to hiring, that’s using AI doesn’t have bias, is safe, protects the privacy” of applicants.

“When you think about what some of the narrative is, some of the pushback … is dealing with bias and making sure that we’re on a uniform plane,” said former FCC Commissioner Mignon Clyburn.

“We are not starting from scratch” on AI regulation, said Gregory Allen, director of the CSIS Project on AI Governance. AI already has to work within existing regulations in every industry, he said. “Software is integral to the functioning of nuclear power plants, so if you come with any kind of new technology, AI or otherwise, and you apply it to nuclear power plants you’re going to encounter that existing regulatory framework,” he said: “What is true for nuclear power is true for the electricity grid as a whole, is true for the automotive industry, is true for the financial industry, and on and on and on.”