Tech Policy

Who’s going to regulate AI? It might be you.

As legislators struggle to keep pace with technology, experts say the industry needs to take a more active role in keeping things in check.

Mar 26, 2019

From Facebook’s role in spreading misinformation to Europe’s new copyright laws, regulation is the hottest topic in technology right now. How should technology companies be regulated? How can regulation keep up with emerging technologies like AI? And who will make sure new laws don’t stifle innovation?

It’s true that legislators often struggle to understand basic technical concepts, while companies advance their technologies far faster than governments and the legal system can keep up. Speaking at EmTech Digital, MIT Technology Review’s AI conference, a group of leading experts on AI and policy argued that new standards and closer cooperation are needed.

While Google policy chief Kent Walker announced the formation of a new external advisory council for AI development, Rashida Richardson, director of policy research at the AI Now Institute, said that the emphasis should be on technologists and leading companies acting to prevent misuse of the systems they are building.

“Who bears the burden for ensuring that emerging technologies are not discriminatory?” she asked.

Unintended consequences, such as the false matches that face recognition systems produce, are too dangerous for many groups of people, she said, and systems trained on bad data only end up reinforcing preexisting bias. But preventing abuses while simultaneously encouraging development is clearly something the law struggles with.
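To see why a false positive rate matters here, consider that a system can look accurate in aggregate while one group bears most of its false matches. The short Python sketch below is a minimal, hypothetical illustration of that gap; the group names, records, and numbers are invented for the example, not drawn from any real system:

```python
# Hypothetical sketch: the same classifier can report a modest overall
# false positive rate while one group absorbs most of the false matches.
# All records are invented for illustration. 1 = "match", 0 = "no match".
from collections import defaultdict

# Each record: (group, true_label, predicted_label)
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 0, 0),
]

fp = defaultdict(int)         # false positives per group
negatives = defaultdict(int)  # true non-matches per group

for group, truth, pred in records:
    if truth == 0:
        negatives[group] += 1
        if pred == 1:
            fp[group] += 1    # predicted a match where there was none

for group in sorted(negatives):
    rate = fp[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
# Prints 25% for group_a but 50% for group_b: an aggregate error rate
# would hide the fact that group_b is wrongly flagged twice as often.
```

That disparity, invisible in a single headline accuracy number, is the kind of discriminatory effect Richardson argues developers are responsible for catching before a system is deployed.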

“The companies and individuals responsible for creating emerging technologies have an obligation. They need to do their due diligence—deeply interrogating the context in which a data set was created, for example,” Richardson said. “In other cases, there are times that companies may find their technology cannot be made discrimination-proof, and they will have to make a tough decision on whether they should bring that product to market.”

Brendan McCord, an advisor to the US Department of Defense, said that the largest and most influential companies should use their “immense power” and take a more active role in helping shape regulatory efforts.

“Civil society groups are doing a good job in trying to raise awareness of these issues,” he said. “But companies have enormous capacity to drive this conversation.”

McCord, who previously worked on the Pentagon’s controversial Project Maven, suggested that a consortium of leading companies could help establish industry norms or even work with legislators to design future-proof approaches to regulating AI, machine learning, and other fast-evolving technologies.

“I think a good strategy is that companies [like Google] band together with other companies and create momentum, create a push for the right kind of regulation, and have that codified, which drives a virtuous cycle where other companies have to comply with that regulation,” he said.

However, this would require companies to work much harder at putting the public interest ahead of their own profits, he added.

Google’s Walker said there were lots of examples of companies making good decisions, and that Google itself was considering which elements of Europe’s new data privacy law, the GDPR, it might be able to import into the US.

But the evidence suggests that current approaches to self-regulation have significant weaknesses, and that companies often act only when threatened by governments or the courts. Less than a week ago, for example, Facebook announced that it would stop letting advertisers target users by race, gender, and age. That decision came only after a string of lawsuits charging that the company was violating civil rights laws established in the 1960s.

AI Now’s Richardson said emerging technologies are difficult to regulate because they move so quickly and because the processes that shape them often leave out important stakeholders.

“There is very ambiguous rhetoric around equality,” she said. “It’s really hard to say ‘We will not harm people with this technology.’ Who makes that decision?

“It’s harder to regulate, because either you have a full moratorium until we understand it, or you live in the world we live in right now, in which you’re trying to catch up.”