
Will AI Be a Force for Good?

New technologies are difficult to regulate. With artificial intelligence, it may be time to rethink our approach, says Gillian Hadfield

About The Author


Scott Anderson

Editor, University of Toronto Magazine

Artificial intelligence already powers countless computer applications. As the technology evolves, Gillian Hadfield, the director of U of T’s Schwartz Reisman Institute for Technology and Society, wants to ensure that it does so in ways that benefit society. She spoke recently with University of Toronto Magazine.

Could you elaborate on the challenge you see with AI?
How well modern societies serve human goals is a function of the billions of decisions people make every day. We use regulated markets and democratic processes to try to ensure that these decisions are good for everyone. The challenge we’re facing with rapidly advancing powerful technologies like AI is that we’re starting to let machines make many of those decisions – such as sifting job applications or helping doctors diagnose and treat disease. The promise is that machines can help us make better decisions.

But machines powered by AI are not like people. It can be hard to understand why they make the decisions they do. They can see patterns we don’t see, which is what makes them potentially so helpful. Yet it also makes them harder to regulate. We can write rules that hold people and organizations accountable. But the rules we write for humans don’t easily translate to machines – and that’s the challenge: how do you get a machine to do what we – society – want it to do?

Can you not program an AI to act in a way that aligns with societal values?
This challenge makes engineers scratch their heads. They would happily encode society’s values into their machines, but societies don’t have lists of values to give them. We have diverse and evolving views. That’s why we have complex ways of deciding which values we’ll pursue in any given context – who gets to decide whether there will be a mask mandate, or how safe a vaccine must be.

The question is: how do you ensure that AI is responsive to the choices we’re making as a society? We don’t yet know how to build that kind of AI. We could pass legislation that says, “AI must be fair.” But what does that mean? And how would we determine whether an algorithm is behaving the way we want it to?

What do you suggest?
We need technologies that can help us achieve our regulatory goals. We may want to ban content on social media that’s harmful to children, but how do you check billions of posts every week? As a regulator, you can’t send in an army of computer scientists to pinpoint where in the company’s algorithm harmful content is allowed through. But a technology such as another AI could constantly monitor the platform to see whether harmful content is spreading. I call this “regulatory technology.”

Facebook has hired thousands of people to remove posts that violate its guidelines. Wouldn’t it be in Facebook’s interest to develop this kind of technology?
They are building this technology. But here’s the key point: why does Facebook get to decide what to remove and what to keep? What if removing the harmful content causes its advertising revenues to drop? Will it act in Facebook’s interest or in society’s interest?

We need regulatory technologies to be built by entities other than the ones they will regulate. We want to ensure that Facebook is balancing advertising revenues against online harm in a way that meets society’s guidelines. The virtue of a regulatory market like this is that government sets the goals. That balance between revenues and harm is set by our democratic processes.

Wouldn’t building regulatory technologies require big tech companies to reveal their “secret sauce”? Would they?
This is the radical part. Yes, it will require tech companies to share more information than they currently do. But we need to redraw those lines. The barriers around proprietary information are something legal thinkers invented at the beginning of the Industrial Revolution. They were originally intended to protect things like customer lists or the recipe for Coca-Cola. Now we just take them for granted.

We need to rethink public access to the AI systems inside technology companies because you can’t buy the AI and reverse engineer it to figure out how it works. Compare it to regulating automobiles. Government regulators can buy cars and put them through crash tests. They can install airbags, measure what difference these make, and then require them as standard equipment in all new cars. We don’t let the car manufacturers say, “Sorry, we can’t add airbags. They’re too expensive.”

What do we need in order to build these regulatory technologies?
Lots of smart, entrepreneurial people are starting to think about how to build AI that tests whether an algorithm is fair, or AI that enables people to curate their social media content so it’s healthy for themselves and their communities. We need our governments to focus on how to nurture these technologies and this industry. We need to work collectively to fill the gaps in our regulatory infrastructure. Once we build this shared infrastructure, we can all focus on building our organizations in a way that improves life – for everyone.
