University of Toronto Magazine
Photo of a cut-out image of a robot.
Photo by Jonty Wareing/Flickr

Evil Robots

Could the machines we create one day destroy us?

Just last year, IBM’s “Watson” computer won the TV game show Jeopardy! But could creating software that smart be a dangerous game? What if the United States’ defence computers, for example, suddenly decided they had a directive to wipe out humanity? It’s a popular sci-fi conceit, but could it actually happen?

“Computers can be evil if programmed that way,” says Graeme Hirst, a professor in the department of computer science and an expert in developing systems that can process human language. He points to viruses that can turn home computers into spambots, or to the Stuxnet worm, which crippled Iran’s nuclear program in 2010 and is widely considered to have been launched with the support of the American and Israeli governments.

People, of course, are behind these nefarious programs. “We can certainly imagine some rogue hacker able to do something really evil,” says Hirst, “probably nuclear.” But rogue robots? “No,” says Hirst. “A computer by definition is not able to take a decision outside its programming.”

That doesn’t mean a computer can’t do something its programmer didn’t intend. “If a computer is going to bring down the world, it’s going to do it because of human error,” says Hirst. Not only are there honest mistakes – he cites the software programming error partly responsible for the blackout of 2003 – but “the big problem is that it’s impossible, sometimes, to really predict what a program is going to do.” If a program is created for a complex situation – perhaps to monitor the weather and control the floodgates of a dam – it’s relatively easy for its human creators to overlook a crucial factor or interaction.

“Talking about [machine] sentience is unhelpful here,” says Hirst. What matters is programming. And while programming ever-smarter computers brings risks and challenges, they’re ultimately still challenges related to human failings. Our fears of a possible computer-generated or computer-enabled disaster (let alone an “implausible” Matrix-style enslavement to machines) are “just extrapolations,” says Hirst, “of the current decade’s thoughts about cyber-warfare and the role of computers in terrorism in general.” The ability to choose between good and evil remains a fundamentally human characteristic.
