University of Toronto Magazine
Computer science professor Geoffrey Hinton believes artificial intelligence will soon transform almost everything we do.
Illustration by Jesse Lenz, based on a photo by Noah Berger

Getting Smarter

A U of T computer scientist is helping to build a new generation of intelligent machines

Geoffrey Hinton has a news bulletin for you: You’re not conscious.

OK, you’re conscious as opposed to being unconscious – such as when you fall asleep at night, or when you get knocked out during a boxing match or when a doctor administers a general anesthetic before surgery. But you don’t have some intangible mental quality that worms or daffodils – or toasters, for that matter – lack.

“Consciousness is a pre-scientific term,” says Hinton, as we sit in the lounge down the hall from his office in the department of computer science. (Actually, Hinton remains standing, explaining that it’s easier on his back; to show me something on his laptop, he kneels.) He draws an analogy to how we conceived of the notion of “life” a hundred years ago. Back then, scientists and philosophers imagined that living things were endowed with a “life force” – the French philosopher Henri Bergson called it élan vital – that distinguished living from non-living matter. But once we got a grip on genetics and microbiology, and especially the structure and function of DNA, the notion simply faded away. Living matter, it turns out, is just like non-living matter, except for being organized in a particularly complex manner. Eventually, says Hinton, as we come to see brains as machines (albeit extraordinarily complex ones), we’ll see consciousness in a similar way. Consciousness, perhaps, is simply what it feels like to be using a brain.

“Of course, a boxing referee will have his own definition [of consciousness] – but all of them are just a muddle,” says Hinton. “When we get down to doing science, it’s just a useless concept.”

And with that philosophical hurdle out of the way, there’s nothing to stop us from constructing truly intelligent machines, Hinton says. To be sure, with today’s technology, no machine can perform as well, at so many different kinds of cognitive tasks, as a real live person with a fully functional brain. But a machine that’s modelled on the brain – a machine that can recognize patterns, and learn from its mistakes, just like people do – can think, too. And it won’t be mere illusion: it’s not that such machines will merely look or sound as though they’re smart; Hinton believes they’ll actually be smart. The first signs of the coming sea change are already here, from advances in computer vision to speech recognition to the self-driving car – and Hinton is confident that the revolution in machine intelligence is only just beginning.

Born in Bristol, England, Hinton was still in high school when he began to wonder about the parallels between computers and brains. (A point of trivia: Hinton is a great-great-grandson of George Boole, the 19th-century English mathematician whose work on logic paved the way for today’s digital computers. To my mind, however, he has the facial features of Isaac Newton – at least, as one imagines Newton would have looked without the wig.)

Hinton went on to earn a BA in experimental psychology from the University of Cambridge, and a PhD in artificial intelligence from the University of Edinburgh. After holding a number of teaching positions in the U.K. and the U.S., he joined the faculty at U of T in 1987, and is now the Raymond Reiter Distinguished Professor of Artificial Intelligence. In 2013 he also took a part-time position at Google, with the title of Distinguished Researcher, and Hinton, now 67, divides his time between Toronto and Google’s headquarters in California.

The brain is still very much on Hinton’s mind. His most peculiar and yet endearing habit is to run down the hallway, excitedly declaring that now, finally, he understands how the brain works. “He’s got this infectious, contagious enthusiasm,” says Richard Zemel, who did his PhD under Hinton, and now, as a faculty member, works in an office down the hall from his former supervisor. He says he’s lost count of how many times Hinton has run to his office and knocked on the door, declaring “I’ve solved it! I know what the brain is doing!” Of course, repetition would seem to take the steam out of such claims – but Hinton, with his own brand of dry humour, has that angle covered, too. According to Zemel, he will typically add: “I was wrong every other time, but this time I’m right!”

Not just anyone could get away with such shenanigans. It helps if you’re brilliant. Or, to put it another way, some of the time, your ideas have to be right. “There aren’t that many people in the world who could make these claims,” says Ruslan Salakhutdinov, another former student of Hinton’s, who, like Zemel, is now on faculty and has an office along that same hallway. “Geoff is very humble,” Salakhutdinov says. “He generates a lot of good ideas – but you’d never hear him saying ‘I developed this idea on my own,’ even though he did . . . He doesn’t take as much credit for his work as he deserves.”

Hinton is recognized as a world leader in a particular branch of artificial intelligence (AI) known as “deep learning.” In fact, he pretty much invented the field. Deep learning uses neural networks – computer programs that simulate networks of virtual neurons, which exchange signals with their neighbours through connections, switching on or off (“firing”) in response. The strength of those connections, which determines how likely each virtual neuron is to fire, is variable, mimicking the varying strengths of the connections between neurons in the brain.
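
To make that concrete, here is a minimal sketch in Python of a single virtual neuron of the kind described above – invented for illustration, not drawn from Hinton’s software. It sums the weighted signals arriving from its neighbours and “fires” in proportion to how strong the total is; the weights play the role of connection strengths.

```python
import math

def neuron_output(inputs, weights, bias):
    """A virtual neuron: sum the weighted input signals, then squash the
    total to a firing strength between 0 (off) and 1 (fully firing)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Three incoming signals and a made-up set of connection strengths.
signals = [0.9, 0.2, 0.7]
weights = [1.5, -0.8, 0.4]   # strong, inhibitory, and weak connections
print(neuron_output(signals, weights, bias=-0.5))  # ~0.73: this neuron fires
```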

The network can be trained by exposing it to massive data sets; the data can represent sounds, images or any other highly structured information. In response, the strength of certain connections increases, while that of others decreases – and the network learns to combine simple cues into more complex ones. For example, two spots with lines above them could be a pair of eyes, but with nothing more to go on, that’s a very uncertain conclusion. If there’s a dark, horizontal patch below them, which could be a mouth, then the whole thing could be a face. If there’s a nose-like patch in between, and a hair-like area above, the identification becomes almost certain. (Of course, further cues are needed to know if it’s a human face, an animal face or C-3PO.)
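
A hedged sketch of what “training” means in practice: the feature names and toy data below are invented for illustration, but the mechanism – nudging each connection strength up or down according to the error in the network’s guess – is the heart of how such networks learn.

```python
import math

# Toy training data: each example scores three hand-picked cues
# (eye-like spots, mouth-like patch, nose-like patch) and records
# whether the image really was a face. Entirely made up for illustration.
examples = [
    ([1.0, 1.0, 1.0], 1),  # all cues present: a face
    ([1.0, 1.0, 0.0], 1),  # eyes plus mouth: still a face
    ([1.0, 0.0, 0.0], 0),  # two spots alone: not enough
    ([0.0, 0.0, 0.0], 0),  # nothing face-like
]

weights, bias, rate = [0.0, 0.0, 0.0], 0.0, 0.5

for _ in range(200):  # repeated exposure to the data set
    for cues, is_face in examples:
        total = sum(c * w for c, w in zip(cues, weights)) + bias
        guess = 1.0 / (1.0 + math.exp(-total))
        error = is_face - guess
        # Strengthen or weaken each connection in proportion to its
        # contribution to the error in this guess.
        weights = [w + rate * error * c for w, c in zip(weights, cues)]
        bias += rate * error

# In this toy data the mouth cue perfectly predicts "face", so its
# connection ends up carrying the most weight; eyes alone never sufficed.
print([round(w, 2) for w in weights])
```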

The value of Hinton’s work is recognized far beyond the world of computers and algorithms. “Geoff Hinton is one of the most brilliant people in cognitive science,” says Daniel Dennett, who teaches at Tufts University in Massachusetts and is known for a string of popular books, including Consciousness Explained. “I think neural networks are definitely as close as we’ve come to a thinking thing.” He cautions that, as clever as neural networks are, they have yet to match the mind “of even a simple critter.” But neural networks “will almost certainly play a major role” in the future of AI and cognitive science.

Neural networks are not a new idea. In fact, the first papers on the subject go all the way back to the 1950s, when computers took up entire rooms. Hinton worked on neural networks early in his career, and almost managed to bring them into the mainstream in the 1980s, when deep learning first began to show some promise. But it didn’t quite “take.” Computers were still painfully slow, and lacked the power needed to churn through the vast quantities of data that neural networks demand.

By the 2000s, that had changed – and Hinton’s research was moving in promising new directions. He ushered in the modern era of neural network research with a 2006 paper in the journal Neural Computation, and another key paper in Science (co-authored with Salakhutdinov) a few months later. The key idea was to organize the neural network into layers and to apply the learning algorithm to one layer at a time, loosely approximating the brain’s own structure and function. Suddenly AI researchers had a powerful new technique at their disposal, and the field went into overdrive.
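
The published papers stacked restricted Boltzmann machines, which is more machinery than fits here; the sketch below substitutes tiny autoencoder layers (a deliberate simplification) to show the structural idea – train one layer, freeze it, and feed its outputs upward to the next. Treat it as a schematic of “one layer at a time,” not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_layer(data, hidden_size, steps=500, rate=0.1):
    """Train one layer to compress `data` into `hidden_size` features by
    learning to reconstruct its own input (a tiny autoencoder)."""
    n = data.shape[1]
    w_in = rng.normal(0, 0.1, (n, hidden_size))
    w_out = rng.normal(0, 0.1, (hidden_size, n))
    for _ in range(steps):
        hidden = np.tanh(data @ w_in)   # encode the input
        recon = hidden @ w_out          # try to rebuild it
        err = recon - data
        # Gradient descent on the reconstruction error.
        w_out -= rate * hidden.T @ err / len(data)
        grad_hidden = (err @ w_out.T) * (1 - hidden ** 2)
        w_in -= rate * data.T @ grad_hidden / len(data)
    return w_in

# Made-up "data": 200 examples of 8 correlated measurements.
data = rng.normal(0, 1, (200, 4)) @ rng.normal(0, 1, (4, 8))

# Greedy layer-by-layer training: each layer learns from the one below.
features = data
for size in (6, 3):                   # two stacked layers
    w = train_layer(features, size)
    features = np.tanh(features @ w)  # freeze the layer, pass features up
print(features.shape)  # (200, 3): a deep, compressed representation
```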

Some of the biggest breakthroughs involved machine vision. Beginning in 2010, an annual competition known as the Large Scale Visual Recognition Challenge has pitted the world’s best image-recognition programs against each other. The challenge is twofold. First, the software has to determine whether each image in a vast database of more than a million images contains any of 1,000 categories of object. Second, it has to draw a box around each such object wherever it appears.

In 2012, Hinton and two of his students used a program they called SuperVision to win the competition, beating out five other teams of researchers. MIT’s Technology Review called it a “turning point for machine vision,” noting that such systems now rival human accuracy for the first time. (There are some intriguing differences between the way computer programs and humans identify objects. The Review article states that the best machine vision algorithms struggle with objects that are slender, such as a flower stem or a pen. On the other hand, the programs are very good at distinguishing similar-looking animals – different bird species, for example, or different breeds of dog – a task that’s challenging for many humans.)

The software that Hinton and his students developed can do more than classify images – it can produce text, even whole sentences, to describe each picture. In the computer science lounge, Hinton shows me an image of three different kinds of pizza slices, on a stove-top. The software generates the caption, “Two pizzas on top of a stove-top oven.” “Which isn’t right – but it’s in the ballpark,” Hinton says. Other images get correctly labelled as “a group of young people playing a game of frisbee” or “a herd of elephants walking across a dry grass field.” Standard AI, he says, “had no chance of doing this.” Older programs would have stumbled at each stage of the problem: not recognizing the image; not coming up with the right words to describe the image; not putting them together in a proper sentence. “They were just hopeless.” Now, it seems, computers can finally tell us what they’re seeing.

The practical uses for sophisticated image recognition seem almost endless, but one of Hinton’s projects is deceptively simple: getting a machine to read the numbers on houses. With great effort, he and his colleagues at Google have pulled it off. The software they’ve developed lets Google read the street addresses on people’s homes (vital for connecting its Street View imagery to its maps). “The techniques we developed for that are now the best way of reading numbers ‘in the wild,’” Hinton says. “And the neural nets are now just slightly better than people at reading those numbers.”

Other potential applications for machine vision include medical diagnostics, law enforcement, computer gaming, improved robots for manufacturing and enhanced vehicle safety. Self-driving cars are one of Google’s major current projects. But even when there’s a human behind the wheel, automated systems can check to make sure no pedestrian or cyclist is in front of the car – and even apply the brakes, automatically, if necessary.

On the medical front, another potentially life-saving application is in pharmaceutical design. Drug companies experiment with countless chemical compounds – molecules of many different shapes. Predicting whether a complicated molecule will bind to another is maddeningly difficult for a human chemist, even when aided by computer-generated models – but it may soon be fairly easy for sophisticated neural networks trained to recognize the right patterns. In 2012, Merck, the pharmaceutical giant, sponsored a competition to design software to find molecules that might lead to new drugs. A team of U of T graduate students, mentored by Hinton, won the top prize. Using data describing thousands of different molecular shapes, they determined which molecules were likely to be effective drug agents.
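
A rough sketch of the shape of that task, with entirely invented data: each molecule is reduced to a vector of numerical descriptors, and a small neural network learns which descriptor patterns go with binding. Real entries in the Merck challenge used far richer descriptors and far larger models.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Invented stand-in data: 500 molecules, each described by 20 numeric
# descriptors (shape, charge, and so on), with a 0/1 "binds" label that
# here secretly depends on a combination of two descriptors.
descriptors = rng.normal(size=(500, 20))
binds = (descriptors[:, 3] - descriptors[:, 7] > 0).astype(int)

# A small neural network learns the descriptor pattern from 400 examples.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(descriptors[:400], binds[:400])

# Rank unseen candidate molecules by predicted probability of binding.
scores = model.predict_proba(descriptors[400:])[:, 1]
print("most promising candidates:", np.argsort(scores)[::-1][:5])
```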

Between the machine vision prize and the Merck prize, 2012 was obviously a good year for Hinton. As it happens, it was also the year that he won a $100,000 Killam Prize from the Canada Council for the Arts, for his work on machine learning. In 2010, he’d won the Herzberg Canada Gold Medal for Science and Engineering, which comes with a $1 million research grant over a five-year period.

Pictures are made up of distinct patterns – and so too are sounds, which means that speech recognition is a prime target for neural networks. In fact, anyone with a phone running Google’s Android software already has access to a speech recognition system developed by Hinton and his students. Launched in 2012, the feature is called Google Now – roughly comparable to the Siri personal digital assistant that runs on iPhones – and can also be found in the Google Chrome web browser on personal computers. Popular Science named it the “Innovation of the Year” for 2012. Ask Google Now a question, and it combs the Internet for an answer, which it delivers in clear English sentences.

Recognizing speech is a good start; converting speech to text is also invaluable. And then there’s text-based machine translation – the task of translating one written language into another. One way of doing that – the old way – is to scour the Internet for words and phrases that have already been translated, use those translations, and piece together the results (and so every time you input “please,” the French output will be “s’il vous plaît”). “That’s one way to do machine translation – but it doesn’t really understand what you’re saying,” says Hinton.
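
In miniature, the “old way” amounts to little more than table lookup, as in this toy sketch (the phrase table is invented; real systems stored millions of statistically extracted phrase pairs):

```python
# A toy phrase table in the old, lookup-style approach: every known
# English phrase maps to a stored French translation.
phrase_table = {
    "thank you": "merci",
    "the cat": "le chat",
    "please": "s'il vous plaît",
}

def translate(sentence):
    """Substitute known phrases, longest match first; no understanding involved."""
    for english in sorted(phrase_table, key=len, reverse=True):
        sentence = sentence.replace(english, phrase_table[english])
    return sentence

print(translate("the cat says thank you, please"))
# -> "le chat says merci, s'il vous plaît": pieced together, not understood
```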

A more sophisticated approach, he says, is to feed lots of English and French sentences into a neural network. When you give it an English sentence for translation, the network predicts the likely first word of the French translation. If it is told the true first word, it can then predict the likely next word, and so on. “After a lot of training, the predictions become very good,” says Hinton. “And this works as well as the best translation systems now, on a medium-sized database. I think that’s an amazing breakthrough.”
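
The decoding loop Hinton describes looks roughly like the sketch below. The probability table is hand-made for illustration and, unlike a real system, ignores the English input; in practice a trained network supplies these probabilities, conditioned on the source sentence.

```python
# A schematic of translation by next-word prediction. In a real system a
# trained neural network would supply these probabilities, conditioned on
# the English sentence; this hand-made table stands in for it.
next_word_probs = {
    ("<start>",): {"le": 0.9, "la": 0.1},
    ("<start>", "le"): {"chat": 0.8, "chien": 0.2},
    ("<start>", "le", "chat"): {"dort": 0.7, "mange": 0.3},
    ("<start>", "le", "chat", "dort"): {"<end>": 1.0},
}

def translate(english_sentence):
    """Greedy decoding: repeatedly emit the most likely next French word,
    given the French words produced so far (and, in a real system, the
    English input, which this toy table omits)."""
    french = ["<start>"]
    while french[-1] != "<end>":
        candidates = next_word_probs[tuple(french)]  # the network's job
        french.append(max(candidates, key=candidates.get))
    return " ".join(french[1:-1])

print(translate("the cat sleeps"))  # -> "le chat dort"
```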

An even bigger breakthrough would be to skip the textual mediator, and translate speech from one language directly into speech from another language. In fact, Microsoft unveiled a demonstration version of such a system last year; the company has added it as a “preview” feature to its popular Skype communications platform. Hinton, however, says the Microsoft translator is still fairly rudimentary. “What you really want is to put something in your ear, and you talk in French, and I hear it in English.”

As soon as Hinton mentions this, I immediately think of Douglas Adams and his comedic science fiction classic, The Hitchhiker’s Guide to the Galaxy. In the Hitchhiker’s Guide, Adams describes a “small, yellow, leech-like” fish, which, when placed in the ear, functions as a universal translator: With the Babel fish in place, “you can instantly understand anything said to you in any form of language.” Hinton is clearly a Douglas Adams fan, too. Yes, he says, a mechanical version of Adams’ fictitious fish is exactly the technology that he’s talking about. He adds, somewhat optimistically: “That will make a big difference; it will improve cultural understanding.” (In the Hitchhiker’s Guide, the Babel fish has the opposite effect, causing “more and bloodier wars than anything else in the history of creation.”)

Hinton and the other experts I spoke with emphasized the benefits of machine intelligence, but there has long been a dark side to such technology. Machines may improve our lives, but they can also take lives. This spring, a week-long conference was held at the United Nations in Geneva, where delegates considered the question of autonomous drones making life-and-death decisions in combat, and carrying out attacks without direct human involvement. (As usual, the science fiction writers were the first to explore this territory, with killer machines being a sci-fi staple from The Terminator to The Matrix.) Hinton is well aware that the largest investor in machine learning is the U.S. Department of Defense. He refuses to take money from the U.S. military, but understands that there is nothing to stop it – or anyone else – from implementing his ideas.

But Hinton’s tone is positive. Major societal changes are coming, thanks to machine learning, and those changes will do more good than harm. It’s been a long wait. Hinton – and, in fact, every researcher I spoke with – acknowledges AI’s bumpy history; there have been many decades in which hype outstripped actual progress. But now, it seems, machine learning has come of age. “It’s quite exciting seeing ideas I’ve been thinking about for 30 years – seeing them actually now succeeding in practice, and being used by billions of people,” Hinton says. And then – an additional group of neurons has clearly fired – he corrects himself. “Hundreds of millions, anyway.”

I’m mentally tallying machine learning’s possible applications, from language translation to image recognition to designer drugs. Will machine learning, powered by neural networks, simply transform everything? “Yes,” Hinton says. “And quite soon.”


8 Responses to “Getting Smarter”

  1. George Hiraki says:

    After reading the novel, will machine intelligence understand the feelings of Anna Karenina before she fell in front of the train?

  2. ralph chelbea says:

    Do you?

  3. University of Toronto Magazine says:

    Prof. Geoff Hinton responds:

    @George: Yes, eventually.

  4. Ken McCallion says:

    Computer scientist Geoffrey Hinton attempts to diminish the distinction between organism and machine. He neglects to observe that machines can only self-organize from their topmost processing level, downward, and by principles that an organism or another machine has designed into them. In contrast, organisms self-organize from the biochemical level, upward, in successive, hierarchic, integrated and yet discontinuous (in a word, emergent) layers of adaptation to their total environment – and without need of any planning process. Even without flying a banner attesting to “consciousness” or reflexive cognition or whatever one wishes to call it, cognitive processing throughout Kingdom Animalia bears the seams and tool marks of both original and continued self-reorganization at many levels.

    Ken McCallion
    MA 1987
    Toronto

  5. Randy says:

    I am not a computer scientist nor am I a philosopher. And I am certainly not religious, at least in any theistic sense. But I believe we make a grave error when we dismiss human consciousness in this way and then conclude that machines might one day equal us. I believe that such a narrow view of the human mind – equating it with the material brain – is causing us to lose sight of what it means to be human.

    It is clear now, I would agree, that the brain is a very complex arrangement of material, a machine. But is this all that we are? Can we dismiss this "sense of being" so easily? You yourself say, "Consciousness, perhaps, is simply what it feels like to be using a brain." Again I might agree. But you seem to be missing the obvious question: who or what is it that "feels what it is like to use a brain"? Who or what are "you"?

  6. jMicaela says:

    Looks like web site CAPTCHAs will have to be revised to filter out smart bots in the future.

  7. Ross Harley (RR I) says:

    This is staggering! I will use these ideas for my work in neuromusicology. Music is nothing if not "pattern recognition." Many thanks to Dr. Hinton. He is a massive inspiration.

  8. Dave says:

    “His style of improvisation would seem to have combined the highest reaches of instrumental virtuosity with the most tensely disciplined melodic structure and the most spontaneous emotional expression, all of which in one man you must admit to be pretty rare.”

    “Anybody could learn what Louis Armstrong knows about music in a few weeks. Nobody could learn to play like him in a thousand years.”

    Why these two quotes about Louis Armstrong? The first is from American composer/critic Virgil Thomson’s Swing Music (1938), and the second from Benny Green, jazz saxophonist.

    Both eloquently pinpoint the essence of Louis Armstrong’s musicality.

    When AI can ‘produce’ a Louis Armstrong (as just one example of numerous artists one could call upon) then professor Hinton may be on to something. Until then…