The dawn of smart, self-aware machines is a popular sci-fi storyline that, for many years, seemed destined to remain in the realm of make-believe, or at least in the distant future. Recently, though, many problems that long bedevilled computer scientists have fallen one by one – so much so that insiders are calling the last 10 years a golden decade in artificial intelligence. Computers can now outshine humans in recognizing images, can understand what we’re saying and often respond sensibly to questions, can translate competently from one language to another and can defeat us in even the most complicated strategy games.
Recent advances in AI have largely built on work by Geoffrey Hinton, a U of T professor emeritus of computer science and chief scientific advisor at the Vector Institute. For years, he worked in relative obscurity, his main idea relegated to the fringes of this burgeoning field. He was one of a few scientists to back the idea that a machine mimicking the neural networks in our brains could, given masses of data, find patterns there, make its own version of sense out of it all and propose solutions.
His approach to building artificial intelligence gained sudden credibility when, in 2012, a neural net that he and two of his graduate students created won a major international competition to identify the content of images.
One of those graduate students was Ilya Sutskever (BSc 2005, MSc 2007, PhD 2012), whose career since finishing his doctorate at U of T has touched on many of AI’s big wins in the last decade. Currently the chief scientist at OpenAI, a San Francisco-based enterprise he co-founded in 2015, Sutskever has expressed the hope of growing an artificial intelligence there that “loves humanity.” (The company’s mission statement characterizes this, in a more pedestrian way, as ensuring that artificial intelligence “benefits all of humanity.”)
OpenAI has the backing of two of the world’s leading tech entrepreneurs – Tesla’s Elon Musk and Peter Thiel, the co-founder of PayPal – who, together with others, invested $1 billion in the company. Sam Altman, the longtime head of the tech accelerator Y Combinator, is the company’s CEO, and Greg Brockman, who helped turn the payment-processing company Stripe from a start-up into a global player, is president and chairman.
I spoke recently with Sutskever at OpenAI’s headquarters, located in a mid-rise building in San Francisco’s Mission District. Walking through the office, with its blonde wood, lush plants and sleek futurist furniture, I half-expected to be greeted at reception by one of the robot “hosts” from Westworld. Instead, a very friendly (and very human) staff member led me to a conference room named for the star Betelgeuse – a red supergiant that shines brightly in the night sky. There, Sutskever and I talked about some of the advanced computer tools he and his team are creating.
OpenAI’s flagship product is DALL-E 2, a system released earlier this year that can create original images and edit existing ones based on text commands. (Its name is a play on Pixar’s animated robot WALL-E and the artist Salvador Dalí.) Although there is currently a waitlist to try the system, some of the early adopters have shared their creations on social media: a raccoon playing tennis at Wimbledon; an Italian town made out of pasta, tomatoes, basil and parmesan. Users can also specify the style of the image they want. In a demo, I watched it conjure up an illustration of a comic-book rabbit working as a tattoo artist. What’s intriguing about the program is how it can combine completely unrelated concepts in imaginative and seamless ways.
Sutskever is clearly proud of what DALL-E 2 can achieve, though he acknowledges a limitation when compared with humans: “It’s creative but it can’t come up with a whole new aesthetic in the way that a genius like Picasso did.”
OpenAI is also well known for GPT-3 (short for Generative Pre-trained Transformer 3), an AI that produces humanlike text. Using a supercomputer based in Iowa, the system – soon to be released in its fourth iteration – has consumed all the digitized books in the world and much of the internet’s text. Having learned from this vast corpus, it can now write short, original essays in response to a specific prompt. Ask it about the moon, for example, or the author Italo Calvino, and it can generally supply something informative and well written in reply. It can write original poetry and even headlines. It can also produce short summaries of much longer texts. And, when given a few sentences, it can continue for several more in the same vein. Though its programmers worked on its competence in English, the program essentially taught itself other languages, including Vietnamese and French.
The human seeking to create a human-friendly artificial intelligence is a fairly private, work-oriented man. When this former mathematics student speaks of the theory behind the technology’s development, his eyes go over your head, as he moves into a private space where the ideas are real to him – even more real, perhaps, than you are. He has a monk’s close-shaven head, but sports, on the day of our interview, a T-shirt with penguins – a playful choice that belies the serious way he responds to questions.
Born in Nizhny Novgorod, Russia, but raised mostly in Jerusalem, Sutskever remembers the first time, at about age five, that he saw a computer – at an expo he went to with his father, an engineer: “I was utterly enchanted.” By the time his family emigrated to Canada when he was 16, he had developed solid programming skills. Soon, he began to imagine building computers to do things that until then only humans could. “I remember thinking a lot about the nature of existence and consciousness, as teenagers do, about souls and intelligence. I felt very strongly that learning was this mysterious thing: humans clearly learn, computers clearly don’t.”
U of T admitted Sutskever from Grade 11 into a math program on the basis of his studies at the Open University of Israel. He leapt directly into second- and third-year courses, and stayed for three degrees. “I was really focused on my studies,” he says. (Now, in his spare time, he likes to play the piano and keep in shape, but he remains singularly devoted to his work.) He sought out Hinton early on, working with him closely for more than a decade. For his part, Hinton remembers Sutskever, a student member of his research team, completing a coding task in an afternoon that might have taken someone else weeks. He quickly developed confidence in Sutskever because, Hinton says, “he asked the right questions.”
Based in part on his work with Hinton, Sutskever was hired by Google. There, he implemented a neural-network-driven approach to language translation that produced fewer errors than competing efforts. His work provided the basis for a major upgrade to Google Translate. “Researchers didn’t believe that neural networks could do translation, so it was a big surprise when they could,” he says.
A couple of years into his time at Google, in 2015, Sutskever was invited to dinner at a swanky hotel in Palo Alto by Sam Altman; Musk and Brockman were among the other guests. The conversation turned to the sort of initiative Sutskever had been daydreaming of, “a large-scale project where many researchers and engineers come together” – a kind of moonshot for AI. “We just spoke and we vibed and we continued the conversation. Two days after the dinner I emailed Sam to say, I’d like to be involved if you’re interested.”
Given its up-with-humanity aims, OpenAI was set up, initially, as a non-profit corporation that would use technology to solve big problems (and, with care, avoid creating new ones). But the model didn’t provide the infusions of cash needed to implement the group’s ambitions – the hiring of top minds, the building of the supercomputer in Iowa. So, a different sort of legal entity was created – one that would cap the upside for investors at 100 times their outlay (a good return in Silicon Valley can be much higher) and pump any gains above that cap back into the company’s mission to create beneficial AI.
About a third of the staff is involved in making sure the company’s AI actually works as intended – that it does “what it’s asked to do,” Sutskever says, and is not used in harmful or offensive ways. For Sutskever and the team who built DALL-E 2, this meant removing a wide range of content from the program’s training data in order to limit the creation of violent, sexual or hateful images, and developing techniques to prevent the program from using faces of real people, such as politicians and other public figures.
Gillian Hadfield, the director of U of T’s Schwartz Reisman Institute for Technology and Society, consults with Sutskever and other members of OpenAI’s leadership on its efforts to ensure the technology heads in a socially beneficial direction. “We could see how it might assist in the management of a future pandemic or in the creation of safer cities,” says Hadfield. “Imagine,” Sutskever says, “having a doctor who refers to every single study in medicine.”
Of course, both Hadfield and Sutskever recognize that AI will likely displace people from certain jobs. Telephone customer service reps seem especially vulnerable, as do bookkeepers and receptionists. On the positive side, Sutskever notes that AI will also create skilled jobs, in fields such as data and computer science, and replace certain parts of jobs, leaving more time for people to focus on the most creative aspects of their work. Lawyers, for example, might employ AI to do legal research, allowing them more time to focus on strategy, arguments and business development. Of the advances in AI that most assuredly are coming, Sutskever says, “there is no area of human life that will be left untouched.”
Hadfield’s research proposes ways to manage the disruption – and the other downside risks that AI poses. “Fear is the human condition,” she wrote in a Toronto Star op-ed. “But so too is designing rules and systems to manage what frightens us.” Her work, accordingly, focuses on what kind of regulation, private and government, would minimize the most harmful aspects of AI.
Looking much further into the future, could those sci-fi storylines about artificial intelligence developing consciousness – becoming self-aware – come true? To Sutskever, the very idea of sentient AI is less a possibility than it is a provocation – often hyped by the media – that distracts us from how the technology is actually advancing. As he notes, in his typically understated way, “It’s an unlikely scenario.”