A lot can happen in a short time. The sexiest new gadgets can suddenly become old school as they’re replaced by even cooler technology. Seemingly impossible computing feats become commonplace. It’s not just the advance of computing power – progress is driven at least as much by the creative thinkers who develop, research, adopt and adapt, discovering new ways for us to interact with our world, our devices and each other.
Inevitably, the most influential new tech trends are the ones nobody saw coming. Consider that a decade ago, the word “tweet” hadn’t yet entered the public lexicon, and no one had ever visited an “app store.”
It’s difficult to look forward even just a few years, but the following predictions about how technology will change over the next decade come from people who do more than just predict the future – as computer scientists, their job is to create that future. Even as the pace of technological change accelerates, these members of U of T’s computer science department – a hub for new tech ideas since its founding 50 years ago – outflank it. They’re already thinking ahead to a new generation of tools and toys that will be ready to change the world, just as soon as technology catches up with their ideas.
The Perfect Assistant
Craig Boutilier wants to put a real estate agent in your pocket. And a financial advisor. And a travel agent. And possibly an insurance broker, personal shopper, guidance counsellor, entertainment guru, athletic trainer and life coach. Anyone who can help you make informed decisions.
Naturally, all of these assistants would live on your smartphone – or whatever wearable technology has replaced it by 2025. Not only would they have access to the very latest news and information, but they would also understand your likes and dislikes better than your closest friends and family members. Boutilier, a professor of computer science, researches and develops “decision-support technologies,” akin to, but vastly more sophisticated than, the recommendation systems used by Amazon and Netflix.
“Right now, the entire planet is really focused on this idea of big data,” he says. “Every communication, business transaction, social interaction, car movement and so on can be monitored.” Data miners can now make precise predictions about traffic patterns, trends in consumer behaviour, stock market performance and other subjects by using computers to carefully analyze vast streams of information.
The trick is to use big data to help inform individual decisions. A smartphone app might have a very good sense of where the real estate market is headed, but that’s not the same as being able to recommend which house an individual should buy. “The bottleneck now is that we don’t have a good idea of what people want,” says Boutilier. That’s because what people want is complicated. When buying a house, for instance, you might factor in price, neighbourhood and proximity to schools, shopping, parks, transit and nightlife. All of these and many other factors must be weighed against one another, meaning that decisions are based not just on what aspects you want, but also on how much you want them. “You might have an idea of your ideal house, but your ideal house doesn’t exist,” says Boutilier. “You have to ask, ‘How do I trade one attribute off against another?’”
Boutilier’s software doesn’t need to know everything about your likes and dislikes in order to help you make an informed decision. “If you like one house more than another, then that already gives me a fair bit of data,” he says.
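To make that concrete, here is a minimal sketch of preference learning from pairwise comparisons – an illustration of the general idea, not Boutilier’s actual system. It assumes a buyer’s utility is a weighted sum of house attributes and nudges the weights whenever the buyer prefers one house over another; the attribute names and ratings are hypothetical.

```python
# A minimal sketch of preference learning from pairwise comparisons.
# Illustration only, not Boutilier's system: we assume a buyer's utility
# is a weighted sum of house attributes, and nudge the weights whenever
# the buyer says they prefer one house over another.
import numpy as np

ATTRIBUTES = ["price_fit", "nightlife", "near_schools", "near_transit"]

def utility(weights, house):
    """Score a house (a vector of attribute ratings) under the current weights."""
    return float(np.dot(weights, house))

def learn_weights(comparisons, n_attrs, lr=0.1, epochs=50):
    """Perceptron-style updates; each comparison is (preferred, rejected)."""
    w = np.zeros(n_attrs)
    for _ in range(epochs):
        for preferred, rejected in comparisons:
            # If the model ranks the rejected house at least as high,
            # shift weight toward the preferred house's attributes.
            if utility(w, preferred) <= utility(w, rejected):
                w += lr * (np.asarray(preferred) - np.asarray(rejected))
    return w

# Hypothetical feedback: each vector rates a house on the four attributes.
comparisons = [
    ([0.9, 0.4, 0.8, 0.2], [0.5, 0.9, 0.1, 0.7]),  # chose schools over nightlife
    ([0.7, 0.3, 0.9, 0.3], [0.8, 0.8, 0.2, 0.9]),
]
print(dict(zip(ATTRIBUTES, learn_weights(comparisons, len(ATTRIBUTES)).round(2))))
```

Even a handful of comparisons begins to reveal which trade-offs matter to a particular buyer – exactly the “fair bit of data” Boutilier describes.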
Human real estate agents already practise a version of this type of learning – they’ll take you to a dozen houses to get a sense of what you love and hate, and then zero in on just a few possibilities that are right in the zone. The difference with Boutilier’s software is that it can learn a lot about you while also drawing on deep and broad data about every single house currently on the market. Combining big data with a deep understanding of an individual would allow an app to guide someone all the way through the process, from choosing a house to determining how much to bid on it.
Ultimately, Boutilier foresees one piece of software that knows you so well it can help you plan a commute or a vacation, buy a book or a car, and choose an insurance plan or a cancer treatment. After all, many such decisions are interrelated. But it might not happen in the next decade.
“I’m not sure that we will have these fully integrated digital assistants by 2025. But in any one of these specific areas, I think there is a very good chance that there will be a fantastic app, web service or cloud service that will be very personalized,” he says. “They will know much more about you than they currently do and help you make good decisions through unintrusive conversations with you about your goals and preferences at any particular time.”
Ultimately, Boutilier expects all these decision assistants will talk to each other, creating a unified digital counsellor with powerful insight into the best choices for each person.
Augment Your Life
“What’s so special about Mona Lisa’s smile?” a Louvre visitor quietly asks of his head-mounted display. Glowing text and graphics pop up, superimposed on Da Vinci’s famous portrait, showing geometry, history and analysis of her fascinating expression. Moving on, the visitor finds a wall-mounted museum map and asks, “Which way to Le Café Mollien?” Directional arrows appear on his headset, alongside menus and reviews. Passing through the crowds, the visitor is told that one of the faces is an old friend he knows from his university days in Toronto.
These are some of the potential applications for augmented reality – the superimposition of computer-generated information and images on the world around us. (Virtual reality, by contrast, is a synthetic world presented to the user.) It sounds futuristic, but it is not far away.
“In as little as five years from now, we will have vastly improved forms of augmented reality,” says Eugene Fiume, a professor of computer science. A single sign at an airport could display differently for each traveller, pointing toward the gate indicated on each person’s boarding pass. It could also offer information on current wait times at customs and security. Mechanics could refer to images of working machinery while tinkering with a broken engine. Surgeons could consult medical texts in the middle of an operation.
Several concurrent advances are bringing augmented reality closer to its day: displays have become lighter and sharper; the required computing power can be squeezed into a very small space; and Internet connectivity provides access to nearly unlimited data. Recent advances in voice recognition and head-mounted devices will also help push the technology into widespread use. Location-aware devices can figure out for themselves when you’re standing in front of the Mona Lisa and automatically provide context-specific information. And gesture-driven control minimizes the need for a mouse, touchscreen or other pointing device.
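To see how the location-aware piece might work, here is a toy sketch – the coordinates, trigger radii and overlay text are all hypothetical, and this is not any particular headset’s API. The device matches the user’s position against a database of points of interest and returns context-specific content for whichever one is in range.

```python
# A toy sketch of location-aware augmented reality: match the user's
# position against registered points of interest and return the overlay
# for whichever is in range. All coordinates, radii and overlay text are
# hypothetical; this is not any particular headset's API.
import math

POINTS_OF_INTEREST = [
    # (name, latitude, longitude, trigger radius in metres, overlay text)
    ("Mona Lisa", 48.8603, 2.3376, 10, "Leonardo da Vinci, c. 1503-1506"),
    ("Le Café Mollien", 48.8606, 2.3354, 15, "Open 9:00-17:30, tap for menu"),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate metres between two nearby coordinates (flat-earth approx.)."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
    return 6_371_000 * math.hypot(dlat, dlon)

def overlay_for(lat, lon):
    """Return context-specific text if the user is near a point of interest."""
    for name, plat, plon, radius, text in POINTS_OF_INTEREST:
        if distance_m(lat, lon, plat, plon) <= radius:
            return f"{name}: {text}"
    return None

print(overlay_for(48.8603, 2.3377))  # standing in front of the Mona Lisa
```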
Fiume, who directs the University of Toronto’s Dynamic Graphics Project, also has an interest in moving beyond two-dimensional graphics to create 3-D environments that interact with the real world – a task greatly facilitated by the proliferation of head-mounted devices. “I mean real three dimensions,” he says. “You could situate information in a 3-D environment and use glasses and gestures to navigate. The emergence of consumer-level head-mounted displays will allow individuals to do this.”
The Car That Drives Itself
Since 2012, Google’s self-driving cars have been plying public roads in three American states, and yet you won’t find such cars at your local dealership. It’s not just regulations or cultural inertia that keeps self-driving cars from proliferating – the technology itself still has some distance to cover.
In fact, a truly self-driving car – one that can start anywhere and go anywhere without human intervention – does not yet exist and will take some time to develop.
“A Google car works with maps that are annotated extensively,” says Raquel Urtasun, a professor of computer science who works on computer vision systems for self-driving cars. The location of stop signs and traffic lights, school and hospital zones, and key landmarks are all known to a Google car before it ever scans its local environment. “And they never drive in scenarios where they haven’t driven manually before. So it’s not as autonomous as one might think.”
Not only do the most advanced autonomous cars on the road today depend on extensive human input in order to function, but they are also prohibitively expensive – just the laser-based scanning system used by many automated cars to survey their surroundings costs about $80,000.
Urtasun would like to bring the cost down. “I’m trying first to replace the expensive sensors with just one or two cameras that cost $30 to $50 each, without sacrificing safety or reliability,” she says. “And I want to get rid of the need for annotations, so you can drive in situations that you [and the car] have never encountered before.”
Less expensive cameras and unmarked maps both create the same challenge: cars must navigate using significantly less information. Urtasun believes smarter machine learning algorithms can eliminate the need for costly equipment, creating cars that are both better at driving and cheaper to buy.
Urtasun’s lab is using crowdsourcing to develop smarter software. Her team has amassed a huge trove of data, gathered by physically driving a car equipped with the kinds of scanners and other information-gathering tools that might one day be used in commercial self-driving cars. In 2012, they released the data to hundreds of research labs around the world. Machine learning experts who specialize in self-driving cars use this data to test and upgrade their algorithms – constantly improving machines’ ability to make sense of the information. Algorithms are evaluated on their ability to do everything from figuring out where in the world the test car was driven to predicting the behaviour of other cars.
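A benchmark like this works by scoring each algorithm against hand-labelled ground truth. The sketch below is illustrative only – not the lab’s actual evaluation code – and shows one standard metric: how many labelled objects a detector finds, judged by the intersection-over-union (IoU) of bounding boxes.

```python
# A minimal sketch of benchmark-style evaluation (illustrative only, not
# the lab's actual scoring code): compare an algorithm's detected objects
# against hand-labelled ground truth using intersection-over-union (IoU).
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2) in image coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def detection_recall(ground_truth, detections, threshold=0.5):
    """Fraction of labelled objects matched by a detection at IoU >= threshold."""
    hits = sum(
        any(iou(gt, det) >= threshold for det in detections)
        for gt in ground_truth
    )
    return hits / len(ground_truth)

# Hypothetical frame: two labelled cars; the algorithm found one of them well.
ground_truth = [(100, 120, 180, 200), (300, 130, 360, 190)]
detections = [(104, 118, 178, 204), (500, 100, 540, 150)]
print(f"recall @ IoU 0.5: {detection_recall(ground_truth, detections):.2f}")
```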
This process has already helped overcome some of the major hurdles to affordable self-driving cars.
“The algorithms that people were using before this dataset, and the kind we’ve now found to be very, very useful, are quite different,” she says. “People were using simple algorithms that were really looking at the colours and shapes at the pixel level. This doesn’t work when you go to a complex scenario where the information is imperfect. The new algorithms are more global in their ‘thinking.’”
In other words, rather than just scanning for stop signs and traffic lights, the most successful algorithms look at the larger context of the movement and interaction of many elements. This minimizes mistakes, and also leads to more sophisticated analysis.
Urtasun believes you will probably still need a driver’s licence in 2025. Even though today’s cars can assist drivers in many ways – from autonomous parking to proximity warnings – completely self-driving cars could be at least a decade down the road. “We are solving computer vision problems very quickly today that 10 years ago people thought were impossible,” she says.
Now if they could only find a way to eliminate traffic.
Understanding How You Affect the Climate
Climate change happens on a scale too large for people to experience on a visceral level. True, we might notice that winters are different now than when we were children, or that water tables have dropped or weather seems more extreme. But in reality, climate is a global, multi-generational phenomenon – any single person’s experience is merely a tiny part of a barely fathomable whole.
The climate models that tell us where the planet is headed can seem as complex as the real-life Earth systems they simulate. Steve Easterbrook, a professor of computer science, sees this complexity as a problem – it both distances people from the issue and makes them vulnerable to the misinformation of climate change deniers.
He’s working on computer tools that would make climate models understandable and real to non-scientists. In the next five years, he hopes the general public will be able to run simulations themselves, plugging in different variables, and getting first-hand insight into the impact of various lifestyle choices.
He hopes that his user-friendly interfaces will both “inoculate people against nonsense” and make climate change feel more real. It’s all about the first-person experience. “I’m studying how you put those models in the hands of educators or on websites and blogs, so that people can play with the models themselves and improve their understanding of the physical properties of the climate system,” he says.
Easterbrook is developing apps for smartphones and websites that allow people to tap directly into the big picture of climate change. Ordinary people will be able to run their own simulations, to answer any “What if…” question they would like: What if we stopped all carbon emissions today? What if developing nations used energy at the same rate as developed nations? What if we replaced all planes and cars with rail travel? What if meat were banned worldwide?
“Each time you can set up an experiment like that and run the simulation, it reinforces your understanding of the physics,” says Easterbrook. “Somebody blogging about the science can incorporate one of these models and say, ‘I’m going to tell you a little bit about how the science works but don’t just take my word for it, here’s a model you can play with where you can see the effect that I’m talking about actually happening.’”
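The flavour of that exercise can be captured in a few lines. Below is a back-of-the-envelope sketch – a zero-dimensional energy-balance model, nowhere near the research-grade simulations Easterbrook works with, and with deliberately simplified parameter values – that answers one of the “what if” questions above.

```python
# A back-of-the-envelope "what if" climate sketch: a zero-dimensional
# energy-balance model. Nothing like a research-grade simulation, but
# enough to show the flavour of the exercise. Parameter values are
# rounded, textbook-level approximations.
import math

PPM_PER_GTCO2 = 1 / 7.81   # ~7.81 GtCO2 raises atmospheric CO2 by 1 ppm
AIRBORNE_FRACTION = 0.45   # rough share of emissions that stays airborne
FORCING_COEFF = 5.35       # forcing (W/m^2) = 5.35 * ln(CO2 / pre-industrial)
CLIMATE_SENSITIVITY = 0.8  # K of equilibrium warming per W/m^2 (illustrative)

def run_scenario(start_ppm, annual_emissions_gtco2, years):
    """Project CO2 and equilibrium warming under constant annual emissions."""
    ppm = start_ppm
    for _ in range(years):
        ppm += annual_emissions_gtco2 * AIRBORNE_FRACTION * PPM_PER_GTCO2
    forcing = FORCING_COEFF * math.log(ppm / 280)  # vs. pre-industrial 280 ppm
    return ppm, CLIMATE_SENSITIVITY * forcing

# What if we held emissions at ~40 GtCO2/year versus cutting them to zero?
for label, emissions in [("business as usual", 40), ("zero emissions today", 0)]:
    ppm, warming = run_scenario(400, emissions, years=50)
    print(f"{label}: {ppm:.0f} ppm, ~{warming:.1f} C above pre-industrial")
```

Even this toy model makes the stakes tangible: fifty years of steady emissions versus stopping today differ by roughly a degree of equilibrium warming – the kind of first-person result Easterbrook wants people to discover for themselves.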
Easterbrook says we can’t afford to wait until 2025. “People dealing with climate policies say the next five years are absolutely crucial,” he says. “Today, carbon output is definitely rising. To turn the ship around, it has to be falling steadily by the end of the decade. And the only way that’s going to happen is if more people are on board with the kind of transformational change that has to occur to make that happen.”
Treating Rare Medical Disorders
Imagine that in 2025, somewhere in Canada, a child is born with bones missing from her hands. Her doctor refers her to a specialist, who has never seen such a condition before. A database check gives the specialist not only a name for the condition, but also information about which gene mutation is responsible. The specialist can offer counsel to the patient’s parents, and provide a prenatal genetic test for the condition for any future children. The physician can do all of this – even though there might be fewer than 100 people in a world of eight billion who share this mutation.
Rare genetic disorders are the bane of diagnosticians. Such conditions are nearly impossible to identify because any one physician will likely see only one such case in their lifetime. For patients and families, the lack of information is often deeply worrisome. Well before 2025, though, researchers such as Michael Brudno, a professor of computer science and the director of the Centre for Computational Medicine at Sick Kids, hope to have a genetic test for every single rare disorder ever discovered – an estimated 14,000 conditions.
Symptoms of rare disorders can include missing bones, seizures, heart defects, developmental delays, disproportionate body growth and pretty much any symptom associated with more common disorders. Each rare disorder occurs due to a unique genetic mutation, and the resulting conditions are both varied and devastating.
To aid both patients and the physicians who care for them, Brudno developed, and continues to refine, an online matchmaking system called PhenomeCentral. Through this portal, doctors around the world can compare symptoms and genetic variants to diagnose rare disorders and expose the genes that caused them.
“There are cases where patients appear to have no known disorder – something is really wrong, and we don’t know what,” Brudno says. “In a case like that, it’s very hard to identify the genetic cause because there is going to be a million differences between any two people’s genomes.”
Through PhenomeCentral, doctors describe their patients and add genetic information if they have it. The system identifies patients who have conditions with significant similarities and connects their doctors.
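A minimal sketch of the matchmaking idea follows – a toy illustration of symptom-overlap scoring, not PhenomeCentral’s actual algorithm, with invented patient IDs and symptom terms.

```python
# A toy sketch of phenotype matchmaking (the idea behind a system like
# PhenomeCentral, not its actual algorithm): score patients by the overlap
# of their symptom terms and flag the strongest matches for their doctors.
def jaccard(a, b):
    """Similarity of two symptom sets: shared terms over all terms."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical patient records keyed by invented, anonymized IDs.
patients = {
    "CA-0412": {"missing hand bones", "short stature", "hearing loss"},
    "DE-1077": {"missing hand bones", "short stature", "heart defect"},
    "JP-0023": {"seizures", "developmental delay"},
}

def find_matches(query_id, threshold=0.4):
    """Return other patients whose symptom profiles resemble the query's."""
    query = patients[query_id]
    scores = (
        (other_id, jaccard(query, symptoms))
        for other_id, symptoms in patients.items() if other_id != query_id
    )
    return sorted((p for p in scores if p[1] >= threshold), key=lambda p: -p[1])

for other_id, score in find_matches("CA-0412"):
    print(f"CA-0412 vs {other_id}: similarity {score:.2f}")
```

In practice a system like this must also weigh how specific each symptom is and fold in genetic variants when they are available, but the core step is the same: turn two case descriptions into a comparable score.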
While there are few treatments for these disorders, PhenomeCentral may still help some patients. “For some cases where, say, your body is not generating a specific enzyme, it can be as simple as, ‘Here is the supplement you need.’ But those are really the exception rather than the rule,” Brudno says.
Because rare disorders are extreme forms of common disorders, Brudno says they provide a valuable testing ground for drug treatments that could be applied in the broader population. “Drug companies are starting to look at rare disorders as areas where they can deploy a drug and see if it’s having an effect. It’s easier to show the effect when you have an individual with more extreme symptoms,” he says.
Ultimately, Brudno expects PhenomeCentral to lead to faster and more accurate diagnosis and testing for genetic disorders by 2025.