In a dark basement room in the northwest corner of University College, I’m catching a glimpse of the future of live theatre and performance. Or at least a possible future.
There is no stage of any kind here, and no audience seating. Nevertheless, for the past four years, some of the country’s most talented theatre artists, including writers, actors and directors, have walked into this space (and its predecessor, a few blocks south at the Koffler Centre) to act, speak and move – and test how they could use artificial intelligence and other technologies to push theatre in new directions.
Welcome to the BMO Lab for Creative Research in the Arts, Performance, Emerging Technologies and AI.
“This used to be the University College kitchen,” explains the lab’s genial director, David Rokeby, as he fiddles with computer cables and switches. Yet what Rokeby, his research students and international artists have been cooking up at the lab – which is part of U of T’s Centre for Drama, Theatre and Performance Studies – is anything but traditional theatre fare.
I sit in on a session in July with Nick Flynn, an American poet who is visiting the lab for a few days to experiment with a voice-to-image system that uses artificial intelligence to create visual representations of what a performer is saying. As he reads a poem from his upcoming collection, images flash onto a makeshift screen along one wall: birds fly by, corpses pile up, a forest burns. When Flynn says, “A blue mass of cold,” a blue stone appears, hanging in the sky, like something out of a Magritte painting. To influence the type of image the computer produces, he can provide the system, before he starts reading, with prompts, such as “landscape,” “black and white” or “in the style of Hieronymus Bosch.”
Flynn, who thrives on collaboration, is excited by the possibilities of what he sees. When his book is published later in the year, he hopes to incorporate the technology into his public readings, adding an immersive layer to his performances.
Later, as he gets ready to return home, Flynn tells Rokeby it’s a shame that his book is finished. Because of this experiment, he says, he can “feel the poems rewriting themselves.”
“Nick saw the system go off on tangents from the words he used, showing him new possibilities,” says Rokeby. “But [the system] also pointed things out the way a proofreader might, asking him to clarify a point or make things more concrete. It made me think about the potential creative roles of these technologies in real life.”
Real life. Real space. These are terms Rokeby returns to during our conversations. As an installation and interactive artist, Rokeby has been working adjacent to new technologies since the early 1980s. He’s interested in how they affect living, breathing actors and vice versa – not in flashiness for its own sake (he mentions the Royal Shakespeare Company’s Intel-funded production of The Tempest, complete with 27 projectors, as a spectacle that didn’t add anything to the play). So, he’s choosy about the kind of technology the lab will work with. Video has been around for too long. “It’s not a place for the lab to invest its time,” he says. Virtual reality doesn’t interest him so much, either: he finds it alienating for audience members to have to put on a clunky headset.
Although he enjoyed parts of Canadian playwright Jordan Tannahill’s 2021 show Draw Me Close, which combined virtual reality with a live performance by a single actor for one audience member at a time, he says the technology is still unproven – and may not gain widespread appeal. “The fact that VR did not take off massively during the COVID pandemic was fascinating to me,” he says. “Some people are 100 per cent in, but others aren’t going there.”
But he is intrigued by working with artists who want to pose questions about the role these technologies should play in society. And he wants to give U of T students from a variety of fields a chance to work creatively with these kinds of tools, which they will likely use at some point during their career.
During the early days of the pandemic, the lab and Canadian Stage created a paid eight-month residency to bring in artists to experiment with the technologies. The program launched with Sébastien Heins and Ryan Cunningham, both writers and actors. It quickly expanded to include writer and director Rick Miller and Stratford star Maev Beaty. “Nobody was working [because live theatres were closed], and so this was an amazing opportunity for all of us,” says Rokeby.
Heins worked closely with Rokeby on his autobiographical play, No Save Points, figuring out how to use the latest motion-capture technology, which tracks human movement to animate a digital character, to tell a story about his mother’s Huntington’s disease, a debilitating neurological condition.
Heins was in his early 20s when he found out about his mother’s illness and was confronted with the possibility that he had inherited it as well. The play, which enjoyed sold-out shows in Toronto in June, illustrated his attempts to escape from his real-life worries by retreating into a fantasy world through his Game Boy. “I found myself wanting to escape from the truth of her diagnosis – and so the Game Boy became a symbol of taking back control,” Heins said in a U of T News interview just before the play’s debut.
The lab helped Heins use motion-capture to create an animated, 10-year-old version of himself that would appear on a screen onstage. When certain audience members pressed buttons on a control pad they’d been given, they sent signals to sensors planted on Heins that would tell him whether to move left or right, jump or duck. His movements triggered his on-screen avatar, who was trying to escape danger, to do the same.
Using this technique, Heins effectively brought several “video games” in the show to life. The metaphor illustrated Heins’s anxiety about his mother’s – and his own – health. And it captured his feeling of helplessness. “The use of technologies as a metaphor here is really key because it shifts it from being, ‘OK, here’s a cool thing you can do,’ to ‘Here is a way this character is working through things,’” Rokeby told U of T News.
“I was really specific about what I thought we needed the tech for,” says Heins. “I needed to test a prototype of a 3D-looking avatar that moved on the screen. If the motion-capture character looked a little weird, David tweaked it. And he also gave philosophical mentorship as well, talking about what to do if and when the tech went wrong.”
One day, as Heins was working on No Save Points, he had finished up what he was doing and took off the motion-capture suit. A stick-figure avatar of himself connected to the sensors in the suit was still up on a screen. And suddenly Heins saw his avatar’s head flop over and its arm begin to move erratically – much like his mother’s limbs would move with her disease. That haunting image – a poignant mix of theatre and technology – made it into the final moments of No Save Points. “That’s the sort of thing you don’t get through discussion and theorizing,” says Heins. “It’s pure discovery.”
The lab, which is supported with a $5-million gift from BMO Financial Group, is also experimenting with a large language model, the type of AI tool that powers ChatGPT, the popular chatbot released late last year. Rokeby has fed the system the complete works of Shakespeare and can now get the computer to generate lines of text that resemble the Bard’s. This in itself may not seem especially useful. But what if a contemporary playwright fed the system their own collected works? Could the AI help generate ideas for future projects? Or assist them in writing?
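The lab’s tool is a large language model, which is far beyond what can be shown in a few lines. But the underlying idea – train a model on a corpus and it will generate text that statistically resembles that corpus – can be illustrated with a toy word-level Markov chain. This sketch is purely illustrative and is not the lab’s code; the function names and the miniature corpus are invented for the example.

```python
import random
from collections import defaultdict

def build_model(corpus, order=2):
    """Map each n-gram of words to the words observed to follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=20, seed=None):
    """Walk the chain, always choosing a continuation seen in the corpus."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length - len(out)):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# A miniature "corpus" standing in for the complete works:
corpus = "to be or not to be that is the question to be or not"
model = build_model(corpus, order=2)
print(generate(model, length=10, seed=1))
```

Because every generated word was observed following the same two-word context in the corpus, the output sounds like its source. A real language model does something loosely analogous at vastly greater scale, which is why a playwright’s own collected works could, in principle, steer it toward their voice.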
Rokeby tried something like this with Rick Miller, who was interested in seeing what the system could come up with. He wasn’t especially impressed with the output, but he saw its potential. “You wouldn’t say [the result] was a play, but it was a kind of live performance,” Miller observed. “It made me wonder, ‘Am I expendable?’”
The text generator also offers a “chat mode,” which allows for a real-time, improvised dialogue between the performer and the system. Such an AI could act, effectively, as a rehearsal partner, challenging actors to respond authentically and creatively, and encouraging them to experiment with different emotional nuances.
For the next several months, Rokeby will focus on preparing for the lab’s move into a new space in UC’s Laidlaw Wing. There, it will occupy one of three state-of-the-art studios equipped with full lighting grids and sound systems. (The other two studios will belong to the Centre for Drama, Theatre and Performance Studies.) Set to open in 2024, the new facility will be used for teaching, experimenting and workshopping new productions.
Rokeby’s goal is to outfit the lab in such a way that artists – both students and visitors – can follow their instincts and intuition. Do they want to use the voice-to-text-to-image tech for a scene? It’s available. Do they want an interactive light and sound system that responds to gestures you make while wearing a motion-capture suit? It’s also there. Do they want to use a large-language-model AI system to help generate ideas for characters or plot for a play they’re writing? That’s there, too.
“We want to be able to shift the space quickly and fluidly, and show them what’s possible,” he says. “It’s one thing to make cool, interactive experiences. But the process of making these things should itself be interactive and responsive – you should be able to use the toolset at the speed of creative inspiration. That’s not just useful and efficient; it actually changes what’s possible, because you can feel your way through the possibilities.”
3 Responses to “The Theatre of Tomorrow”
I'm interested in this particular section of the article: "Do they want to use a large-language-model AI system to help generate ideas for characters or plot for a play they’re writing? That’s there, too." It seems like an invitation to student writers to use AI, but when I sign a publishing contract, I have to tick the box that says I did not use AI to generate my book. I want to be able to sell my writing and get it published. I want to be eligible for grants. Right now, we're being told by the industry that they do not accept work that has been generated by AI.
@Laurie
There is a trend in some organizations towards banning AI-generated content; the same thing is happening in the visual art and design space. It is an understandable response, as it is not hard to imagine people feeling that AI threatens their jobs or livelihoods.
There is also a large concern about litigation. The copyright status of the data used to train large language models is unclear, which publishers may feel leaves them open to future lawsuits.
I don’t personally have a hard and fast position on the whole question. From the lab’s perspective, I think it is important for creative thinkers to explore this challenge more fully rather than rejecting it without a deeper grasp of what is at play.
I applaud the recent success of the Writers Guild, as it is clear that many large corporations are ruthless in exploiting existing contractual loopholes to use creators’ IP to train models in the hope that they can reduce costs.
There is also, of course, a difference between what author Stephen Marche is setting out to do, writing with AI wholly intentionally and relatively transparently, and the general use of AI as a tool within mainstream writing, where the traditional assumption of a human writer behind the words still holds.
We are in for an interesting and challenging ride. Currently, the largest AI companies are struggling to find an economically viable model for these systems despite the high popular visibility of things like ChatGPT. There is a mismatch at this point between what these systems can do and what we can find ourselves convinced they can do, and we have not, as a culture, really had time to think through what makes writing writing… since we always used to just accept the whole process as something a human did. Do shortcuts offered by AI remove pain points in the process that might turn out to be the truly productive points of struggle that make human writing what it is?
We have already seen ample evidence in the lab that working in conjunction with AI can be a very creatively productive engagement, but the most exciting successes have generally come from surprising or unanticipated results. These systems are remarkable and unprecedented on one hand and deeply flawed in unexpected ways on the other. We as a society are at the start of a very steep learning curve. The BMO Lab is working to expand the scope of explorations and discussions, as navigating the road ahead is going to require an unprecedented level of highly interdisciplinary collaboration, since this technology seems destined to touch on so many aspects of our lives.
When I write a play, my three-dimensional characters have at least two strata. Deep down, they are archetypal and intended to resonate with any actor playing them. The actor brings the "finish" -- a hoped-for wondrous, insightful and personal "topping off." This veneer is critical for the character's authenticity and emotive honesty. It continues to surprise me how the actor's contribution changes and improves what I imagined. A play will be timeless if you get the archetypes right (though a good theme, solid construction and some wit also help).
What about AI? I've often mused to colleagues about how great it would be if a few electrodes attached to my skull could capture the play in my mind and present it to an audience as a hologram. But that's way too ghostly. The hologram would have a good use, though, as a directing tool. Or would it? A written play can also be imagined by others in a new way, free from my mental imposition.
Who knows where we're going? Modern pop music is highly technological, and it pales in comparison to the band-made group creations of the 1950s through the 1990s. There's my worry: theatre can already be solipsistic. Will AI make it worse?