Illustration by Seba Cestaro

Tuning into Tomorrow

AI can help musicians compose and create new sounds. Is it just another music-making tool – or something else?

About The Author

Adina Bresge

Staff Reporter, U of T News

About a year ago, Stephen Brade started noodling around with a guitar composition. He set the unfinished piece aside, but returned to it when he realized that an AI-powered synthesizer he was developing might help him find the swelling yet spacious sound he was striving for.

Brade, a master’s student in the computer science department at U of T, is the creator of SynthScribe, a research project that aims to make synthesizers more user-friendly by allowing musicians to shape sounds through text and audio inputs rather than complex manual adjustments.

To demonstrate SynthScribe, Brade first walks a visitor through the constellation of buttons and knobs that musicians must learn how to precisely manipulate to design synthesizer sounds. One shortcut in this time-consuming process is to draw from vast libraries of premade sounds, but it can be difficult to come up with the exact search terms to match the sound in your head.

There’s a disconnect, Brade says, between the technical jargon often tagged to sounds (“saw wave,” “attack time,” “low-pass filter”) and the subjective everyday language we use to describe them (“warm,” “gritty,” “dreamy”).

SynthScribe uses advanced AI to bridge this gap, making it easy for users to find, change and create synthesizer sounds using both descriptive words and audio clips.
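For the technically curious, here is a minimal sketch of how that kind of text-based sound search might work, assuming a joint text-audio embedding model that maps descriptions and audio clips into the same vector space. The embedding function below is a stand-in, and none of this code is from SynthScribe itself, whose implementation the article does not detail.

import numpy as np

def embed_text(query: str) -> np.ndarray:
    # Stand-in for a real text encoder (an assumption, not SynthScribe's):
    # maps a description like "sound of the void" to a unit vector in the
    # same space where the library's sounds have been embedded.
    rng = np.random.default_rng(abs(hash(query)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def search_presets(query, preset_names, preset_embeddings, k=3):
    # Rank presets by cosine similarity between the text query and each
    # preset's (unit-normalized) audio embedding.
    q = embed_text(query)
    scores = preset_embeddings @ q
    top = np.argsort(scores)[::-1][:k]
    return [(preset_names[i], float(scores[i])) for i in top]

# Usage: search a toy library with an everyday description.
names = ["warm pad", "gritty bass", "dreamy keys", "white noise wash"]
library = np.stack([embed_text(n) for n in names])  # placeholder embeddings
print(search_presets("sound of the void", names, library, k=2))

With a real text-audio model in place of the stand-in, a query like “warm” or “gritty” can retrieve presets that were never tagged with those words – which is exactly the disconnect described above.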

“If people like the tune, the voice, the lyrics, the beat, will they care about who – or what – made it?”

– Gregory Lee Newsome, a music technology and digital media professor at U of T

Brade types “sound of the void” into the system’s search bar and a hum of white noise whooshes through the speakers. He asks the system to make a flute sound “harsher” and it adjusts to a more bracing pitch. There’s also a feature that can blend sounds together to create brand new ones. For his own piece, Brade landed on a raspy, resonant sound that he feels adds to the song’s lilting melancholy.
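The blending feature hints at one plausible mechanism: interpolating between two sounds’ synthesizer settings. The sketch below shows that idea on a toy parameter set; the parameter names and values are illustrative assumptions, not SynthScribe’s actual scheme, which may well blend sounds in a learned embedding space instead.

def blend_presets(a: dict, b: dict, mix: float = 0.5) -> dict:
    # Linearly interpolate numeric synth parameters:
    # mix=0.0 returns preset a unchanged, mix=1.0 returns preset b.
    return {name: (1 - mix) * a[name] + mix * b[name] for name in a}

# Illustrative presets (hypothetical parameters, normalized 0..1):
flute = {"attack_time": 0.08, "filter_cutoff": 0.90, "resonance": 0.20}
void  = {"attack_time": 0.60, "filter_cutoff": 0.30, "resonance": 0.70}

# A 70/30 blend that leans toward the flute's character:
print(blend_presets(flute, void, mix=0.3))

Interpolating raw parameters is the simplest approach: it can produce in-between timbres, though not every midpoint between two good sounds is itself a good sound.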

Brade says his own creative experience matched the feedback from musicians who tried SynthScribe for the researchers’ study. Many of the artists surveyed highlighted the system’s ability to help them think outside the musical box, generating sounds they did not expect but were still pleased to hear. “There’s a lot of potential for really, really new music,” he says.

Brade is among those who are excited about AI’s potential to unleash a new wave of musical experimentation by creating never-before-heard sounds, streamlining production methods and reducing the technical barriers to creative expression.

But the rise of generative AI has also sowed discord in the music community. There are fears of unscrupulous actors using AI to “clone” singers’ voices and of job losses in sound production. (Brade himself is wary of the potential for musicians to be exploited.) Some even fear that AI will strip music of its soul.

As a coder who enjoys composing, Brade says the outlook might depend on whether AI programs are designed to serve musicians as artistic collaborators or supplant them as all-in-one, automated music creators. So far, Brade thinks humans still have the edge. “Generative models tend to create music that sounds derivative,” he says. “This is less likely to be the case with a composer or musician who is trying to push boundaries.”

Illustration by Seba Cestaro

Gregory Lee Newsome, an assistant professor, teaching stream, in U of T’s music technology and digital media graduate program, sees AI as simply the latest – but most powerful – example of technology’s influence on the trajectory of music.

Newsome, who provided technical support on SynthScribe and co-authored the preprint paper, says artists have always made use of new tools. But he worries that generative AI might be qualitatively different from any innovation that’s come before: “It’s so powerful that it may not require human intervention at all.”

Technologies such as Stability AI’s Stable Audio and Google DeepMind’s Lyria already allow users to compose music in a variety of genres and styles without having to play a note.

Meanwhile, an AI-generated “collaboration” between Drake and The Weeknd that went viral last year has raised alarm about vocal clones, spurring mixed reactions within the music industry.

Last fall, Universal Music Group and other music publishers sued AI company Anthropic over allegations that its chatbot Claude copies and distributes copyrighted song lyrics. It’s one of several similar lawsuits filed by copyright owners – including writers, visual artists and the New York Times – claiming their content was improperly used to train AI models.

At the same time, however, a number of music industry players are looking to get in on the AI action. Universal, for example, teamed up with YouTube to guide its approach to AI-generated music, while hitmakers including Charlie Puth and T-Pain have lent their voices to a Lyria-powered experiment on YouTube Shorts.

People often experience music emotionally, Newsome says, and many fans form deep personal attachments to their favourite artists. This essential element of human connection could prove difficult for AI to replicate, he suggests.

But Newsome says he still fears for his students, most of whom remain committed to traditional modes of music production, as AI threatens to siphon off the already limited revenue streams available to musicians.

“It’s a little mysterious what’s going to happen, but I would not be surprised if this is a sea change for a lot of music production,” he says. “The test may be: if people like the tune, the voice, the lyrics, the beat, will they care about who – or what – made it?”

How SynthScribe Works

We asked Stephen Brade, the computer science master’s student behind SynthScribe, to record a few short tracks to demonstrate what the AI-powered system he helped develop is capable of.

Using a keyboard and synthesizer, he recorded a few bars of a simple and well-known melody – Beethoven’s “Ode to Joy.”

Here’s how the unaltered version sounds:

[Audio: the unaltered “Ode to Joy” recording]

To create an echoey version of the track, Brade typed “the sound of an echo” into SynthScribe, which provided him with examples of how to modify the synthesizer parameters to add an echo to the original sound.
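In its simplest digital form, an echo is a feedback delay: delayed, progressively quieter copies of the sound mixed back into the original. The sketch below shows that textbook effect; the delay time and feedback values are illustrative and not taken from SynthScribe’s output.

import numpy as np

def add_echo(signal, sr, delay_s=0.3, feedback=0.4, repeats=4):
    # Mix in `repeats` delayed copies of the signal, each starting
    # `delay_s` seconds after the previous one and quieter by a
    # factor of `feedback`.
    delay = int(delay_s * sr)
    out = np.zeros(len(signal) + delay * repeats)
    out[:len(signal)] = signal
    for i in range(1, repeats + 1):
        start = delay * i
        out[start:start + len(signal)] += signal * (feedback ** i)
    return out

# Usage on a one-second 440 Hz test tone:
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
echoed = add_echo(tone, sr)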

[Audio: the echo version]

By following a similar process, he could also create a harsher sound.

[Audio: the harsher version]

Brade worked on SynthScribe with two academic supervisors. He credits Tovi Grossman, an associate professor of computer science at U of T, with guiding the design and evaluation of the project.

Sageev Oore, an associate professor of computer science at Dalhousie University, provided technical help with the tool’s deep learning algorithms – and, says Brade, “invaluable perspective as a musician.”

Brade, who will start his PhD in electrical engineering and computer science at MIT this fall, says he has no plans to develop SynthScribe further – it will remain an academic project. But he expects it to inspire future research in music technology.

“I’m excited about designing new musical experiences using generative AI that help musicians expand their creative horizons,” he says. “And I like that it allows me to combine my engineering background with my passion for music.”

This article was published as part of our series on AI. For more stories, please visit AI Everywhere.

To learn more about AI and creativity, watch the fourth episode of U of T podcast What Now? AI, with Sanja Fidler, an associate professor in the department of computer science, and U of T alum Nick Frosst, co-founder of AI company Cohere.
