Thunder rumbles. Rain falls hard. Imagine being able to capture every sound that comes your way with precision, select a few of them and mix them with the notes of a piano or any instrument of your choice to create your own melody.
This is the research-creation project that Dominic Thibault, a professor at the Université de Montréal's Faculty of Music, has been working on this summer with his team of five research assistants. It relies on an audio sampling technique called corpus-based concatenative synthesis, which consists of segmenting and indexing the many sound characteristics found in large corpora of audio files. A graphic representation then lets the group read the different passages at a glance and compose new electronic music from them.
But for those who are not versed in computer music, finding your way around these huge sound databases is complicated. Wanting to make these technologies more accessible, Dominic Thibault and his team are designing software for digital music creation: Mosaïque. “The intent is to share with the music community sound collections that have already been analyzed and mapped in Mosaïque, so people can quickly start making music with the instrument,” states the professor.
The project is funded through the research-creation support program for new professors of the Fonds de recherche du Québec – Société et culture and by the Observatoire interdisciplinaire de création et de recherche en musique.
Collecting the different sounds
This summer, the team is conducting a massive collection of sounds to expand its database.
A clarinetist will play for 30 minutes in front of a microphone. “The recorded sound will be divided into small segments. For example, the sound will be cut at each note attack, giving us several thousand samples. Each will have several characteristics: rather high- or low-pitched, loud or soft, noisy or harmonic,” explains Dominic Thibault.
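The segment-and-index step described above can be sketched in a few lines. This is a minimal illustration, not the team's actual pipeline: a crude energy-based attack detector cuts a recording at onsets, and each segment is described by three rough proxies for the characteristics the professor names (dominant frequency for pitch, RMS for loudness, zero-crossing rate for noisiness). All function names and thresholds here are illustrative assumptions.

```python
import numpy as np

SR = 16_000  # sample rate (Hz); illustrative value

def segment_on_attacks(signal, frame=512, threshold=3.0):
    """Cut a recording at note attacks, detected as sharp rises
    in frame-to-frame energy (a crude onset detector)."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).mean(axis=1)
    # an attack = energy jumping well above the previous frame's
    onsets = [0] + [i for i in range(1, len(energy))
                    if energy[i] > threshold * energy[i - 1] + 1e-9]
    bounds = [o * frame for o in onsets] + [len(signal)]
    return [signal[a:b] for a, b in zip(bounds, bounds[1:])]

def describe(seg):
    """Index a segment by rough pitch, loudness, and noisiness proxies."""
    spectrum = np.abs(np.fft.rfft(seg))
    pitch = np.argmax(spectrum) * SR / len(seg)       # dominant frequency (Hz)
    loudness = float(np.sqrt((seg ** 2).mean()))      # RMS level
    noisiness = float(np.mean(np.abs(np.diff(np.sign(seg)))) / 2)  # zero-crossing rate
    return {"pitch_hz": pitch, "loudness": loudness, "noisiness": noisiness}

# two synthetic "notes" separated by silence: a quiet low tone, then a loud high tone
t = np.arange(SR) / SR
clip = np.concatenate([0.2 * np.sin(2 * np.pi * 220 * t),
                       np.zeros(SR // 4),
                       0.8 * np.sin(2 * np.pi * 880 * t)])
segments = segment_on_attacks(clip)
features = [describe(s) for s in segments]
```

On this toy clip the detector finds two segments, and the second is indexed as both higher-pitched and louder than the first; a production system would use a more robust onset detector and spectral descriptors.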
David Piazza, an enthusiast of electronic instrument making, will put together a sound set using hardware from the Buchla brand of modular synthesizers. Student Gabrielle Caux will assemble a collection of soundscapes with wind and water noises, birdsong… The team is also incorporating instrumental sounds into its repertoire thanks to the collaboration of Jean-Michaël Lavoie, professor of contemporary performance at the Faculty of Music.
Creating a giant timbre map
Once the sounds are collected, they will be indexed according to their characteristics and mapped. Sounds of a similar nature will be grouped together: high notes on one side, low notes on the other.
“The user of this map becomes a kind of timbre navigator and can move within the map according to the different characteristics of the sound, for example by following its pitch,” says Dominic Thibault.
Thousands of musical tiles
Each sound is represented graphically by a small colored tile. The thousands of sounds recorded and indexed this summer in the database will form a huge mosaic.
Dominic Thibault’s team is also working to let each musician add their own set of sounds from the instrument of their choice. It will thus be possible to upload a melody played on the cello or a recording of a crackling sound. The program will split the sounds and generate a mosaic automatically.
But how can thousands of tiles be displayed on a single screen? Is it better to show them in three dimensions? Would that restrict the tool to people with a powerful graphics card? Should there be a button to turn the 3D view on and off? In collaboration with Professor Miriam Boucher, an audiovisual specialist, the team is trying to answer these and related questions to make the program as accessible as possible.
Playing music by clicking thousands of tiles
How do you make the selection of small colored tiles musical? Is it really musical to grab the different tiles with a mouse or keyboard? Will a special controller be required? Would it be practical to develop preset pathways? Could we, for example, connect certain points, retrigger them at regular intervals, then vary them over time? That’s what Mike Cassidy, a visiting student in the lab, is trying to find out.
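The "connect points, retrigger at regular intervals, then vary over time" idea is essentially a mutating loop sequencer. As a minimal sketch (an assumption about the concept, not Mike Cassidy's actual design): a fixed path of tiles is retriggered beat after beat, and at the end of each pass one tile in the loop is swapped, so the pattern evolves gradually. The tile names and swap schedule are invented.

```python
def play_loop(tiles, beats, swaps):
    """Retrigger a fixed path of tiles at regular intervals, mutating it
    over time: after each full pass, apply the next (index, new_tile) swap.
    Returns the sequence of tiles 'played', one per beat."""
    tiles = list(tiles)
    swap_iter = iter(swaps)
    played = []
    for beat in range(beats):
        played.append(tiles[beat % len(tiles)])
        # at the end of each pass through the loop, vary it slightly
        if beat % len(tiles) == len(tiles) - 1:
            try:
                i, new = next(swap_iter)
                tiles[i] = new
            except StopIteration:
                pass  # no more variations: keep repeating as-is
    return played

seq = play_loop(["low", "mid", "high"], beats=9,
                swaps=[(1, "noise"), (0, "bird")])
# pass 1: low, mid, high; pass 2: low, noise, high; pass 3: bird, noise, high
```

Each pass repeats the connected points at a steady interval while the loop's contents drift, which is one plausible reading of the gesture the paragraph asks about.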
A trial version of the software will be released at the end of the summer. The research team will then spend the following years refining the tool for composing and performing music. See you in three years to hear the results in the concert hall!