Can robots be creative? Beyond the philosophical question, AI has begun to disrupt the music industry in many ways, and a new revolution is unfolding before our eyes.
Will the next hit be… inhuman? Will artificial intelligence (AI) imagine songs that knock Aya Nakamura, Harry Styles, Lady Gaga or Slimane out of the Top 50 and topple the list of greatest songs of all time? There is currently no example of a worldwide musical hit signed by an artificial intelligence, but AI has been making inroads into the music industry for several years now and is changing it from the inside.
Almost 30 years after the release of Nirvana's album Nevermind, an artificial intelligence imagined a new song in 2020, "Suffocate", created by the Hal 9000 software. It was trained on all of the band's songs and used a random mathematical process called a "Markov chain" to write a piece in their "style". While the result is not strikingly similar to Nirvana's work, the ability to generate both lyrics and music was already promising three years ago.
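The principle of a Markov chain applied to music can be illustrated in a few lines. The sketch below is a toy example, not the actual Hal 9000 software: a first-order chain learns which note tends to follow which in a small "corpus" melody (an invented sequence), then samples a new melody with the same local style.

```python
import random
from collections import defaultdict

# Toy corpus: a short melody written as note names (illustrative data).
corpus = ["E4", "G4", "A4", "G4", "E4", "D4", "E4", "G4", "E4", "D4", "C4"]

# Learn first-order transitions: for each note, which notes followed it.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length, rng):
    """Sample a melody of `length` notes, walking the learned transitions."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:          # dead end (a note with no successor): restart
            options = [start]
        melody.append(rng.choice(options))
    return melody

print(generate("E4", 8, random.Random(42)))
```

Because each step picks randomly among observed successors, every run (with a different seed) yields a different melody that nonetheless only uses transitions heard in the corpus — exactly the "style imitation" the article describes.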
In 2021, a student at École polytechnique fédérale de Lausanne (EPFL) used artificial intelligence to develop a "version 10.1" of Beethoven's unfinished 10th symphony. "There is no common thread. The music is very conventional. It lacks structure. The composer's brilliance is absent," conductor Guillaume Berney judged at the time. Since then, AI has come a long way and is capable of much more.
Musical AI: a wide range of possible applications
Artificial intelligence is now widely used in the music industry. We can count at least six different uses:
• Creating music: some AIs can autonomously compose tunes, using signal-processing algorithms and discography databases to create original tracks. OpenAI (creator of DALL-E) imagined Jukebox for this purpose.
• Music transcription: AIs can convert song recordings into written scores, a feature that can be especially useful for composers or music publishers. Klangio offers this, for example.
• Music analysis: AIs can extract information about a song's structure and characteristics, such as its key, tempo, and chords. The Cyanite service is one example.
• Music recommendation: audio streaming services like Spotify, Apple Music and Deezer use artificial intelligence to recommend music based on your tastes and listening history.
• Creation of new musical instruments: AI can also draw on its knowledge base to imagine new, artificially controlled instruments. This is the case, for example, in the Patchworld metaverse.
• Improving musicians' recorded or live performances: AI can also modify a singer's voice or an instrument's sound, like an auto-tune system with more advanced capabilities. During the 2022 Sónar Festival in Barcelona, when Matt Dryhurst took the microphone on stage and began to sing live, it was not his usual voice but that of his wife, electronic musician and technologist Holly Herndon, that came out of the speakers. It was a live performance of the Holly+ project, an experiment that takes one voice and transforms it into another while retaining some of its characteristics.
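To make the recommendation use case above concrete, here is a minimal sketch, not the actual algorithm of Spotify or any other service: each track is represented by a small vector of audio features (the names and values are invented for illustration), and the track closest to the listener's taste profile by cosine similarity is recommended.

```python
import numpy as np

# Hypothetical feature vectors (e.g. tempo, energy, danceability),
# normalized to [0, 1]; invented for this illustration.
tracks = {
    "track_a": np.array([0.9, 0.8, 0.7]),
    "track_b": np.array([0.2, 0.3, 0.4]),
    "track_c": np.array([0.85, 0.75, 0.65]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means identical direction in feature space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(profile, catalog, exclude=()):
    """Return the catalog track most similar to the listener's profile."""
    scores = {name: cosine(profile, vec)
              for name, vec in catalog.items() if name not in exclude}
    return max(scores, key=scores.get)

# Taste profile: the user mostly listened to track_a.
print(recommend(tracks["track_a"], tracks, exclude={"track_a"}))
```

Real services combine many such signals (collaborative filtering, listening history, context), but the core idea of matching vectors in a feature space is the same.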
Can artificial intelligence ever be creative?
This is a question that keeps coming back: could there be some sort of genius in artificial intelligence? This philosophical challenge applies to all generative AIs capable of creating art from words, images or sounds. "You can take any tool and turn it into art," says Benoît Raphaël, founder of Flint, an artificial-intelligence project for media. Designer Geoffrey Dorne goes further: "Since the result of each query to an artificial intelligence will be different, there can be 'accidents' which are small miracles." The random variable of the algorithm can then be compared to a kind of unconscious creativity of its own.
The Argentine writer Jorge Luis Borges liked to recall, quoting the American artist James Abbott McNeill Whistler: "Art happens. It occurs. Art is a small miracle… somehow escaping the organized causality of history. Yes, art either happens or it doesn't. It doesn't depend on the artist." After all, art is also about the way people look at creativity, and its uniqueness.
Riffusion: when AI composes music by visualizing it
At the intersection of science and culture, the Riffusion project stands out. It is based on the very powerful and very popular Stable Diffusion AI, which rivals DALL-E 2 and Midjourney. Riffusion is a music generator that works from text prompts: it creates a visual representation of the sound and converts it to audio for playback. This representation is called a sonogram (or spectrogram): an image of a signal's frequency and intensity as a function of time.
Because this representation is visual, Stable Diffusion knows how to analyze and transform it. Riffusion's two founders, the Americans Seth Forsgren and Hayk Martiros, trained the artificial intelligence to make connections between sonograms and descriptions of the sounds or musical genres they represent. Thanks to this, Riffusion can quickly create new music from text instructions: just describe the type of music or sound you want to hear in a sentence or a few words, for example "ragtime in the style of Scott Joplin", "Ibiza at 3 in the morning" or "acoustic violin".
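The spectrogram round trip at the heart of this idea can be demonstrated in a few lines. The sketch below is not Riffusion's actual code, just the underlying signal-processing principle: a short-time Fourier transform (STFT) turns audio into a frequency-by-time intensity image, and the inverse transform turns such a matrix back into audio.

```python
import numpy as np
from scipy.signal import stft, istft

sr = 8000                                  # sample rate (Hz)
t = np.arange(sr) / sr                     # one second of samples
audio = np.sin(2 * np.pi * 440 * t)        # a pure 440 Hz tone (A4)

# Forward: audio -> complex time-frequency matrix Z.
f, times, Z = stft(audio, fs=sr, nperseg=256)
spectrogram = np.abs(Z)                    # the "image": intensity per freq/time

# The brightest row of the image should sit near 440 Hz.
peak_hz = f[spectrogram.mean(axis=1).argmax()]

# Backward: the inverse STFT turns the matrix into audio again.
_, reconstructed = istft(Z, fs=sr, nperseg=256)
```

Riffusion's extra step is that Stable Diffusion generates (or transforms) the spectrogram image itself from a text prompt; the magnitude-only image then has to have its phase estimated before the inverse transform, which the sketch above sidesteps by keeping the complex matrix.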
On screen, we see the prompts that internet users submit to the AI and their visual representation as sonograms… while listening to the audio result. It's a little weird, but pretty simple. And it has the merit of offering a creative universe as fun and liberating as the generative image AIs, with one constant: the quality of the prompt has a lot to do with the "art" that emerges from it. Proof that genius is still the preserve of the human brain. For now, at least.