Playlists are awesome. You barbecue, you have a barbecue playlist. You clean up, you have a cleaning playlist. You code in Python, you have a “Python coding” playlist, and so on. But what would be even more remarkable is generating original melodies, adapted to our craziest desires, from a simple idea.
Well, my friends, it is now possible, thanks to an artificial intelligence (AI) model called MusicGen. MusicGen, developed by Meta, generates high-quality music samples from text descriptions, audio snippets, or melodic characteristics, making music creation as easy as typing a few words.
MusicGen is a single language model that operates over several parallel streams of compressed audio tokens. We can input text and an MP3 and obtain an original piece. For example, feed it an excerpt from J.S. Bach plus a description like “I want an 80s song,” and it produces an original 80s-style track.
You can describe a musical atmosphere or go into detail about the BPM, the instruments, the setting, the sound, and more, and you’ll get a pleasing result. As with all generative AI models, the more precise and detailed your prompt, the better the output.
Felix Kreuk and his team at Meta developed the model, and if you’re a coder, you can rejoice: MusicGen can also be accessed through the Hugging Face API.
MusicGen is a big step forward, allowing anyone to create music for their wedding or vacation videos. It also helps artists by providing material (samples) to build new tracks or audio experiences.
But then, how did MusicGen learn to create music?
The model was trained on over 20,000 hours of licensed music (though the results may still occasionally set your teeth on edge) and 390,000 instrumental tracks from sound libraries such as Shutterstock and Pond5.
I don’t know whether this kind of tool will appeal to every artist, but I think producers should take an interest in mastering the AIs that will shape the sounds of tomorrow.
Check out this article to learn more about MusicGen and maybe even try it yourself. Perhaps you’ll compose your next big hit with this stuff.