Today, I have something super interesting to present to you!
I discovered this amazing tool called Polymath, which uses deep learning to turn any music library into a sample library for your music production.
Imagine you have a pile of sounds collected here and there, from YouTube videos for example, hoping that one day some little snippet will inspire you. With Polymath, you no longer have to dig through all of that by hand to extract the parts that interest you, right down to MIDI format.
Polymath does this for us using several neural networks and audio libraries, such as Demucs, sf_segmenter, Crepe, Basic Pitch, pyrubberband, and librosa. It automatically separates songs into stems (drums, bass, etc.), quantizes them to the same tempo and beatgrid, analyzes the musical structure, key, and other information (timbre, loudness, etc.), and converts the audio to MIDI.
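To give you an idea of the "quantize to a shared beatgrid" step, here's a toy sketch of the underlying idea: snapping detected onset times to the grid of a target tempo. The function name and numbers are purely illustrative, not Polymath's actual API (the real tool time-stretches audio with pyrubberband rather than just moving timestamps):

```python
# Illustrative sketch: snap onset times (in seconds) to the beatgrid
# of a target tempo. Not Polymath's actual code.

def quantize_onsets(onsets_sec, bpm):
    """Snap a list of onset times to the nearest beat of the given tempo."""
    beat = 60.0 / bpm  # duration of one beat in seconds
    return [round(t / beat) * beat for t in onsets_sec]

# At 120 BPM the grid step is 0.5 s, so slightly-off onsets land on the grid:
print(quantize_onsets([0.1, 0.48, 1.02], 120))  # -> [0.0, 0.5, 1.0]
```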
But before you dive in headfirst, here’s how to install and use Polymath. First, make sure you have ffmpeg and Python installed on your system.
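If you want a quick sanity check before installing, a few lines of Python can tell you whether the prerequisites are on your PATH (this is just a convenience snippet, not part of Polymath):

```python
import shutil
import sys

# Illustrative prerequisite check: is ffmpeg on the PATH, and which Python
# are we running? shutil.which() returns None when a program is not found.
print("ffmpeg:", shutil.which("ffmpeg") or "missing")
print("python:", sys.version.split()[0])
```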
You can then clone the Polymath repository using this command:
git clone https://github.com/samim23/polymath
Once done, install the necessary dependencies with the command:
cd polymath
pip install -r requirements.txt
If you have a problem with basic-pitch, try running this command:
pip install git+https://github.com/spotify/basic-pitch.git
Most of the libraries used by Polymath are compatible with GPUs via CUDA, so check out this guide to set up TensorFlow with CUDA if you want.
Then, to add songs to your Polymath library, just use the following commands for YouTube videos or local audio files:
python polymath.py -a n6DAqMFe97E
python polymath.py -a /path/to/audiolib/song.wav
Note that songs are analyzed automatically the first time they are added, which may take a while. But once they are in the database, you can access them quickly.
You will then be able to find similar songs, quantize them to a specific tempo, and even convert the processed audio files to MIDI (note that percussion support is currently limited). I strongly urge you to read the documentation on GitHub to learn how to use the tool. There’s even the option of running the whole thing in Docker. It’s crazy!
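On the audio-to-MIDI side, pitch trackers like Crepe output a fundamental frequency in Hz, and turning that into a MIDI note number is a standard mapping (A4 = 440 Hz = note 69). Here's a simplified sketch of that one step, not Polymath's actual code:

```python
import math

# Standard frequency-to-MIDI mapping: 12 semitones per octave,
# anchored at A4 = 440 Hz = MIDI note 69.

def hz_to_midi(freq_hz):
    """Convert a frequency in Hz to the nearest MIDI note number."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

print(hz_to_midi(440.0))   # A4 -> 69
print(hz_to_midi(261.63))  # C4 -> 60
```

The full conversion done by tools like Basic Pitch is of course much more involved (onsets, note durations, polyphony), but this mapping is the core of it.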
What’s great is that you can adjust various settings in Polymath to tailor the tool to your specific needs. Whether you’re a beginner music producer, a seasoned DJ, or a developer specializing in audio machine learning, you’ll be able to customize each setting to perfectly extract the sounds you’re looking for.
It’s like having a virtual assistant dedicated to creating custom samples from a music library. It’s a crazy time saver. It will undoubtedly transform the way we work with music.