CO:QUO : Sounds of Science
We train raw-audio neural networks that generate music 24/7 via livestream on YouTube, imitating bands in lesser-known genres such as death metal, math rock, free jazz, breakcore, skate punk, industrial, and beatbox. We've published scientific papers on our process at NIPS, MUME, and MILC. We worked on a documentary with Reeps One at Bell Labs on AI and beatbox. In 2019 we toured the EU and US, teaching workshops, performing, and presenting installations. We write software and collaborate directly with artists to create otherwise impossible new music.
In 2017 we published one of the earliest examples of fully-generated raw-audio music made with neural networks. It went viral, reaching millions of listeners, and was featured in hundreds of international press articles, including VICE and Time, inspiring social commentary.
At the time, most ML music researchers were generating MIDI, because raw audio is significantly harder to model. MIDI works well for making classical and pop music, but it is insufficient for modern genres that use timbre and space compositionally.
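To give a rough sense of why raw audio is harder: MIDI encodes music as sparse note events, while raw audio is a dense stream of amplitude samples that a generative model must predict one by one. The sketch below compares the two representations by counting how many values per second a model has to emit; the specific rates are illustrative assumptions, not figures from our work.

```python
# Illustrative comparison of sequence lengths a generative model faces.
# All constants here are assumptions chosen for illustration.

SAMPLE_RATE = 16000        # samples/sec, a common rate for raw-audio models
NOTES_PER_SECOND = 8       # a fairly busy MIDI passage (assumed)
VALUES_PER_NOTE = 3        # e.g. pitch, velocity, duration

def values_per_second(representation: str) -> int:
    """Return how many values a model must predict per second of music."""
    if representation == "midi":
        return NOTES_PER_SECOND * VALUES_PER_NOTE
    if representation == "raw_audio":
        return SAMPLE_RATE
    raise ValueError(f"unknown representation: {representation}")

print(values_per_second("midi"))       # 24
print(values_per_second("raw_audio"))  # 16000
```

Under these assumptions the raw-audio sequence is hundreds of times longer per second of music, and unlike MIDI it carries timbre and room acoustics directly in the waveform, which is exactly what genres like death metal and breakcore exploit.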
Metal fans loved it, joking "this will be playing at ear-shattering volume on every speaker and headphone on Earth when Skynet gains self-awareness and proceeds to eliminate the human race".