In 2021, technology's role in how art is generated is still being debated and discovered. From the rise of NFTs, to the proliferation of techno-artists who use generative adversarial networks to produce visual works, to smartphone apps that compose new music, creatives and technologists are continually experimenting with how art is produced, consumed, and monetized.
BT, the Grammy-nominated composer of 2010's These Hopeful Machines, has become a global leader at the intersection of technology and music. Beyond producing and writing for David Bowie, Death Cab for Cutie, Madonna, and the Roots, and composing scores for The Fast and the Furious, Smallville, and many other shows and films, he helped develop production techniques such as stutter editing and granular synthesis. Last spring, BT released GENESIS.JSON, a piece of software that contains 24 hours of original music and visual art. It holds 15,000 individually sequenced audio and video clips that he created from scratch, spanning rhythmic figures, field recordings of cicadas and crickets, a live orchestra, drum machines, and myriad other sounds that play continuously. And it lives on the blockchain. It is, to my knowledge, the first composition of its kind.
Could ideas like GENESIS.JSON be the future of original music, with composers using AI and blockchain to create entirely new art forms? What makes an artist in the age of algorithms? I spoke with BT to find out.
What are your central interests at the interface of artificial intelligence and music?
I am really fascinated by this idea of what an artist is. Speaking in my native language, music: it's a very small range of variables. We have 12 notes. There is a collection of rhythms that we generally use. There's a kind of vernacular of instruments, tones, and timbres, but when you start adding them up, it becomes this really deep data set.
At first glance, it makes you wonder, "What is special and unique about an artist?" And that's something I've been curious about my whole adult life. Seeing the research going on in artificial intelligence, my immediate thought was that music is low-hanging fruit.
Nowadays we can take the sum total of an artist's output, take their artistic works, and quantify it all into a training set, a massive multivariable training set. And we don't even have to name the variables. RNNs (recurrent neural networks) and CNNs (convolutional neural networks) identify them automatically.
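The idea of quantifying a body of music into a training set can be sketched very simply. Below is a minimal, illustrative example (not BT's or any real system's pipeline): a melody is encoded as pitch classes and sliced into input/target windows of the kind a recurrent model would be trained on. The encoding and window length are assumptions for illustration; real training sets also capture duration, velocity, timbre, and more.

```python
# Minimal sketch: turn a melody into (input, target) pairs for a
# sequence model. Notes are encoded as pitch classes 0-11; real
# systems also encode duration, velocity, instrument, etc.

def make_training_set(notes, window=4):
    """Slide a fixed-size window over the note sequence: the model
    sees `window` notes and learns to predict the next one."""
    examples = []
    for i in range(len(notes) - window):
        context = notes[i:i + window]
        target = notes[i + window]
        examples.append((context, target))
    return examples

# A toy melody (C, E, G, E, C, ... as pitch classes).
melody = [0, 4, 7, 4, 0, 4, 7, 11, 7, 4]
pairs = make_training_set(melody, window=4)
print(len(pairs))   # 6 windows from a 10-note melody
print(pairs[0])     # ([0, 4, 7, 4], 0)
```

Each pair is one data point; stack enough of them from an artist's entire catalog and you have the "massive multivariable training set" BT describes.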
So you're referring to a body of music that can be used to "train" an artificial intelligence algorithm, which can then create original music resembling the music it was trained on. If we reduce the genius of artists like Coltrane or Mozart, say, into a training set and can recreate their sound, how will musicians and music lovers react?
I think the closer we get, the more it becomes this uncanny-valley idea. Some would say that things like music are sacrosanct and touch on very basic things about our humanity. It's not difficult to get into a spiritual conversation about what music is as a language, what it means, how powerful it is, and how it transcends culture, race, and time. So the traditional musician might say, "It can't be done. There are so many nuances and feelings, and your life experience, and that sort of thing, that go into making music."
And the engineer side of me goes, well, look what Google did. It's a simple sort of MIDI generation engine, where they've taken all of Bach's works and it's able to spit out [Bach-like] fugues. Because Bach wrote so many fugues, it's a great example. He is also the father of modern harmony. Musicologists listen to some of these Google Magenta fugues and cannot distinguish them from Bach's original works. Again, this makes us wonder what constitutes an artist.
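Magenta's actual models are neural networks, but the "generation engine" loop BT describes, train on a corpus and then emit new material in its style, can be illustrated with a much simpler stand-in: a Markov chain over next-note frequencies. Everything here is a toy sketch for illustration; it is not Magenta's API or architecture.

```python
import random
from collections import defaultdict

# Stand-in for a generation engine: learn next-note frequencies from
# a corpus, then sample a new sequence. Real systems like Magenta use
# neural networks, but the train-then-generate loop is the same shape.

def train(corpus):
    """Count which note follows which across all training melodies."""
    transitions = defaultdict(list)
    for melody in corpus:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a melody by repeatedly picking a plausible next note."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return out

# Two toy "Bach" melodies as pitch classes; the generated line only
# ever makes moves that appeared somewhere in the corpus.
corpus = [[0, 4, 7, 4, 0], [0, 4, 7, 11, 7, 4, 0]]
model = train(corpus)
print(generate(model, start=0, length=8))
```

The point of the toy: the output is statistically "in the style of" the corpus without copying any one melody, which is the property that makes the Magenta fugues hard for musicologists to distinguish from Bach.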
I am both excited and very worried about the space we're moving into. Maybe the question I want to ask is less "We can, but should we?" and more "How can we do it responsibly, given that this is happening?"
Right now, there are companies using platforms like Spotify or YouTube to train their models on living artists whose works are copyrighted and protected. Companies are allowed to take someone's work and train models on it right now. Should we do this? Or should we first talk to the artists themselves? I think we need to put protection mechanisms in place for visual artists, for programmers, for musicians.