In a month, Holly Herndon will release her third studio album, Proto. The album’s major point of difference, and Herndon’s current area of research, is its use of an artificial intelligence in the album’s creation, created and ‘trained’ by Herndon and her partner Mat Dryhurst and nicknamed “Spawn”. Alongside her commercially released albums, Herndon is a doctoral student at Stanford University. The American musician and sound artist answered some of my questions over the phone from her home in Berlin.
Do you think artificial intelligence will come to hold a more common and mainstream position in the music industry?
Definitely. (Pause.) Do you want me to expand on that? Haha.
Haha, that’s enough if you want.
I’m just doing Yes/No answers. No I’m just kidding haha. Yeah, I think it’ll be definitely more widespread, especially as we figure out how to make it more approachable. I can imagine little AI patches in Ableton, you know. It could feel as seamless as like a melody generator, a harmony generator, or like beat generators, following certain genres and things like that. That’s such a different approach from what we took: we were dealing with sound material rather than automated composing. We really specifically wanted to take a different approach because the automated composing thing, even though it could potentially make writing really fast, we fear that it gets us in a kind of aesthetic feedback loop, because it’s trained on things that existed before, morphing things that existed before, and so it’s kind of hard to break out of the, you know, what Simon Reynolds calls Retromania, when you’re doing automated composing. So we really wanted to take more of a musique concrète approach and generate sound itself, so we saw Spawn more as a performer rather than a composer. By performer I mean ensemble.
How closely do your released musical projects line up with your subject of academic study at the time?
Well, since I’ve been at Stanford I’ve released two albums, Platform and now Proto. I have kind of a weird research-oriented art practice that I release through pop records, which is kind of an unusual fit, so academia is a really great place for me to kind of do the research-oriented stuff. Not everything fits into an album, so it’s really nice to be able to do performances, and I’ll have those conversations that might not fit into this one kind of avenue. But I also do stuff like installation work and performance kind of outside the realm of strictly music, so I’m always kind of oscillating between fields. I feel like no one field has everything that I’m looking for, which makes trying to plan a career trajectory really difficult haha. It’s just like a matter of interest you know, when I get bored in one area I go to the other. But for my final project at Stanford, I have been working on this topic of AI.
As far as the concepts in your music, do you worry about clearly communicating those concepts to people who are listening to it for the first time, or do you feel that your sound should stand alone?
Well, that’s one reason why I’ve tried to build conceptual elements into the production process, the making of the work. Like with Proto, since we’re generating so much sound with artificial intelligence, it has a certain kind of timbre to it, it has a certain kind of raspiness where you can hear the network trying to make sense of what it’s been trained on. So I try to kind of build that in there, and it might not be so obvious at first. I’d like for people to be able to just enjoy it also as music but to be able to hear that there’s maybe a palpable difference. And then maybe if they’re curious they can, you know, look into it, and then there are more layers to peel back. I don’t want to have to provide programme notes, I mean that’s kind of a joke in academic music, that you’ll have like five pages of programme notes and then somebody will get up and do something that sounds improvised, you know, haha. So that’s not an approach that I want to take, I like to make my music more approachable, and not even in a cynical way, I want things to be beautiful cause I enjoy that beauty as well. When I was younger and starting out I was more in the noise scene, and it was all about putting up a noise wall and being just as difficult to hear as possible, and I dunno, I feel like I’m over that teenage impulse haha. And I feel like by working with sometimes more traditional pop forms and things like that then other ideas can be kind of Trojan-horsed into a work that some people might not normally engage with. Yeah, so I like that idea.
What makes the human voice so special to you as a sound source?
Um, God there’s so many things, like for one thing it is kind of like our audible fingerprint, so everyone’s voice is fairly unique. And you can actually create a voice model of people’s individual voices. So while everyone’s voice is unique, cause it’s being modulated through their specific bodies, it’s also something that’s developed by a community. So vocal affect and language and dialect and all these things, they develop over time through mimicry, throughout a community, and there’s just something really beautiful about this instrument that’s both shared and unique to oneself. And also, you know, it’s internal and external, you’re literally like breathing life into the sound. And it’s also an instrument that a lot of people understand, you don’t have to really understand fingering techniques and things like this that you would for a guitar or a piano, it’s kind of like an instrument that most people can relate to.
What do you think the internet has done to perhaps change the context of the human voice?
Hmm, that’s an interesting question. ‘Cause I wouldn’t think about it as necessarily the internet specifically. I mean, I think about the digitisation of the voice quite a bit. With Movement I was creating custom digital processes to try to make my voice live in the same aural environment as my digital instruments, rather than like a layer kind of pasted on top. So I think there are parallels between the digitisation of the voice and the ubiquitous internet. I think if the internet has done anything it’s created kind of like an aesthetic of multiplicity. A kind of hyperactive, multilayered, everything-on-the-same-plane kind of aesthetic. And I think that we probably hear that in, I mean I guess you hear it in my music in the hyper-editing of the voice, the very natural right next to the very synthetic. These things kind of colliding into each other in a way that used to be really separate when things were controlled via channels and things. And also just like trying to make sense of where our physical bodies live in this new, hyper-mediated, super-connected landscape, I think is something that we’re constantly doing. I think that’s one reason why Auto-Tune is so popular. I was reading an article the other day, I think it was in Pitchfork, and they said that “Auto-Tune brings the voice up to code”. I thought that was a really interesting phrase. Trying to make sense of our fleshy, messy, imperfect body in this hyperconnected world, I think, is also part of all of that.
I was wondering... when you approach a composition, are you approaching it from the perspective of a songwriter?
Um, it depends on what I’m doing, so I put on different hats. I’ve never really done that...well, that’s not true. Let me think. I did that more on this album than I’ve ever done before. I really wanted to focus more on having a protagonist lead vocal and a more kind of recognisable structure. I wanted to try to find a way to make that my own that wasn’t formulaic. But yeah, it depends on what I’m working on. Like we did a piece last year for a planetarium, and I felt like I was more of like a director, so there’s just like different modes.
Right, and is that songwriting element something you were interested in before your interest in electronic music? Did you ever learn an instrument in the traditional sense?
Yeah, I learnt piano and guitar and contrabass. And I was never really like particularly exceptional at any of them haha.
There’s a path for everybody haha.
Released: 10 Apr 2019