Therewolf was a band that some of my best friends and I formed way back in 2009. We got together in our drummer’s garage and wrote these songs. For our one and only performance, we were lucky enough to be asked to play the final show at the legendary Oddstad Gallery in Redwood City before it was sold to a commercial real estate developer. That space was central to our community — it had hosted hundreds of shows and major bands from all around the world. The energy of that final event was incredible.
For reasons none of us are really clear on 13+ years later, Therewolf disbanded shortly after that show. A few months later, I chose to move to Washington state and everyone else moved on with their respective lives as well.
I was disappointed that we had never recorded what we had created and tried multiple times to get a remote recording project together. I recorded some demos and tabbed out all the songs to preserve the ideas. The first attempt at a proper recording took place in 2011, but the effort fizzled and the session was scrapped before much progress was made.
I spent 2012-2018 running Big Name Recording Studio full time. In early 2018, my wife and I decided that it was time to switch things up and start our family, and we decided that the best path forward was to pack up and move to Colorado to be close to her family. I had greatly enjoyed having such a flexible recording space for so long and was uncertain what the future might hold for me with regard to being able to produce loud music without irritating any neighbors. I had a few loud recording projects that had yet to get off the ground that I felt could be great — I thought, “It might be now or never!” (Thankfully, I did end up finding a great recording space in Colorado.)
So I set up the necessary gear and got to work on those projects before we moved. This is one of those sessions.
Our drummer Mike told me at some point prior to 2018 that if I wanted to make the Therewolf recording happen, I should just play the drums on it, since it would be much easier logistically than having him do it, and because it would probably never come to fruition otherwise. So I did, trying to stay as faithful as possible to what he had written based on our full-band demos from 2009. (I am thankful that he was still able to participate in this release as the artist behind the album cover!) I then recorded my guitar parts and sent the session off to Steve (guitar) and Kol (bass), who recorded in California and Illinois, respectively.
The session sat dormant until 2021 when Albert announced that he had booked a recording session (also in California) and would be tracking his vocals. This was quite a surprise after the long pause and I was really excited when I heard the result.
Of course, soon after that I began my journey into the world of software development. I took about a year where I did not work on music at all, so again the session sat dormant. But finally, in 2023, more than 13 years after writing the songs and more than 5 years after beginning the session, the music is ready to be shared.
There are plans in the works to resurrect the band and perform live again. I’m not sure exactly what my capacity will be in the new iteration of the project due to living in a different state, but I am excited to see what the future holds for THEREWOLF.
Gapless w/ Lyrics on YouTube:
Stream or Download/Purchase (name-your-price) on Bandcamp:
It is with great excitement that I present my latest work, [escape]!
This album is an exploration of the concept of escaping. Escaping difficult personal circumstances, escaping reality, escaping self-imposed limitations, escaping wider social disaster, etc.
On a lighter note, it also explores the concept of “escaping” the confines of the predominant musical tuning system in the world today.
Like my previous [syzygy] releases [xendeavor one], [ouroboros], and [loiterer], my Yaeth album MMXX, Melopoeia’s ongoing Valaquenta, and my web app Color Horizons, this album explores microtonality: the spaces between the notes found in 12 Tone Equal Temperament (also known as 12 Equal Divisions of the Octave, or 12edo). This release focuses on one particular alternative system, 10 Equal Divisions of the Octave (10edo).
Earlier on in my microtonal experimentation, I commissioned a custom 24edo neck from Metatonal Music. I thought it would be a great way to dip my toes into alternate EDOs — 24edo contains all the normal 12edo notes, plus every pitch exactly in between. I was very happy with the craftsmanship of the custom neck, but I quickly found the guitar pretty difficult to play due to having so many frets to keep in mind (some people can play 31edo guitars with precision; I don’t know how they do it!). Also, most of the new extra notes create rather dissonant harmonies. I was struggling to do anything I found worthwhile with it.
Later, I made [xendeavor one], which had one song in 10edo, The Katechon. While composing that song, I was surprised by how consonant I found the tuning. In my experience, when hearing a new alternative system to 12edo, there’s always an adjustment period — initially the tuning sounds strange, but after listening to it for a period of time it can end up sounding as “normal” as 12edo… just… different. I found my ears normalized 10edo very quickly compared to some other tunings. Later, I made MMXX, where I experimented with 10edo further on the song Rise. Again I was particularly intrigued by it as a tuning.
After much consideration, I decided that I wanted to try 10edo on guitar. Firstly, I knew at this point that I loved the sound that 10edo offers. Secondly, where 24edo is more complex due to having twice as many notes, trying 10edo would go the other direction — in theory, it would be simpler to play than 12edo, having fewer frets to keep in mind. Thirdly, while playing my 24edo guitar, I found I was always still mentally locked into 12edo thinking because all of the 12edo notes are still present. 10edo shares only one interval with 12edo, the 600 cent tritone (which is present in all even-numbered EDOs). Other than that, it offers entirely different potential harmony. (Though with 2 fewer frets per octave, it offers a more limited palette of scales with which to experiment…)
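As a quick check on that shared-interval claim, here is a small sketch (not part of the original writeup) that computes which pitches any two EDOs have in common. An n-edo places its notes at multiples of 1200/n cents, so two EDOs coincide wherever those multiples line up within the octave:

```python
# Which pitches (in cents) do two equal divisions of the octave share?
# Exact arithmetic with Fraction avoids floating-point comparison issues.

from fractions import Fraction

def shared_cents(edo_a: int, edo_b: int) -> list:
    """Return the cent values (0 <= c < 1200) present in both EDOs."""
    a = {Fraction(1200 * i, edo_a) for i in range(edo_a)}
    b = {Fraction(1200 * i, edo_b) for i in range(edo_b)}
    return sorted(a & b)

# 10edo and 12edo share only the unison and the 600-cent tritone:
print(shared_cents(10, 12))  # [Fraction(0, 1), Fraction(600, 1)]
```

Running this for 10edo against 12edo confirms that the tritone is the only pitch the two systems have in common besides the unison/octave itself.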
Once again, I began the process of getting a custom neck fabricated. As with my 24edo guitar, I was highly pleased with the result. The big difference was that with this guitar, I was immediately able to pick it up and play things that I found usable/worthwhile. The songs on [escape] are each the result of picking a particular mode of a 10edo scale and seeing what comes naturally from exploring it on this guitar.
One interesting side effect of playing the 10edo guitar for a while is that I now find it much easier to play my 24edo guitar than I did before. Where I used to play the 24edo guitar and get stuck thinking in terms of 12edo with extra notes, playing the 10edo guitar helped break the habits built up by more than 2 decades of 12edo guitar thinking. Now that I can comfortably play the 24edo guitar, there will definitely be some quarter-tone work coming in the future that will feature it.
This is my first complete solo release in over 2 years. Between the pandemic, having a second baby in our family, and spending every single personal free moment I had for a year on a career transition, I had very little time or energy left to produce musical projects. The effects of the pandemic on daily life have lessened, our baby is growing up, and I’ve settled in comfortably in my new line of work. This has left me with much more time and energy to make music. But once I had free time again, I found my musical momentum was low. I have had 6 projects in various stages of completion that have built up (the oldest of which was begun in 2016), but I couldn’t bring myself to actually open Pro Tools and do any work on any of them. This album is an active refreshment of my creative process. I used it to rebuild my momentum. Now that this is done, I am finding much joy in picking those other sessions back up. I look forward to sharing each one of them as they reach completion.
On another note, this is the first release on which I felt inspired to do fully sung vocals since my 2014 math rock offering, Vanishings. For many years I had only felt inspired to make instrumental music or music with harsh vocals… but I’ve always loved singing. Now that I’ve removed that mental block I will definitely be releasing more music with sung vocals as time moves forward. It was especially fun to sing on something microtonal! I’ve wanted to try that since first experimenting with alternative tuning systems.
Thanks to Ron Sword of Metatonal Music for the alternate-EDO neck fabrication, installation, and setup.
Here is an unusual project for which I am playing drums. It falls under the classification of scriptophonic microtonal black metal.
For anyone unfamiliar, in scriptophonic music, the artist designs an algorithm which converts text into musical notes.
Again for anyone unfamiliar, microtonal music utilizes pitches that fall outside of the standard system of twelve equal divisions of the octave (12edo). In this case, we are dividing the octave into 26 equal parts (26edo), one for each letter of the alphabet.
We are converting the entirety of Tolkien’s Valaquenta into music via an algorithm that Dave Tremblay programmed using Python. The algorithm generates the guitar parts from the text, which are played by Dave on a real 26edo guitar and bass. I am then given free rein to write and record drum parts that support those guitar parts. Finally, Brian Leong is providing the vocals, placing the words that were used to generate the guitar parts on top of each section.
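To give a feel for the idea, here is a hypothetical sketch of a letter-to-pitch mapping. This is not Dave’s actual algorithm — the base pitch and the direct a-to-z mapping are assumptions for illustration — but it shows the core concept of one 26edo step per letter:

```python
# Hypothetical scriptophonic sketch (NOT the actual Valaquenta algorithm):
# map each letter a-z to one of the 26 equal divisions of the octave.

BASE_HZ = 220.0  # assumed reference pitch for the illustration

def letter_to_hz(ch: str) -> float:
    step = ord(ch.lower()) - ord("a")   # a=0 .. z=25
    return BASE_HZ * 2 ** (step / 26)   # each 26edo step multiplies by 2^(1/26)

def text_to_notes(text: str) -> list:
    """Convert a string into a sequence of frequencies, skipping non-letters."""
    return [letter_to_hz(c) for c in text if c.isalpha()]

notes = text_to_notes("Valaquenta")
print([round(f, 1) for f in notes])
```

Any scheme like this is deterministic: the same text always yields the same melody, which is what makes setting an entire book to music feasible.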
There are 29 pieces total and we are recording and releasing them one at a time. The scope of this project is very large and so it will probably take a long time to finish!
This album is a look back at the hellish year behind us with its attempted authoritarian takeover of the US government, mass death brought about by a bungled federal government response to a deadly global pandemic, crushing misery and isolation brought about by that same pandemic, severe civil unrest due to deep-seated racial inequities, and massive natural disasters fueled by the poor environmental choices collectively made by humanity. It is a prayer for a better world ahead.
Like my album [xendeavor one] from February 2020, this album explores ways of dividing the octave other than the standard 12 equally spaced notes, which is the system that the vast majority of music in the world utilizes. These alternate systems can result in strange and otherworldly tonalities. I had been wanting to make a microtonal black metal album for a few years, and I began this album in early March. As we all know, this is right when the year’s events really kicked off, and so the creation of this album is intimately tied to and influenced by these world events.
Yaeth is the pseudonym I use in my band Bull of Apis Bull of Bronze. This album turned out stylistically and thematically similar in many ways to that project. Once I finished work composing the album I realized that, due to those ties, Yaeth was the only fitting name for this project.
My 2014 release Bird Surgeon – Vanishings is a math rock album that was heavily influenced by screamo. Ever since putting that out, I’ve wanted to do the inverse and write some music that is primarily skramz but heavily influenced by math rock. This is that.
This EP was inspired by the corruption and ineptitude of the Trump administration, the smoke-filled-room political dealings that took down the Sanders campaigns in the 2016 and 2020 primaries, and the bungled pandemic response which has resulted in millions losing work while the billionaires continue to rake in unconscionable profits. Our representation in this country is bought by the mega wealthy and the big corporations to protect profits for shareholders above all else. This is an indictment – not only against the cartoonish evil of the Republican party leadership but also against the complicity of the Democrats in maintaining this system. It is a call to fight, to do what you can with what you have in order to make change to this broken system.
Once again, I present a [syzygy] album. Once again, it has an entirely different sound than anything that has come from the project before.
What ties [syzygy]’s albums together is an approach of minimalistic experimentation and a musical atmosphere of unease. In the case of [visitor], the experiment was to see what I could do with just my untuned piano and a fretless bass. In the case of [ouroboros], it was to see what could be done with only a mixing board plugged into itself. With [xendeavor one], the experiment was to delve headfirst into the world of microtonal* electronic music.
The vast majority of Western music uses the 12 equal divisions of the octave tuning system (12edo). These songs use 7 other possible EDOs (10, 11, 15, 19, 31, 37, and 85), an overtone scale, and an undertone scale.
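For reference, the step size of each of these systems follows directly from dividing the 1200-cent octave evenly (a quick sketch, not from the original writeup):

```python
# Step size in cents for each EDO used on this release.
# An n-edo divides the 1200-cent octave into n equal steps of 1200/n cents.

edos = [10, 11, 15, 19, 31, 37, 85]
for n in edos:
    print(f"{n}edo: step = {1200 / n:.2f} cents")

# For comparison, 12edo's familiar semitone is exactly 100 cents.
```

The larger the number of divisions, the finer the steps: 85edo’s steps are about 14 cents, well below the size most listeners perceive as a distinct melodic interval.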
The term “microtonal” tends to conjure expectations of extremely dissonant, noisy, and/or unstructured music for some people because a lot of music labeled “microtonal” fits those descriptions. But it doesn’t have to! There is plenty of melody and harmony available to utilize within these systems. And you get the bonus of unique intervals and chords that are impossible in 12edo.
Since I discovered microtonality a few years ago I have really wanted to make this album. I tried to start a couple of times and was never happy with my results but this time it all clicked. I am very excited to share this with the world.
* If after reading this you have no idea what I’m talking about here, see the “Microtonality” section of my Loiterer – Adrift writeup for some further explanation. Or check out the Wikipedia article.
Tonally Inverted Version of The Legend of Zelda: Ocarina of Time’s Title Screen
What is tonal inversion? Basically, every melody, chord progression, or complete song has a “mirror reality” version hidden beneath itself. In order to discover this alternate version, you take the notes of the composition and flip them upside down, so that every interval becomes inverted. For example, if the melody goes up two full steps and then down a half step, you make the melody go down two full steps and then up a half step. When you apply this process to a major chord, it becomes a minor chord. Ionian (major) scales become phrygian scales. Aeolian (minor) scales become mixolydian scales. And so on and so forth.
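In MIDI terms, the process described above is just a reflection: every note is mirrored around a chosen pivot pitch, which flips every interval. A minimal sketch:

```python
# Tonal inversion: reflect each MIDI note n around a pivot p via 2*p - n,
# so every upward interval becomes the same-sized downward interval.

def invert(notes: list, pivot: int) -> list:
    return [2 * pivot - n for n in notes]

# C major triad (C4=60, E4=64, G4=67) inverted around C4 yields
# C4, Ab3, F3 - the notes of an F minor triad (major becomes minor).
print(invert([60, 64, 67], pivot=60))  # [60, 56, 53]
```

Applying the inversion twice returns the original notes, which is one way to sanity-check an implementation.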
The part I find most interesting is that if the original composition is musically coherent, the negative version will always be musically coherent as well. It may not be as moving as the original, or it might be a totally bizarre piece of music overall, but it will always at least work musically.
So what does the Nintendo 64 have to do with this? It turns out N64 games are great for applying this concept because you can directly rip the MIDI files and soundfonts from the games. What this means is that you can create tonally inverted versions that are extremely accurate to the original compositions. The timbres, timing, and note velocities are identical; the only difference is that the notes are upside down. The other benefit to applying this process to songs from N64 games is that, at least for millennials, these songs are fairly universally known and beloved. These types of transformations are much more intriguing than when you hear the concept applied to an unfamiliar piece of music.
Just over five years ago, while walking on the beach on a gloomy November day in Aberdeen, I had a thought: “What would Christmas carols sound like if they were turned into black metal?”
It was just a passing idea. I think the possibilities of playing around in that style were on my mind because I had just finished recording A God or an Other’s debut album, Towers of Silence, which heavily drew from the black metal lexicon. For whatever reason, the idea for this album really stuck with me. I knew I had to make it.
Obviously it had to be released during the Christmas season, but it was definitely too late to do so that year. There was no way I could arrange, record, and release it in just a few weeks. I could have recorded it and waited until the next year to release it, but I’ve always found it unpleasant to sit on finished releases for more than a short period of time. So I decided to wait ten or eleven months to begin.
Well, the next year came and the same thing happened; by the time I finally started considering working on it, it was too late to begin. This process repeated for the next four years. I was annoyed with myself each year, but now I recognize it’s for the best. Shortly after I came up with the idea, I ended up joining A God or an Other as the band’s drummer, which dramatically increased my blast beat chops over time. Half a year after joining, I started contributing vocals as well. My bandmates were really good at tremolo picking on guitar, and so being in the band also encouraged me to develop that skill. All those hours of practice within the style really helped bring about a superior end result compared to what this album would have been if I had recorded when the idea first struck me.
One other benefit to waiting 5 years between the inception of the idea and its execution is that I spent a lot of time between then and now expanding my understanding of music theory. This made the transmogrification process much more successful than I think it would have been in 2013.
Anyway, inspiration finally struck in September. I knew that this was the year. But there were two things making it a little more challenging than it normally would have been. First, I was insanely swamped with overtime at work. Second, my wife and I had (and still have) a newborn at home, and taking care of him requires quite a bit of work. My time was quite limited. But when I feel compelled by those mysterious creative forces to make something, I have to do it. So I found time. Most of this album was arranged and recorded between the hours of 5 A.M. and 6 A.M. on weekdays before I headed off to work. The process was a little rough but totally worth it.
I picked public domain songs purely for legal reasons, as I would like to be able to use these in any way that I may see fit in the future.
In order for a song to be picked, it had to be originally written in a major key, as it would not be as dramatic or as fun of a transformation to do a song that was originally composed in a minor key. I really wanted to play around with some of the more exotic minor scales instead of exclusively using aeolian mode. It was fun to utilize sounds like neapolitan minor and harmonic minor and to build some four part harmonies with them.
The last qualification for picking songs is difficult to explain. They had to make me feel some sort of… resonance… within myself. This has to do with the history of my early life. I’m not a Christian, but I was raised Catholic, and that upbringing had quite an impact on me. I heard these songs so many times in mass over the years. It is enjoyable for me to hear these representations of that part of my life turned around into something that is now meaningful to me in a new and very different way.
Until 2013, I was certain I would never make a Christmas album. Now here we are. Life is strange.
I hope you enjoy listening to it.
Cover art by my wife, Laura Lervold.
Cassettes are available through Bandcamp and Big Name Records.
On this day 10 years ago, LeBaron’s final recording session took place.
“What’s LeBaron?” You might be asking.
LeBaron was a music experiment that arose organically in the summer of 2007. The band consisted* of Kol Fenton, Stephen Navarrete, and me. It lasted only a few months. The album that we put together, Stambaugh Sessions, is the oldest release that I played on that I still thoroughly appreciate to this day.
The series of recording dates that comprise Stambaugh Sessions began spontaneously. One day, the three of us ended up hanging out in our buddy Anthony’s garage, which had been given the name Chestnut St. Arms. Bella Drive, Steve and Kol’s band with Daniel Hendrickson (who I later collaborated with in Phantom Float), practiced there. As a result of that arrangement, Steve and Kol were there frequently, and I would come by occasionally to see what my friends were up to. That’s how this place was. If you made music there, you were likely to have an audience of a few people hanging out while you worked on stuff.
Somehow Steve and I got the idea to split his drumkit up and each play half of it. In my corner of the space, I flipped the kick drum onto its side and played it with sticks, along with a snare and a ride. Over in his corner, Steve set up a snare, two toms, and a small crash. The hi-hat was positioned so that either of us could use the foot control or hit it with sticks.
We messed around jamming for a bit. Kol grabbed a guitar, turned on an amp, and… suddenly LeBaron was happening.
We played for maybe twenty minutes with a few friends watching. As we finished up, someone who was sitting in a recliner on the opposite side of the room (I can’t remember who at this point) said, “That was actually really interesting. You should come back with your recording stuff and do that again.”
So we did.
In total, there were 4 recording sessions. Each one had a very limited audience, but enough to give it a little bit of “event” energy. Now that it’s been so long, I’m not sure who witnessed these performances.
Each session was unique in some way. There are clues in the songs as to which songs were recorded on which occasion. The fretless bass was only used one of the days. The pedalboard was significantly expanded on one of the sessions, bringing in some extra sounds. On a different date, we allowed Aqua Teen Hunger Force to play on the TV in the background of all the recordings. During our third session, Ryan Moyer joined us and played an empty wine bottle with a drum stick. The rototoms and samples are present in some, but not all, of the recordings. And there was a bugle at one point.
Each time we would finish a take, we would listen back to it and see what we had just done. Some songs were titled immediately as we were listening back for the first time. “This part sounds like when you’re doing badly Taking a Test and getting more and more frustrated.” A few were actually given titles before we even played them. “Alright. This next song is called Gangsta Situations no matter what it sounds like.” Most of them were left untitled at this point, though.
I think we settled on the name LeBaron during our first recording session. The conversation went something like this:
“Okay, I have an idea. What’s the most non-descript, not-noteworthy car that you can think of? Something you wouldn’t be embarrassed to drive, but also wouldn’t be excited to drive at all?”
After our final recording session, I took the Tascam 4 track cassette recorder we had been using back to my house and digitized all of our tapes. We posted the files online, but had to cycle them out over time since Myspace would only let you post 3 songs at once. Within a few months we had moved on, playing in our more traditional bands, and that was that.
But I wasn’t happy with how our work was left incomplete. The songs had never been properly compiled, mastered, and released. In 2010, after I moved to Washington, I decided it was finally time to work on it. This was when the songs were given an order, and also when all the then-untitled songs were given names. Hard for me to believe that’s now 7 years ago, and that it’s been 10 years since they were first recorded. Time flies.
* We decided shortly after our last session that LeBaron technically never ended. If the three of us ever end up playing together again in the future, it’s still LeBaron. We all live in different parts of the country now, but hey, you never know what could happen.
Okay, so this piece is going to take some time to explain because it combines some uncommon experimental composition ideas. So to make things short for those who don’t want to read something long and dry, I will begin by putting this in quick-and-dirty terms:
1. This is music co-written by my computer. I coded a piece of music that writes itself.
2. This piece utilizes notes and harmonies unavailable in “normal” music.
Point 1 is particularly oversimplified. So if you’d like a more detailed, accurate representation of what this is, read on.
A few months ago, I released Key West, an algorithmic/microtonal* piece that was composed using the awesome software known as Pure Data. While I really enjoyed how that piece turned out, I wanted to go further with Pure Data and create something that had stronger rhythmic content. Within a few days of completing it, I began putting together a new composition.
I put quite a few hours into it over the next few weeks. The code ended up a lot more complicated than I expected. I liked how it all came together, but upon finishing it… I felt that I couldn’t put it out yet. This was because I actually didn’t understand some of the theory behind what I had done, and didn’t know how I was possibly going to explain it. I was told that to someone who wasn’t already deep into lunatic-fringe musician territory, the explanation I wrote up for Key West was mostly incomprehensible gobbledygook, so I wanted to be sure that I could thoroughly explain this new one before I put it out to the public. I had to do some research to figure out what exactly I had come up with.
* Don’t know what these terms mean? Don’t worry, explanations are coming below.
Like Key West, this piece also experiments with microtonality. Most music that we hear is written in 12edo (12 equal divisions of the octave), which splits the octave into 12 notes that the human ear interprets as even jumps in pitch. Key West utilizes a temperament known as 5edo, where the octave is instead split into 5 such even-sounding steps, allowing the utilization of notes that are not found in most music.
This song, however, abandons that approach and uses a different kind of system. The composition came to be as a result of my curiosity regarding different ways of splitting up the octave. After trying a bunch of different things, I happened to take the octave and split it into 16 even divisions… but instead of spacing the notes in equal cent intervals (steps that the human ear interprets as even jumps), as an EDO does, I split the octave evenly in hertz (i.e. vibrations per second). This is one of the things I didn’t fully understand at the time, and later got quite confused by, so hopefully I can explain.
The human ear interprets frequencies logarithmically. Doubling any frequency results in an octave harmony: we understand 200hz to be the same note as 100hz, one octave higher. This doubling continues with each iteration of the octave; 400hz, 800hz, 1600hz, et cetera are all perceived as the same note. This means that in a tuning system where every step of the scale is heard as exactly even, the number of Hz between adjacent notes increases as the scale ascends.
So, as I said before, I derived the scale for this piece in the opposite way. Instead of dividing the octave evenly in cents, I divided it evenly in Hz. This means that as the scale ascends, the ear hears the intervals getting closer together. I later learned that this is known as an otonal scale due to its relation to the overtone series (shout-outs to Dave Ryan and Tom Winspear for helping me figure out what the heck this approach is called).
This particular scale is known as otones16-32. From my research, otonal scales seem to be pretty uncommon. I don’t know why that is. They’re a very logical way to construct mathematically harmonious scales. To my ear, they can be utilized in a way which sounds very consonant, but they also allow for some exotic xenharmonic flavor.
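The construction is easy to verify numerically. Dividing the octave above a 200 Hz base into 16 equal steps in Hz gives each note the ratio (16+k)/16, which is exactly the otones16-32 scale (a sketch added here for illustration; the particular 7-note subset chosen for the piece is not reproduced):

```python
# otones16-32: divide the octave above a base frequency into 16 equal
# steps *in Hz* rather than in cents. Ratios run 16/16, 17/16, ... 32/16.

import math

base = 200.0  # Hz, the base frequency used in the piece

otones = [base * (16 + k) / 16 for k in range(17)]  # 16 steps, 17 pitches
print([round(f, 1) for f in otones])  # 200.0, 212.5, 225.0, ... 400.0

# Hz spacing is constant (12.5 Hz), but the *perceived* interval between
# adjacent notes shrinks as the scale ascends:
cents = [1200 * math.log2(otones[i + 1] / otones[i]) for i in range(16)]
print(round(cents[0], 1), "...", round(cents[-1], 1))  # roughly 105 ... 55
```

The first step (17/16) is about 105 cents, close to a 12edo semitone, while the last step (32/31) is only about 55 cents, which is where the exotic flavor of the upper part of the scale comes from.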
From the 16 available notes in this scale, I chose a set of 7 of them that I thought sounded agreeable together. The ratios† of these notes are:
I found the sound of this scale inspiring (especially when I heard how strange it sounded to build chords with it), so I set out to put together a new algorithmic piece. Like I said earlier, I wanted to be more ambitious and give the thing more structure than my last effort.
† “Ratios? What does that mean in this context?” The ratio is what you multiply your base note by to create a particular harmony. For example, say your base note is 100hz. To create a perfect fifth (3/2), you multiply 100*(3/2), yielding 150hz. When 100hz and 150hz are played together, every time that the base note vibrates twice, the harmony vibrates exactly three times in perfect alignment.
An algorithmic composition, in its simplest form, is music that is made by creating and then following a set of rules. The term, however, is more commonly used for music where the artist designs some kind of framework (most often a computer program) that allows the piece to perform itself without intervention from the artist. In this case, I did just that: I coded a framework of rules dictating which sounds are generated, and when, by a combination of soundwave generators.
This composition also falls under a closely related musical form known as generative music. A composition is generative if it is unique each time that it is listened to. It needs to be continually reinventing itself in some way. This algorithmic composition is also a generative composition because the computer makes many determinations about the musical output each time that it is played. Every time the program runs and the song is heard, the chord progressions are different, as are the rhythms and notes that the instruments play. Many aspects of the composition are the same each time, but many others are determined by the computer and are unique to every listen.
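To make the generative idea concrete, here is a minimal sketch (not the actual Pure Data patch) of one of the decisions described below: each playthrough independently picks a chord progression for each of the song’s 5 parts from a pool of 17, yielding a different overall structure each run:

```python
# Minimal generative sketch: pick a progression for each of 5 parts from
# a pool of 17, then label the result in ABC form (A = first distinct
# progression heard, B = second, and so on).

import random

NUM_PROGRESSIONS = 17
NUM_PARTS = 5

def generate_structure() -> str:
    picks = [random.randrange(NUM_PROGRESSIONS) for _ in range(NUM_PARTS)]
    labels = {}
    for p in picks:
        labels.setdefault(p, chr(ord("A") + len(labels)))
    return "".join(labels[p] for p in picks)

print(generate_structure())  # e.g. "ABCAD" or "ABACB"
```

Every call can produce a different structure, which is the defining trait of a generative piece: the composition reinvents itself on each listen.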
Here are the rules I chose that are the same on every playthrough:
A base frequency of 200 Hz (roughly halfway between G and Ab in A-440 tuning)
The overall rhythm and tempo
Backbeat on beat 3
The intervals that the chord instrument plays
Root note, 3 notes up the scale, 5 notes up the scale
The instruments and their beginning timbres
Hi-hats on both the left and right sides
The polyphonic synth which plays the 3-note chords
A monophonic, airy sounding “ah” synth
A lead “sequencer” that plays in the center, with a 5th harmony that fades in and out
A short-tailed secondary lead layer
The warbly background synths on the left and right
The 17 available chord progressions
The overall structure of the piece
A “trigger” occurring at 3 specific points which activates one of the instrument mute sequences
A slowdown and fade-out for an ending
And here’s what the computer determines on each playthrough:
Which chord progressions (of the 17 available) will be used for each of the 5 parts. There are no restrictions on which progressions can be used for each part, so:
The computer sometimes makes the piece have unique chord progressions for each part, creating a song structure of ABCDE
It is theoretically possible that the computer could choose the same chord progression for all 5 parts, creating AAAAA, though highly unlikely: the remaining four parts would each have to match the first, so the odds are 1 in 83,521 (17 to the fourth power)
The math here turns out to be the classic birthday problem: with 5 parts drawing independently from 17 progressions, at least two parts will share a progression just under half the time (about 48%)
Most often, it seems the structure will end up being something like ABCAD, or ABACB
Unique melodies and rhythms for each instrument during each of the five parts of the song. That is:
A unique drum pattern for each individual drum sound during each part
A unique bass line during each part
A unique sequencer line for each part
A unique short-tailed synth line for each part
A unique set of warbly background notes on the left and the right for each part
The morphing of the timbre of the instruments
The overtones that create the timbres
The amount and level of frequency and amplitude modulation
When this morphing occurs
Where and when the instruments pan around in the stereo field
When the sequencer harmony fades in and out
Which instrument mute sequence (of the 3 available) occurs at each defined trigger point
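The chord-progression odds discussed in the list above can be computed exactly with a few lines of counting (a sketch added for illustration):

```python
# With 5 parts each drawing independently from 17 progressions, the
# probabilities follow from simple counting (the birthday-problem setup).

from math import perm

total = 17 ** 5                      # all possible 5-part selections
print(total)                         # 1419857

# All five parts share one progression (AAAAA): 17 favorable outcomes,
# so the odds are 17/17**5 = 1 in 17**4.
print(total // 17)                   # 83521

# At least two parts share a progression:
p_all_distinct = perm(17, 5) / total
print(round(1 - p_all_distinct, 3))  # 0.477
```

So a structure with a repeated progression (like ABCAD) shows up on a bit under half of all playthroughs, and AAAAA is vanishingly rare.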
FINAL NOTES / OTHER
There is one aspect of the composition that I listed under the “rules I chose” heading that I would actually consider some kind of middle ground: the available chord progressions. While I was working on coding the program, I made a chord progression generator. It automatically created bar-long chord progressions. Each time it would generate a loop, I would listen to it a few times and decide if I liked it or not. If I liked it, I would add it to the list of progressions that the computer could choose from. The ones that were musical nonsense were deleted. About 1/3 of the progressions generated by the algorithm I designed were usable. So in the end, I did choose the chord progressions that were included. But the computer created them in the first place.
On a different note, this method of presenting the piece raises some art philosophy questions. Is the YouTube video of the piece being played actually the same piece? It’s not truly representative of the generative nature of the song; the YouTube video will be exactly the same every time it is played. It’s a facsimile that demonstrates only one particular runthrough. Eventually I’d like to make the jump over to Max/MSP, a very similar visual coding language that allows you to compile your code into a standalone program that anyone can run.
Lastly, all of the sounds you hear in the piece are generated via what is known as waveform synthesis. They are created by adding various combinations of sine waves and white noise‡ to one another. This creates complex waveforms which our ears interpret as different timbres. I could write a post entirely about how the sound generators in this piece work together to create the sounds that you hear, but this post is already long enough. Maybe for my next Pure Data based composition, I will focus my explanation entirely on that aspect of the piece.
‡ Sine waves are the most basic, pure waveform (a sinusoidal shape), and white noise is an audio signal that plays all frequencies at equal power (which ends up sounding like a sharp hiss). The audio demonstrations included in this article are all made up of basic sine waves.
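For readers who want to see the additive idea in code, here is a minimal sketch (assuming NumPy; not the actual Pure Data sound generators from the piece) that builds a complex timbre by summing harmonically related sine waves plus a touch of white noise:

```python
# Additive (waveform) synthesis sketch: sum sine partials at integer
# multiples of a fundamental, then mix in a small amount of white noise.

import numpy as np

SR = 44100          # sample rate in Hz
DUR = 1.0           # duration in seconds
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

def additive_tone(f0, partial_amps):
    """Sum sine partials: partial n has frequency n*f0 and the given amplitude."""
    out = np.zeros_like(t)
    for n, amp in enumerate(partial_amps, start=1):
        out += amp * np.sin(2 * np.pi * n * f0 * t)
    return out

# A 200 Hz tone with falling partial amplitudes, plus a little noise.
tone = additive_tone(200.0, [1.0, 0.5, 0.25, 0.125])
tone += 0.02 * np.random.default_rng(0).standard_normal(len(t))
tone /= np.max(np.abs(tone))  # normalize to the [-1, 1] range
```

Changing the relative amplitudes of the partials (or modulating them over time, as the piece does) is exactly what reshapes the waveform and therefore the perceived timbre.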
Yikes! That’s a lot longer than I expected. Hopefully someone finds this interesting. At the very least, writing this cemented a bunch of this stuff in my brain.