6 tracks (35 minutes) of crusty, atmospheric post-black metal. Tracked, edited, mixed, and mastered at Big Name.
This is our final release, thus marking the end of an era of my life (2013-2018). I’m currently working on a retrospective writeup detailing the creation of this album and my experiences in the band overall. This post is a placeholder for that.
Cassettes and CDs were printed in the print shop and are available via Bandcamp.
I engineered this album for Seattle’s Morrow. Midway through the recording process, I was asked to write and record my own basslines as well. They had a bass player when we first began tracking, but he dropped out of the project right before he was supposed to record his parts. As a result, I was asked to step in. I was given full creative freedom with my bass contributions.
The band spent three years composing and demoing these songs, and then one year in the studio with me recording the real deal. Being able to add my own spin to this album was a pleasure and I look forward to seeing what these guys go on to create in the future.
For people who enjoy melodic/progressive/atmospheric black metal.
Another review, this time accompanied by the full album stream. In case you haven’t seen the last two posts, I produced this recording and played bass on it. The album officially releases on July 21st.
A second song from The Weight of These Feathers has been released with an exclusive stream, this time thanks to Can This Even Be Called Music?. As mentioned in the previous post, I produced this album and played bass on it. The full album releases on July 21st.
This was a nice surprise to find today. I produced this recording and played bass on it for my friends in Morrow. We worked on it throughout the entire last year.
As a side note, I’ve finished my work on 7 recordings since moving to Colorado in March and so far only one has seen its full release. So I have a slew of new stuff on the way. This one comes out July 21st and I’m super proud of it and excited to see it get out into the world.
On this day 10 years ago, LeBaron’s final recording session took place.
“What’s LeBaron?” You might be asking.
LeBaron was a music experiment that arose organically in the summer of 2007. The band consisted* of Kol Fenton, Stephen Navarrete, and me. It lasted only a few months. The album that we put together, Stambaugh Sessions, is the oldest release that I played on that I still thoroughly appreciate to this day.
The series of recording dates that comprise Stambaugh Sessions began spontaneously. One day, the three of us ended up hanging out in our buddy Anthony’s garage, which had been given the name Chestnut St. Arms. Bella Drive, Steve and Kol’s band with Daniel Hendrickson (who I later collaborated with in Phantom Float), practiced there. As a result of that arrangement, Steve and Kol were there frequently, and I would come by occasionally to see what my friends were up to. That’s how this place was. If you made music there, you were likely to have an audience of a few people hanging out while you worked on stuff.
Somehow Steve and I got the idea to split his drumkit up and each play half of it. In my corner of the space, I flipped the kick drum onto its side and played it with sticks, along with a snare and a ride. Over in his corner, Steve set up a snare, two toms, and a small crash. The hi-hat was positioned so that either of us could use the foot control or hit it with sticks.
We messed around jamming for a bit. Kol grabbed a guitar, turned on an amp, and… suddenly LeBaron was happening.
We played for maybe twenty minutes with a few friends watching. As we finished up, someone who was sitting in a recliner on the opposite side of the room (I can’t remember who at this point) said, “That was actually really interesting. You should come back with your recording stuff and do that again.”
So we did.
In total, there were 4 recording sessions. Each one had a very limited audience, but enough to give it a little bit of “event” energy. Now that it’s been so long, I’m not sure who witnessed these performances.
Each session was unique in some way, and the songs contain clues as to which were recorded on which occasion. The fretless bass was only used on one of the days. The pedalboard was significantly expanded for one of the sessions, bringing in some extra sounds. On a different date, we allowed Aqua Teen Hunger Force to play on the TV in the background of all the recordings. During our third session, Ryan Moyer joined us and played an empty wine bottle with a drumstick. The rototoms and samples are present in some, but not all, of the recordings. And there was a bugle at one point.
Each time we would finish a take, we would listen back to it and see what we had just done. Some songs were titled immediately as we were listening back for the first time. “This part sounds like when you’re doing badly Taking a Test and getting more and more frustrated.” A few were actually given titles before we even played them. “Alright. This next song is called Gangsta Situations no matter what it sounds like.” Most of them were left untitled at this point, though.
I think we settled on the name LeBaron during our first recording session. The conversation went something like this:
“Okay, I have an idea. What’s the most non-descript, not-noteworthy car that you can think of? Something you wouldn’t be embarrassed to drive, but also wouldn’t be excited to drive at all?”
After our final recording session, I took the Tascam 4-track cassette recorder we had been using back to my house and digitized all of our tapes. We posted the files online, but had to cycle them out over time since Myspace would only let you post 3 songs at once. Within a few months we had moved on, playing in our more traditional bands, and that was that.
But I wasn’t happy with how our work was left incomplete. The songs had never been properly compiled, mastered, and released. In 2010, after I moved to Washington, I decided it was finally time to work on it. This was when the songs were given an order, and also when all the then-untitled songs were given names. Hard for me to believe that’s now 7 years ago, and that it’s been 10 years since they were first recorded. Time flies.
* We decided shortly after our last session that LeBaron technically never ended. If the three of us ever end up playing together again in the future, it’s still LeBaron. We all live in different parts of the country now, but hey, you never know what could happen.
To someone familiar with my solo releases, it might seem strange that this one has been put out under the same moniker as my album [visitor]. The two releases are almost diametrically opposed in terms of sound, but in my mind, they clearly belong to the same project.
What determines if something is [syzygy]? The project’s driving question is: “What can I do with only this?”
In the case of [visitor], the “only this” is my detuned, 80-year-old spinet piano and my fretless electric bass. In the case of [ouroboros], it is my Behringer Xenyx 1202 mixing board.
All of the sounds heard on this release were generated by the mixing board alone. This was accomplished by routing the mixer’s outputs back into its own inputs, creating internal analog feedback loops. This is known as the “no-input mixer” technique.
It’s a deceptively simple tactic. Though it seems like it should result in basic, abrasive feedback squelches, the reality is much cooler. The various signal routings through the mixing console interact with one another to create surprisingly complex waveforms.
Each mixer generates sounds unique to its hardware. This is one of the only situations I can think of where lower-quality gear has a huge advantage over higher-quality gear: lower-quality components tend to modify the waveform passing through them more than higher-quality components do. As a result, when the waveforms sum back together inside the feedback loops, they coalesce into more chaotic wave-interference patterns.
Behringer is known for making gear focused more on economy than quality, so the Xenyx 1202 is perfect for this application. When you really crank the signals with this thing, especially the low frequencies, it overloads and creates fantastic drum-machine-like rhythms. It can also generate single notes that sound like an electronic synth, as well as more noisy blocks of sound. Hidden within it, I’ve found sounds reminiscent of motorcycles racing through tunnels, ringing analog phones, air raid sirens, scurrying mice, alarm systems, heavy machinery, ray guns, heartbeats, woodblocks, flutes, and much more. This device has a very dystopian palette.
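For the curious, here’s a rough digital analogy of what’s going on (a hypothetical Python sketch, not the real analog signal path): seed a feedback loop whose gain exceeds unity with a whisper of noise, and let a nonlinearity, standing in for a cheap overloading circuit, keep it from blowing up.

```python
import numpy as np

SR = 44100           # sample rate
DELAY = 97           # feedback path length in samples (sets the base pitch)
GAIN = 1.4           # loop gain > 1: the loop overloads instead of dying out

# Seed the loop with tiny random values standing in for circuit noise;
# a true no-input setup has no source besides the hardware's own noise.
buf = np.random.default_rng(0).normal(scale=1e-6, size=DELAY)

out = np.zeros(SR)   # render one second of self-oscillation
for n in range(SR):
    i = n % DELAY
    # The nonlinearity (tanh) is the "cheap component": it bends the
    # runaway signal into a stable, harmonically rich waveform. A linear
    # loop with GAIN > 1 would simply explode.
    buf[i] = np.tanh(GAIN * buf[i])
    out[n] = buf[i]
# out now holds a buzzy ~455 Hz tone (SR / DELAY) grown from pure noise.
```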
An improvisation performed on the Xenyx 1202. This is similar to the form in which each track on [ouroboros] began. As you can hear, no-input mixer improvs can sound kind of aimless, which is why I wanted to experiment with using them as the building blocks for sample-based composition instead. This video only demonstrates a few of the sounds that the mixer can generate.
The composition process:
Each song started the same way as the improvisation above. I plugged in the mixer, hit record, and played for roughly 20 minutes. This part of the process is very reflexive and intuitive. You can’t really predict how the mixer will react to most changes that you make to the state of the board.
After finishing the improvisation, I went in, listened for parts that I liked, and spliced up the take into dozens of shorter clips. Some of these worked very well as loops. Others worked as transition pieces between looped sections.
At this point, I developed the general structure of each piece by arranging the various clips I had cut out.
Next, I added layers:
- Some parts needed noisy layers, so I would find the right sound and apply it.
- Many parts needed chords or melodies. For these, I used the type of feedback that sounds like an analog synth playing a single pitch, recording various pitches and applying them over the clips, sometimes layering groups of two or three to create harmonies and chords.
- Delay and panning were added to certain sections where I felt they belonged.
Finally, I rearranged the parts over and over until every part of the song played back in exactly the “right way.” (This was an intuitive process; there was no metric for what was “right” or “wrong” other than feeling it out.)
This is probably the only session I have done so far where I actually utilized the sound of a brickwall limiter as an effect. I use limiters on every session that I master, as well as on select parts of certain mixes, but I usually attempt to keep them as transparent as possible. These songs have the limiter set far beyond the normal levels I tend to use. This smashes the layers together, causing the tonal layers to take on the rhythmic characteristics of the noise layers underneath them.
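To make that concrete, here’s a minimal sketch of the idea (hard clipping standing in for a real brickwall limiter, which would add lookahead and release smoothing; the function and parameter values are mine, not pulled from my actual session):

```python
import numpy as np

def smashed(x, drive_db=18.0, ceiling_db=-0.3):
    """Brickwall limiting pushed into effect territory: boost the signal
    far past transparent levels, then slam it into a hard ceiling. The
    quiet rhythmic layers end up carving holes into the sustained tones."""
    ceiling = 10 ** (ceiling_db / 20)    # linear ceiling (~0.966)
    driven = x * 10 ** (drive_db / 20)   # ~8x gain into the wall
    return np.clip(driven, -ceiling, ceiling)
```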
While working on this project, I was struck by the idea that the no-input mixer is a sonic embodiment of the ouroboros: the snake that circles around, consuming its own tail. This symbol is ancient. It is first known to have been used in the 14th century BCE, and has been used by a plethora of spiritual traditions since.
Carl Jung said, “The Ouroboros is a dramatic symbol for the integration and assimilation of the opposite, i.e. of the shadow. This ‘feed-back’ process is at the same time a symbol of immortality, since it is said of the Ouroboros that he slays himself and brings himself to life, fertilizes himself and gives birth to himself. He symbolizes the One, who proceeds from the clash of opposites, and he therefore constitutes the secret of the prima materia which… unquestionably stems from man’s unconscious.”
The ouroboros symbolizes the universe’s nature of continual creation, destruction, and recreation. Its constant reinvention. The paradox of the non-conflicting dual nature of all things. The hidden oneness of the seeming duality between physical and mental worlds. The infinite. The shadow within.
I enlisted my partner Laura to paint the art and I think the piece is exactly right for the music. This isn’t related to the album, but as a side note, she’s currently doing an awesome 100-piece Instagram series of scenes and objects found around our house. It can be found at instagram.com/ladylervold. Check it out and give her a follow if you like what you see.
Final Notes / Other
One thing that I particularly enjoyed experimenting with while creating this recording was its inherent microtonality.* None of the notes on this recording were created using fixed-pitch keys like you find on a keyboard; the no-input mixer can produce a continuous range of pitches. Freed from 12-tone equal temperament, I could step back and simply use my ears to find harmonies and chord progressions that I enjoyed, without being stuck inside the rigid 12-TET grid. The other side of this process’s inherent microtonality is found in the base layer of each song. When the mixing board develops complex waveform patterns, it doesn’t consult any tuning theory. The harmonies it generates are pure physics and mathematics, and the intervals it spits out are not bound to 12-tone equal temperament.
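For anyone curious how far “between the keys” a given pitch sits, the math is simple (a quick sketch; a 440 Hz A4 reference and the 452 Hz example are my own assumptions):

```python
import math

def cents_off_12tet(freq_hz, a4=440.0):
    """Distance in cents from the nearest 12-TET pitch (100 cents = 1 semitone)."""
    cents_from_a4 = 1200 * math.log2(freq_hz / a4)
    return cents_from_a4 - 100 * round(cents_from_a4 / 100)

print(cents_off_12tet(452.0))   # ~ +46.6: almost exactly between two keys
```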
The other aspect that I really enjoyed playing around with while working on this was the appearance of high-denominator odd-meter rhythms (for example: 27/32). These are rhythms that can only be notated by using 32nd or 64th notes. You don’t often hear them in music because they are very difficult for humans to play accurately, especially at high speed. Complex feedback, however, has no aversion to them, so a lot of them ended up in the final compositions here.
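As a quick illustration of why these meters are hostile to human players (the tempo here is a made-up example):

```python
# One bar of 27/32 at quarter note = 120 BPM:
bpm = 120
thirty_second = 60 / bpm / 8   # a 32nd note is 1/8 of a quarter: 62.5 ms
bar = 27 * thirty_second       # 1.6875 seconds per bar
# A player has to subdivide accurately at 16 notes per second just to
# feel where beat 1 lands; a feedback loop does it for free.
```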
Okay, so this piece is going to take some time to explain because it combines some uncommon experimental composition ideas. So to make things short for those who don’t want to read something long and dry, I will begin by putting this in quick-and-dirty terms:
1. This is music co-written by my computer. I coded a piece of music that writes itself.
2. This piece utilizes notes and harmonies unavailable in “normal” music.
Point 1 is particularly oversimplified. So if you’d like a more detailed, accurate representation of what this is, read on.
A few months ago, I released Key West, an algorithmic/microtonal* piece that was composed using the awesome software known as Pure Data. While I really enjoyed how that piece turned out, I wanted to go further with Pure Data and create something that had stronger rhythmic content. Within a few days of completing it, I began putting together a new composition.
I put quite a few hours into it over the next few weeks. The code ended up a lot more complicated than I expected. I liked how it all came together, but upon finishing it… I felt that I couldn’t put it out yet. This was because I actually didn’t understand some of the theory behind what I had done, and didn’t know how I was possibly going to explain it. I was told that to someone who wasn’t already deep into lunatic-fringe musician territory, the explanation I wrote up for Key West was mostly incomprehensible gobbledygook, so I wanted to be sure that I could thoroughly explain this new one before I put it out to the public. I had to do some research to figure out what exactly I had come up with.
* Don’t know what these terms mean? Don’t worry, explanations are coming below.
Like Key West, this piece also experiments with microtonality. Most music that we hear is written in 12edo (12 equal divisions of the octave), which splits the octave into 12 notes that the human ear interprets as even jumps in pitch. Key West utilizes a temperament known as 5edo, which instead splits the octave into 5 perceptually even steps, opening up notes that are not found in most music.
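In code form, an EDO is one line: every step multiplies the frequency by the same ratio (a generic sketch; the 440 Hz base pitch is arbitrary):

```python
def edo_scale(base_hz, divisions):
    """One octave of an equal division of the octave (EDO): each of the
    `divisions` steps multiplies the frequency by 2**(1/divisions)."""
    return [base_hz * 2 ** (step / divisions) for step in range(divisions + 1)]

print(edo_scale(440.0, 12))   # the familiar chromatic scale
print(edo_scale(440.0, 5))    # the 5edo scale Key West was written in
```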
This song, however, abandons that approach and uses a different kind of system. The composition grew out of my curiosity about different ways of splitting up the octave. After trying a bunch of different things, I happened to take the octave and split it into 16 even divisions… but instead of splitting it into equal cent intervals (tones that the human ear interprets as even jumps), like with an EDO, I split the octave evenly in hertz (i.e. vibrations per second). This is one of the things that I didn’t fully understand at the time, and later got really confused by, so hopefully I can explain it here.
The human ear interprets frequencies logarithmically. Doubling any frequency results in an octave harmony: we understand 200 Hz to be the same note as 100 Hz, just one octave higher. This doubling continues with each iteration of the octave; 400 Hz, 800 Hz, 1600 Hz, et cetera are all perceived as the same note. This means that in a tuning system where every step of the scale is interpreted by the ear as exactly even, the number of Hz between adjacent notes must increase as the scale climbs.
So, as I said before, I derived the scale for this piece the opposite way. Instead of dividing the octave evenly in cents, I divided it evenly in Hz. This means that as the scale ascends, the ear interprets the intervals as getting closer and closer together. I later learned that this kind of scale is known as otonal, due to its relation to the overtone series (shout-outs to Dave Ryan and Tom Winspear for helping me figure out what the heck this approach is called).
This particular scale is known as otones16-32. From my research, otonal scales seem to be pretty uncommon. I don’t know why that is. They’re a very logical way to construct mathematically harmonious scales. To my ear, they can be utilized in a way which sounds very consonant, but they also allow for some exotic xenharmonic flavor.
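Concretely, here’s the whole scale, built from the 200 Hz base frequency the piece uses (a sketch of the math rather than the Pure Data patch itself):

```python
base = 200.0                  # the piece's base frequency
step = base / 16              # 12.5 Hz: the constant Hz increment

# otones16-32: sixteen equal-Hz steps from the base to the octave, which
# is the same thing as harmonics 16 through 32 of a 12.5 Hz fundamental.
scale = [base + k * step for k in range(17)]    # 200.0, 212.5, ... 400.0
ratios = [f"{16 + k}/16" for k in range(17)]    # 16/16, 17/16, ... 32/16

# The perceived steps shrink as the scale ascends: 17/16 is ~105 cents
# (a wide semitone) while the top step, 32/31, is only ~55 cents.
```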
From the 16 available notes in this scale, I chose a set of 7 of them that I thought sounded agreeable together. The ratios† of these notes are:
I found the sound of this scale inspiring (especially when I heard how strange it sounded to build chords with it), so I set out to put together a new algorithmic piece. Like I said earlier, I wanted to be more ambitious and give the thing more structure than my last effort.
† “Ratios? What does that mean in this context?” The ratio is what you multiply your base note by to create a particular harmony. For example, say your base note is 100 Hz. To create a perfect fifth (3/2), you multiply 100 × (3/2), yielding 150 Hz. When 100 Hz and 150 Hz are played together, every time the base note vibrates twice, the harmony vibrates exactly three times in perfect alignment.
An algorithmic composition, in its simplest form, is music that is made by creating and then following a set of rules. The term, however, is more commonly used for music where the artist designs some kind of framework (most often a computer program) that allows the piece to perform itself without intervention from the artist. In this case, I did just that: I coded a framework of rules dictating which sounds are generated, and when, by a combination of soundwave generators.
This composition also falls under a closely related musical form known as generative music. A composition is generative if it is unique each time that it is listened to. It needs to be continually reinventing itself in some way. This algorithmic composition is also a generative composition because the computer makes many determinations about the musical output each time that it is played. Every time the program runs and the song is heard, the chord progressions are different, as are the rhythms and notes that the instruments play. Many aspects of the composition are the same each time, but many others are determined by the computer and are unique to every listen.
Here are the rules I chose that are the same on every playthrough:
- A base frequency of 200 Hz (roughly halfway between G and Ab in A-440)
- The overall rhythm and tempo
  - Backbeat on beat 3
- The intervals that the chord instrument plays
  - Root note, 3 notes up the scale, 5 notes up the scale
- The instruments and their beginning timbres
  - Hi-hats on both the left and right sides
  - The polyphonic synth which plays the 3-note chords
  - A monophonic, airy-sounding “ah” synth
  - A lead “sequencer” that plays in the center, with a 5th harmony that fades in and out
  - A short-tailed secondary lead layer
  - The warbly background synths on the left and right
- The 17 available chord progressions
- The overall structure of the piece
  - A “trigger” occurring at 3 specific points which activates one of the instrument mute sequences
  - A slowdown and fade-out for an ending
And here’s what the computer determines on each playthrough:
- Which chord progressions (of the 17 available) will be used for each of the 5 parts. There are no restrictions on which progressions can be used for each part, so:
  - The computer sometimes gives the piece a unique chord progression for every part, creating a song structure of ABCDE
  - It is theoretically possible for the computer to choose the same chord progression for all 5 parts, creating AAAAA, though this is highly unlikely: the first part can be anything and the remaining four must match it, so the odds are 1 in 17^4, i.e. 1 in 83,521 (see the sketch after this list)
  - Roughly half the time, it picks the same chord progression for at least two of the parts. This is the birthday problem at work: the chance of at least one repeat among 5 draws from 17 options is 1 - (16·15·14·13)/17^4 ≈ 48%
  - Most often, the structure ends up being something like ABCAD or ABACB
- Unique melodies and rhythms for each instrument during each of the five parts of the song. That is:
  - A unique drum pattern for each individual drum sound during each part
  - A unique bass line during each part
  - A unique sequencer line for each part
  - A unique short-tailed synth line for each part
  - A unique set of warbly background notes on the left and the right for each part
- The morphing of the timbre of the instruments
  - The overtones that create the timbres
  - The amount and level of frequency and amplitude modulation
  - When this morphing occurs
- Where and when the instruments pan around in the stereo field
- When the sequencer harmony fades in and out
- Which instrument mute sequence (of the 3 available) occurs at each defined trigger point
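To ground the numbers in the chord-progression bullet above, here’s the structure choice and its odds in a few lines of Python (a sketch of the logic, not the Pure Data patch):

```python
import random
from math import prod

PROGRESSIONS = 17   # the pool of curated chord progressions
PARTS = 5           # the song's five parts

# Each playthrough draws one progression per part, unrestricted:
structure = [random.randrange(PROGRESSIONS) for _ in range(PARTS)]

# Odds of AAAAA: the first draw is free, the next four must match it.
print(PROGRESSIONS ** (PARTS - 1))    # 83521

# Odds of at least one repeat anywhere (the birthday problem):
all_distinct = prod(PROGRESSIONS - k for k in range(PARTS)) / PROGRESSIONS ** PARTS
print(1 - all_distinct)               # ~0.477
```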
FINAL NOTES / OTHER
There is one aspect of the composition that I listed under the “rules I chose” heading that I would actually consider some kind of middle ground: the available chord progressions. While I was working on coding the program, I made a chord progression generator. It automatically created bar-long chord progressions. Each time it would generate a loop, I would listen to it a few times and decide if I liked it or not. If I liked it, I would add it to the list of progressions that the computer could choose from. The ones that were musical nonsense were deleted. About 1/3 of the progressions generated by the algorithm I designed were usable. So in the end, I did choose the chord progressions that were included. But the computer created them in the first place.
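In rough pseudocode, the curation loop looked something like the following (a hypothetical Python stand-in; the real generator was part of the Pure Data patch, and the 4-chord length is my assumption):

```python
import random

def random_progression(chords=4, scale_size=7):
    """Stand-in for the generator: pick a root scale degree per chord.
    (The chord instrument then stacks the root, the note 3 steps up the
    scale, and the note 5 steps up, per the fixed rules above.)"""
    return [random.randrange(scale_size) for _ in range(chords)]

approved = []
while len(approved) < 17:
    candidate = random_progression()
    # ...audition the loop a few times...
    if input(f"Keep {candidate}? [y/n] ").strip().lower() == "y":
        approved.append(candidate)    # roughly 1 in 3 made the cut
```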
On a different note, this method of presenting the piece raises some art-philosophy questions. Is the YouTube video of the piece being played actually the same piece? It’s not truly representative of the generative nature of the song; the YouTube video will be exactly the same every time it is played. It’s a facsimile that only demonstrates one particular runthrough. Eventually I’d like to make the jump over to Max/MSP, an extremely similar visual coding language that allows you to compile your code into a standalone program that anyone can run.
Lastly, all of the sounds you hear in the piece are generated via what is known as waveform synthesis. They are created by adding various combinations of sine waves and white noise‡ to one another. This creates complex waveforms which our ears interpret as different timbres. I could write a post entirely about how the sound generators in this piece work together to create the sounds that you hear, but this post is already long enough. Maybe for my next Pure Data based composition, I will focus my explanation entirely on that aspect of the piece.
‡ Sine waves are the most basic, pure waveform (a sinusoidal shape), and white noise is an audio signal that plays all frequencies at equal power (which ends up sounding like a sharp hiss). The audio demonstrations included in this article are all made up of basic sine waves.
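For a taste of what that means in practice, here’s the core idea in a few lines of numpy (a generic additive-synthesis sketch; the actual voices live in the Pure Data patch, and these particular partial mixes are made up):

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR   # one second of time

def voice(freq_hz, partial_amps, noise=0.0):
    """Sum sine-wave partials (integer multiples of freq_hz) plus an
    optional bed of white noise into a single complex waveform."""
    tone = sum(amp * np.sin(2 * np.pi * freq_hz * (k + 1) * t)
               for k, amp in enumerate(partial_amps))
    return tone + noise * np.random.uniform(-1.0, 1.0, len(t))

bright = voice(200.0, [1.0, 0.5, 0.33, 0.25])   # sawtooth-ish partial rolloff
breathy = voice(200.0, [1.0, 0.2], noise=0.05)  # mostly pure, with air on top
```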
Yikes! That’s a lot longer than I expected. Hopefully someone finds this interesting. At the very least, writing this cemented a bunch of this stuff in my brain.