Here is an unusual project for which I am playing drums. It falls under the classification of scriptophonic microtonal black metal.
For anyone unfamiliar, in scriptophonic music, the artist designs an algorithm which converts text into musical notes.
Again for anyone unfamiliar, microtonal music utilizes pitches that fall outside of the standard twelve-tone equal division of the octave (12edo). In this case, we are dividing the octave into 26 equal parts (26edo), one for each letter of the alphabet.
We are converting the entirety of Tolkien’s Valaquenta into music via an algorithm that Dave Tremblay programmed using Python. The algorithm generates the guitar parts from the text, which are played by Dave on a real 26edo guitar and bass. I am then given free rein to write and record drum parts that support those guitar parts. Finally, Brian Leong is providing the vocals, placing the words that were used to generate the guitar parts on top of each section.
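Dave's actual Python code isn't reproduced here, but the core idea of a text-to-26edo mapping can be sketched in a few lines. The base pitch and the letter-to-step mapping below are my own assumptions for illustration, not the algorithm's real rules:

```python
# A minimal sketch of the general idea, NOT Dave's actual algorithm:
# map each letter of the alphabet to one step of 26edo, then convert
# steps to frequencies. The base pitch here is an arbitrary assumption.
BASE_FREQ = 220.0  # A3

def letter_to_step(ch):
    """Map 'a'..'z' (case-insensitive) to 26edo steps 0..25."""
    return ord(ch.lower()) - ord("a")

def step_to_freq(step, base=BASE_FREQ):
    """One octave spans 26 equal steps, so each step multiplies by 2**(1/26)."""
    return base * 2 ** (step / 26)

def text_to_freqs(text):
    return [step_to_freq(letter_to_step(c)) for c in text if c.isalpha()]

print([round(f, 1) for f in text_to_freqs("Valaquenta")])
```

Any such mapping guarantees that every letter lands on a distinct pitch class, which is presumably the appeal of a 26-note octave for this project.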
There are 29 pieces total and we are recording and releasing them one at a time. The scope of this project is very large and so it will probably take a long time to finish!
This album is a look back at the hellish year behind us and is a prayer for a better world ahead.
Like my album [xendeavor one] from February 2020, this album explores ways of dividing the octave other than the standard 12 equally spaced notes, the system that the vast majority of music in the world utilizes. These alternate systems can result in strange and otherworldly tonalities. I had been wanting to make a microtonal black metal album for a few years, and I began this one in early March. As we all know, that is right when the year’s events really kicked off, and so the creation of this album is intimately tied to and influenced by those world events.
Yaeth is the pseudonym I use in my band Bull of Apis Bull of Bronze. This album turned out stylistically and thematically similar in many ways to that project. Once I finished work composing the album I realized that, due to those ties, Yaeth was the only fitting name for this project.
Once again, I present a [syzygy] album. Once again, it has an entirely different sound than anything that has come from the project before.
What ties [syzygy]’s albums together is an approach of minimalistic experimentation and a musical atmosphere of unease. In the case of [visitor], the experiment was to see what I could do with just my untuned piano and a fretless bass. In the case of [ouroboros], it was to see what could be done with only a mixing board plugged into itself. With [xendeavor one], the experiment was to delve headfirst into the world of microtonal* electronic music.
The vast majority of Western music uses the 12 equal divisions of the octave tuning system (12edo). These songs use 7 other possible EDOs (10, 11, 15, 19, 31, 37, and 85), an overtone scale, and an undertone scale.
The term “microtonal” tends to conjure expectations of extremely dissonant, noisy, and/or unstructured music for some people because a lot of music labeled “microtonal” fits those descriptions. But it doesn’t have to! There is plenty of melody and harmony available to utilize within these systems. And you get the bonus of unique intervals and chords that are impossible in 12edo.
Since I discovered microtonality a few years ago I have really wanted to make this album. I tried to start a couple of times and was never happy with my results but this time it all clicked. I am very excited to share this with the world.
* If after reading this you have no idea what I’m talking about here, see the “Microtonality” section of my Loiterer – Adrift writeup for some further explanation. Or check out the Wikipedia article.
Tonally Inverted Version of The Legend of Zelda: Ocarina of Time’s Title Screen
What is tonal inversion? Basically, every melody, chord progression, or complete song has a “mirror reality” version hidden beneath itself. To discover this alternate version, you take the notes of the composition and flip them upside down, so that every interval is inverted. For example, if the melody goes up two whole steps and then down a half step, you make it go down two whole steps and then up a half step. When you apply this process to a major chord, it becomes a minor chord. Ionian (major) scales become phrygian scales. Aeolian (minor) scales become mixolydian scales. And so on and so forth.
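The flipping process described above is easy to sketch in code. Here is a minimal Python illustration using MIDI note numbers; the choice of pivot note is an arranger's decision, and mirroring around the first note is just one option:

```python
# Minimal sketch of tonal inversion on MIDI note numbers: reflect every
# note around a pivot so that each interval flips direction. The pivot
# is the arranger's choice; mirroring around the first note is one option.
def invert(notes, pivot=None):
    if pivot is None:
        pivot = notes[0]
    return [2 * pivot - n for n in notes]

# A C major triad (C4, E4, G4) mirrored around C4 becomes C4, Ab3, F3,
# which spells an F minor triad: major has become minor, as described.
print(invert([60, 64, 67]))  # [60, 56, 53]
```

Applying this to every track of a ripped MIDI file, while leaving timing and velocities untouched, is exactly the kind of transformation described in the next section.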
The part I find most interesting is that if the original composition is musically coherent, the negative version will always be musically coherent as well. It may not be as moving as the original, or it might be a totally bizarre piece of music overall, but it will always at least work musically.
So what does the Nintendo 64 have to do with this? It turns out N64 games are great for applying this concept because you can directly rip the MIDI files and soundfonts from the games. What this means is that you can create tonally inverted versions that are extremely accurate to the original compositions. The timbres, timing, and note velocities are identical; the only difference is that the notes are upside down. The other benefit to applying this process to songs from N64 games is that, at least for millennials, these songs are fairly universally known and beloved. These types of transformations are much more intriguing than when you hear the concept applied to an unfamiliar piece of music.
Just over five years ago, while walking on the beach on a gloomy November day in Aberdeen, I had a thought: “What would Christmas carols sound like if they were turned into black metal?”
It was just a passing idea. I think the possibilities of playing around in that style were on my mind because I had just finished recording A God or an Other’s debut album, Towers of Silence, which heavily drew from the black metal lexicon. For whatever reason, the idea for this album really stuck with me. I knew I had to make it.
Obviously it had to be released during the Christmas season, but it was definitely too late to do so that year. There was no way I could arrange, record, and release it in just a few weeks. I could have recorded it and waited until the next year to release it, but I’ve always found it unpleasant to sit on finished releases for more than a short period of time. So I decided to wait ten or eleven months to begin.
Well, the next year came and the same thing happened; by the time I finally started considering working on it, it was too late to begin. This process repeated for the next four years. I was annoyed with myself each year, but now I recognize it’s for the best. Shortly after I came up with the idea, I ended up joining A God or an Other as the band’s drummer, which dramatically increased my blast beat chops over time. Half a year after joining, I started contributing vocals as well. My bandmates were really good at tremolo picking on guitar, and so being in the band also encouraged me to develop that skill. All those hours of practice within the style really helped bring about a superior end result compared to what this album would have been if I had recorded when the idea first struck me.
One other benefit to waiting 5 years between the inception of the idea and its execution is that I spent a lot of time between then and now expanding my understanding of music theory. This made the transmogrification process much more successful than I think it would have been in 2013.
Anyway, inspiration finally struck in September. I knew that this was the year. But there were two things making it a little more challenging than it normally would have been. First, I was insanely swamped with overtime at work. Second, my wife and I had (and still have) a newborn at home, and taking care of him requires quite a bit of work. My time was quite limited. But when I feel compelled by those mysterious creative forces to make something, I have to do it. So I found time. Most of this album was arranged and recorded between the hours of 5 A.M. and 6 A.M. on weekdays before I headed off to work. The process was a little rough but totally worth it.
I picked public domain songs purely for legal reasons, as I would like to be able to use these in any way that I may see fit in the future.
In order for a song to be picked, it had to have been originally written in a major key, as it would not be as dramatic or as fun a transformation to do a song originally composed in a minor key. I really wanted to play around with some of the more exotic minor scales instead of exclusively using aeolian mode. It was fun to utilize sounds like Neapolitan minor and harmonic minor and to build some four-part harmonies with them.
The last qualification for picking songs is difficult to explain. They had to make me feel some sort of… resonance… within myself. This has to do with the history of my early life. I’m not a Christian, but I was raised Catholic, and that upbringing had quite an impact on me. I heard these songs so many times at mass over the years. It is enjoyable for me to hear these representations of that part of my life turned around into something that is now meaningful to me in a new and very different way.
Until 2013, I was certain I would never make a Christmas album. Now here we are. Life is strange.
I hope you enjoy listening to it.
Cover art by my wife, Laura Lervold.
Cassettes are available through the Bandcamp and Big Name Records.
On this day 10 years ago, LeBaron’s final recording session took place.
“What’s LeBaron?” you might be asking.
LeBaron was a music experiment that arose organically in the summer of 2007. The band consisted* of Kol Fenton, Stephen Navarrete, and me. It lasted only a few months. The album that we put together, Stambaugh Sessions, is the oldest release that I played on that I still thoroughly appreciate to this day.
The series of recording dates that comprise Stambaugh Sessions began spontaneously. One day, the three of us ended up hanging out in our buddy Anthony’s garage, which had been given the name Chestnut St. Arms. Bella Drive, Steve and Kol’s band with Daniel Hendrickson (who I later collaborated with in Phantom Float), practiced there. As a result of that arrangement, Steve and Kol were there frequently, and I would come by occasionally to see what my friends were up to. That’s how this place was. If you made music there, you were likely to have an audience of a few people hanging out while you worked on stuff.
Somehow Steve and I got the idea to split his drumkit up and each play half of it. In my corner of the space, I flipped the kick drum onto its side and played it with sticks, along with a snare and a ride. Over in his corner, Steve set up a snare, two toms, and a small crash. The hi-hat was positioned so that either of us could use the foot control or hit it with sticks.
We messed around jamming for a bit. Kol grabbed a guitar, turned on an amp, and… suddenly LeBaron was happening.
We played for maybe twenty minutes with a few friends watching. As we finished up, someone who was sitting in a recliner on the opposite side of the room (I can’t remember who at this point) said, “That was actually really interesting. You should come back with your recording stuff and do that again.”
So we did.
In total, there were 4 recording sessions. Each one had a very limited audience, but enough to give it a little bit of “event” energy. Now that it’s been so long, I’m not sure who witnessed these performances.
Each session was unique in some way. There are clues in the songs as to which songs were recorded on which occasion. The fretless bass was only used one of the days. The pedalboard was significantly expanded on one of the sessions, bringing in some extra sounds. On a different date, we allowed Aqua Teen Hunger Force to play on the TV in the background of all the recordings. During our third session, Ryan Moyer joined us and played an empty wine bottle with a drum stick. The rototoms and samples are present in some, but not all, of the recordings. And there was a bugle at one point.
Each time we finished a take, we would listen back to it and see what we had just done. Some songs were titled immediately as we were listening back for the first time: “This part sounds like when you’re taking a test, doing badly, and getting more and more frustrated.” A few were actually given titles before we even played them: “Alright. This next song is called Gangsta Situations no matter what it sounds like.” Most of them were left untitled at this point, though.
I think we settled on the name LeBaron during our first recording session. The conversation went something like this:
“Okay, I have an idea. What’s the most non-descript, not-noteworthy car that you can think of? Something you wouldn’t be embarrassed to drive, but also wouldn’t be excited to drive at all?”
After our final recording session, I took the Tascam 4-track cassette recorder we had been using back to my house and digitized all of our tapes. We posted the files online but had to cycle them out over time, since Myspace would only let you post 3 songs at once. Within a few months we had moved on, playing in our more traditional bands, and that was that.
But I wasn’t happy with how our work was left incomplete. The songs had never been properly compiled, mastered, and released. In 2010, after I moved to Washington, I decided it was finally time to work on it. This was when the songs were given an order, and also when all the then-untitled songs were given names. Hard for me to believe that’s now 7 years ago, and that it’s been 10 years since they were first recorded. Time flies.
* We decided shortly after our last session that LeBaron technically never ended. If the three of us ever end up playing together again in the future, it’s still LeBaron. We all live in different parts of the country now, but hey, you never know what could happen.
Okay, so this piece is going to take some time to explain because it combines some uncommon experimental composition ideas. So to make things short for those who don’t want to read something long and dry, I will begin by putting this in quick-and-dirty terms:
1. This is music co-written by my computer. I coded a piece of music that writes itself.
2. This piece utilizes notes and harmonies unavailable in “normal” music.
Point 1 is particularly oversimplified. So if you’d like a more detailed, accurate representation of what this is, read on.
A few months ago, I released Key West, an algorithmic/microtonal* piece that was composed using the awesome software known as Pure Data. While I really enjoyed how that piece turned out, I wanted to go further with Pure Data and create something that had stronger rhythmic content. Within a few days of completing it, I began putting together a new composition.
I put quite a few hours into it over the next few weeks. The code ended up a lot more complicated than I expected. I liked how it all came together, but upon finishing it… I felt that I couldn’t put it out yet. This was because I actually didn’t understand some of the theory behind what I had done, and didn’t know how I was possibly going to explain it. I was told that to someone who wasn’t already deep into lunatic-fringe musician territory, the explanation I wrote up for Key West was mostly incomprehensible gobbledygook, so I wanted to be sure that I could thoroughly explain this new one before I put it out to the public. I had to do some research to figure out what exactly I had come up with.
* Don’t know what these terms mean? Don’t worry, explanations are coming below.
Like Key West, this piece also experiments with microtonality. Most music that we hear is written in 12edo (equal division of the octave), which splits the octave into 12 notes that the human ear interprets as even jumps in pitch. Key West utilizes a temperament known as 5edo, where the octave is split into 5 notes that the human ear interprets as even jumps in pitch, allowing the utilization of notes that are not found in most music.
This song, however, abandons that approach and uses a different kind of system. The composition came about through my curiosity about different ways of splitting up the octave. After trying a bunch of different things, I happened to take the octave and split it into 16 even divisions… but instead of dividing it into equal cent intervals (steps that the human ear interprets as even jumps), as with an EDO, I split the octave evenly in hertz (i.e., vibrations per second). This is one of the things I didn’t fully understand at the time and later got really confused by, so hopefully I can explain.
The human ear interprets frequencies logarithmically. Doubling any frequency results in an octave harmony. We understand 200 Hz to be the same note as 100 Hz, just one octave higher. This doubling continues with each iteration of the octave; 400 Hz, 800 Hz, 1600 Hz, et cetera are all perceived as the same note. This means that in a tuning system where every step of the scale is interpreted by the ear as exactly even, the number of Hz between each note increases as you ascend.
So, as I said before, I derived the scale for this piece the opposite way. Instead of dividing the octave evenly in cents, I divided it evenly in Hz. This means that as the scale ascends, the ear interprets the intervals as getting closer together. I later learned that this is known as an otonal scale due to its relation to the overtone series (shout-outs to Dave Ryan and Tom Winspear for helping me figure out what the heck this approach is called).
This particular scale is known as otones16-32. From my research, otonal scales seem to be pretty uncommon. I don’t know why that is. They’re a very logical way to construct mathematically harmonious scales. To my ear, they can be utilized in a way which sounds very consonant, but they also allow for some exotic xenharmonic flavor.
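For the curious, the construction described above can be sketched in a few lines of Python: split the octave above the 200 Hz base into 16 equal Hz steps, then measure each step in cents to see the intervals shrink as the scale ascends.

```python
import math

# Sketch of the construction above: split the octave over a 200 Hz base
# into 16 equal steps *in Hz*. The 16 scale notes (plus the octave at
# 400 Hz) land on harmonics 16 through 32 of an implied 12.5 Hz
# fundamental, hence the name otones16-32.
BASE = 200.0

freqs = [BASE + k * (BASE / 16) for k in range(17)]  # 200.0, 212.5, ..., 400.0
cents = [1200 * math.log2(f / BASE) for f in freqs]

# The Hz spacing is constant, but the perceived (cent) spacing shrinks
# as the scale ascends: the first step (ratio 17/16) is about 105 cents,
# the last (32/31) about 55 cents.
steps = [cents[i + 1] - cents[i] for i in range(16)]
print([round(s, 1) for s in steps])
```

This shrinking-interval behavior is the audible signature of an otonal scale, as opposed to an EDO where every step prints as the same cent value.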
From the 16 available notes in this scale, I chose a set of 7 of them that I thought sounded agreeable together. The ratios† of these notes are:
I found the sound of this scale inspiring (especially when I heard how strange it sounded to build chords with it), so I set out to put together a new algorithmic piece. Like I said earlier, I wanted to be more ambitious and give the thing more structure than my last effort.
† “Ratios? What does that mean in this context?” The ratio is what you multiply your base note by to create a particular harmony. For example, say your base note is 100hz. To create a perfect fifth (3/2), you multiply 100*(3/2), yielding 150hz. When 100hz and 150hz are played together, every time that the base note vibrates twice, the harmony vibrates exactly three times in perfect alignment.
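The footnote's arithmetic, written as a tiny sketch:

```python
# The footnote's arithmetic as code: multiply the base frequency by the
# ratio to get the harmony's frequency.
def harmony(base_hz, num, den):
    return base_hz * num / den

fifth = harmony(100, 3, 2)
print(fifth)  # 150.0: while the base completes 2 cycles, the fifth completes 3
```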
An algorithmic composition, in its simplest form, is music that is made by creating and then following a set of rules. The term, however, is more commonly used for music where the artist designs some kind of framework (most often a computer program) that allows the piece to perform itself without intervention from the artist. In this case, I did just that: coded a framework of rules dictating which sounds are generated when by a combination of soundwave generators.
This composition also falls under a closely related musical form known as generative music. A composition is generative if it is unique each time that it is listened to. It needs to be continually reinventing itself in some way. This algorithmic composition is also a generative composition because the computer makes many determinations about the musical output each time that it is played. Every time the program runs and the song is heard, the chord progressions are different, as are the rhythms and notes that the instruments play. Many aspects of the composition are the same each time, but many others are determined by the computer and are unique to every listen.
Here are the rules I chose that are the same on every playthrough:
A base frequency of 200 Hz (roughly halfway between G and Ab in A-440 tuning)
The overall rhythm and tempo
Backbeat on beat 3
The intervals that the chord instrument plays
Root note, 3 notes up the scale, 5 notes up the scale
The instruments and their beginning timbres
Hi-hats on both the left and right sides
The polyphonic synth which plays the 3-note chords
A monophonic, airy sounding “ah” synth
A lead “sequencer” that plays in the center, with a 5th harmony that fades in and out
A short-tailed secondary lead layer
The warbly background synths on the left and right
The 17 available chord progressions
The overall structure of the piece
A “trigger” occurring at 3 specific points which activates one of the instrument mute sequences
A slowdown and fade-out for an ending
And here’s what the computer determines on each playthrough:
Which chord progressions (of the 17 available) will be used for each of the 5 parts. There are no restrictions on which progressions can be used for each part, so:
The computer sometimes makes the piece have unique chord progressions for each part, creating a song structure of ABCDE
It is theoretically possible for the computer to choose the same chord progression for all 5 parts, creating AAAAA, though this is highly unlikely (odds of 1 in 83,521, since the first part can be anything and each of the remaining four parts must independently match it)
I am not well versed in the theory behind statistics, but it turns out this is the birthday problem in disguise: with 5 parts drawing from 17 progressions, the odds of at least one repeat are nearly 50%, so it frequently picks the same chord progression for at least two of the parts
Most often, it seems the structure will end up being something like ABCAD, or ABACB
Unique melodies and rhythms for each instrument during each of the five parts of the song. That is:
A unique drum pattern for each individual drum sound during each part
A unique bass line during each part
A unique sequencer line for each part
A unique short-tailed synth line for each part
A unique set of warbly background notes on the left and the right for each part
The morphing of the timbre of the instruments
The overtones that create the timbres
The amount and level of frequency and amplitude modulation
When this morphing occurs
Where and when the instruments pan around in the stereo field
When the sequencer harmony fades in and out
Which instrument mute sequence (of the 3 available) occurs at each defined trigger point
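The structure choice in the list above can be modeled in a few lines of Python (this is an illustrative model, not the actual Pure Data patch): each of the 5 parts independently picks one of the 17 progressions, which also lets us check the odds discussed above.

```python
import random

# Illustrative model, not the actual Pure Data patch: each of the 5
# parts independently picks one of the 17 available chord progressions.
def choose_structure(n_parts=5, n_progs=17, rng=None):
    rng = rng or random.Random()
    return [rng.randrange(n_progs) for _ in range(n_parts)]

# Probability of AAAAA (all 5 parts share one progression): the first
# part can be anything; each remaining part must match it.
p_all_same = (1 / 17) ** 4              # 1 in 83,521

# Probability of at least one repeated progression (why structures like
# ABCAD are common): the birthday problem with 5 draws from 17 options.
p_all_distinct = 1.0
for i in range(5):
    p_all_distinct *= (17 - i) / 17
p_repeat = 1 - p_all_distinct           # roughly 0.48, i.e. nearly half the time

print(f"all same: 1 in {round(1 / p_all_same):,}")
print(f"at least one repeat: {p_repeat:.3f}")
```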
FINAL NOTES / OTHER
There is one aspect of the composition that I listed under the “rules I chose” heading that I would actually consider some kind of middle ground: the available chord progressions. While I was working on coding the program, I made a chord progression generator. It automatically created bar-long chord progressions. Each time it would generate a loop, I would listen to it a few times and decide if I liked it or not. If I liked it, I would add it to the list of progressions that the computer could choose from. The ones that were musical nonsense were deleted. About 1/3 of the progressions generated by the algorithm I designed were usable. So in the end, I did choose the chord progressions that were included. But the computer created them in the first place.
On a different note, this method of presentation of the piece raises some art philosophy questions. Is the YouTube video of the piece being played actually the same piece? It’s not truly representative of the generative nature of the song; the YouTube video will be exactly the same every time it is played. It’s a facsimile that only demonstrates one particular runthrough. Eventually I’d like to make the jump over to Max/MSP, an extremely similar visual coding language that allows you to compile your code into a standalone program that anyone can run.
Lastly, all of the sounds you hear in the piece are generated via what is known as waveform synthesis. They are created by adding various combinations of sine waves and white noise‡ to one another. This creates complex waveforms which our ears interpret as different timbres. I could write a post entirely about how the sound generators in this piece work together to create the sounds that you hear, but this post is already long enough. Maybe for my next Pure Data based composition, I will focus my explanation entirely on that aspect of the piece.
‡ Sine waves are the most basic, pure waveform (a sinusoidal shape), and white noise is an audio signal that plays all frequencies at equal power (which ends up sounding like a sharp hiss). The audio demonstrations included in this article are all made up of basic sine waves.
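As a rough illustration of the waveform-synthesis idea, here is how sine partials and white noise can be summed into a single waveform in Python. The partial amplitudes and noise level below are invented for illustration, not taken from the actual patch:

```python
import math
import random

# Rough illustration of the additive idea above: sum sine partials and a
# little white noise into one waveform. The partial amplitudes and noise
# level are invented for illustration, not taken from the actual patch.
SR = 44100  # sample rate in Hz

def render(freq, partials, noise, seconds=0.5):
    """partials[k] is the amplitude of harmonic k+1 of `freq`."""
    rng = random.Random(1)
    out = []
    for n in range(int(SR * seconds)):
        t = n / SR
        sample = sum(a * math.sin(2 * math.pi * freq * (k + 1) * t)
                     for k, a in enumerate(partials))
        sample += noise * (rng.random() * 2 - 1)  # white noise component
        out.append(sample)
    return out

# A 200 Hz tone with a dull, mostly odd-harmonic timbre plus a faint hiss:
samples = render(200.0, [1.0, 0.0, 0.3, 0.0, 0.1], noise=0.02)
```

Changing the relative amplitudes of the partials over time is what produces the timbre morphing described earlier in the piece's rules.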
Yikes! That’s a lot longer than I expected. Hopefully someone finds this interesting. At the very least, writing this cemented a bunch of this stuff in my brain.
The supposed band consists of members Hans Lederman (drums) and Ashleigh Milton (production) of New York City. Collectively they are known as Antiverb. Their release, General Purpose, is a 7″ disc of 180-grit sandpaper intended to be played on a turntable.
I was greatly amused by both the article and the EP’s concept, and upon completing my reading wondered if anyone had gone to the trouble of fleshing out the non-existent release. I entered “antiverb.bandcamp.com” in my address bar and found that no such page existed. “Antiverb band” on Google yielded no results either. I was honestly shocked that no one was using that name. At that moment I knew: I had to be the one to do it. I had to make this Bandcamp for the sake of the few other weirdos in the world who would read that article and think, “I wonder if anyone has made this into a Bandcamp.”
Anyway, to begin the process, I recorded the sound of a piece of sandpaper on a turntable. As someone who does a lot of work on my house, I happened to have it lying around, so I didn’t even need to hit Home Depot. Score.
As you can see in the video, the needle doesn’t move laterally while the sandpaper spins; it just sits in one position. This means that the sandpaper “record” would play continually until stopped by the listener. A computer program could be coded to replicate this endlessness, but Bandcamp only operates with standard audio files. An audio file can’t be infinite, so the digital recording of this EP needed a chosen length. The article states that the release is a 7″, so I decided to make the digital version 6 minutes long, as this approaches the maximum length of a single side of a traditional 7″ record.
After recording and mastering the audio, I registered the Bandcamp page and uploaded the track. At this point I had to decide what to add for other content. I wanted to fill in as many of the fields as possible with information derived from the article. For the “about” section I took the brilliant Harold Zhou “New York Times” review that mentions using the EP to prep a shelf for staining. The artist bio was similarly taken from Lederman’s quote about the intent of the project.
For the credits, the band members’ names and roles were simple enough to fill in. The digital version really was recorded at Big Name, so I put that in there too. At this point I had exhausted the supply of pre-existing ideas. It was time to come up with some original content to fill out the rest of the page.
I used my scanner to scan more of the sandpaper that I had lying around. I was really worried about scratching my scanning bed, but as far as I can tell I got away with it. This scan was used for the album cover as well as the Bandcamp’s header and background. Next, for the artist photo, I used a picture of myself and Laura. I modified the image so our faces don’t show (so it’s not clearly just us) and then I added an overlay using the sandpaper I had just scanned because I thought that was funny.
The article mentioned a label releasing the EP, but did not give that label a name. So I had to come up with one. I chose Acquired Distaste Records; it just strikes me as a perfect noise-label name.
All that was left at this point was a few pictures of the physical release. I took a 7″ record sleeve and cut the piece of sandpaper down to the size of a booklet. Then, I used another piece to create a proper disc. The pictures were taken using the setup that I use to photograph everything for Big Name Records.
I created a fake Facebook for the band’s drummer, posted the link on the article’s page on their website, and watched a scant few plays roll in.
I actually think that if this were a real release, it would be genuinely interesting from a conceptual standpoint. The audio would be fully generative (i.e. each playthrough would be a completely unique waveform), but then, to take that concept to the next level and turn it on its head, the differences between listenings would be indiscernible to the listener. While most generative music is done via computer programming, this release would be different, as it would be the result of purely physical processes. The infinite playback of the disc is also noteworthy; there are plenty of releases with run-out grooves where something repeats over and over again at the end of the disc, but I personally have never seen any kind of disc where the entirety of the audio is infinite. Also, as noted in the article, the disc would likely permanently damage the needle of the listener’s record player. The listener would have to make the conscious choice to put their equipment in harm’s way in order to experience the artist’s piece.
The EP, if real, would be construed by many as extremely pretentious, which would make those people angry. Of course, this is the crux of the satire piece. The pretentiousness, however, could (in my humble opinion) be cool, provided that the artist had a solid sense of how ridiculous their piece really was and a good sense of humor about it. The release itself would be comedy gold in its own right, and would be a solid jab at noise music’s more absurd artists (while also joining their ranks!). As a final note here, I love that it really does sound kind of cool when you listen to it. I absolutely did not expect to feel that way when I first put the sandpaper on the turntable. The playback sounds exactly like how the surface of sandpaper feels.
It has been brought to my attention that it could be helpful if I lay out a short list of all the reasons why I took the time to do this.
I thought that the idea presented in the article was genuinely artistically awesome.
I thought that the idea presented in the article was also genuinely hilarious.
I really wanted to know what a sandpaper disc would sound like played on a turntable.
I read the article and checked if anyone else had done this already. No one had.
I thought there might be a few other weirdos who would try the same search that I tried after I finished reading the article, and that they would be amused when they found the Bandcamp.
I had the means to do all of this quickly and for almost no cost due to having a recording studio, record label, and physical media production facility.
I thought it would be fun and a good exercise in creativity.
I thought that collaborating with someone (the author) in a way that they never would have expected and didn’t even know about was a curious concept.
I thought going to all this effort for something so odd and pointless was intriguing and ridiculous.
I found the non-existent release’s meaning very ambiguous and wanted to explore its concepts more concretely. I see elements of various forms of art in it and I am not sure if it’s more “music” or “conceptual.” It blurs the lines between different mediums of art. It also straddles the world of comedy and serious, thought-provoking concepts. I see no reason why comedy and art cannot coexist, and I like a lot of art that is humorous.
Taking it to the next step in the process, I wanted to explore just what the heck it would mean if I were to actually do what I did (flesh out the Bandcamp). It felt like doing art, but I wouldn’t even know what form of art to call it. It’s music, conceptual, somewhat performative, and it could be considered an internet installation art piece. Again, I think it’s all funny, but it’s also meaningful in some way. My end of the process felt just as ambiguous as the original concept.
Everything about my own hand in this process (the development of the Bandcamp and also this explanation piece) will certainly be considered very pretentious by some who encounter it. For whatever reason, that just made me want to do it more.
Most importantly, I just wanted to pay homage to The Hard Times, because their work is superb.
Honestly, I didn’t consciously think through all of these things before I began, but they were all reasons that I took the time to do this. The simplest explanation is just that it was a fully natural thing for me to do.
To begin, here’s a short but interesting article. The information within comes from secret interviews conducted with Bill Clinton at the end of his administration. The article contains two amusing anecdotes regarding former Russian president Boris Yeltsin, who was found in the middle of the night on the street in Washington DC, in his underwear, drunkenly hailing a cab. He just wanted a late-night pizza.
As an avid King of the Hill fan, I find these slides fascinating. They are a small selection of the rules and guidelines with which animators worked to keep the show consistent. These are little details the average person will never notice, but they add up to a consistent final product that is very cohesive and, in this case, realistic.
These types of decisions add up. A focus on realism suits the point of the show as a whole. Mike Judge wanted everything, from textures to perspective to windows, to be sensible. To make the feel match a live-action show shot on location, the “camera” perspectives are set as though it’s a real camera in a room, not like most cartoons or even like a set on a soundstage. They took this level of detail down to getting things like the time correct on a character’s wristwatch in every scene.
It’s a good way to work through art in general, to make rules to work by (and occasionally break them for effect). It begets a lot of creativity and a consistent end product.
These are self-adhering extra frets that you can add onto your guitar in order to play music that utilizes microtones. They’re dirt cheap and are not permanent, so they allow a lot of flexibility in experimentation. They can be used to add a very Eastern flavor to a person’s guitar playing, as these intervals are traditionally found in forms of Indian, Turkish, and Persian music, among others.
And to further demonstrate my interest in the possibilities of microtonal composition (if 6 entries on my blog so far regarding microtonality didn’t give it away), we have a forum post where someone came up with a method of playing 24edo on a standard guitar without requiring any modifications.
I’ve been very interested in 24edo since I first heard Jute Gyte’s Perdurance. Jute Gyte utilizes guitars with double the normal amount of frets, with an extra fret added exactly halfway between each existing fret. As far as I know, there’s only one luthier out there who makes these necks with any regularity: Ron Sword of Metatonal Music. The problem is that the price is too high for someone who is merely looking to briefly dabble with the idea. This is a good solution for those people.
The system proposed in the forum post is simple and elegant. Instead of tuning the guitar’s strings mostly in fourths, as in standard tuning, the strings are tuned half-fifths apart. This interval is also known as a “neutral third”: it sits exactly halfway, 50 cents, between the minor and major thirds that most musicians are familiar with, and it is not available in conventional 12edo tuning. It allows for some interesting tonal possibilities.
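To see why this trick works, here’s a quick sketch of the arithmetic (my own illustration, not code from the forum post). A 24edo step is 50 cents, so each standard fret, at 100 cents, spans two 24edo steps; tuning adjacent strings a neutral third (7 steps of 24edo) apart means neighboring strings supply each other’s “missing” quarter-tone steps:

```python
def edo_cents(steps, divisions=24):
    """Size in cents of `steps` steps of an equal division of the octave."""
    return steps * 1200 / divisions

# Each 24edo step is 50 cents; a standard fret (100 cents) covers two steps.
assert edo_cents(1) == 50.0

# A neutral third is half a perfect fifth: 7 steps of 24edo = 350 cents,
# exactly between the minor third (300 cents) and major third (400 cents).
neutral_third = edo_cents(7)  # 350.0

# Six open strings tuned neutral thirds apart, measured in 24edo steps
# from the lowest string. Strings alternate odd/even offsets, so together
# with the even-step frets they reach every quarter-tone.
open_strings = [i * 7 for i in range(6)]  # [0, 7, 14, 21, 28, 35]
frets = range(13)                         # each fret adds 2 steps of 24edo
reachable = sorted({(s + 2 * f) % 24 for s in open_strings for f in frets})
print(reachable)  # all 24 pitch classes: [0, 1, 2, ..., 23]
```

The key point is the parity argument in the last few lines: frets only ever add even numbers of 24edo steps, so a guitar tuned entirely in even intervals could never reach the odd (quarter-tone) steps, while the 7-step neutral third alternates parity from string to string.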
Things Full of Beans that Shouldn’t Be Full of Beans
An art project that answers a question everyone has asked at one point or another: “What if life was packed to the brim with beans?”
Ryan Lockwood – Streets Agent 1:12
[Video contains strong language. If you are at work or around children, you probably won’t want to play this video.]
This is a classic YouTube video that I just rediscovered. For those unfamiliar with speedrunning, the idea is to beat a game or certain level of a game as quickly as possible. There are huge online communities where people obsess over every minute detail of specific (often decades-old) games, trying to figure out strategies (“strats”) to improve current world-record times. It can take many years of dedicated study and practice to improve a world record time by one single second. From the YouTube description:
Ryan Lockwood’s narrated replay of his record-tying 1:12 Streets Agent run with subtitles. He’s narrating his run over stream to a 50+ audience, shortly after achieving it. It was the first time he or anyone else watched the run.
Hundreds know Ryan Lockwood from his Twitch stream; dozens of others have met him at our annual meetup. There are a lot of, um… idiosyncratic dudes that take to speedrunning, and Ryan Lockwood is no exception. He’s an intense dude, in short.
Ryan is the second person to achieve the time of 1:12 on this level. Normally record ties aren’t big news, but Ryan hit this time before getting 1:13. This is absurdly unlikely. 1:12 is one of the most frame-for-frame maxed times in GoldenEye, first accomplished by Marc Rützou in 2012, a player whose [sic] made a name for grinding hard to break/set records that are daunting to match. With 20+ people sharing the old record of 1:13 in early 2012, the prospect of 1:12 was a popular debate in the forums. A $100 bounty was posted for anyone who could get this time–legitimately–and after weeks of attempts, Marc won the “race”. Only a few have shown serious intentions (or even interest) in matching the feat since. Technically, Ryan isn’t among them, as 1:13 was his goal. But you’ll see in the video that he had a good sense of what was on the line toward the end of his run.
The nuances of “modern” Goldeneye speedrunning may be hard to detect, but the novelty of this run’s unlikeliness should not be. A 1:14 run on Streets is probably a 1 in 20 event, with respect to Lockwood’s ability. A 1:13 absolutely requires something random (see: RNG) — the presence of a grenade launcher guard. Let’s call 1:13 a 1 in 500 event. A 1:12 run leans on “RNG factors” even more, also requiring at least 3 “boosts” from gunfire (getting shot in the pack [sic], pushing you ahead slightly). Let’s suppose 1 in 10,000 odds for 1:12, in which case you can probably expect dozens of 1:13 runs before achieving 1:12. Think of this like a statistical outlier in a distribution plot — perhaps a few hundred data points between 74.0 and 75.0 seconds, thousands between 74.0 an [sic] 76.0, and one 72.9.
[Written by Derek Clark]
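Taking the description’s rough figures at face value (the 1-in-500 and 1-in-10,000 odds are Derek’s guesses, not measured data), a quick back-of-the-envelope check shows just how strange it is to tie 1:12 before ever getting 1:13:

```python
# Sanity-checking the quoted odds. These probabilities are the
# description's rough estimates, not measurements.
p_113 = 1 / 500     # per-attempt chance of a 1:13 run
p_112 = 1 / 10000   # per-attempt chance of a 1:12 run

# On average it takes ~10,000 attempts to hit a single 1:12...
attempts_for_112 = 1 / p_112

# ...and in that many attempts you'd expect roughly twenty 1:13s first.
expected_113_runs = attempts_for_112 * p_113
print(expected_113_runs)  # 20.0

# Chance that a runner's very first sub-1:14 breakthrough is a 1:12
# rather than a 1:13 (treating the outcomes as disjoint per attempt):
p_112_first = p_112 / (p_112 + p_113)
print(round(p_112_first, 3))  # 0.048, i.e. roughly 1 in 21
```

In other words, under those assumed odds, only about one runner in twenty-one at this level would skip 1:13 entirely on the way to 1:12, which is exactly why the chat lost its mind.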
More information about this feat and just why this guy was so pumped can be found in this video.