Class 003 w/KAYLEB SASS: Understanding synthesis—From wave to form
Kayleb Sass is a forward-thinking musician based in Johannesburg, who creates innovative electronic music as Uncle and plays guitar with the Lerato Orchestral Collective. His explorations in sound and texture shape the unique perspective with which he approaches this class.
“The purpose of this class is to deepen our relationship with music and sound through understanding the basics of synthesis. Incorporating synthesis into your workflow can change your perspective on what exactly it means to make music.
There are many forms of synthesis (additive, subtractive, FM, wavetable, etc.), but in this session we will be using subtractive synthesis to explain how this works. For the demonstrations in this class I will be using a free modular synthesizer called VCV Rack. Before I get into it, I would like to give a basic understanding of what a synthesizer is.
A synthesizer is a collection of simple electronic components that influence and interact with each other. When you look at a synthesizer, some have keys, buttons and knobs on them, but all you really need to know is that these knobs and keys are just a physical way of controlling how these electrical components interact with one another.
As we go deeper into this class, these components will be explained in greater detail.”
1. What is Sound?
Definition and basic concepts
To understand subtractive synthesis, we first need to have a basic understanding of what sound is.
According to Berg, R.E. (2018), sound is a vibration that travels through a medium (such as air) as a disturbance from a point of equilibrium. In a less scientific sense, sound is a vibration (a pulse) that is picked up by our ears and then perceived by our brain. Think of it as a point that moves up and down in space, with a few properties: amplitude (volume), frequency (pitch), wavelength (related to pitch), speed (time) and timbre (quality).
This can be visualized in the form of a graph that tracks the “point” as it oscillates up and down. The resulting graph will look like a wave.
Visualizing waves
An oscilloscope, which allows you to visualize a waveform
Changing these various properties is at the core of subtractive synthesis.
But the most important ones are amplitude (volume), frequency (pitch) and timbre (quality).
So, to start, we need a sound. In synthesis, we call this sound an oscillator, and hopefully, understanding how sound works allows you to understand why it is called an oscillator.
Instead of a pulse through air, an oscillator is an electric signal with all the same properties as a sound. Most oscillators can produce a variety of sounds, like noise or a sine wave. These are all jumping-off points for molding your sound.
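If it helps to see this as numbers rather than modules, here is a minimal Python/NumPy sketch (separate from the VCV Rack patch shown in class) of the two starting points mentioned above, a sine wave and noise. The 44.1 kHz sample rate, 440 Hz pitch and 0.5 amplitude are illustrative choices, not values from the class.

```python
import numpy as np

SAMPLE_RATE = 44100                      # samples per second (an illustrative choice)
DURATION = 1.0                           # seconds
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# A sine-wave oscillator: a point oscillating up and down 440 times per second.
frequency = 440.0                        # Hz (the pitch)
amplitude = 0.5                          # 0.0 to 1.0 (the volume)
sine = amplitude * np.sin(2 * np.pi * frequency * t)

# White noise: random values with no single pitch at all.
noise = amplitude * np.random.uniform(-1.0, 1.0, size=t.shape)

# These arrays are the "waves" an oscilloscope would draw.
print(sine[:5])
print(noise[:5])
```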
VCO—Key properties (amplitude, frequency, timbre)
OSC/VCO running into SCOPE
In synthesis, an oscillator is always making a sound or note. Think about this like the constant hum of a fridge or an electrical box: the sound never stops, none of the previously mentioned properties change, and it is simply a continuous tone. In this state, the sound is not very useful, since we can’t change the volume, pitch, envelope, timbre, etc.
In this form of synthesis, we mold the sound by removing things from it, hence the name subtractive synthesis. This is how we control the sound.
So, the first thing we need to be able to control is the volume of the sound: basically, a way to turn the sound on and off. In synthesis, the component used to do this is called an amplifier, but for the sake of simplicity, let’s call it a volume knob.
VCA (Loudness Visualiser)
Adding this component allows us to take this continuous tone and change how loud it is.
From here, we could make the sound louder or softer, which can be used in many ways.
For example, if I were to move the volume slider down, the waveform would look like this:
As you can see, the volume knob changed the amplitude of the wave, thus making it softer.
This opens a whole world of opportunities on its own, but let’s move forward to the next component for now.
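In code terms, the volume knob is nothing more exotic than multiplying every point of the wave by a gain value. A minimal sketch (NumPy again, not how VCV Rack’s amplifier is actually implemented):

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
sine = np.sin(2 * np.pi * 440.0 * t)     # the oscillator's continuous tone

# The "volume knob" (amplifier) is a multiplication:
# gain = 1.0 leaves the wave untouched, gain = 0.25 shrinks its amplitude,
# and gain = 0.0 switches the sound off entirely.
quieter = 0.25 * sine
silent = 0.0 * sine

print(sine.max(), quieter.max(), silent.max())   # roughly 1.0, 0.25, 0.0
```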
Now that we can change the volume, the next property we need control over is the pitch.
In VCV Rack, the oscillator already has a frequency control on it that can be used to make the pitch lower or higher. If we were to increase the pitch, the waveform would look like this:
As you can see, the waveform is oscillating more frequently now, resulting in a higher pitch being perceived.
Since oscillators mimic the properties of real-world sound, we can push the frequency so high or low that you can’t hear it anymore. This is not very useful for music, but extremely useful for modulation (which we will get into a bit later).
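Here is the same idea in the NumPy sketch: the frequency value decides how often the wave oscillates, and nothing stops us from choosing a value far outside the roughly 20 Hz to 20 kHz range we can hear. The specific frequencies below are illustrative.

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

low = np.sin(2 * np.pi * 220.0 * t)            # 220 Hz: an audible, lower pitch
high = np.sin(2 * np.pi * 880.0 * t)           # 880 Hz: two octaves up
sub_audible = np.sin(2 * np.pi * 2.0 * t)      # 2 Hz: too slow to hear as a tone

# Count zero crossings as a rough measure of how often each wave oscillates.
for name, wave in [("low", low), ("high", high), ("sub_audible", sub_audible)]:
    crossings = np.count_nonzero(np.diff(np.sign(wave)))
    print(name, crossings)
```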
Now that we can control the volume and pitch, the last thing we want to control is timbre.
First let’s talk about what timbre is.
According to Britannica (2019), timbre can be understood as the quality of a sound: what fundamental frequency and overtones are in it, and how intense those tones are. If we think about the human voice, every person has a different tone when speaking or singing, subtle nuances that make each voice unique. For example, you may describe someone as having a raspy voice, or if they were to sing, you may describe the sound as pure and resonating. These are descriptions of the timbre of the voice: what does it sound like? Not what pitch it is (high or low) and not how loud it is, but instead, what kind of waveform is it?
If we look at a more complex waveform, we can gain a better understanding of timbre.
Here we have added another oscillator (sine), and now two are playing at the same time.
When we look at this new waveform, we can see that it is no longer a smooth up and down movement; now the wave has more peaks and valleys. This means that the wave contains harmonics and overtones, changing the timbre of the original sound. We can hear that a pure sine wave is very smooth, but when I use two at the same time, the way these waves interact with each other creates a new, rougher timbre.
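As a sketch of what is happening numerically, adding a second, higher sine on top of the first produces exactly this kind of bumpier waveform. The 220 Hz and 660 Hz values below are just an example pairing (a fundamental plus its third harmonic).

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

fundamental = np.sin(2 * np.pi * 220.0 * t)       # a pure, smooth sine
overtone = 0.5 * np.sin(2 * np.pi * 660.0 * t)    # a quieter sine three times higher

combined = fundamental + overtone                  # same basic pitch, rougher timbre

# The combined wave is no longer a smooth up-and-down movement: the extra
# wiggles are the harmonic content our ears hear as a change in timbre.
print(combined[:10])
```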
So how do we control the timbre? In synthesis we use something called a filter.
A filter allows you to control timbre by removing certain frequencies and amplifying resonant frequencies. For example, when I move the slider down on the filter, the sound becomes more muted, less bright and softer, since we are removing certain tones from the sound. This can also be done the other way around: instead of taking out the higher tones, I take out the lower ones, making the sound more nasal and brighter.
As you can see, when we cut out some of the higher frequencies, the wave becomes smoother, meaning that we have changed the timbre to a more muted sound.
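A real filter module has resonance and a steeper slope, but a tiny one-pole low-pass filter is enough to sketch the idea: smooth out the fast wiggles and the bright overtone disappears, leaving a more muted timbre. The cutoff value and the test signal below are assumptions made for the example.

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

# A bright signal: a fundamental plus a strong, much higher overtone.
bright = np.sin(2 * np.pi * 220.0 * t) + 0.8 * np.sin(2 * np.pi * 3520.0 * t)

def one_pole_lowpass(x, cutoff_hz, sample_rate=SAMPLE_RATE):
    """Each output sample only moves part of the way toward the input, so fast
    (high-frequency) wiggles are smoothed away while slow movement passes through."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    y = np.zeros_like(x)
    state = 0.0
    for i, sample in enumerate(x):
        state += alpha * (sample - state)
        y[i] = state
    return y

muted = one_pole_lowpass(bright, cutoff_hz=500.0)   # the 3520 Hz overtone is largely removed
print(np.abs(bright).max(), np.abs(muted).max())    # the filtered wave is smoother and smaller
```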
Now that we can control these three properties, we can mold the sound in many ways to create a variety of different waveforms. Using an oscillator (with control over its pitch), a filter and a volume knob, we have control over our sound. But this isn’t music, just a static sound. We can change the three properties previously mentioned, but we don’t have three hands to change them all at once. We need components that can change the way other components affect things. The first thing you would want to be able to control is time. For this, we use an LFO.
2. MANIPULATING SOUND WAVES—LFOs (LOW-FREQUENCY OSCILLATORS)
Earlier I mentioned that a sound can be so high or low in pitch that a human cannot hear it, but that sound can still be used to control other things. Let’s revisit what sound is. A sound is a pulse (a blip), and how often that pulse happens determines the pitch. If the pulse is very slow, we stop perceiving it as a tone and instead hear it as many little blips or pulses. The slower you go, the more rhythmic the pulse becomes, and at its slowest, it would be a slow series of pulses.
Here we can see what the LFO is doing. It is sending out a steady pulse (rhythm). If I increase the frequency of the LFO, then these pulses would happen more frequently, meaning a faster rhythm.
If I take that rhythm and use it to control the volume knob, you will see that the sound is now switching on and off.
The LFO is pulsing, that pulse is being sent to the volume knob, and the volume knob then follows the pulse of the LFO. If I change the pitch of the LFO, that will change the speed of the sound switching on and off.
Here is where things get a little more complicated, since we are using quite a few components to do very simple things, but once you start to investigate the signal chain, things become clearer.
It starts with the oscillator.
The oscillator runs into the volume knob. From there we add a new component, the LFO, and run that into the volume knob. Basically, we are using the LFO to control the volume knob, like a hand that keeps moving the knob up and down at a continuous rate.
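The same signal chain can be written out numerically. In the sketch below (illustrative values again), a 2 Hz square wave stands in for the LFO and a simple multiplication stands in for the volume knob:

```python
import numpy as np

SAMPLE_RATE = 44100
DURATION = 4.0
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# The chain: oscillator -> volume knob, with the LFO moving the knob for us.
oscillator = np.sin(2 * np.pi * 220.0 * t)                 # the audible tone

# A 2 Hz square-wave LFO: far too slow to hear as a pitch, but a steady rhythm.
lfo = (np.sin(2 * np.pi * 2.0 * t) >= 0).astype(float)

# Using the LFO as the gain switches the sound on and off twice per second.
output = lfo * oscillator

i_on, i_off = int(0.11 * SAMPLE_RATE), int(0.30 * SAMPLE_RATE)
print(output[i_on], output[i_off])   # non-zero while the LFO is high, 0.0 while it is low
```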
When you think about this, it becomes clear that the possibilities for the number of combinations of sound sources and ways you can control them are endless.
But in this example, it can be a bit limiting. I can use the LFO to control the volume knob, and thus the oscillator’s level, meaning the volume knob will follow the waveform of the LFO. That also means the volume knob can only move in particular ways, dictated by what waveforms the LFO has. The true power of an LFO comes from pairing it with another component and using the LFO to trigger that component.
When I use the LFO as a square wave and then use that to trigger the volume knob, you will notice that the sound switches on and off. But what if I wanted the sound to slowly fade in and then fade out, or come in quickly and fade out slowly?
The LFO does not have waveforms that mimic this kind of movement, and this is where envelope generators come into play. An envelope generator is a component that dictates a movement in a more precise way. There are stages to an envelope, called attack, decay, sustain and release, also known as ADSR. When the envelope generator is triggered, it will go through each of these stages in order. If I set the envelope generator to have a fast attack, the sound will occur the moment I trigger it. The decay is the natural reduction in volume following the attack stage. For example, if you strike a key on a piano, the initial strike of the key is the loudest the sound can get; a few milliseconds after this, the sound gets a bit softer, and this is known as decay. This takes us to the next stage, which is sustain.
Imagine that you struck the key and are now holding it down, and the note continues to sound: this is the sustain stage. The last stage (keeping with this metaphor) is when you release the key, and the sound starts to fade away naturally. In synthesis, we can control how long each of these stages is. An example of a sound with a quick attack, short decay, short sustain and short release would be a kick drum. If you kick a kick drum, the sound happens, but no matter how long you leave your foot on the kick pedal, the sound will not continue. An example of a sound with a slow attack, moderate decay, long sustain and long release would be something like a violin. When the string is bowed (depending on how fast it is bowed), it will take some time to reach maximum volume; from there the sound can keep going indefinitely until the player stops bowing; and even then the body of the violin will still be vibrating, meaning the sound keeps going after they have stopped playing.
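To make the four stages concrete, here is a small sketch that builds an ADSR envelope as an array of gain values, using straight-line segments (many real envelope modules use exponential curves instead). The kick-drum and violin settings are rough illustrations, not measurements.

```python
import numpy as np

SAMPLE_RATE = 44100

def adsr(attack, decay, sustain_level, sustain_time, release, sample_rate=SAMPLE_RATE):
    """Build an ADSR envelope as gain values between 0.0 and 1.0.
    attack, decay, sustain_time and release are in seconds; sustain_level is a gain."""
    a = np.linspace(0.0, 1.0, int(attack * sample_rate), endpoint=False)            # rise to full volume
    d = np.linspace(1.0, sustain_level, int(decay * sample_rate), endpoint=False)   # fall to the sustain level
    s = np.full(int(sustain_time * sample_rate), sustain_level)                     # hold while the key is down
    r = np.linspace(sustain_level, 0.0, int(release * sample_rate))                 # fade away after release
    return np.concatenate([a, d, s, r])

# Kick-drum-like: near-instant attack, everything else short, no sustain.
kick_env = adsr(attack=0.001, decay=0.05, sustain_level=0.0, sustain_time=0.0, release=0.05)

# Violin-like: slow attack, long sustain, long release.
violin_env = adsr(attack=0.5, decay=0.2, sustain_level=0.8, sustain_time=2.0, release=1.0)

print(len(kick_env) / SAMPLE_RATE, len(violin_env) / SAMPLE_RATE)   # total lengths in seconds
```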
So now, how can we pair this with the LFO to create more interesting rhythms and change the notes of our oscillator?
3. ENVELOPE GENERATORS
Well, firstly, we connect the LFO to the envelope generator. Instead of using the LFO’s waveforms to control the volume knob, we are going to use the LFO to trigger the envelope generator, which will go through the four stages mentioned above. We can then send the output of the envelope generator to control the volume knob. Now the volume can be controlled in a much more precise way.
The waveform is now following the shape of the envelope, rising and falling to the specifications of the envelope.
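Numerically, the chain described above looks something like the sketch below: a slow LFO acts purely as a clock, each rising edge restarts an envelope, and the envelope becomes the gain of the volume knob. To keep it short I use a simplified attack/release shape rather than the full ADSR; the timings are illustrative.

```python
import numpy as np

SAMPLE_RATE = 44100
n = int(SAMPLE_RATE * 4.0)                                 # four seconds of audio
t = np.arange(n) / SAMPLE_RATE

oscillator = np.sin(2 * np.pi * 220.0 * t)                 # the tone we want to shape

# A 1 Hz LFO acting as a clock: each rising edge is a trigger.
gate = (np.sin(2 * np.pi * 1.0 * t) >= 0).astype(float)
triggers = np.flatnonzero(np.diff(gate, prepend=0.0) > 0)  # sample indices of rising edges

# A simplified envelope shape: 20 ms fade-in, 400 ms fade-out.
env_shape = np.concatenate([
    np.linspace(0.0, 1.0, int(0.02 * SAMPLE_RATE)),
    np.linspace(1.0, 0.0, int(0.40 * SAMPLE_RATE)),
])

# Every time the LFO fires, restart the envelope from that point in time.
gain = np.zeros(n)
for start in triggers:
    end = min(start + len(env_shape), n)
    gain[start:end] = env_shape[: end - start]

output = gain * oscillator                   # envelope -> volume knob -> shaped notes
print(triggers / SAMPLE_RATE)                # the trigger times, roughly one per second
```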
From this point it is difficult to say where you should go next, since it will depend on what you are trying to create, but a good component to take this into a more musical place would be a sequencer.
4. SEQUENCERS
A sequencer is simply a programmable component that can send out a sequence of notes and rhythms. The sequencer is similar to the envelope generator in the sense that it runs through different stages when triggered, but the difference is that the sequencer will only go to the next stage when it is triggered again, whereas an envelope generator will go through all four stages with one trigger. If you had a particular melody in mind, you would enter each note into the sequencer, but alone, the sequencer does nothing. We still need something to trigger the sequencer, and for this we use the LFO.
Now each time the LFO triggers the sequencer, the sequencer will move to the next step in the sequence, and whatever pitch information is on that step can then be sent to the oscillator.
Now that the pitch information is being sent to the oscillator, the sequencer is telling the oscillator to change its pitch each time a new step in the sequence is reached. Now we have a rudimentary melody taking advantage of all the previously mentioned components. The sequencer can be programmed in many ways to create different melodies with different rhythms.
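As a final sketch, here is the sequencer idea in NumPy: a list of pitches stands in for the programmed steps, and each step lasts a quarter of a second, which is the job the LFO clock is doing in the patch. The four frequencies are an arbitrary example melody, not one from the class.

```python
import numpy as np

SAMPLE_RATE = 44100
STEP_SECONDS = 0.25                              # one trigger from the clock every quarter second
SEQUENCE_HZ = [220.0, 261.6, 329.6, 392.0]       # an example four-step melody (roughly A3, C4, E4, G4)

# Each trigger advances the sequencer one step; the step's pitch is sent to the
# oscillator, which plays that frequency until the next trigger arrives.
segments = []
phase = 0.0
for freq in SEQUENCE_HZ:
    n = int(STEP_SECONDS * SAMPLE_RATE)
    t = np.arange(n) / SAMPLE_RATE
    segments.append(np.sin(2 * np.pi * freq * t + phase))
    phase += 2 * np.pi * freq * n / SAMPLE_RATE  # keep the wave continuous across steps

melody = np.concatenate(segments)
print(len(melody) / SAMPLE_RATE, "seconds,", len(SEQUENCE_HZ), "steps")
```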
Now we have something a bit closer to music. Using these components, you can craft any sound you can think of, and have these sounds interact with each other in extremely complex ways. There is so much more that we can get into, but these basic components form the foundations of subtractive synthesis.
5. CONCLUSION
Understanding how subtractive synthesis works is a beautiful thing for people who make music or love music, because it reveals the infinite complexity of a single note. It shows us that sound cannot be taken for granted, and every time you hear a sound, you are also hearing a complex microcosm of interactions. Working like this forces you, as a creative, to start from scratch.
When you pick up a guitar and play a note, you don’t think about how the guitar was built to make those notes; instead, the note is there and ready to go. In synthesis, you build your own guitar from scratch. This forces you to simmer in the qualities of those notes, their pitch, their timbre and their volume, and consider what those things make you feel. Synthesis pushes you away from noodling and towards starting each session with a goal. To me, that is the most valuable lesson you can get from synthesis: a reminder that music comes from the mind, not from the instrument. Think about what you want to craft, in great detail, and execute that idea with unwavering accuracy; this will strengthen the bond between the ideas you have in your head and your ability to externalize them in the real world. But above all else, express yourself, and have fun.