NOT A SCHOOL DAY*

23 AUGUST 2025

@ LAPA, 29 CHISWICK ST, BRIXTON, JOHANNESBURG

A peer-to-peer creative workshop featuring classes on songwriting, synthesis, and audio-driven design — hosted by Kyabo Dyakopu, Kayleb Sass, Mike “Bahbah” Ngudle, and Jordan Bareiss — plus an interactive language & sound art experiment hosted by Mbako Moemise.

The day ends with an extra-curricular activity that may or may not involve bragging rights and a custom GUSHER* chess set.

IN PARTNERSHIP WITH


CLASS 003 W/KAYLEB SASS: UNDERSTANDING SYNTHESIS—FROM WAVE TO FORM

KAYLEB SASS is a forward-thinking musician based in Johannesburg, who creates innovative electronic music as UNCLE and plays guitar with the Lerato Orchestral Collective. His explorations in sound and texture shape the unique perspective he brings to this class.

In this class, Kayleb introduces participants to the art and science of sound creation through subtractive synthesis. Using VCV Rack, he breaks down how oscillators, filters, and modulators interact to form the building blocks of electronic music. Drawing from his background in both live instrumentation and modular synthesis, Kayleb encourages students to reimagine their relationship with sound—from consuming it to constructing it. The class invites a hands-on, reflective approach to making music that deepens the bond between thought, feeling, and frequency.

KAYLEB SASS IS A FORWARD-THINKING MUSICIAN BASED IN JOHANNESBURG WHO CREATES INNOVATIVE ELECTRONIC MUSIC AS UNCLE AND PLAYS GUITAR WITH THE LERATO ORCHESTRAL COLLECTIVE. HIS EXPLORATIONS IN SOUND AND TEXTURE SHAPE THE UNIQUE PERSPECTIVE HE BRINGS TO THIS CLASS.

“The purpose of this class is to deepen our relationship with music and sound, through understanding the basics of synthesis. Incorporating synthesis into your workflow can change your perspective on what exactly it means to make music.

There are many forms of synthesis (additive, subtractive, FM, wavetable, etc.), but in this session we will be using subtractive synthesis to explain how it works. For the demonstrations in this class I will be using a free modular synthesizer called VCV Rack. Before I get into it, I would like to give a basic understanding of what a synthesizer is.

A synthesizer is a collection of simple electronic components that influence and interact with each other. When you look at a synthesizer, some have keys, buttons and knobs on them, but all you really need to know is that these knobs and keys are just a physical way of controlling how those electrical components interact with one another.

As we go deeper into this class, these components will be explained in greater detail.”


1. WHAT IS SOUND?

Definition and basic concepts


To understand subtractive synthesis, we first need to have a basic understanding of what sound is.

According to Berg, R.E. (2018), sound is a vibration that travels through a medium (such as air) as a disturbance from a point of equilibrium. In a less scientific sense, sound is a vibration (a pulse) that is picked up by our ears and then perceived by our brain: a point that moves up and down in space and has a few properties, such as amplitude (volume), frequency (pitch), wavelength (pitch), speed (time) and timbre (quality).

This can be visualized in the form of a graph that tracks the “point” as it oscillates up and down. The outcome of the graph will look like a wave.
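To make this concrete, here is a minimal sketch (not from the class) that generates and plots such a wave in Python, assuming NumPy and Matplotlib are available; the amplitude and frequency values are arbitrary examples.

```python
# A minimal sketch of the "point oscillating up and down": a sine wave with a
# chosen amplitude (volume) and frequency (pitch), plotted the way an
# oscilloscope would display it.
import numpy as np
import matplotlib.pyplot as plt

sample_rate = 44100          # samples per second
duration = 0.01              # seconds of signal to show
amplitude = 0.8              # "volume" of the wave (0.0 to 1.0)
frequency = 440.0            # "pitch" in Hz (A4)

t = np.arange(0, duration, 1 / sample_rate)           # time axis
wave = amplitude * np.sin(2 * np.pi * frequency * t)  # the oscillating point

plt.plot(t, wave)
plt.xlabel("time (s)")
plt.ylabel("amplitude")
plt.title("440 Hz sine wave")
plt.show()
```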

Visualizing waves

An oscilloscope, which allows you to visualize a waveform

Changing these various properties is at the core of subtractive synthesis.

But the most important ones are amplitude (volume), frequency (pitch) and timbre (quality).

So, to start, we need a sound. In synthesis, we call this sound an oscillator, and hopefully, understanding how sound works allows you to understand why it is called an oscillator.

Instead of a pulse through air, an oscillator is an electric signal with all the same properties as a sound. Most oscillators can produce a variety of waveforms, like noise or a sine wave. These are all jumping-off points from which to start molding your sound.

VCO—Key properties (amplitude, frequency, timbre)

OSC/VCO running into SCOPE

In synthesis, an oscillator is always making a sound or note. Think of it like the constant hum of a fridge or electric box. The sound never stops, and none of the previously mentioned properties can be changed; it is simply a continuous tone. In this state, the sound is not very useful, since we can’t change the volume, pitch, envelope, timbre, etc.

In this form of synthesis, we mold the sound by removing things from it; hence, subtractive synthesis. This is how we can control the sound.

So, the first thing we need to be able to control is the volume of the sound: basically, a way to turn the sound on and off. In synthesis, the component used to do this is called an amplifier, but for the sake of simplicity, let’s call it a volume knob.

VCA (Loudness Visualiser)

Adding this component allows us to take this continuous tone and change how loud it is.

From here, we could make the sound louder or softer, which can be used in many ways.

For example, if I were to move the volume slider down, the waveform would look like this:

As you can see, this volume knob changed the amplitude of the wave, thus making it softer.
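As a rough illustration of what the volume knob (the VCA) is doing mathematically, the sketch below simply multiplies the oscillator's signal by a gain value; Python with NumPy is assumed, and the numbers are placeholders.

```python
import numpy as np

sample_rate = 44100
t = np.arange(0, 0.01, 1 / sample_rate)
wave = np.sin(2 * np.pi * 440.0 * t)   # oscillator output at full volume

gain = 0.3                             # "volume knob" position (0 = silent, 1 = full)
quieter = gain * wave                  # the VCA simply scales the amplitude

print(wave.max(), quieter.max())       # peak drops from ~1.0 to ~0.3
```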

This opens a whole world of opportunities on its own, but let’s move forward to the next component for now.

Now that we can change the volume, the next property we need control over is the pitch.

In VCV Rack, the oscillator already has a frequency control on it that can be used to make the pitch lower or higher. If we were to increase the pitch, the waveform would look like this:

As you can see, the waveform is oscillating more frequently now, resulting in a higher pitch being perceived.

Since oscillators mimic the properties of real-world sound, we can push the frequency so high or low that you can’t hear it anymore. This is not very useful for music, but extremely useful for modulation (which we will get into a bit later).

Now that we can control the volume and pitch, the last thing we want to control is timbre.

First let’s talk about what timbre is.

According to Britannica (2019), timbre can be understood as the quality of a sound: what fundamental frequencies and overtones are in it, and how intense these tones are. If we think about the human voice, every person has a different tone when speaking or singing, subtle nuances that make each voice unique. For example, you may refer to someone having a raspy voice, or if they were to sing, you may describe the sound as pure and resonating. These are descriptions of the timbre of the voice: what does it sound like? Not what pitch it is (high or low) and not how loud it is, but instead, what kind of waveform is it?

If we look at a more complex waveform, we can gain a better understanding of timbre.

Here we have added another oscillator (sine), and now two are playing at the same time.

When we look at this new waveform, we can see that it is no longer a smooth up-and-down movement; the wave now has more peaks and valleys. This means that the wave contains harmonics and overtones, changing the timbre of the original sound. We can hear that a pure sine wave is very smooth, but when I use two at the same time, the way these waves interact with each other creates a new, rougher timbre.
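A hedged, minimal sketch of this idea in Python (NumPy assumed): summing two sine waves at different frequencies and levels produces a single, more complex waveform with added overtones.

```python
import numpy as np

sample_rate = 44100
t = np.arange(0, 0.01, 1 / sample_rate)

fundamental = np.sin(2 * np.pi * 220.0 * t)      # first oscillator (pure sine)
overtone = 0.5 * np.sin(2 * np.pi * 660.0 * t)   # second oscillator, a quieter harmonic

combined = fundamental + overtone                # the two waves interact
# 'combined' now has extra peaks and valleys: same fundamental, rougher timbre
```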

So how do we control the timbre? In synthesis we use something called a filter.

A filter allows you to control timbre by removing certain frequencies and amplifying resonant frequencies. For example, when I move the slider down on the filter, the sound becomes more muted, less bright and softer, since we are removing certain tones from the sound. This can also be done the other way around: instead of taking out the higher tones, I take out the lower ones, making the sound more nasal and brighter.

As you can see, when we cut out some of the higher frequencies, the wave becomes smoother, meaning that we have changed the timbre to a more muted sound.
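The class uses VCV Rack's filter module, but as a rough stand-in, here is a very simple one-pole low-pass filter in Python (NumPy assumed) that shows the same principle: removing high frequencies smooths the waveform and mutes the timbre.

```python
import numpy as np

def one_pole_lowpass(signal, cutoff_hz, sample_rate=44100):
    """Very simple one-pole low-pass filter: each output sample moves only part
    of the way toward the input, so fast (high-frequency) wiggles are smoothed
    out while slow (low-frequency) movement passes through."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

# A bright, buzzy square wave gets noticeably smoother (duller) after filtering:
t = np.arange(0, 0.02, 1 / 44100)
bright = np.sign(np.sin(2 * np.pi * 220.0 * t))     # square wave, rich in overtones
muted = one_pole_lowpass(bright, cutoff_hz=400.0)   # high frequencies removed
```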

Now that we can control these three properties, we can mold the sound in many ways to create a variety of different waveforms. Using an oscillator (with control over its pitch), a filter, and a volume knob, we have control over our sound. But this isn't music, just a static sound. We can change the three properties previously mentioned, but we don’t have three hands to change them all at once. We need components that can change the way other components affect things. The first thing you would want to be able to control is time. For this, we use an LFO.


2. MANIPULATING SOUNDWAVES—LFO (FILTERS)

 

Earlier I mentioned that a sound can be so high or low in pitch that a human cannot hear it, but this sound can still be used to control other things. Let’s revise what sound is again. A sound is a pulse (a blip), and how often that pulse happens determines the pitch. If the pulse is very slow, we stop perceiving it as a tone and instead hear many little blips or pulses. The slower you go, the more rhythmic the pulse becomes, and at its slowest, it would be a slow series of pulses.

Here we can see what the LFO is doing. It is sending out a steady pulse (rhythm). If I increase the frequency of the LFO, then these pulses would happen more frequently, meaning a faster rhythm.

If I take that rhythm and use it to control the volume knob, then you will see that the sound is now switching on and off.

The LFO is pulsing, that pulse is being sent to the volume knob, and the volume knob then follows the pulse of the LFO. If I change the pitch of the LFO, that will change the speed of the sound going on and off.

Here is where things get a little more complicated, since we are using quite a few components to do very simple things, but once you start to investigate the signal chain, things become clearer.

It starts with the oscillator.

The oscillator runs into the volume knob. From there we add a new component, the LFO, and then run that into the volume knob. Basically, we are using the LFO to control the volume knob, like a hand that keeps moving the knob up and down at a continuous rate.
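A minimal Python sketch of this signal chain (NumPy assumed, values arbitrary): the LFO's slow wave is used as the gain of the volume knob, so the tone swells and fades at the LFO's rate.

```python
import numpy as np

sample_rate = 44100
t = np.arange(0, 1.0, 1 / sample_rate)

oscillator = np.sin(2 * np.pi * 220.0 * t)      # the audible tone
lfo = 0.5 * (1 + np.sin(2 * np.pi * 4.0 * t))   # 4 Hz LFO, scaled to 0..1

tremolo = oscillator * lfo   # the LFO "hand" turning the volume knob up and down
```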

When you think about this, it becomes clear that the possibilities for the number of combinations of sound sources and ways you can control them are endless.

But in this example, it can be a bit limiting. I can use the LFO to control the volume knob, and thus the oscillator, meaning the volume knob will follow the waveform of the LFO. But that also means the volume knob can only move in particular ways, dictated by whatever waveforms the LFO has. The true power of an LFO comes from pairing it with another component and using the LFO to trigger that component.

When I use the LFO as a square wave and use that to trigger the volume knob, you will notice that the sound switches on and off. But what if I wanted the sound to slowly fade in and then fade out, or come in quickly and fade out slowly?

The LFO does not have waveforms that mimic this kind of movement, and this is where envelope generators come into play. An envelope generator is a component that dictates a movement in a more precise way. There are stages to an envelope, and they are called attack, decay, sustain and release, also known as ADSR. When the envelope generator is triggered, it will go through each of these stages in order. If I set the envelope generator to have a fast attack, that means that the moment I trigger the sound, it will occur. The decay is the natural reduction in volume following the attack stage. For example, if you strike a key on a piano, the initial strike of the key is the loudest the sound can get; a few milliseconds after this, the sound gets a bit softer. This is known as decay. This takes us to the next stage, which is sustain.

Imagine that you struck the key and are now holding it down, and the note continues to sound; this is the sustain stage. The last stage (keeping with this metaphor) is when you release the key, and the sound starts to fade away naturally. In synthesis, we can control how long each of these stages is. An example of a sound with a quick attack, short decay, short sustain and short release would be a kick drum. If you kick a kick drum, the sound happens, but no matter how long you leave your foot on the kick pedal, the sound will not continue. An example of a sound with a slow attack, moderate decay, long sustain and long release would be something like a violin. When the string is bowed (depending on how fast it is bowed) it will take some time for the string to reach maximum volume; from there the sound can keep going indefinitely until the player stops bowing, and even then the body of the violin will still be vibrating, meaning the sound keeps going after they have stopped playing.

So now, how can we pair this with the LFO to create more interesting rhythms and change the notes of our oscillator?


3. ENVELOPE GENERATORS

 

Well, firstly, we connect the LFO to the envelope generator. Instead of using the LFO’s waveforms to control the volume knob, we are instead going to use the LFO to trigger the envelope generator, which will go through the four stages mentioned above. We can then send the output of the envelope generator to control the volume knob. Now the volume can be controlled in a much more precise way.

The waveform is now following the shape of the envelope, rising and falling to the specifications of the envelope.
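A rough Python sketch of an ADSR envelope shaping a tone (NumPy assumed; the stage lengths are illustrative and the function name is my own):

```python
import numpy as np

def adsr(attack, decay, sustain_level, sustain_time, release, sample_rate=44100):
    """Build one pass through the four ADSR stages as an amplitude curve (0..1)."""
    a = np.linspace(0.0, 1.0, int(attack * sample_rate))            # rise to full volume
    d = np.linspace(1.0, sustain_level, int(decay * sample_rate))   # fall to sustain level
    s = np.full(int(sustain_time * sample_rate), sustain_level)     # hold while the "key" is down
    r = np.linspace(sustain_level, 0.0, int(release * sample_rate)) # fade out on release
    return np.concatenate([a, d, s, r])

sample_rate = 44100
env = adsr(attack=0.01, decay=0.1, sustain_level=0.6, sustain_time=0.3, release=0.4)

t = np.arange(len(env)) / sample_rate
tone = np.sin(2 * np.pi * 220.0 * t)
shaped = tone * env   # the envelope output "turns the volume knob" for us
```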

From this point it is difficult to say where you should go next, since it will depend on what you are trying to create, but a good component to take this into a more musical place would be a sequencer.


4. SEQUENCERS

 

A sequencer is simply a programmable component that can send out a sequence of notes and rhythms. The sequencer is similar to the envelope generator in the sense that it runs through different stages when triggered, but the difference is that the sequencer will only go to the next stage when it is triggered again, whereas an envelope generator will go through all four stages with one trigger. If you had a particular melody in mind, you would enter each note into the sequencer, but alone, the sequencer does nothing. We still need something to trigger the sequencer, and for this we use the LFO.

Now each time the LFO triggers the sequencer, the sequencer will move to the next step in the sequence, and whatever pitch information is on that step can then be sent to the oscillator.

With the pitch information being sent to the oscillator, the sequencer is telling the oscillator to change its pitch each time a new step in the sequence is reached. Now we have a rudimentary melody taking advantage of all the previously mentioned components. The sequencer can be programmed in many ways to create different melodies with different rhythms.
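A minimal sketch of a step sequencer in Python (NumPy assumed, note values illustrative): each step holds a pitch, and the "clock" simply moves to the next step at a fixed rate, sending a new frequency to the oscillator.

```python
import numpy as np

sample_rate = 44100
step_duration = 0.25                      # seconds per step (set by the LFO/clock rate)
semitones = [0, 3, 7, 12, 7, 3, 0, -5]    # the programmed sequence, relative to a base note
base_freq = 220.0                         # A3

melody = []
for semi in semitones:
    freq = base_freq * 2 ** (semi / 12)               # step's pitch sent to the oscillator
    t = np.arange(0, step_duration, 1 / sample_rate)
    melody.append(np.sin(2 * np.pi * freq * t))       # oscillator plays that pitch
melody = np.concatenate(melody)           # a rudimentary melody, one step per clock pulse
```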

Now we have something a bit closer to music. Using these components, you can craft any sound you can think of, and have these sounds interact with each other in extremely complex ways. There is so much more that we can get into, but these basic components form the foundations of subtractive synthesis.


5. CONCLUSION

 

Understanding how subtractive synthesis works is a beautiful thing for people who make or love music, because it reveals the infinite complexity of a single note. It shows us that sound cannot be taken for granted, and every time you hear a sound, you are also hearing a complex microcosm of interactions. Working like this forces you, as a creative, to start from scratch.

When you pick up a guitar and play a note, you don’t think about how the guitar was built to make those notes; the note is simply there and ready to go. In synthesis, you build your own guitar from scratch. This forces you to simmer in the qualities of those notes, their pitch, their timbre and volume, and consider what those things make you feel. Synthesis pushes you away from noodling and towards starting each session with a goal. To me, that is the most valuable lesson you can get from synthesis: a reminder that music comes from the mind, not from the instrument. Think about what you want to craft, in great detail, and execute that idea with unwavering accuracy. This will strengthen the bond between the ideas you have in your head and your ability to externalize them in the real world. But above all else, express yourself, and have fun.


6. ADDITIONAL RESOURCES

 


CLASS 005 W/JORDAN BAREISS & MIKE ‘BAHBAH’ NGUDLE: Particles & Frequencies — Audio-Driven Design in TouchDesigner

JORDAN BAREISS and MIKE ‘BAHBAH’ NGUDLE are multimedia artists whose practices intersect design, sound, and interactivity. In Particles & Frequencies — Audio-Driven Design in TouchDesigner, they explore how gesture, sound, and image can merge into a single responsive system. Participants learned to use MediaPipe for real-time hand tracking, sending motion data to VCV Rack, where gestures generated and shaped sound. That audio then returned to TouchDesigner, animating a point-cloud model of a banana—translating frequency and rhythm into colour, scale, and motion. The session demonstrated how open-source tools can be combined to create custom, audio-reactive environments that blur the lines between instrument and interface, performer and designer. This class invited participants to think of sound not as output, but as a living visual drive.

JORDAN AND BAHBAH ARE MULTIMEDIA ARTISTS WHOSE PRACTICES MEET AT THE INTERSECTION OF DESIGN, SOUND, AND INTERACTIVITY. FOR THIS PROJECT, THEY COLLABORATE CLOSELY, COMBINING THEIR EXPERIENCE IN TOUCHDESIGNER AND VCV RACK TO DEVELOP AN INTERACTIVE AUDIO-VISUAL SYSTEM.

In this session, we explored the creative potential of gestural control in audio-visual design. Using MediaPipe in TouchDesigner to track hand movements, participants could manipulate a VCV Rack patch in real time — shaping sound through motion alone. These gestures directly influenced parameters within the modular patch, allowing sound to become a truly physical, interactive experience.

The generated audio was then routed back into TouchDesigner, where it drove a point cloud system built from a 3D model of a banana. Each movement and sonic shift translated visually, creating a dynamic link between gesture, sound, and image. This class demonstrated how community-built tools like MediaPipe and VCV Rack can expand TouchDesigner’s possibilities, bridging performance, interactivity, and play. Downloadable project files, point cloud files and demo footage from the session are available for further exploration at the end of this class.


1. Introduction to TouchDesigner

 

What is TouchDesigner?

TouchDesigner is a node-based visual programming environment used to create real-time, interactive visuals and experiences. It’s widely used in live performance, interactive art installations, projection mapping, stage design, and generative video effects.

The platform has a large, active community of artists and developers, with countless tutorials, example projects, and forums where users share knowledge and support one another.

While no coding is required to use TouchDesigner, it includes deep Python integration, allowing for advanced automation, data handling, and customization. Beginners can create without any programming background, but more technical users can push the software further through scripting and external integrations.
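As a small illustration of that Python integration, here is a hedged sketch of a CHOP Execute DAT callback; the operator names ('audio_level', 'constant1') and the target parameter are placeholders for whatever your own network uses.

```python
# CHOP Execute DAT callback sketch (runs inside TouchDesigner).
# 'audio_level' and 'constant1' are placeholder node names for this example.

def onValueChange(channel, sampleIndex, val, prev):
    # 'val' is the current value of the watched channel, e.g. an RMS level
    # coming from an Analyze CHOP named 'audio_level'.
    # Push it onto another operator's parameter so the rest of the network
    # can react to it.
    op('constant1').par.value0 = val
    return
```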

Node-based Workspace

In TouchDesigner, projects are built using nodes. Each node represents a specific operation, such as generating a shape, processing an image, reacting to sound, or controlling movement. By connecting nodes together, you define the flow of data and build complex visual systems step by step.

The node-based workspace provides a visual map of your project. Instead of writing lines of code, you arrange and connect nodes in a network that makes the logic of your project easy to see and adjust. This modular approach allows you to experiment quickly, trace how visuals are generated, and make changes in real time.

Operators

Every node is called an Operator. Operators are the building blocks of your project, each handling a specific type of data or process. They are grouped into families, with each family designed for a particular purpose:

  • TOPs (Texture Operators): Work with images and video. They handle everything from simple image adjustments and compositing to advanced real-time effects, rendering, and generative textures.

  • SOPs (Surface Operators): Create and manipulate 3D geometry. SOPs are used to model shapes, apply transformations, and build procedural 3D structures.

  • CHOPs (Channel Operators): Work with numbers and motion over time. They’re often used for animation, audio analysis, controlling parameters, or linking external devices like sensors and MIDI controllers.

  • DATs (Data Operators): Handle structured data, such as tables, scripts, or text. They’re powerful for logic, data processing, and Python scripting inside TouchDesigner.

  • COMPs (Component Operators): Act like containers that group nodes together into reusable systems. COMPs can hold interfaces, control logic, or entire custom tools, making them essential for organizing and scaling projects.

What is VCV Rack?

As covered in depth in Kayleb Sass’s class “UNDERSTANDING SYNTHESIS—FROM WAVE TO FORM”, VCV Rack is an open-source virtual modular synthesizer that emulates the look and behavior of physical modular synths. It lets you build custom sound systems by connecting virtual modules with patch cables — oscillators, filters, sequencers, and effects — to explore sound design, synthesis, and signal flow in a hands-on, visual way.

MediaPipe Hand Tracking

MediaPipe is an open-source ML library for gesture tracking. Inside TouchDesigner, we’ll use a hand-tracking controller built with MediaPipe to detect hand positions and movements in real time. These values are packaged into channels (CHOPs), which we then route out to VCV Rack.
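For readers curious what the tracking looks like outside TouchDesigner, here is a hedged standalone sketch using the mediapipe and opencv-python packages; it reduces the detected hand landmarks to a single pinch value, similar in spirit to the control channels used in class.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Grab webcam frames, detect hand landmarks, and reduce them to one control
# value: the thumb-to-index "pinch" distance in normalised image coordinates.
cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            thumb, index = lm[4], lm[8]   # thumb tip and index-finger tip
            pinch = ((thumb.x - index.x) ** 2 + (thumb.y - index.y) ** 2) ** 0.5
            print(round(pinch, 3))        # ~0 = closed pinch, larger = open hand
cap.release()
```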


2. The Workflow

 

Step 1: Hand Tracking → Control Data

  • A webcam feeds into TouchDesigner’s MediaPipe component.

  • The system tracks hand landmarks (distances, pinches, rotations).

  • Output is cleaned and normalised into CHOP channels (null operators).

  • These values are sent to VCV Rack via OSC Out or MIDI Out (a minimal sketch of the OSC route follows below).
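A minimal sketch of that last OSC step in Python, using the python-osc package; the address names and port are placeholders and would need to match whatever is listening in VCV Rack.

```python
from pythonosc.udp_client import SimpleUDPClient

# Placeholder host/port: whatever the OSC-receiving module in VCV Rack
# (or any other OSC listener) is configured to use.
client = SimpleUDPClient("127.0.0.1", 7000)

def send_gesture(pinch, hand_height):
    """Send two normalised (0..1) hand-tracking values as OSC messages."""
    client.send_message("/hand/pinch", float(pinch))
    client.send_message("/hand/height", float(hand_height))

send_gesture(0.25, 0.8)
```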

Step 2: Gestures → Sound (VCV Rack)

  • In VCV Rack, the gesture-data channels are received with an OSC-to-CV or MIDI-CC module.

  • They are patched into oscillators, filters, delays, or other sound processors.

  • The sound is both generative and gestural: you literally “play” the instrument by moving your hands.

Step 3: Sound → Back Into TouchDesigner

  • The audio output from VCV Rack is routed into TD via Dipper (virtual audio device).

  • An Audio Device In CHOP receives the sound inside TouchDesigner.

  • An Analyze CHOP extracts amplitude, spectrum, or RMS data (see the sketch below).
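Conceptually, the Analyze CHOP is doing something like the following sketch (plain Python with NumPy, not TouchDesigner code): reducing each block of audio samples to a couple of control values.

```python
import numpy as np

def analyze_block(samples):
    """Reduce one block of incoming audio samples to simple control values,
    roughly what the Analyze CHOP provides: peak amplitude and RMS level."""
    samples = np.asarray(samples, dtype=float)
    peak = float(np.max(np.abs(samples)))
    rms = float(np.sqrt(np.mean(samples ** 2)))
    return peak, rms

# Example: a quiet 440 Hz test tone
t = np.arange(0, 0.05, 1 / 44100)
block = 0.2 * np.sin(2 * np.pi * 440.0 * t)
print(analyze_block(block))   # peak ~0.2, rms ~0.14
```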

Step 4: Sound → Visuals (Point Cloud)

  • A scanned 3D fruit model (from Polycam) is imported as a PLY/OBJ into TouchDesigner.

  • The audio analysis drives parameters such as:

    • Point size (loudness = bigger).

    • Colour hue (frequency shifts = colour shifts).

    • Rotation or turbulence (beat = movement).

  • The hand gestures themselves can also directly modulate visual parameters (e.g. pinch → colour, hand height → rotation speed); a toy mapping is sketched below.
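As a toy illustration of those mappings (plain Python, with invented parameter names), the sketch below turns audio features into point-cloud settings:

```python
def map_audio_to_visuals(rms, centroid_norm, beat):
    """Toy mapping from audio features to point-cloud parameters.
    rms: loudness 0..1, centroid_norm: spectral brightness 0..1,
    beat: True on a detected beat. All names are illustrative."""
    point_size = 1.0 + 6.0 * rms          # louder = bigger points
    hue = centroid_norm                    # brighter sound = shifted colour
    turbulence = 0.5 if beat else 0.05     # beats kick the cloud into motion
    return {"point_size": point_size, "hue": hue, "turbulence": turbulence}

print(map_audio_to_visuals(rms=0.4, centroid_norm=0.7, beat=True))
```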


3. Use Cases & Performance

 

Visual Design & Motion Graphics

  • Build real-time visuals using 3D geometry, particles, shaders, and video.

  • Create generative art that reacts to sound, movement, or data inputs.

  • Ideal for music visuals, installations, and interactive projections.

Audio-Reactive Systems

  • Connect audio inputs or MIDI controllers to drive visuals.

  • Link to tools like VCV Rack, Ableton Live, or Pure Data for cross-platform audio interaction.

  • Map sound frequencies or amplitudes to parameters like color, scale, or motion for responsive experiences.

Interactivity & Sensors

  • Integrate external data sources such as webcams, depth cameras, and sensors (Kinect, Leap Motion, MediaPipe, OSC devices, etc.).

  • Build gesture-controlled interfaces, body tracking installations, or interactive exhibits.

  • Ideal for experimental performance, responsive architecture, or public art.

Data Visualization & Systems Design

  • Use TouchDesigner for real-time data visualizations — from environmental sensors to live social data.

  • Import data streams (CSV, JSON, API feeds) and visualize them as interactive, moving graphics.

  • Often used in live events, dashboards, or immersive storytelling.

Projection Mapping & Installation Art

  • Precisely map visuals onto 3D surfaces, sculptures, or architecture.

  • Sync lighting, sound, and motion for immersive environments.

  • Common in large-scale events, exhibitions, and theatre productions.

Live Performance Mode

  • Performance Mode hides the TouchDesigner interface, allowing your final network to run cleanly and smoothly in full-screen.

  • Used in live visuals, concerts, VJing, and exhibitions where real-time control is needed.

  • You can assign MIDI/OSC controllers, gestures, or key inputs to parameters and perform your visuals like an instrument.

  • It’s the “presentation” state of your project — minimal interface, maximum responsiveness.

Integration & External Tools

  • Seamlessly connects to software like Unreal Engine, Blender, Ableton, Unity, and VCV Rack.

  • Supports Python scripting, Open Sound Control (OSC), MIDI, NDI, and web-based APIs for extended control and automation.

Experimental & Research Applications

  • Used for prototyping new interaction models, AI-driven installations, and experimental performances.

  • Because of its modular, visual workflow, it’s ideal for quickly testing and visualizing creative or technical concepts.

In this case, our audio-reactive system integrates a system created in 2024 by Taipei, Taiwan-based audio-visual artist pepepepebrick, with the added functionality of manipulating point clouds using the audio output from VCV Rack.

This isn’t about building every tool from scratch — it’s about assembling open-source modules into a creative system. By crediting the developers and patching resources together, we create a unique workflow while showing how to remix existing tools.


4. Applications & Takeaways

 
  • Interactive performance: Performers can use gestures to play sound & visuals simultaneously.

  • Installation art: Visitors’ movements could generate a unique soundscape and visuals in real time.

  • Workshops/education: A hands-on way to teach modular sound, real-time graphics, and interaction design.

  • So much more!

Big Idea:

With TouchDesigner as the central hub, almost any kind of data (gestures, sound, sensors, networks) can be turned into creative input. By learning how to bridge these systems, you can invent your own instruments and interactive experiences.

Collect and download our project files for free and have fun with this gestural synth setup.

Project Demo:

Download Class project files

5. Additional Resources


CLASS 006 W/MBAKO MOEMISE: LOST IN TRANSLATION — AN INTERACTIVE STUDY OF LANGUAGE AND SONIC EXPERIMENT

MBAKO MOEMISE is a Johannesburg-based curator whose work examines the archival properties of sound and its relationship to memory, geography, and identity. Their research engages African methodologies of archiving through listening, questioning how sound might preserve what written systems overlook.

In Lost in Translation — An Interactive Study of Language and Sonic Experiment, participants engaged in a live, collaborative exercise where a single phrase was passed and translated from person to person, shifting in tone, meaning, and rhythm. The evolving fragments were recorded and assembled into a sound collage — a living archive of voices and mistranslations. Blending psycho-geography, Afro-futurism, and sound art, the class explored language as vibration, archive, and social fabric, asking how sound itself can hold memory, miscommunication, and care.

Mbako Moemise is a curator, based in Johannesburg, with an immediate concern with the archival properties of sonic art and psycho-geography. Their visual cultural and curatorial research explores African methodologies of archiving through sound and spatial inquiries. They engage with the concepts of memory, place, sensory experience and temporal positionalities, to question archival systems and amplify alternative African narratives.

IN A BROKEN TELEPHONE FORMAT, A SINGLE PHRASE WAS PASSED FROM PARTICIPANT TO PARTICIPANT, EACH TRANSLATING IT INTO THEIR LANGUAGE OF CHOICE.

AS MEANING SHIFTED AND MUTATED, THE PROCESS WAS RECORDED AND TRANSFORMED INTO A FINAL SOUND COLLAGE - A LIVING ARCHIVE OF VOICES, INTERPRETATIONS, AND ACCIDENTS ALONG THE WAY.

This isn’t much of a class but rather a collaborative manifesto, an evangelistic inquiry that explores sound art, Afro-futurist approaches to archiving in African contexts, and site-specificity. The experimental class uses interactivity to engage with the subject matter presented.


1. SOUND ART AND THE EPHEMERAL EXPERIENCE

Sound art dates back to the early inventions of futurist Luigi Russolo who, between 1913 and 1930, built noise machines that replicated the clatter of the industrial age and the boom of warfare. Dada and surrealist artists also experimented with art that uses sound.

Environmental historian Peter A. Coates pointed out that what we think of as noise is as much a matter of ideology as it is of decibels. Sound Art explores sound and music outside of its aesthetic dispositions. This furthers Russolo’s thinking, expanding on notions of sensory experiences within art that divulge a plethora of subject matter beyond the canvas.

Sound art, by its very nature, is often ephemeral, existing only in the moment it's created or experienced. This transient quality is a core aspect of the art form, differentiating it from more tangible mediums. The focus on the temporal experience, rather than a lasting physical object, is a key characteristic of sound art.

“In their discussion, Bijsterveld and Pinch address how curators can use sound to facilitate interpretation, evoke memories, and construct immersive experiences that transcend the visual plane. Similarly, Jonathan Sterne (2012) proposed that sound in museums conveys information and shapes the affective atmosphere, influencing how visitors connect with the artefacts and the narratives they represent.

From this perspective, Suzanne MacLeod (2013) also proposed that designed soundscapes augment the narrative coherence of exhibitions, thus integrating the sound into the narrative as a textual cohesive device linking, highlighting, correlating, or separating through perception. From this perspective, sound is a narrative instrument and a spatial component that influences the visitor’s exhibition experience. Sound also serves as a guiding tool, drawing visitors’ attention and influencing how they interpret the material on display.”

Numerous artists lean on sound art as an avenue to explore.

The work of artists such as Lawrence Abu Hamdan looks into the political effects of listening, using various kinds of audio to explore its effects on human rights and law.

Lawrence Abu Hamdan is an artist and audio investigator, whose work explores ‘the politics of listening’ and the role of sound and voice within the law and human rights. He creates audiovisual installations, lecture performances, audio archives, photography and text, translating in-depth research and investigative work into affective, spatial experiences. Abu Hamdan works with human rights organisations, such as Amnesty International and Defense for Children International, and with international prosecutors to help obtain aural testimonies for legal and historical investigations. He received his PhD in 2017 from Goldsmiths London and is a practitioner affiliated with Forensic Architecture.

WALLED UNWALLED (2018)

In the year 2000 there was a total of fifteen fortified border walls and fences between sovereign nations. Today, physical barriers at sixty-three borders divide nations across four continents. As these walls were being constructed, millions and millions of invisible cosmic particles called muons descended into the earth's atmosphere and penetrated metres deep, through layers of concrete, soil and rock. Scientists realised that these deep-penetrating particles could be harvested, and a technology could be developed to use their peculiar physical capacities to pass through surfaces previously impervious to X-rays. Muons allowed us to see for the first time the contraband hidden in lead-lined shipping containers and secret chambers buried inside the stone walls of the pyramids. Now no wall on earth is impermeable. Today, we're all wall, and no wall at all.

Walled Unwalled is a single-channel, 20-minute performance-video installation. The performance comprises an interlinking series of narratives derived from legal cases that revolved around evidence that was heard or experienced through walls. It consists of a series of performance reenactments and a monologue staged inside a trio of sound effects studios in the Funkhaus, East Berlin.


2. AU/ORALITY, ARCHIVES, AFRO-FUTURISM

 

DEFINITIONS:

AURALITY: Aurality in sound art refers to the emphasis on listening and auditory perception as a central element in artistic creation and experience. It explores the ways sound shapes our understanding of the world and ourselves, often challenging traditional visual-centric approaches to art. Sound art, as a field, prioritizes the aural experience, using sound as its primary medium and material.

ORATURE: Orature refers to the body of literature and cultural expression that is primarily transmitted through spoken or performed means, rather than being written down.

ARCHIVES: A collection of historical documents or records providing information about a place, institution, or group of people.

AFRO-FUTURISM: Afrofuturism is a cultural movement, philosophical perspective, and artistic genre that explores the intersection of African diaspora culture with science and technology, often within the context of science fiction, fantasy, and speculative fiction.

Aurality, simply, means the quality of hearing. Herein we focus on recent theorizations of aurality that home in on the immediate and mediated practices of listening that construct perceptions of nature, bodies, voices, and technologies.

Sound is political.

Andrei van Wyk laments in his conversation with Lindi Mngxitama that, “Within my research, ‘noise’, ‘rhythm’ and ‘sound’ are all incredibly politicised and within the study of H/history, they signify the racialised and gendered divisions of labour within the social conditions that we currently live in. The research focuses greatly on how these social relations occur and operate within the greater societal ‘soundscape’, with a particular focus on South Africa.”

Kim Karabo Makin also makes mention of this when speaking about her work “On Gaborone, 1985,” a sound installation that imagines Gaborone in 1985 through a mix and sampling of 'the living archive'. 1985 is described as the year that the Botswana capital “lost its innocence”, due to a number of violent raids by the South African Defence Force. Significantly, the raid on 14 June 1985 led to the demise of Medu Art Ensemble overnight. Makin argues that this may be considered a critical turning point in Botswana’s (art) history, resulting in the stunting of Botswana’s creative development.

https://listeningmap.de/tlm-frontend/#927

“On Gaborone, 1985” (2021)

0:00 - 01:05


3. AFRO-FUTURISM & ARCHIVES

 

In “Decolonising the Mind”, Ngugi wa Thiong'o writes fervently of the crisis of African languages in literature. Ngugi reflects on his own experience of language, culture, and surveillance under colonial rule and schooling. He remarks, “The bullet was the means of physical subjugation. Language was the means of spiritual subjugation.” (p. 9)

Orature in Motswana culture is still heavily prevalent. Oral traditions of storytelling, and the medium/language that archives use to traverse generations, offer a dynamic means of archiving.

This is not an approach to decolonise, but rather an assertion of an Afro-futurist methodology of listening, archiving and embodying within an African context.

Could we create archives going forward that are affective and caring in the conservation of culture and Language?

Thero Makepe, a Motswana-born multimedia artist, when speaking about his photobook “We didn’t choose to be born here,” discusses a gap in “knowing.” Makepe, currently focused on photography, employs the moving image as an aesthetic vehicle through which he examines familial, social and geopolitical histories. Within this framework, Makepe engages historic events to explore the liminal spaces between collective and personal memory, foregrounded by his own lyrical and spiritual sensibilities.

Foam Talent 2024-2025 Installation, Foam Museum, Amsterdam (image courtesy of the artist)

Some words are lost to history. How can we preserve an audible history?


4. SOUND ART AS MEDIATION

 

This experiment cannot be approached without first considering The Social Life of Things, by Arjun Appadurai.

Just as commodities circulate, accumulate value, and undergo transformations depending on their cultural and social context, so too does language. It can be commodified, sold, and used as a gateway to power, prestige, or access, particularly in postcolonial settings where languages like English or French became tied to upward mobility.

Secondly, I’d like to discuss the mammoth of a curator Okwui Enwezor’s Johannesburg Biennale in 1997, heralding the theme “Trade Routes: History and Geography.” Enwezor says, when reflecting on this biennale in Trade Routes Revisited, published by Stevenson Gallery, “I wanted to make an exhibition that took globalisation as a point of departure, to argue that globalisation started here, in South Africa.”

This point does not only ring as a historical and geographical inquiry, but also as an observation of South Africa herself in 1997.


5. INTERACTIVITY: BROKEN TELEPHONE TRANSLATION EXPERIMENT

 

A phrase was prompted by the curator. The phrase was then whispered from one person to another. As the phrase traveled, each person had the option to translate the phrase into a language of their choosing. Once the phrase reached the last person, they then recited the phrase whispered to them aloud.

In this case the phrase was translated and distorted so far astray from its origin that it demonstrated the core of the theory: that translation in the archive is a DESTRUCTIVE force, morphing history as the integrity of its source is DESTROYED.
