Sep 4, 2023

Landscape NOON experiments

Since I first saw it announced, I’ve been super excited about the Landscape NOON. It seemed to do, in one box, what I had been wanting to achieve with my experiments in modular synthesis for a while: being able to trigger multiple independent (or not) voices while having a lot of control over their shape and contour. Being a huge fan of noisy and chaotic synths, I also really like the passive synthesis approach here, bringing out that weird/starved power-cycling sound you get when powering a circuit on/off, something I’ve done with my 9v battery-powered ciat-lonbarde synths many a time.

My initial plan was to build something around a Bela so that it would, in effect, be a “black box” where I just plug my mic/sensor in and have it spit out audio, doing all of the machine learning (via SP-Tools) and signal processing on the Bela itself. The main attraction of this was not only being able to treat the NOON more like a static “instrument”, but also a pragmatic desire not to have to gobble up a ton of I/O or set up my Expert Sleepers stuff to be able to communicate with it.

These are all the cables I need to run to/from my modular just to get the audio for these videos going, and that’s while doing nothing on the actual synthesis side of the modular.

So the idea is to have something that gives me the main functionality I want (classification, descriptor analysis) along with some additional knobs/buttons for realtime tweaking and playing. No doubt I will often want to do more than the Bela setup I’ve made can afford, but for those times I can always plug it back into the modular.

While getting to grips with the NOON I did a lot of testing and filmed a few videos across the time I’ve had it. The first one uses machine learning (via FluCoMa + SP-Tools running in Pure Data) to train different classes on the snare (center, “edge”, two crotales, rim tip, and rim shoulder) and then uses that to trigger 6 of the voices on the NOON. I’m also using descriptor analysis from each attack to scale the gate length, gate height, and envelope parameters.
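
That class-to-voice and descriptor-to-parameter mapping all happens inside the Pd patch with SP-Tools, but as a rough sketch of the logic (in Python here, with invented descriptor choices, class names, and ranges rather than the actual patch values), it amounts to something like this:

import random  # not needed here, but handy if you want to fake incoming analysis values

# Hypothetical sketch only: the real analysis/classification is done by SP-Tools in Pd.
def scale(value, in_lo, in_hi, out_lo, out_hi):
    # clamp to the expected input range, then map linearly to the output range
    value = min(max(value, in_lo), in_hi)
    norm = (value - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

# one NOON voice per trained class
CLASS_TO_VOICE = {"center": 0, "edge": 1, "crotale_1": 2,
                  "crotale_2": 3, "rim_tip": 4, "rim_shoulder": 5}

def on_attack(predicted_class, loudness_db, centroid_hz):
    voice = CLASS_TO_VOICE[predicted_class]
    gate_length_ms = scale(loudness_db, -40, 0, 5, 250)    # louder hit = longer gate
    gate_height = scale(loudness_db, -40, 0, 0.2, 1.0)     # louder hit = taller gate
    env_release = scale(centroid_hz, 200, 8000, 0.9, 0.1)  # brighter hit = snappier envelope
    print(f"voice {voice}: gate {gate_length_ms:.0f}ms / "
          f"height {gate_height:.2f} / release {env_release:.2f}")

on_attack("rim_tip", loudness_db=-12, centroid_hz=3500)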

I’ve also set up my Erae Touch to trigger the same voices (color coded), where the XYZ in each of the 8 zones controls gate length, gate height, and envelope release.
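
Again, purely as an illustrative sketch (the zone numbering and parameter ranges here are made up), the per-zone mapping is essentially:

def on_touch(zone, x, y, z):             # zone 0-7, x/y/z assumed normalised 0-1
    voice = zone                         # each color-coded zone maps to one voice
    gate_length_ms = 5 + x * 245         # X position -> gate length
    gate_height = 0.2 + y * 0.8          # Y position -> gate height
    env_release = 1.0 - z * 0.9          # pressure (Z) -> envelope release
    print(f"voice {voice}: gate {gate_length_ms:.0f}ms / "
          f"height {gate_height:.2f} / release {env_release:.2f}")

on_touch(zone=3, x=0.5, y=0.8, z=0.2)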

For the second video I incorporated some additional effects and audio processing from the confetti Max for Live devices.

In terms of the setup, there are three classes that have been trained in SP-Tools (drum head, rim, and crotales), and each of those triggers a couple of the voices on the NOON, with additional parameters being tweaked by the audio descriptor analysis running alongside the classification. I then have 5 different effects mapped on the Novation Dicer controller (the little triangle wedge) and turn them on/off throughout. The rotating platter is a DIY controller based on the SC-1000 DJ scratch controller, loaded with a sample of me playing a bit before recording this video. The audio is being picked up by a DPA 4099 for the acoustic sounds, a Naiant X-X mic for the distorted/scratch sounds, and a Sensory Percussion (v1) sensor for the audio analysis and machine learning.

For the third video I have some audio analysis driving playback of a sampled toy piano that is transposed slightly with each new sample so it is out of tune. The same audio analysis is also driving short blips sent to the Landscape NOON, as well as lots of CV offset across all the individual channels on the NOON.
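
The transposition/blip side of this is pretty simple; a rough sketch of the idea (illustrative numbers and names only, not the actual patch) looks something like:

import random

def on_attack(num_channels=8):
    detune_cents = random.uniform(-50.0, 50.0)      # slightly different tuning each hit
    ratio = 2 ** (detune_cents / 1200.0)            # cents -> playback-speed ratio
    cv_offsets = [round(random.uniform(0.0, 1.0), 2) for _ in range(num_channels)]
    print(f"play toy piano at ratio {ratio:.4f}, blip the NOON, CV offsets: {cv_offsets}")

for _ in range(3):
    on_attack()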

The fourth and fifth videos have a custom MIDI fader controlling most of the CV that is being sent to the NOON. Each time I change direction on the fader, I send its current position to a random voice on the NOON, as well as controlling the global CV inputs on the NOON. Throughout the videos I change the functionality of the fader a few times, switching from having both the synth and drum active to having the fader direction switch between synth and drum audio, which creates fast “cutting” between them.
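
The direction-change logic itself is about as simple as it sounds; here is a rough Python sketch of it (hypothetical names, ranges, and voice count, not the actual patch):

import random

_last_value = None
_last_direction = 0        # +1 rising, -1 falling, 0 unknown

def on_fader(value, num_voices=6):
    global _last_value, _last_direction
    if _last_value is not None and value != _last_value:
        direction = 1 if value > _last_value else -1
        if _last_direction != 0 and direction != _last_direction:
            voice = random.randrange(num_voices)   # pick a random NOON voice
            print(f"direction change: send {value:.2f} to voice {voice} and the global CV inputs")
        _last_direction = direction
    _last_value = value

for v in [0.1, 0.4, 0.7, 0.5, 0.2, 0.6]:           # sweep up, down, then up again
    on_fader(v)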

This is something I will explore further, and I have another blog post coming up where I detail more of the developments of my snare (now gong) drum setup and how it’s grown substantially since my initial approach in Kaizo Snare. But for now, I wanted to make a shorter blog post with the videos I’ve made up to this point.

  
