Landscape NOON experiments
Since I first saw it announced, I’ve been super excited about the Landscape NOON. It seemed to do, in one box, what I had been wanting to achieve with my experiments in modular synthesis for a while: triggering multiple independent (or not) voices while having a lot of control over their shape and contour. Being a huge fan of noisy and chaotic synths, I also really like the passive synthesis approach here, bringing out that weird/starved power-cycling sound you get when powering a circuit on/off, something I’ve done with my 9v battery-powered ciat-lonbarde synths many a time.
SP-Tools – Machine Learning tools for drums and percussion (alpha)
For the last few years I’ve been working on ideas and approaches to using electronics in a realtime/low-latency context with acoustic drums/percussion. The most recent of these has involved working with the FluCoMa Toolkit to do some of the things I was doing before (but better), as well as to try out some new things altogether.
During that time I homed in on and refined a lot of the settings, descriptors/parameters, and algorithm choices to get something that I felt performed really well, even as compared to commercial alternatives.
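To give a flavour of the kind of realtime analysis this involves (SP-Tools itself is a Max package built on the FluCoMa objects, so none of this is its actual code), here is a minimal spectral-flux onset detector sketched in Python. The frame size, hop, and threshold here are arbitrary placeholder values, not the tuned settings mentioned above.

```python
import numpy as np

def spectral_flux_onsets(signal, sr=44100, frame=512, hop=256, threshold=50.0):
    """Detect onsets as frames where the spectral flux (the summed
    positive magnitude increase between consecutive STFT frames)
    crosses a fixed threshold. A simplified illustration only."""
    window = np.hanning(frame)
    mags = [np.abs(np.fft.rfft(signal[i:i + frame] * window))
            for i in range(0, len(signal) - frame, hop)]
    onsets = []
    for i in range(1, len(mags)):
        flux = np.sum(np.maximum(mags[i] - mags[i - 1], 0.0))
        if flux > threshold:
            onsets.append(i * hop / sr)  # onset time in seconds
    return onsets
```

A real drum-oriented detector would add adaptive thresholding and debouncing so a single hit does not register as several onsets in adjacent frames.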
I decided that once FluCoMa put out the v1 of their toolkit, I would try and wrap up a bunch of the ideas into a cohesive package that focused on the approaches that I’ve been working on. After pushing hard on it for the last few months, I feel I have something that I can put out there. It’s still in what I would consider an alpha stage, though everything is quite stable. I’m only really considering it alpha as I will likely add more objects/abstractions/approaches, refine the ones that are there, as well as get a sense of how people are using and want to use it and potentially tweak some of the structure.
So for now you can download the package here:
http://github.com/rconstanzo/sp-tools
I’ve also made a quick overview video that talks you through the basic idea of the package, shows off some more examples, and will hopefully get you going with it.
If you have any comments or questions, or run into any bugs/problems, feel free to drop me an email and/or create an issue on GitHub.
Particle Castle Bubble Party!
A while back I announced the start of Amplifiers & Explosions, a project, collective, community, and sometimes place which I started with Angela Guyton. Building on the play talk play series that I originally mentioned, we now have another video series: Particle Castle Bubble Party!
Particle Castle Bubble Party! is basically a Gib Gab but in video form. It turns out I’ve been doing these Gib Gab sessions for over 5 years now(!!), and when speaking to a friend about how to have a broader community engagement with that, the seedling idea for Particle Castle Bubble Party! (PCBP!) was born.
Unlike play talk play, these will not be a regular (monthly) series; instead, they will come out as often as they happen. I will still do Gib Gabs, but if the person is interested, we can morph one into a PCBP! and then it will be posted.
The first Particle Castle Bubble Party! took place shortly after Zack Scholl contacted me for a Gib Gab session. Zack was open to the idea of exploring the, as of then, not-yet-solidified format, and came with a ton of questions and ideas that we unpack and talk through here.
Kaizo Snare
I spent the better part of last year working on a performance that was a strange combination of things. It pulled together turntablism, feedback/friction, machine learning, signal decomposition, 3d printing, robotics…and a snare drum. All of these were ideas I was working on and exploring separately, but as things can sometimes do, they ended up forming into something that was much more than the sum of its parts.
Amplifiers & Explosions / play talk play
As mentioned in my last blog post, the last few years have been strange ones, with lots of limbo and lots of waiting.
Well one of the more exciting things that I was waiting for was Amplifiers & Explosions, a project, collective, community, and sometimes place which I started with Angela Guyton.
The project is still young and will grow with us over the years, but I am excited to finally be able to get it up and running.
One of the first aspects of Amplifiers & Explosions that I want to share is a series of videos Angie and I have been filming for the last few years. These are called play talk play and they feature different improvisers performing and talking about their improvisation. The idea is to put one of these out each month, and the first one is with the fantastic vocalist and improviser Audrey Chen.
You and Me and Us and Me and You
I like to picture that there are an infinite number of realities, all running in parallel. The entirety of my existence, everything I have ever known, is but one of these. I represent a single point on this infinite line.
There are moments in your life where you feel that infinity collapse. The things, moments, and people that exist across the multitudes of possibilities. These create a mirroring, an echo, a shimmer across these realities. You are no longer you, you are the infinite you. You stand across all time. You see the infinite.
Sometimes you live this without even knowing it.
This is a blog post about infinity. Specifically infinity minus one.
Rhythm Wish
Although this idea/piece is a couple of years old now, I realized I’ve not written about it in any detail (although I have talked about it on several occasions). So this is that. But more than that, I want to talk about what this idea isn’t, and how gloriously isn’t it is.
This is the story of how a complicated idea became simpler and simpler until nothing else was left but its core.
Sometimes I Talk To People
I had the pleasure of being interviewed by a close friend, Dan Derks, earlier this year for his podcast about the lines forum called Sound + Process. We cover a lot of ground ranging from software design, to the importance of ‘now’. Worth having a listen:
(Dan also interviewed my crazy partner Angie last year)
Here is another interview with the Art + Music + Technology podcast from 2015:
And a more recent one (2021) with some ex-students:
The aesthetics of accidentally listening wrong, on purpose
Let me tell you a story about the last eight years of my life, when I developed a special relationship with the last twelve seconds of Shania Twain’s “You’re Still The One”:
Friends. Objects. Lights. Feet.
Over the years I have had many wonderful discussions with people whom I have never met. Either through emails, forums, or even chatrooms I have talked about all manner of things with people scattered around the world. Sometimes these virtual friendships manifest in the physical world and a thing is born. An art thing. This is one of those times.
Black Box project
The Black Box project involves Pierre Alexandre Tremblay on bass/electronics, Patrick Saint-Denis on robotics/electronics, Sylvain Pohu on guitar/electronics, and myself on drums/electronics. It’s a four-way collaboration that has gone through two residencies (in Montreal and Huddersfield) to work out the finer details of putting a large-scale show together.
It looks and sounds something like this:
Have Learned, Learning, Will Learn
I was very young when I started learning music. My household was a musical one, with my mother playing piano and my grandmother and great-aunt being piano teachers. Growing up, I had three-hour piano lessons every day, which I rarely enjoyed. I did not look forward to the lessons because music was a chore, something I had to do. It wasn’t until I picked up the guitar as a teenager and started developing a personal relationship with music that it became something I wanted to do.
I then embarked on multiple (at the time unrelated) strands of learning music. I carried on with classical piano, solfege, and 4-part writing through university, while playing guitar, bass, and drums in all kinds of bands. Previously, I had learned to work with wood and metal in shop class, and later learned to solder and make my own guitar pedals. I didn’t know it at the time, but these unrelated strands of my life would eventually come crashing together.
I have recently completed a PhD in music composition at the University of Huddersfield. The experience was life-changing in many ways and I am thankful to all the people that were there for me along the way. Through the PhD (and thesis) I developed and refined my thoughts on composition, improvisation, memory, interaction, mapping, and openness/sharing. You can read about all of this in my thesis, which exists as a dynamic web-thesis (I am very proud of the thesis, and consider it to be an art object in and of itself).
Just Making Things Up
I was sitting in a cafe with a few friends and we ended up talking about how “just making it up” was often used as a qualifier when talking about shitty music. Like seeing someone perform, it sucking, and thinking “it sounds like they’re just making it up”. I couldn’t completely disagree with this, as I have heard my fair share of shitty improv. But after seeing a close friend (Richard Craig) give a talk about performing with flute/feedback, and how adaptive/reactive he has to be, I couldn’t help but think that this was also a way to describe the sublime in performance. That shimmer/glimmer of transcendence. “Making things up” means it fucking sucks, or touching god. The stuff in the middle is composition.
Improvising is a big part of what I do, as a performer (whatever that means), composer (whatever that means), and just about everything else. And as such, I’ve thought a lot about improvisation, specifically things that I don’t like about it, in my performance as well as in others’. Many of these [shitty improv] tropes have inspired me to find ways to overcome them. Sometimes just being aware of the trope is enough to avoid it, but other times it’s taken a more deliberate reprogramming. What follows are a bunch of the tropes/ideas/problems and, where applicable, what I’ve done in order to overcome them.
Lights
Here are a couple of performance videos using an approach I’ve just recently started calling Light Vomit.
What you see in the video is a combination of automated processing (via The Party Van/Cut Glove) with a variety of DMX light interactions and behaviors. Everything is controlled from a Max patch I made specifically for the gig (and specifically to test these behaviors), most of which use audio analysis to dynamically record/play/process incoming audio and trigger a variety of light behaviors. (Click here to view my moment to moment analysis of the performance using my Making Decisions in Time improv analysis framework.)
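As a toy illustration of one such mapping (this is not the actual Max patch, and the dB floor here is an arbitrary assumption), the simplest possible light behavior is just scaling a DMX dimmer channel from the loudness of the incoming audio:

```python
import math

def amplitude_to_dmx(block, floor_db=-60.0):
    """Map the RMS level of one audio block (floats in -1..1) to a DMX
    dimmer value in 0-255. floor_db is the level treated as fully dark;
    0 dBFS maps to full brightness."""
    rms = math.sqrt(sum(x * x for x in block) / len(block))
    db = 20 * math.log10(max(rms, 1e-10))       # avoid log(0) on silence
    norm = (db - floor_db) / -floor_db          # 0 at the floor, 1 at 0 dBFS
    return round(255 * min(max(norm, 0.0), 1.0))
```

The more interesting behaviors in the performance are event-driven rather than continuous, but even those bottom out in per-channel values like this being sent down the DMX line.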
dfscore 2.0
dfscore 2.0 is here! It’s a much improved and completely rewritten version of dfscore, which I started working on a couple of years ago. The dfscore system is a realtime dynamic score system built to display a variety of musical material over a local computer network. The primary motivation for building the system was to allow for a middle ground between composition and improvisation.
But before I get into all of that, here is a video showing the latest version along with its premiere at the Manchester Jazz Festival:
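The networking core of a system like this can be surprisingly small. As a rough sketch only (this is not dfscore’s actual message format or transport, and the port number and event fields are made up for illustration), score events could be pushed to every player machine on the local network as JSON over UDP broadcast:

```python
import json
import socket

def make_broadcast_socket():
    """UDP socket with broadcast enabled, so one conductor machine can
    reach every player machine on the local network at once."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return sock

def send_score_event(sock, event, addr=("255.255.255.255", 9000)):
    """Send one score event as a JSON datagram (fields are illustrative)."""
    sock.sendto(json.dumps(event).encode("utf-8"), addr)
```

On each player machine, a small listener bound to the same port would decode incoming datagrams and update the displayed material in time.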