Frontloading: Vector over Raster

This is a fun one!  What we are talking about here is a way to front-load a live/studio rig such that, ideally, you only bounce/record the audio at the later stages of completing a track.

It comes down to storing control signals (which you can change easily!) instead of recording the resulting audio (which you *cannot* change easily).  It's a shape drawn with vector data (equation driven) vs literally putting the shape into pixels (raster data).  Illustrator vs Photoshop, in a way.
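The analogy can be made concrete in a few lines. This is a minimal sketch (the `recipe` dict and `render` function are illustrative stand-ins, not any real tool's API): the 'vector' form is a parametric recipe, and rendering it into samples is the 'rasterising' step.

```python
import math

# "Vector": store the recipe - a parametric description of a tone.
# Changing the recipe (e.g. its frequency) is a one-line edit.
recipe = {"freq": 110.0, "amp": 0.8, "dur": 1.0, "rate": 48000}

def render(recipe):
    """'Rasterise' the recipe into audio samples."""
    n = int(recipe["dur"] * recipe["rate"])
    return [recipe["amp"] * math.sin(2 * math.pi * recipe["freq"] * i / recipe["rate"])
            for i in range(n)]

low = render(recipe)   # bake once
recipe["freq"] *= 2    # trivially 'transpose' the recipe
high = render(recipe)  # re-bake - no repitching artefacts
```

Retuning the baked `low` samples would mean resampling or pitch-shifting, with all the artefacts that implies; editing the recipe and re-rendering is lossless.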

Caveats first:  Of course this will not work with certain things.  It assumes you have access to control signals in the first place.  An acoustic instrument doesn't have recordable control signals - the controller is the human, their muscles - so the result is always audio (unless you are running something like Pat Metheny's Orchestrion of control-driven acoustic machines; also check out Simian's Murmurations, as they dive into this as well).

That being said, ask yourself how many times you have committed to audio only to realise you are in a world of hurt behind a computer, applying all sorts of EQ, compression, cutting, pasting, etc. to the audio files themselves.  And right next to the computer sit the very instruments that created that audio.  Why not get it right from the start?  Or at least encourage and grow that type of workflow.

One of the problems is that magical moments are hard to capture.  And we know all about this.  Those who use modulars *really* know about this.  When we came to terms with the idea that recording audio was the only way to save a magical moment, we felt something was just off with the process.  And so we began a several-year process to build a rig that doesn't have to be this way.  You can read about that system throughout our Content pages.

It turns out that if you control every parameter and simply record that data, you can recreate the audio the same way every time.  This assumes ideal control voltages and enough resolution and accuracy, but bear with me here.  The point is that if you, for example, were to capture every parameter of your synth and later send the same signals back to it, the sound would be the same as if you had recorded the audio the first time you came up with something you liked.
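As a sketch of what 'record the data, not the audio' means, here is a toy event log and replay loop. The `Synth` class, `record`, and `replay` are our own illustrative stand-ins, not any real instrument's API:

```python
# Capture every parameter change as timestamped events, then replay them.
class Synth:
    """Stand-in for a CV-controllable voice (assumption, not a real API)."""
    def __init__(self):
        self.params = {}
    def set(self, name, value):
        self.params[name] = value

def record(moves):
    """Store control moves as (time, param, value) - tiny compared to audio."""
    return sorted(moves)

def replay(events, synth):
    """Drive the synth with the same data; identical controls give an
    identical sound (assuming enough CV resolution and a deterministic voice)."""
    for t, name, value in events:
        synth.set(name, value)  # in a real rig this would be a timed CV/MIDI send
    return synth.params

take = record([(0.0, "cutoff", 0.3), (1.5, "cutoff", 0.9), (2.0, "pitch", 440)])
# Editing the performance is now a data edit, not an audio repair:
edited = [(t, p, v if p != "pitch" else v * 2) for t, p, v in take]  # up an octave
```

The transposition in the last line is the kind of change that would degrade a recorded audio file but costs nothing when it is only control data.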

The big difference, however, is that you can change control signals.  Why would you need to change anything in the first place, you may ask?  Just ask why you would ever need to post-process, EQ, alter the dynamics of, repitch, or chop/copy/paste any audio.  Have you ever done any of those things to audio files?  And what about a magical moment on a modular where the phrase or rhythm isn't quite right... or you want to add a simple variation to a very loopy sequence created on a very basic hardware sequencer (they are all quite basic when you compare them to what can ultimately be done in a DAW or directly via human performance)?

One big reason we change things is that we do not know what will come in the future.  We may add other voices that do not play well with the original material.  We may need to transpose a song once we realise the newly added bass track is not sitting right or is too heavy - yet if we move it up, everything is out of tune, and repitching audio ruins its fidelity.  We may need to add a variation we hadn't thought of in the original sequence in order to transition to a new song section.  The possibilities are endless.  And they are completely to be expected.  Knowing this, we simply ask: do you want to fix things after they are baked as audio (which is painful for us), or would you rather reset the controls and make the changes you want directly?  We would always prefer the latter.

So think about this!  It's a workflow change as well as a method to record/store/edit the control signals rather than the audio.  In many cases, it is more than possible to 'fix it at the source.'  We may simply have gotten into the habit of mixing 'later.'

A final example might also help with the whole shift from post to source...

We design modules.  Let's use our Portal Kick as an example.  Kicks are notoriously difficult to get just right, and much of that process is done in the box, or at least after some initial kick audio is recorded.  We said no.  We built the mixing of our kick into the module itself... it was the ideal exercise in frontloading!

We do 'compression' by adjusting and creating custom envelopes, to the point that the kick doesn't need compression in a song to alter its cuttiness.  We were EQing the kick in songs, so we built that exact EQ profile into the circuit.  We found we were HPFing the kick so it wasn't so heavy all the time (which is a common trap, BTW... making kicks too heavy), so we built a HPF right into the module.  We were layering 'clicks' and 'ticks' onto a very low frequency sinusoidal core, so we completely redesigned the core so that they were built right into the VCO itself.  Not even a layer; these clicks and ticks sound like they are part of the kick.  We were adding some impurity to the sinusoids via clipping plugins (and were left only moderately happy), so instead we built our own form of distortion into the module.

Everything you could do to a kick track, we built right into the source - and this is as early as you could put it.  You can plug it right into your drum bus and tune it in a final way without additional mixing elements (though bus compression is something we cannot account for on individual voices :D).  Add CV control as well, so you can get every one of those settings back.  You can readjust the click after you've added your hats and need the click to sit in a different spot.  You can retune the kick, and you can add some HPF to give it more room for the bass track.

Now imagine doing all of those tasks with nothing but some non-ideal audio.  It's wrong.  It doesn't make sense.  Again we are slaves to a mandate handed to us by tradition, or by a lack of context when designers created the technology we use.  Fortunately, we are at a point where the technology is so powerful that all we need to do is constrain it and design a system around it.
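To make the 'everything built into the source' idea concrete, here is a toy software kick where envelope, pitch sweep, click, distortion, and high-pass filtering are all parameters of the voice itself. The curves, coefficients, and parameter names are our assumptions for illustration - not the Portal Kick's actual circuit:

```python
import math

RATE = 48000  # sample rate in Hz

def kick(dur=0.4, f0=120.0, f1=45.0, click=0.5, drive=2.0, hpf_hz=30.0):
    """Toy kick voice: every 'mix decision' is a parameter at the source."""
    n = int(dur * RATE)
    out, phase, hp_in, hp_out = [], 0.0, 0.0, 0.0
    a = 1.0 / (1.0 + 2 * math.pi * hpf_hz / RATE)  # one-pole high-pass coefficient
    for i in range(n):
        t = i / RATE
        env = math.exp(-8.0 * t)                     # amplitude envelope ('compression' at source)
        freq = f1 + (f0 - f1) * math.exp(-30.0 * t)  # pitch sweep down to the low fundamental
        phase += 2 * math.pi * freq / RATE
        s = math.sin(phase) * env
        s += click * math.exp(-400.0 * t) * math.sin(2 * math.pi * 3000.0 * t)  # built-in 'click'
        s = math.tanh(drive * s) / math.tanh(drive)  # gentle distortion instead of a clipper plugin
        hp = a * (hp_out + s - hp_in)                # HPF so the kick isn't too heavy
        hp_in, hp_out = s, hp
        out.append(hp)
    return out
```

Because everything is a parameter, 're-mixing' the kick after you add hats is just another call, e.g. `kick(click=0.8, hpf_hz=40.0)` - no audio surgery required.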

Try frontloading a bit!  Think about it and see if it would help your workflow and your music!  And be sure to check out our upcoming discussions on the system we have configured to do such a thing.  Exciting!
