The purpose of this post is to describe the general concept of “intelligent EQ”, also known as dynamic EQ, and to provide a simple how-to on implementing a DIY intelligent EQ.
Intelligent/Dynamic EQ essentially adjusts the dynamic range of source audio within a particular frequency range. In practice, this means that some or all of the tweaks made in an EQ plugin can be automated to adjust based on the sound of the input. This is very similar in concept to what is accomplished with multiband compression. One nice difference is that you can control nearly every aspect of how the audio is adjusted, in the “conceptual context” of an EQ. Either method (multiband compression or intelligent EQ) can yield very musical and appealing results.
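To make the idea concrete, here is a minimal Python sketch of a single dynamic-EQ band: measure the level of an (already band-limited) signal, and translate anything above a threshold into a gain cut for that band. The function names and threshold values here are illustrative only, not taken from any real plugin:

```python
import math

def band_rms(samples):
    """RMS level of a block of (already band-limited) samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def dynamic_band_gain_db(samples, threshold=0.2, max_cut_db=-6.0):
    """No cut while the band stays under the threshold; scale
    toward max_cut_db as the band level rises above it."""
    level = band_rms(samples)
    if level <= threshold:
        return 0.0
    # how far over the threshold we are, clamped to 0..1
    amount = min((level - threshold) / (1.0 - threshold), 1.0)
    return max_cut_db * amount
```

A quiet band leaves the EQ untouched (0 dB), while a loud band pulls the gain toward the maximum cut, which is the “rubber-banding” behavior described above.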
This concept is also implemented in some existing products, including hardware and software “compressor with EQ” processors and plugins. Many of these offer a number of EQ bands that can be compressed, treated with EQ, or both, in every routing combination conceivable. The idea first occurred to me while watching someone use one of these plugins (from Waves, I believe) in a YouTube tutorial. The plugin allowed the operator to configure, listen to in isolation, boost or attenuate, and compress several different EQ bands. While audio was being processed, you could see how closely the resultant processed signal matched your configured EQ curve in real time. The way the audio appeared to “rubber-band” around the EQ curve visually, and how that corresponded to the level of processing I was hearing in each band, was very intuitive, though I could tell that complicated math was involved.
Many different methodologies can be applied to intelligent EQ, and the possibilities are endless. This article outlines one simple approach that I have found easy to get started with.
This implementation is designed around using freely available VSTs together to accomplish each of the “functional elements” of an intelligent EQ.
Create a guitar track which alternates between low chuggy palm-mute type riffs and higher register bits.
Create a bass track with some presence in the low end. The idea is to get something laid down that kind of “competes” with the guitar track you brought in when it is chugging.
Required VST plugins:
MEQ or a similar EQ that can be automated via MIDI CC messages.
RubberFilter or a similar filter with steep high- and low-pass slopes.
Gatefish or a similar envelope follower that can send MIDI CC messages.
Take the guitar track and apply some EQ to it with MEQ. It is best to create a simple curve the first time, with just one or two EQ points. For the purpose of this exercise, add a 12 dB/octave high-pass filter and pull the frequency up high enough that you can hear the change in the source audio without trying too hard (for me this is around 100 Hz, sometimes higher). Add a second point and again make it an obvious-to-hear cut somewhere in the mids; in my case I picked a wide cut from about 350 Hz to 1300 Hz, down about 3 dB. The cut should be deep enough (probably at least 3-4 dB) and wide enough in Q that you can hear it. The benefit of this type of manipulation is best learned when you can obviously hear the EQ moves as you switch between processing and bypass, while the output volume stays the same “in or out”. In your first efforts, it is also clearest to your ears if the resulting EQ sounds a little “extreme” or “harsh” in some spots of the track when no automation is applied.
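If you are curious what a 12 dB/octave high-pass filter like this one actually computes, here is a sketch in Python using the well-known RBJ “Audio EQ Cookbook” biquad coefficients. The cutoff, sample rate, and Q are just example values, and a real EQ plugin may use a different topology:

```python
import math

def highpass_12db(samples, cutoff_hz=100.0, sr=44100, q=0.707):
    """Second-order (12 dB/octave) high-pass filter using the
    standard RBJ 'Audio EQ Cookbook' coefficient formulas."""
    w0 = 2.0 * math.pi * cutoff_hz / sr
    alpha = math.sin(w0) / (2.0 * q)
    cosw = math.cos(w0)
    b0, b1, b2 = (1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2
    a0, a1, a2 = 1 + alpha, -2 * cosw, 1 - alpha
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:
        # direct form I difference equation
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out
```

Content below the 100 Hz cutoff is attenuated at roughly 12 dB per octave, while content well above it passes through essentially untouched.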
Create an AUX track. We are going to use this track to “listen” to the frequency range of the bass track from about 30 Hz up to the frequency you set your high-pass filter to. Send from your bass track to this AUX bus at unity, post-fader. On this AUX bus, insert an instance of RubberFilter. Engage only the high- and low-pass filters, set to bracket the band from 30 Hz to your high-pass frequency, with about a 60 dB reduction on each side.
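Conceptually, this bus is just isolating a band of the bass signal. The sketch below approximates that with gentle one-pole filters; RubberFilter's slopes are far steeper, so this is only an illustration of “listening” to a 30-100 Hz band, not a model of the plugin:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sr=44100):
    """One-pole low-pass (6 dB/octave); cascade stages for steeper slopes."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    out, y = [], 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def band_limit(samples, lo_hz=30.0, hi_hz=100.0, sr=44100):
    """Keep roughly lo_hz..hi_hz: low-pass at hi_hz, then subtract a
    low-pass at lo_hz to strip out everything below the band."""
    lp_hi = one_pole_lowpass(samples, hi_hz, sr)
    lp_lo = one_pole_lowpass(lp_hi, lo_hz, sr)
    return [h - l for h, l in zip(lp_hi, lp_lo)]
```

A 60 Hz tone (inside the band) survives this filtering largely intact, while a 1 kHz tone (outside the band) is strongly attenuated, which is exactly what lets the next stage react only to the bass's low end.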
After this “narrowing” of the audio to these frequencies, add an instance of Gatefish. We are going to use this to send MIDI CC messages to the EQ on the guitar track, with the goal of getting the EQ to automatically “roll back the lows” on the guitar when the bass is playing, but let them through when the bass is not present in these frequencies. Dial the attack “eyeball” knob all the way to the left so the effect kicks in as quickly as possible. Dial the release knob to about 12 o’clock (more on this later). Dial the sens knob all the way left, which will cause the left cheek of the fish to turn dark red and stay that way. Slowly bring the sens knob up until you see the left “cheek” of the fish just occasionally turn reddish, hopefully in time with the music. Dial the Vol knob up until the right cheek is performing similarly to the left: lighting up red occasionally and fading back. Ensure that the fish is “talking” on MIDI CC 1.
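What the attack/release/sens knobs control is, at heart, an envelope follower whose output is quantized to MIDI CC values. Here is a rough sketch of that idea; the parameter names, time constants, and scaling are my own guesses for illustration, not Gatefish's actual algorithm:

```python
import math

def envelope_to_cc(samples, attack_s=0.0005, release_s=0.05,
                   sr=44100, sens=4.0):
    """Peak-style envelope follower with separate attack and release
    times; the envelope is scaled ('sens') and clamped to a 0-127
    MIDI CC value for every sample."""
    a_coef = math.exp(-1.0 / (attack_s * sr))
    r_coef = math.exp(-1.0 / (release_s * sr))
    env, cc = 0.0, []
    for x in samples:
        level = abs(x)
        # charge quickly on attack, discharge slowly on release
        coef = a_coef if level > env else r_coef
        env = coef * env + (1.0 - coef) * level
        cc.append(min(127, int(env * sens * 127)))
    return cc
```

Feeding this a loud burst followed by silence shows the behavior you tune with the knobs: the CC value jumps up almost immediately, then eases back down at the release rate, which is what makes the EQ automation feel musical rather than gated.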
At this point, it is useful to have the audio of the original tracks playing and feeding the AUX tracks without hearing the audio coming out of the AUX busses. This can be accomplished in a number of ways; I chose to create an additional submix bus to which the AUX bus outputs were routed, and to pull the fader down on this bus. This way I can see the activity of the “listeners” on the AUX busses without hearing them in the resulting audio.
Ensure that the VST MIDI messages from the bass “sidechain” bus are routed to either the guitar track, the MEQ plugin, or some common MIDI loopback device if you have one installed on your OS. In Samplitude, this means that you have to record-enable the AUX track, enable “VST MIDI Out”, and finally set the MIDI out target for the AUX track to the instance of MEQ on your guitar track. Other DAWs will have other ways of accomplishing this, so look for a tutorial on the subject for your DAW or experiment until you figure it out.
Now we want to go into the configuration of the instance of MEQ. Configure the multiparameter 1 section to control only the frequency of the EQ point corresponding to your high pass (probably EQ point 1), based on input from MIDI CC 1.
Press play. Watch and listen to MEQ as the frequency of the high-pass filter on the guitar moves in response to the source signal of the bass. Adjust Gatefish as necessary to get the low end to duck back when the bass is present but “stay present” when the bass is not playing. The right settings will vary widely depending on your source audio (both the bass and guitar tracks), your EQ settings, and the tempo and style of the music.
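Under the hood, the multiparameter is just a mapping from incoming CC values onto a parameter range. A sketch of what that mapping might look like for the high-pass frequency follows; the 40-100 Hz endpoints are illustrative, and MEQ's actual interpolation may well differ from this log-frequency version:

```python
def cc_to_freq(cc, lo_hz=40.0, hi_hz=100.0):
    """Sweep a cutoff frequency across a range from a MIDI CC value,
    interpolating in log-frequency so the sweep sounds even."""
    t = max(0, min(127, cc)) / 127.0
    return lo_hz * (hi_hz / lo_hz) ** t
```

CC 0 leaves the high pass at the bottom of the range (lows allowed through); CC 127 pushes it to the top (lows rolled back), which is exactly the ducking behavior described above.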
We are going to apply the same general concept to the mid cut EQ point, but with a small twist: the output of the actual guitar track itself is going to drive the automation of the EQ point. The goal of this part of the exercise is to get the mid cut to “fully engage” when the guitar track is particularly present in that part of the audio spectrum, but to “fully disengage” when the guitar track is not as present.
Create an additional AUX track and send the guitar track to it post-fader at unity gain. Add an instance of RubberFilter and set it to the frequency range of your “mid” EQ point (in my example, 350 Hz to 1300 Hz) with a 62 dB reduction on each side. Add a Gatefish instance and tune it to react quickly, and only when the source is “too present” or loud. The goal here is to get Gatefish to automate the MEQ instance on the guitar track to just “smooth out the harsh notes” but otherwise leave the mids alone, so we want Gatefish responding only to the loudest bits. Configure this instance of Gatefish to talk on MIDI CC 2. Perform any necessary routing to get the MIDI output of Gatefish routed to the instance of MEQ.
Return to the instance of MEQ. Configure the second multiparameter to control the gain of the EQ point defined for the mid cut (probably EQ point 2) based on input from MIDI CC 2. Set the range for the multiparameter to between 0 dB and whatever the depth of your cut is (-3 dB in my example).
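This second mapping is even simpler than the frequency sweep: the CC value just slides the cut depth between “no cut” and “full cut”. A minimal sketch, using the -3 dB example depth from this article (again, MEQ's actual interpolation may differ):

```python
def cc_to_gain_db(cc, max_cut_db=-3.0):
    """Map MIDI CC 0-127 linearly onto 0 dB (no cut) .. max_cut_db
    (the full configured cut depth)."""
    t = max(0, min(127, cc)) / 127.0
    return max_cut_db * t
```

When the guitar's mids are quiet, Gatefish sends low CC values and the cut disengages; when they get harsh, the CC rises and the cut approaches its full -3 dB depth.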
This second method can be applied to as many EQ points as your CPU can support, given that each will need an AUX track and instances of Gatefish and RubberFilter. Additionally, the fact that each AUX track is listening to the result of the very automation it drives makes this a tremendously powerful tool. The art, and the challenge, is to get this “feedback loop” to move naturally and musically between full processing of the EQ moves and no processing of the audio.
Extra Credit: Implement a variation on this theme where instead of automating each parameter of each EQ point, you automate the “dry/wet mix” of the MEQ effect to affect all points.
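The dry/wet variation amounts to a crossfade between the original and fully processed signals. A minimal sketch, assuming MEQ's dry/wet control behaves as a simple linear mix (which is an assumption on my part):

```python
def dry_wet_mix(dry, wet, mix):
    """Linear crossfade between the unprocessed (dry) and EQ'd (wet)
    signals; automating 'mix' from a CC scales every EQ point at once."""
    return [(1.0 - mix) * d + mix * w for d, w in zip(dry, wet)]
```

At mix 0.0 you hear the untouched guitar, at 1.0 the full EQ curve, and automating the value in between fades all of your EQ moves in and out together instead of point by point.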
You can experiment endlessly with this combination, automating almost any parameter of any plugin using the level of the audio signal as the sole determinant, so have fun with it and let me know what you come up with.