As you may know, the mind plays a very significant role in how we perceive audio. Audio engineers who are aware of psychoacoustic principles and other “mind tricks” can use them to great effect when making music or soundtracks. The same basic concept can also help you train your ear for audio production, in a form of “reverse engineering”. I will list a few of these tricks here in the hope that they help someone become better at hearing aspects of audio, both where they are present and where they are lacking. They might even help you “hear” some of the processing techniques and artifacts that are eluding you at present.

As an example, one of the early frustrations I remember having as a musician, and then again as a hobbyist audio producer, was my inability to “hear bass” in a typical musical reproduction scenario. At live performances I could “feel” the bass and, if standing the right distance from the source, could often hear the lower mids of a bass guitar, but not in headphones or on a stereo system. It has taken me a long time – probably 15-20 years – to get my ears where they are now. For me the trick was to take musical examples I was familiar with and high-pass filter them aggressively to “take all the bass out”. I started with the high pass up around 150 Hz and gradually pulled it down until I could no longer hear the difference it was making. Then I would A/B the bypassed and processed signal a few times to really listen for the change in overall tone. Over time I got to where I can now hear changes of 3 dB down to about 30 Hz on a good day. The process took a while but yielded positive results. After performing this exercise for some time, I realized that, for me, the secret to “hearing bass” is this: understand how bass affects overall tone, and learn to recognize it when it is NOT THERE. Now when I go “hunting for the bass” with my ears, I try to imagine what the track would sound like with no bass, and BAM, it hits me in the ears.
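If you want to try this exercise yourself, the filter sweep is easy to script. Here is a minimal sketch in Python (assuming a file named song.wav and the scipy and soundfile libraries); render each cutoff, then A/B the results against the original in your DAW or player:

```python
# Ear-training sketch: render a high-pass sweep from 150 Hz down to 30 Hz.
# "song.wav" is a placeholder for any familiar musical example.
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, rate = sf.read("song.wav")

for cutoff in [150, 120, 90, 60, 45, 30]:
    # 4th-order Butterworth high-pass (~24 dB/octave)
    sos = butter(4, cutoff, btype="highpass", fs=rate, output="sos")
    sf.write(f"song_hp_{cutoff}hz.wav", sosfilt(sos, audio, axis=0), rate)
# Note the lowest cutoff where you can still hear a difference from the original.
```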

Other quickie tips:
When trying to set relative levels for tracks within a mix, it is often helpful to listen to a track other than the one you are setting the level on, particularly one that “overlaps” in its contributions to the overall tone with the track you are trying to level. An example here would be to listen to the kick drum while you move the fader for the bass guitar. Once you feel that the track you are adjusting is starting to “step on” the other track, try to evaluate the respective levels at that point. If you back down on the bass guitar fader, do things sound better or worse?
When trying to set relative levels for tracks within a dry mix, it is often helpful to pan the two tracks to the same position, or to place your mix in mono if it is not already. Then try to picture in your mind where your ears are telling you the instruments are. Is one “off to the side and slightly behind” the other? Is one instrument “in your face” while others are more distant? The goal here isn’t necessarily to get all of the instruments to sound like they are the “same distance” from the listener, but to develop a picture of where the performers might be if this were a live performance. Once you have a good picture in your mind of where things “sound like” they are, you can adjust as you see fit toward where they “should be”. The big trick here is to think of “quieter” as “further away”, and to keep this “distancing” concept in mind when performing other tasks like panning the parts within the stereo field or applying additional processing.
When listening for changes in dynamics in audio you are trying to analyze, close your eyes and perform the “distancing” visualization from a fixed point of reference. Think of the changes in dynamics either as the listener “getting closer to” the elements of the mix being boosted (or to the whole “virtual band”), or as those elements “moving away” from the listener. This often helps me notice fairly subtle changes in volume. Instead of thinking of one track getting quieter or louder than another, the picture in my mind is of someone quickly “ducking behind” one or more of the others, or of one performer “stepping forward” toward the listener. In this sense, the ear and mind are actually AMAZINGLY capable. In the real world, your ear can distinguish subtle re-orientations of audio elements in your surroundings, as well as more severe ones.
Additional aspects of audio can help or hurt things when taking this distancing approach. Learning to listen for these aspects in everyday life can tremendously help you hear when something is “off” with respect to the overall imaging of the “mental picture” of a given piece of audio, and also help you understand when to (and when not to) process the audio to enhance or soften the imaging of individual tracks or the mix as a whole. Examples are the “initial sound” of a given musical contribution (described as transients, punch, whack, attack, etc.), the sustaining portion of the contribution, and “how it fades away” or decays. Listening for each of these aspects on each “instrument” or “actor” in a mix or everyday scenario can really open your ears to how timbre itself is formed and perceived. This will help you understand when source material is “overcooked”, “too raw”, or just right, in a lot of different directions. Focusing your listening on one element at a time, whether in isolation or within the full mix, is the habit to build here.
When performing EQ moves as a form of audio correction, think of boosts as “more likely” to cause audible phase issues than cuts. This is even more true when you have several tracks competing for the same frequency ranges in a mix. Thinking this way helps you lean toward subtractive EQ first, which tends to sound more “natural” to the human ear. It is true that EQ cuts also introduce phase shifts that did not exist prior to the cut, but they are likely to be less audible, occurring at low volume and in the areas being deemphasized, thus making them less perceptible, again particularly in the mix.
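You can see (rather than hear) this trade-off with a few lines of Python. The sketch below uses the standard RBJ “Audio EQ Cookbook” peaking filter – a generic minimum-phase design, not any particular plugin – to compare the phase response of a 6 dB boost against a 6 dB cut:

```python
# Compare the phase response of a minimum-phase peaking boost vs. an equal cut.
# Coefficients follow the RBJ "Audio EQ Cookbook" peaking-EQ formulas.
import numpy as np
from scipy.signal import freqz

def peaking_biquad(f0, gain_db, q, fs):
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return b, a

fs = 44100
for gain_db in (+6, -6):
    b, a = peaking_biquad(800, gain_db, 1.0, fs)
    _, h = freqz(b, a, worN=2048, fs=fs)
    print(f"{gain_db:+d} dB at 800 Hz: max phase shift "
          f"~{np.degrees(np.max(np.abs(np.angle(h)))):.1f} degrees")
# Both lines print the same number: the cut bends phase just as much as the
# boost, but the shift lands in a region you have pulled down, where it is
# much harder to hear.
```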

The purpose of this post is to describe the general concept of “intelligent EQ”, also known as dynamic EQ, and to provide a simple how-to on implementing a DIY intelligent EQ.

Definition:
Intelligent/dynamic EQ essentially adjusts the dynamic range of source audio within a particular frequency range. What this means is that some or all of the EQ tweaks made in the EQ plugin can be automated to adjust based on the sound of the input. This is very similar in concept to what is accomplished with multiband compression. One nice difference is that you can control nearly every aspect of how the audio is adjusted, in the “conceptual context” of an EQ. Either method (multiband compression or intelligent EQ) can yield very musical and appealing results.
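To make the concept concrete, here is a toy sketch of the core signal flow in Python – band split, envelope follower, gain computer – with every value (band edges, attack/release, threshold) illustrative rather than taken from any product:

```python
# Toy dynamic EQ: cut a single frequency band only while that band is hot.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, rate = sf.read("source.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # work in mono for simplicity

# Isolate the band we want to control (e.g. 350-1300 Hz mids).
sos = butter(4, [350, 1300], btype="bandpass", fs=rate, output="sos")
band = sosfilt(sos, audio)
rest = audio - band  # crude complement; a real design uses matched crossovers

# Envelope follower: fast attack, slow release (per-sample smoothing factors).
env = np.zeros_like(band)
level = 0.0
for i, x in enumerate(np.abs(band)):
    coeff = 0.001 if x > level else 0.999
    level = coeff * level + (1 - coeff) * x
    env[i] = level

# Gain computer: up to 3 dB of cut once the band exceeds a threshold.
# (A real design would smooth this gain change to avoid zipper noise.)
gain = np.where(env > 0.1, 10 ** (-3 / 20), 1.0)
sf.write("dynamic_eq_demo.wav", rest + band * gain, rate)
```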
This concept is also implemented in some existing products, including hardware and software “compressor with EQ” processors. In many of these, there are a certain number of EQ bands that can be either compressed or treated with EQ, in every routing combination conceivable. The origin of this idea for me was watching someone use one of these plugins – from Waves, I believe – in a tutorial on YouTube. The plugin allowed the operator to configure, listen to in isolation, boost or attenuate, and compress several different EQ bands. During playback you could see “how much” the processed signal matched your EQ curve while still seeing your “configured curve”. The way the audio appeared to “rubber-band” around the EQ curve visually, and how that corresponded to the “level of processing” I was hearing in each band, was very intuitive, though I could tell that complicated math was involved.
In the context of intelligent EQ, many different methodologies can be applied, and the possibilities are endless. This article will outline one simple approach to implementation that I have found easy to get started with when approaching this concept.
This implementation is designed around using freely available VSTs together to accomplish each of the “functional elements” of an intelligent EQ.

Requirements/setup:
Create a guitar track which alternates between low chuggy palm-mute type riffs and higher register bits.
Create a bass track with some presence in the low end. The idea is to get something laid down that kind of “competes” with the guitar track you brought in when it is chugging.

Required VST plugins:
Gatefish
MEQ or a similar EQ which can be automated via MIDI CC messages.
Rubberfilter

Take the guitar track and apply some EQ to it with MEQ. It is best to create a simple curve the first time, with just one or two EQ points. For the purpose of this exercise, add a 12 dB/octave high-pass filter and pull the frequency up high enough that you can hear the change in the source audio without trying too hard (for me this is around 100 Hz, sometimes higher). Add a second point and again make it an obvious-to-hear cut somewhere in the mids – in my case I picked a wide cut from about 350 Hz to 1300 Hz, down about 3 dB (4 dB is probably safer), with a Q that you can hear. The benefit of this type of manipulation is best learned when you can obviously hear the EQ moves as you switch between processing and bypass, but the resulting output volume is the same “in or out”. It is also clearest to your ears, in your first efforts, to have the resulting EQ sound a little “extreme” or “harsh” in some spots of the track when no automation is applied.
Create an AUX track. We are going to use this track to “listen” in on the frequency range of the bass track from about 30 Hz up to the same frequency you set your high-pass filter to. Send from your bass track to this AUX bus at unity, post fader. On this AUX bus, insert an instance of RubberFilter. Engage only the high- and low-pass filters, set to 30 Hz and your high-pass frequency respectively, with about 60 dB of reduction on each side.
After this “narrowing” of the audio to these frequencies, add an instance of Gatefish. We are going to use this to send MIDI CC messages to the EQ on the guitar track, with the goal of getting the EQ to automatically “roll back the lows” on the guitar when the bass is playing, but to let them through more when the bass is not present in these frequencies. Dial the attack “eyeball” knob all the way to the left so the effect kicks in as quickly as possible. Dial the release knob to about 12 o’clock – more about this later. Dial the sens knob all the way left, which will cause the left cheek of the fish to turn dark red and stay that way. Slowly bring the sens knob up until you see the left “cheek” of the fish just occasionally turn reddish, hopefully in time with the music. Dial the Vol knob up until the right cheek is behaving similarly to the left: lighting up red occasionally and fading back. Ensure that the fish is “talking” on MIDI CC 1.
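If the fish-face UI seems mysterious, this is roughly what Gatefish is doing under the hood: following the level of the band-limited sidechain and translating it into a stream of CC values. A rough offline sketch in Python using the mido library (the loopback port name and the scaling factor – effectively the “sens” knob – are assumptions you would tune):

```python
# Conceptual Gatefish: follow the sidechain level, emit MIDI CC 1 values.
import numpy as np
import soundfile as sf
import mido

audio, rate = sf.read("bass_sidechain.wav")  # pre-rendered 30-100 Hz "listener" bus
if audio.ndim > 1:
    audio = audio.mean(axis=1)

port = mido.open_output("LoopBe Internal MIDI")  # whatever loopback device you have

block = rate // 50  # send a CC value roughly every 20 ms
for start in range(0, len(audio) - block, block):
    rms = np.sqrt(np.mean(audio[start:start + block] ** 2))
    value = min(127, int(rms * 4 * 127))  # the scale factor plays the "sens" role
    port.send(mido.Message("control_change", control=1, value=value))
```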
At this point, it is useful to have the audio of the original tracks playing and feeding the AUX tracks, but to not hear the audio coming out of the AUX busses. This can be accomplished in a number of ways; I chose to create an additional submix bus to which the AUX bus outputs were routed, and to pull the fader down on that bus. This way I can see the activity of the “listeners” on the AUX busses without hearing them in the resulting audio.
Ensure that the VST MIDI messages from the bass “sidechain” bus are routed to the guitar track, the MEQ plugin, or some common MIDI loopback device if you have one installed on your OS. In Samplitude, this means that you have to record-enable the AUX track, enable “VST MIDI Out”, and finally set the MIDI out target for the AUX track to the instance of MEQ on your guitar track. Other DAWs will have other ways of accomplishing this, so look for a tutorial for your DAW or experiment until you figure it out.
Now we want to go into the configuration of the instance of MEQ. Configure the multiparameter 1 section to control only the frequency of the EQ point corresponding to your high pass (probably EQ point 1). Allow a range of frequency values corresponding to the RubberFilter instance you defined on the bass “sidechain” AUX track (in my example, between 30 Hz and 100 Hz). Also set it to listen to the MIDI CC specified in the bass AUX track’s Gatefish instance.
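Under the hood, the multiparameter is just a mapping from incoming CC values to the configured parameter range. In plain arithmetic (assuming a linear taper; MEQ may well scale frequency logarithmically):

```python
# MIDI CC (0-127) mapped to a high-pass cutoff between 30 and 100 Hz.
def cc_to_hz(cc_value, lo=30.0, hi=100.0):
    return lo + (cc_value / 127.0) * (hi - lo)

print(cc_to_hz(0))    # 30.0  -> bass silent, guitar lows left alone
print(cc_to_hz(127))  # 100.0 -> bass playing, guitar lows rolled off
```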
Press play. Watch and listen to MEQ as the frequency of the high-pass filter on the guitar moves in response to the source signal of the bass. Adjust Gatefish as necessary to get the guitar’s low end to duck back when the bass is present but “stay present” when the bass is not playing. The settings here will vary widely depending on your source audio (both bass and guitar tracks), your EQ settings, and the tempo and style of the music.
We are going to apply the same general concept to the mid cut EQ point, but with a small twist: the output of the actual guitar track itself is going to drive the automation of the EQ point. The goal of this part of the exercise is to get the mid cut to “fully engage” when the guitar track is particularly present in that part of the audio spectrum, but to “fully disengage” when the guitar track is not as present.
Create an additional AUX track and send the guitar track to it post-fader at unity gain. Add an instance of RubberFilter and set it to the frequency range of your “mid” EQ point (in my example 350 Hz – 1300 Hz) with about 60 dB of reduction on each side. Add a Gatefish instance and tune it to react quickly and only when the source is “too present” or loud. The goal here is to get Gatefish to automate the MEQ instance on the guitar track to just “smooth out the harsh notes” but otherwise leave the mids alone, so we want Gatefish responding only to the loudest bits. Configure this instance of Gatefish to talk on MIDI CC 2. Perform any necessary routing to get the MIDI output of Gatefish routed to the instance of MEQ.
Return to the instance of MEQ. Configure the second multiparameter to control the gain of the EQ point defined for the mid cut (probably EQ point 2) based on input from MIDI CC 2. Set the range for the multiparameter to between 0 dB and whatever the depth of your cut is (-3 dB in my example). Again, this will vary depending on the source guitar track, but if you keep the goal in mind, you should be able to flip between MEQ and the Gatefish instance on the guitar AUX track until the automation reacts “organically” to the resulting audio.
This second method can be applied to as many EQ points as your CPU can support, given that each will need an AUX track and instances of Gatefish and RubberFilter. Additionally, the fact that the AUX track is listening to the result of the very automation it is driving makes this a tremendously powerful tool. The art and challenge is to get the “feedback loop” to move naturally and musically between full processing of the EQ moves and no processing at all.
Extra Credit: Implement a variation on this theme where instead of automating each parameter of each EQ point, you automate the “dry/wet mix” of the MEQ effect to affect all points.
You can experiment endlessly with this combination, automating almost any parameter of any plugin using the level of the audio signal as the sole determinant, so please feel free to experiment and let me know what you come up with.

If you are a user of SoundCloud (or if you just sign up for a free account) you can get access to a lot of cool “special edition” apps for Music and Sound production. My favorite among these is Samplitude Silver – SoundCloud Edition. It is a very capable DAW, limited to only 8 tracks in total and with a limited set of bundled plugins.

It is intended to allow you to use Samplitude and recognize its power before committing to buying the more feature-rich version. As a version of Samplitude, it has some great and intuitive features to help your workflow. If you have a number of VST plugins already installed (or are willing to add them) you can get A LOT done with this simple version of Samplitude, and it uploads directly to SoundCloud as a bonus. I intend to use this particular software to demonstrate the instructables included on my blog and site.

Download Link:
http://dl03.magix.net/samplitude_silver_soundcloud_us.exe?cookietest=1

This article details a process I used to create a feature-rich, free drum synthesizer which incorporates a physical input device (you get to bang on something), a VST software module to translate the physical input into MIDI information, and a VSTi sampler to translate the MIDI data into realistic-sounding drums.

This article draws heavily on information obtained from a number of Internet resources, some of which I remember the names of and have included in the References section of this post. I do not mean to steal this information from its originators, but to refine it into something that can be used by the relative newcomer to digital recording. Much of the source material and the domain knowledge for drum triggering gets VERY in-depth and can become a (very fun) rabbit hole should you start to dig deeper. I hope to “nutshell” this information so that the reader knows which portions of what they see are important to understand at this point in their musical journey.

Furthermore, this post is part one; I plan to get more in-depth on the “feature-rich” aspect of this setup later. At the end of this instructable you should have a single working drum trigger and be able to make it sound like a loud snare or bass drum when you strike it. Varying loudness (dynamics in audio terms, velocity in MIDI terms) will be dealt with in a followup.

Recipe for one “drum”:
Piezo-electric element (I got mine from RadioShack and eBay; remove any plastic casing around the element)
Two-wire Audio cable with bare wires on one end (I sacrificed an old guitar cable and stripped the wires back a bit)
Stack of notecards of any size. (Don’t open them, leave the plastic cellophane on)
Free VSTi instrument plugin shortcircuit – http://vemberaudio.se/shortcircuit.php
Free Audio signal to MIDI trigger VST plugin – KTDrumTrigger http://www.smartelectronix.com/~koen/KTDrumTrigger/
Modern Computer
Audio Interface that accepts input from audio cable mentioned above. (In my case the intact 1/4″ mono plug from my guitar cable)
Digital Audio workstation with MIDI support (including editor and record capabilities) and VST support.
Audio sample for the “loudest note” on the particular piece of kit you are looking to emulate. (For instance, a snare sample recording the hardest hit, whether recorded by you or gotten from somewhere else.)

Please note that I am using Samplitude Silver (free download at http://dl03.magix.net/samplitude_silver_soundcloud_us.exe?cookietest=1) to demonstrate this method. You might have to make some adjustments to the process if you are using a different DAW, but the method itself is indeed portable.

Process:
1. Build and test your input device
a. Wire the piezo-electric element to your two-wire audio cable, using whatever means you have or are comfortable with. (I initially just twisted the wires together and held them with electrical tape; I recommend a more permanent solution. 🙂 ) Be sure that the piezo element disk itself is freely accessible and somewhat moveable. (I left the raw wire a little play past the shielding on the cut end of the guitar cable.)
b. Load up your computer and DAW. Add a mono audio track and mark it as the track you will be recording to, selecting an input on your Interface.
c. Test the signal by plugging the non-piezo element end of your trigger into your audio interface’s appropriate input. Start a recording and LIGHTLY tap on the piezo element.
You should see (and possibly hear) some “clicks” or “pops” on the audio track. If you do not, try raising the input gain on your interface and repeat the experiment. If you have pulled the gain up quite a ways and still see no signal, double-check first that you have marked the correct input from your interface as the record source for the track, then double-check your wiring job.
d. Once you are sure you are seeing or hearing the “thumps”, stop and cancel/undo the recording of that audio.

2. Mount the input device
a. Take your unopened plastic-wrapped stack of note cards in hand. Cut a slit in the plastic (toward the middle of the stack) such that you could slip the piezo element in between the cards.
b. Do just that: slide the piezo element somewhere into the middle of the stack. You don’t have to get it dead center, so don’t go too crazy. As long as all of the piezo disk and maybe about 1/4″ of wire lead is “in” the notecard stack, you should be good.

3. Calibrate the input + audio interface combination
a. Enable and begin recording again in your DAW with the input device as the record source. Hopefully you will see that your piezo is still wired up correctly and that inserting it into the notecard stack has not caused it to stop working. You may notice that when you tap on the notecards it seems either louder or quieter than before. This is why we are calibrating this guy.
b. With your notecard stack on a secure hard surface, tap on the notecard with your finger or some other implement (like a pencil). Move the tap around the notecard and watch/listen to what it does to the input signal. You may find that if you tap right over the spot where the piezo is, that the signal clips, or that if you hit it too hard the signal clips. This means the volume is up too high (or you are hitting it too hard/too close to the piezo). Adjust the gain on your audio interface until you feel like you are hitting the cards as hard as you would like in the spot you would like for the “loudest” drum hit when recording without clipping. Once you have found the sweet spot, write it down somewhere for future reference. (For me it was gain at about 11 o’clock, hitting right over the piezo for hardest notes using my finger). You will find that this primitive device you just built can actually handle a pretty wide range of dynamics and that you can get different results by moving your “striking object” around the notecards and hitting at different levels. Find a good general location on the notecards to strike and somehow mark it (I used a little circle sticker showing the rough center of the “target zone”).
c. Stop the recording in the DAW and cancel/undo to leave a blank track.

4. Record a simple “bass drum beat” with your newly constructed notecard drum. It’s going to record just the “clicks” or “tap sounds” made from striking it. Try recording some quieter hits and some louder ones. This should come naturally if you just pretend you are really playing a drum beat on the cards: some hits will be harder and others less emphasized.

5. Translate the audio input into MIDI with KTDrumTrigger (one of the more complicated portions of this instructable; a rough sketch of the idea appears after this process list).

6. “Print” the audio track to the MIDI track via recording and KTDrumTrigger.

7. Configure shortcircuit to play the MIDI back, triggering your sample at each MIDI event.

Extra Credit: Try to see if you can get realtime Audio -> MIDI recording going, without the need for the interim audio track. This is often a function of enabling particular settings for “Realtime Effects monitoring” in your DAW and routing the signal from KTDrumTrigger directly to a MIDI track.
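To demystify step 5 a little: conceptually, KTDrumTrigger watches the audio for transients that cross a threshold and emits a MIDI note for each one. Here is a rough offline sketch of that idea in Python (the file name and threshold are assumptions, and velocity is fixed at a single value since varying loudness is the subject of the followup):

```python
# Rough audio-to-MIDI trigger: each detected hit becomes a fixed-velocity note.
import numpy as np
import soundfile as sf
import mido

audio, rate = sf.read("notecard_taps.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)

threshold = 0.05               # tune to sit just above your noise floor
refractory = int(0.05 * rate)  # ignore re-triggers within 50 ms of a hit

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)
tempo = mido.bpm2tempo(120)

last_hit = -refractory
prev_tick = 0
for i, sample in enumerate(np.abs(audio)):
    if sample > threshold and i - last_hit >= refractory:
        tick = int(mido.second2tick(i / rate, mid.ticks_per_beat, tempo))
        track.append(mido.Message("note_on", note=36, velocity=100,
                                  time=tick - prev_tick))  # 36 = GM kick drum
        track.append(mido.Message("note_off", note=36, velocity=0, time=10))
        prev_tick = tick + 10
        last_hit = i
mid.save("triggered.mid")
```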

References:
Source for most of the information on shortcircuit as a drum sampler – http://stash.reaper.fm/2313/Shortcircuit%20Drums%20for%20REAPER.pdf

Accompaniment for the beginning recording musician.

If you are looking to do some recording, but do not have the ability to play drums, keyboard, or bass or the means to record them, this post is for you.

Many guitarists find themselves in this position and take on the quest to become essentially a “one-man band” in a virtual sense: all of the musical content and its production are provided by you alone or by software.

There are a lot of “in-between” approaches that can be extremely high-yield when you’re first starting off. In particular if you are trying to record and produce cover music – either as a tribute or just to hone your production skills with a pre-determined “end-goal” or “vision”, then the world is your oyster. In this “in-between” space, the Internet (as usual) is your best friend.

Backing Tracks: There are a wide variety of backing tracks available via a simple Google search; at the time of this writing, http://www.guitarbt.com and similar sites offer collections of backing tracks, some with only drums and keyboard, some with drums, keyboard, and bass, others with everything but the lead guitar and vocals. The advantages of this approach are pretty obvious: you start with a track that has a decent “sound” going, with most of the band already recorded and mixed, leaving you free to shred as your heart desires. The disadvantages are perhaps not quite so obvious: everything is “pre-mixed”, so your ability to change the overall sound of the resulting mix is severely limited, since the original tracks likely already include effects and dynamic-range-controlling devices like compression. You lose out on the opportunity to learn how to make each of those “other” tracks work or “gel” together in a mix, but if you are just starting out, that might be alright by you. It is also hard to get your additional tracks to “sit well” among the other instruments, since you can’t EQ them individually and carve out holes for your sound. This leads to a “jam-along” or “karaoke” type of sound, and usually your “overlay” tracks will overshadow the original mix in unnatural ways. Furthermore, you may be limited in what you are legally allowed to do with your resulting track if the backing tracks come from someone else. Sometimes folks just want you to add an attribution in your description of the track and not charge for the resulting work; other folks are more restrictive.

Bass trick: If you do not have an electric bass (which I highly recommend to all guitar players, particularly those recording), you can often make do by recording the direct signal from a guitar and pitch-shifting it down an octave. Note that this works best with bass lines that only have one note playing at a time, and that the result is probably not going to sound like a real bass if you ask a bass player. I have had the best results with this method when I put my pickup selector in the 4th position (on my guitar this is the bridge humbucker combined with the middle single coil; some call it the bridge out-of-phase position). It sounds kind of odd when you play a guitar in that position (unless you are Ty Tabor or a country player), but for bass simulation the “quacky” quality really sounds nice. Further refinement can be achieved by running the pitch-shifted signal through an amp and/or cabinet simulator for bass, and/or routing the result through a “mid-heavy distorted” auxiliary track/bus mixed in with the original (pitch-shifted) signal. Alternatively, you could render a MIDI track of the bass through one of the above-mentioned online MIDI-to-audio services or run it through your local synth on your DAW/computer.
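If you work “in the box”, the octave drop itself is nearly a one-liner. A minimal sketch using librosa’s pitch shifter (file names are placeholders, and any pitch-shift plugin in your DAW will do the same job):

```python
# Bass trick sketch: drop a clean DI guitar take one octave.
import librosa
import soundfile as sf

audio, rate = librosa.load("guitar_di.wav", sr=None, mono=True)
bass = librosa.effects.pitch_shift(audio, sr=rate, n_steps=-12)  # -12 semitones
sf.write("fake_bass.wav", bass, rate)
```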

Loops and samples: Similarly, there are MANY audio and MIDI loops and samples that can be downloaded from the Internet. Individual loops/samples and collections, free and commercial, are abundantly available. They also vary widely: some are drums-only, some are multi-instrument. In the case of audio loops, getting a realistic end result is a lot easier, provided you are creative and careful with repetition. Maximum flexibility is provided by MIDI loops, as you can tweak them and make them significantly different from the original (using them as “starting points”) and render them to audio with any number of synthesizers and/or drum samplers to get a wide variety of results. Both loops and samples share a disadvantage with backing tracks in that such tidbits often come with a widely varying spectrum of “licenses” or “fair use” policies. Some folks give you free rein to do what you will, others want attribution, others want money. This is less of a problem with MIDI loops, since you can easily create original works that are significantly different from what you started with (the originator might never even notice you started with their loop by the time you are done adding and removing notes and/or rendering them to audio). In this realm, my current best advice is to take a look at the free (and paid) MIDI collections from Groove Monkee and the free audio samples available from freesound.org, just a Google search away. I should also note that there are online services to render MIDI loops to audio using a wide variety (and range of quality) of synthesizer/sampling packages. Some of these services are free, others are paid.

Tabs: One of the most effective ways to use the Internet to generate music (particularly in the case of cover songs) is the modern implementation of guitar tabs. A few formats, in particular Powertab and Guitar Pro, offer the ability to tab out all of the parts used in modern rock or pop music. Furthermore, there are a number of great “metasearch” sites for finding tabs for just about any song you can come up with. Many have the accompaniment parts also transcribed, with each instrument on its own track within the tab file. You can easily take these tab files, open them in almost any tab player/editor (I am still a fan of Powertab and Tuxguitar), and export some or all of the tracks to a MIDI file that can be readily imported into your DAW. You can then use the MIDI editor in your DAW, coupled with a synthesizer or drum sampler, to translate these MIDI events into audio you can use in your mix. Furthermore, most MIDI editors allow you to perform edits on the imported tracks to customize or “humanize” the performance as you see fit. This yields the maximum amount of flexibility and creativity of any option I have found for the “one-man-band” setup; I plan to expand significantly on this subject in a future post. The disadvantages of this approach: depending on how much of the original tab you used, how much you tweaked it, and the preference of the tab’s author, you might be required (or feel obliged) to acknowledge their initial efforts tabbing out the song and your use of their tab. Depending on the accuracy of transcription and the attention to dynamics in the original tab file, you might have a very “computer-sounding” starting point after importing the tablature, requiring more work for a “human-sounding” end product. You can also miss out on the opportunity to learn how to mix the elements of a drum kit with this approach, as most synths and drum samplers offer tempting out-of-the-box sounds that work reasonably well with minimal tweaking of the “kit”.
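As a taste of the “humanize” step, here is a minimal sketch that nudges the velocity and timing of every note in a tab-exported MIDI file; the file name and jitter ranges are placeholders to adjust by ear:

```python
# Humanize a tab-exported MIDI file: small random velocity and timing nudges.
import random
import mido

mid = mido.MidiFile("tab_export.mid")
for track in mid.tracks:
    for msg in track:
        if msg.type == "note_on" and msg.velocity > 0:
            msg.velocity = max(1, min(127, msg.velocity + random.randint(-12, 12)))
            msg.time = max(0, msg.time + random.randint(-5, 5))  # delta-tick jitter
mid.save("tab_export_humanized.mid")
```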

Sequencing: This is a process where you open the MIDI editor in your DAW and either manually enter notes with your mouse or record a MIDI performance from a MIDI device like a keyboard or electronic drum kit. If you have a keyboard/MIDI input device and the requisite skill, you can go a long way pretty fast with this method. If, however, you are manually sequencing via a keyboard or mouse, I have found this to be a laborious and largely unrewarding way to create music, though YMMV. I should note that it is also possible to “step-sequence”: if you are not able to play a piece in its entirety up to tempo, you can record each note one at a time, specifying the dynamics (volume), pitch, and duration of each note in stepwise fashion. Again, I do not have the patience for this type of thing, but it can yield amazing results. I would consider this approach if I had an orchestral score and a MIDI file was not readily available.
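In code form, step-sequencing is nothing more than appending one fully specified note at a time. A tiny sketch that writes a four-step kick/snare pattern to a MIDI file (note numbers follow the General MIDI drum map; the pattern and velocities are arbitrary):

```python
# Step-sequence a one-bar kick/snare pattern, one beat per step.
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

ticks = mid.ticks_per_beat          # one beat per step
steps = [(36, 110), (38, 90), (36, 100), (38, 95)]  # 36 = kick, 38 = snare
for note, velocity in steps:
    track.append(mido.Message("note_on", note=note, velocity=velocity, time=0))
    track.append(mido.Message("note_off", note=note, velocity=0, time=ticks))
mid.save("step_sequenced_beat.mid")
```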

Collaborations: The Internet age has brought us many online services that allow us to connect with other musicians and “trade licks” or collaborate. This offers some very exciting opportunities to create and mix songs without actually having to record instruments you do not have, do not know how to play, or cannot record. Many musicians find this to be the very best the Internet has to offer in the way of furthering and expanding music, and freeing it from traditional commercial constructs. A few notable services as of this writing include SoundCloud, ReverbNation, and Bandcamp, though of these only SoundCloud appears to officially recognize online collaboration as a valuable benefit to its members.

Triggering: Another great way of generating musical input, specific to drums and percussion, is triggering. You can purchase very inexpensive electronic components like piezo-electric elements and create a DIY method for recording yourself “thumping on things”, and/or purchase hardware devices that accomplish the same goal (trigger modules, drum pads, eDrums, etc.). Using this method, you record your physical real-world “fake drumming”, then translate the recording into a series of MIDI messages (translated automatically if using trigger modules, drum pads, eDrums, etc., or by software if using piezo-electric elements {might I recommend KTDrumTrigger, the subject of a more in-depth discussion at a later date}). This MIDI data can then be used to trigger samples, either in a synthesizer or a drum sampler (a hardware module, or software like Superior Drummer). Using this method you can get amazingly realistic results with relative ease. The advantages: the recorded MIDI data is infinitely “customizable” after the recording, as mentioned above, and the result – when conducted properly – will sound more “human” than an imported MIDI or tab-to-MIDI conversion usually will out of the gate, because both the timing and the “loudness” of each percussive strike are determined by your “fake drum” performance (or your tweaks to the MIDI after you recorded it). The difference versus a raw MIDI take is quite amazing. Spending about $2 for a piezo element at a local RadioShack, hooking it to a guitar cable I cut one of the connectors off of, “drumming” on a stack of notecards with my finger, and using the above-mentioned KTDrumTrigger software to translate the audio to MIDI, I have been able to obtain pretty impressive results when rendered through a decent drum sampler and/or synth module. Coupling this inexpensive “eDrum” with something like Superior Drummer yields infinitely tweakable possibilities with professional-sounding results. The disadvantages: tweaking the audio-to-MIDI trigger (whether software or hardware) takes some time, particularly tuning in the “loudness” aspect. Also, the performance will never be as good as a real drummer’s (unless you can also play drums, then maybe you could pull it off), and you have to do a few passes to record things this way, unless you have multiple trigger devices and are pretty coordinated. I usually throw down a simple beat track that I imagine in my mind to be the kick and the snare on one pass, then do hi-hats on a second pass. Finally, I add in the cymbals manually using MIDI editing. This requires some patience and time, but the result sounds very human if done properly, and in the end you are generating all of the musical content, so no usage restrictions apply. Also, I cannot overstate how awesome it feels to play crazy double-bass and snare blast beats on a notecard and have “the real thing” coming out of the speakers! 🙂

The gear that I use to facilitate making music in my home studio is relatively humble (with the exception of my guitars themselves). Please note that I am not sponsored by any of the vendors mentioned in this post, and have no relationship with them or their products (other than I am a consumer of them). I can only speak to my experience and am describing the gear that I use, as well as some that I have tried out.

When the Home Recording Revolution first really kicked off (~2006/7; the digital one, not the PortaStudio one), I acquired from my nearest Guitar Center a relatively expensive (at that time) hardware interface called the Digidesign Mbox. This came in a package deal with two free condenser microphones from MXL, the MXL 990 and 991. In addition, the package included a hardware-specific version of ProTools LE. This simple setup is how I performed my initial recordings, using the 990 and 991 for different aspects of recording acoustic guitars, and using the line out from my amplifier or the 990 placed in front of the speaker in some instances. Around the same time, I purchased a Line 6 AX2 212 amplifier, one of the earliest modelling amplifiers available, on the word-of-mouth recommendation of a local musician I have a lot of respect for. At that time (and at the time of this writing) I did not choose to invest in studio monitors, but rather opted for some relatively inexpensive studio headphones – Audio Technica ATH-640fs. This gave me a wide versatility of sounds at a relatively high cost (~$1200 for the amplifier, ~$500 for the interface, software, and microphone bundle, and ~$100 for headphones = ~$1800 total).

I have been able to work with this setup for quite some time, but the dramatic improvements in technology have made setups like this vastly more affordable, provided you have a decent computer to work with. Here I hope to outline what would be an acceptable entry-level setup for the average guitar player in 2013 looking to record.

Interface: Lots of options here. Something like the ART USB Dual Pre is a good choice, providing minimal complexity and decent recording quality (16-bit audio); you can pick one up for around $50 new if you watch for the deal (and I have done this). This is great if you want basically the same setup I started with: two inputs (1/4″ or XLR) and phantom power (think mic’ed sources). The real bang for your buck when it comes to recording electric guitar or bass (or any other electrified stringed instrument) is something like the Line 6 Pod Studio GX. You can pick a refurbished one up for $75 or less on eBay (which is how I went) or find one at your local Guitar Center or Best Buy ranging from $75-$150. This provides ONLY a 1/4″ (mono, I think) input and a monitor/headphone out for low-latency monitoring at 24 bits. It is a great way to get the raw signal for “direct recordings” into your DAW for manipulation with software plugins. Additionally, this particular model (and others like it) gives the purchaser access to the POD Farm software plugin suite and the Line 6 Store for expanding the gear available in your farm. This gives roughly the same flexibility of sounds and variety of gear as my AX2 212 provides, at no additional cost! Granted, the sounds you get out of POD Farm require some tweaking to be useful in a mixing context, but for the price, you would have a hard time finding something as versatile and “all-encompassing” as POD Farm for free on a <$100 interface.

Recording Software/DAW: Again, several options: stick with ProTools LE (or upgrade to ProTools at additional cost), use the recording software included with your interface (Audacity in the case of the ART USB Dual Pre, RiffWorks in the case of the Pod Studio GX), buy something else like Reaper, Cubase, or Samplitude, or use any “special deal” software available from a wide variety of online services. I personally prefer Samplitude ($499-$2000), though Reaper is also acceptable and significantly more affordable at ~$100. However, when first starting off, you can probably get by with a feature-limited “special offer” package like “Samplitude Silver – SoundCloud Edition”, which is usually freely available. I started on exactly this; it is very powerful for free software, though limited to 8 tracks total and a limited set of bundled plugins. You can easily expand the capabilities of such a platform with freely available VST plugins, though you will likely soon outgrow the 8-track limitation (due to the use of bus routing). There are also several DAWs with limited feature sets for mobile devices, usually $50 or less. These will be most limited by your device, and will also likely suffer from things like incompatibility with VST plugins.

Amp Simulation plugins: You can go with micing an amp, and in the end this process usually yields more “authentic” tones, but you will get a lot more mileage out of your recordings if you record the direct, unprocessed signal (instead of, or in addition to, the signal coming from an amplifier). Micing an amp is an art that takes a lot of patience and time, and can in and of itself make or break a recorded track. Amp simulation allows you to “virtually reamp” the track and continually revise the sound as you are creating your mix, and also as technology improves. Many DAW platforms include some amp simulation plugins, and some hardware interfaces bundle similar plugins, like POD Farm mentioned above or versions of Amplitube. There are also a number of other options available, including free amp simulation plugins, free cabinet simulation plugins, and commercial amp/cabinet simulation systems like Guitar Rig, Amplitube, Overloud TH2 (my personal favorite), MAGIX Vandal, and the list goes on. You can get very far and learn a lot using the free simulation plugins, and I still use them in my mixes. However, the simulation systems like POD Farm, Guitar Rig, Amplitube, Vandal, or TH2 are going to give you a lot more power and flexibility in one package, and usually consume less CPU than the free alternatives. I must insert a personal note here that Overloud TH2 (~$200 at the time of this writing) is AMAZING, and I have yet to find any similar package that delivers so much on its promise of mix-friendly, readily accessible, authentic sounds that are easily customizable.

Microphone: If all you are recording is electric instruments, you may not need a microphone. You can often get the same deal I got on the two MXL condenser mics at Guitar Center for ~$100, and this continues to be a winning combination of microphones. However, if you are looking for a single microphone that will give you the most utility of any microphone you can buy, get a Shure SM57 dynamic mic for <$100. This mic can capture so many different types of sounds, and people are so used to hearing it on commercial recordings, that it is a no-brainer. You won’t get all the “sparkle” and “breath” that you would out of a condenser mic, but it really can’t be beat. Also note that if you purchase a condenser microphone, you have to have a way to supply it with phantom power, either via your audio interface, a USB-to-XLR cable, or a hardware mixer.

One additional item that will make your life simpler and so much more enjoyable is software to turn your tablet device into a mixing console. Most modern full-featured DAWs can integrate with hardware control surfaces so that you don’t have to do your mixing (and recording/playback transport control) with your keyboard and mouse. You can purchase full-fledged hardware mixers that integrate with your DAW and communicate via MIDI if you prefer a real-world tactile feel, but such systems often cost in excess of $1000. Since I had already purchased a tablet – specifically an iPad – I looked into software that lets me use it as a control surface, communicating with the same MIDI protocols as the hardware mixers. I found a GREAT app called AC-7 Core, which I was able to pick up for $3 when it first came out; now it’s something like $7.99. This gives you AMAZING integration with any DAW capable of speaking the Mackie protocol. With this app, I no longer have to sit in front of my computer to record; I can control the recording and nearly every aspect of the tracks (level mixing, naming, automation) from my iPad (which is mounted to a stand). So, minus the price of the tablet, I was able to spend about $3 to get comparable functionality to a hardware mixer costing more than $1000! Performing mixing on the iPad is infinitely more ergonomic than using my mouse, and tons more “responsive” as well. I can say without question that this is the VERY BEST $3 I have ever spent, and it would have been the very best $8 if I had to buy it today.

I am not sure if similar products exist for other tablets, but if I had to choose between a $1000 hardware mixer and buying an iPad plus the $8 app for ~$1000, it would be a no-brainer. All in all, you can get up and running recording guitar-based music for about $100 if you purchase the right interface, familiarize yourself with the right software plugins, and select a DAW that is either bundled with your interface or freely available (and likely limited). Playback through your existing PC speakers and/or headphones can suffice when you are first starting off, and graduating through studio headphones ($100-$250) to a more “professional monitoring setup” ($500 and up) can be staged over time.

Capture and share some nice raw tracks of “rock and roll” instruments in their most typical roles.

Provide a means for listening to the different frequency ranges within each “tone type” that are “most important” for that sound.

Begin to describe the complexity behind audio engineering due to it being partly “software” (your brain). Most short-term goals for a track can be accomplished in more than one way from a technical standpoint, but your “software” will interpret the “solutions” differently. Most short-term goals that sound simple have more than one dimension; making a guitar sound “punchy” is usually not as simple as turning this frequency up and that one down. The sound of the tracks within the mix “fighting for dominance” in frequency ranges means the simplest “cut-through” EQ moves are often made on the OTHER tracks (not the one you are looking to affect). Furthermore, just getting the particular EQ curve right for a given sound is only part way there; you also need to capture and control the dynamics of a track within the confines of a particular frequency range. The most effective way to paint a picture for the mind is with CONTRAST or VARIETY, which push the mix from a “moment-in-time” EQ snapshot to a living, breathing entity. This is one area in which many “Impulse Response” systems ultimately fall short: they essentially define how an audio system responds to a varying frequency input at a constant volume, but don’t capture how the previous few moments of the mix interact with your brain. One good example of this is the introduction to “Cliffs of Dover” by Eric Johnson. The bombastic low E punch of the first note is what makes the follow-on bend sound so huge and razor-sharp.

Explain some of the known “quirks” of the human mind surrounding audio: more sensitive to boosts than cuts, auditory illusions (better by comparison, but not good; louder isn’t always better).

Discuss “beginning EQ” concepts and “best practices”: boost wide, cut narrow; boost to “change the sound”, cut to “improve the sound”. Recommendations for Q settings and dB of gain/attenuation to start the beginner’s ear on the right path.

Begin to discuss capturing, mixing and mastering workflow at high levels. Getting organized, defining your vision, separation of “stages” of music production, importance of taking breaks. (Goes back to the software thing).

Define the audio engineering terms that have THE MOST bearing on progress forward: dynamic range, fundamental frequency, psychoacoustics, transients, repetition/steady-state decisions, variety and “spontaneity”, mirror neurons.

Explain the things that need to be “properly proportioned” for a song to “shine its brightest”. This includes a digression into the “musical components” of rhythm, dynamics, relational pitch continuity, phrasing, tone and articulation, as they apply to each individual instrumentalist’s performance, to the band as a cohesive unit while “tracking”, to the song as a vehicle for creative expression and communication, to the mix and to a collection of mixed songs.

I will use this site as a way to capture what I have learned and am learning about producing audio. Tips, tricks, exercises, and techniques should be readily available and as easy to understand as possible. The goal is to offer the most valuable audio production knowledge in a way that a reader can quickly benefit from, explaining recording and EQ/mixing workflow and the “why” behind it.

This is my husband’s first blog. I can hardly believe that, considering he has been in Info Technology pretty much all his life. But now, alas, a blog. A blog all about his path learning how to record, mix and produce his awesome music. I hope you come and follow along in his experiences.