Audio production Part 4 - Filters and effects


Audio and Music Production - part 4

(Original version written by Graham Morrison for Linux Format magazine issue 66.)


Mastering an audio project involves fixing all the minor inconsistencies. We take to the controls for the final stage in the process.


For the penultimate instalment of this series of Linux audio tutorials, we're fittingly going to cover the final stages of a typical audio project. This is where all the disparate components of a composition are made to play nicely with each other, massaging their respective frequency ranges and squeezing a little more energy out of some parts while restraining the energy in others. It's what's known in the industry as mastering, and it's usually performed by highly skilled engineers. In fact, this is the only way to professionally finish a piece, as a fresh set of objective ears can make the difference between success and failure. But it's still worthwhile understanding a little more about the mastering process, and with virtual studios it is, more often than not, becoming part of the compositional stage.

Mastering uses a few essential audio processes. We briefly covered the basics of these effects during the first audio tutorial, but it's worth covering them in more detail. There are broad divisions in the way that effects are used and implemented: some are purely utilitarian, while others are creative, but the majority fall somewhere in between. According to how they act on the sound, they can be grouped into roughly three broad categories: delays, filters and dynamics. Obviously, the best way to get to grips with the various effects available is to use them, and currently the application that provides the best environment for effects processing is Ardour.


It's all an illusion

It is true that any Jack-compliant application can be used for inserting effects into the audio chain, but Ardour has been built from the ground up to take maximum advantage of this interconnectivity. This means that effects can be inserted manually into each channel, or what are called 'sends' can be created in the same place. A send creates a separate Jack channel through which a track's audio can be routed to an external program and re-inserted into the track after processing. This is the same as working with external processors in a studio, and it can still be used in exactly that way by connecting the Jack channel to physical input and output ports.

Ardour is also better suited to mastering a project, thanks mainly to its ability to perform the same trick with effect sends from the master bus right at the end of Ardour's audio chain. This makes it perfect for the final optimizations that often need to be made to a piece before it's suitable to burn onto a CD. To get the same functionality out of Rosegarden would involve connecting the mastering effects manually into the final audio stream from Jack, with Rosegarden having neither knowledge of nor control over what happens to this stream once it leaves the application.

Effects are inserted into Ardour on a track-by-track basis, with a working style very similar to that of Rosegarden. In fact, they both feature functionally similar mixing windows, which is no surprise when they're both designed to replace the hardware mixing desk. The main difference is that Ardour's mixer features white space both below and above a channel's fader for inserting effects. Rosegarden only supports effects added after the fader, and while it may not sound like much, there is a distinction between the two. As the fader controls the volume level of a track, effects inserted before the fader process the audio at its original level, before it can be amplified or attenuated, while those inserted afterwards process the audio post-fader. Depending on the effect, this can make either a little or a lot of difference, but we'll get to that.
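To see why the distinction matters, here's a minimal Python sketch (nothing to do with Ardour's internals) using a crude clipping effect as the insert. With the fader pulled down, a level-dependent effect behaves completely differently depending on which side of the fader it sits:

    import numpy as np

    rate = 48000
    signal = 0.9 * np.sin(2 * np.pi * 440 * np.arange(rate) / rate)
    fader_gain = 0.25  # fader pulled well down

    def drive(x):
        """A level-dependent effect: hard clipping at +/-0.5."""
        return np.clip(x, -0.5, 0.5)

    pre_fader = drive(signal) * fader_gain    # effect hears the full-level signal
    post_fader = drive(signal * fader_gain)   # effect hears the attenuated signal

    # Pre-fader, the 0.9 peak is clipped hard; post-fader, the signal never
    # reaches the clipping point, so the effect does nothing at all.
    print(np.abs(pre_fader).max(), np.abs(post_fader).max())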


Frequency Effects

Multi-band mayhem with FreqTweak

For generating a modest amount of audio mayhem, try passing your audio through FreqTweak, a graphical frequency-dependent filter. It also has more practical uses, such as monitoring audio, as both the input and output audio streams can be viewed as stereo spectrograms.

The colourful graphs at the top and bottom of FreqTweak's main window plot the volume level of each frequency across a horizontal spectrum of colour: blue represents the lower levels, red hues furnish the middle, and the scale ends predictably with violet. Time runs on the vertical axis, so as the audio passes through the spectrogram, patterns of frequency look like rainbows at night.

FreqTweak is basically an effects processor; the difference is that each effect is restricted to the frequencies drawn onto its histogram by the user. The first effect, for example, is a filter, and it removes frequencies from the audio path according to the levels drawn into its corresponding histogram.
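Stripped of FreqTweak's interface, the underlying idea is just per-bin gain applied to an FFT frame. A rough numpy sketch (not FreqTweak's actual code):

    import numpy as np

    def freq_filter(block, gains):
        """Apply a per-bin gain curve to one block of audio -- roughly what
        FreqTweak's filter module does to each FFT frame."""
        spectrum = np.fft.rfft(block)
        return np.fft.irfft(spectrum * gains, n=len(block))

    rate = 44100
    t = np.arange(1024) / rate
    block = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 5000 * t)

    gains = np.ones(513)                    # 1024 samples -> 513 rfft bins
    gains[int(2000 * 1024 / rate):] = 0.0   # 'draw' a low-pass at ~2 kHz
    filtered = freq_filter(block, gains)    # the 5 kHz partial is gone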


Once audio has been loaded and dropped onto its own track in Ardour, effects can be assigned to the track from either the mixer window or the channel strip in the arrange window. This is done by right-clicking in the small box below the channel's fader (the post-fader slot); the opposing box above the fader is for pre-fader effects. LADSPA-compatible effects can be inserted by selecting "New Plugin", and a good first one to try is a delay.


Delays, filters and dynamics

Delay effects don't just work on sound in the immediately obvious way. Many processes rely on an array of tiny delays to generate a more complex effect than simply playing back a section of audio at a later time. The most widely used of these in audio production is reverb (an abbreviation of reverberation), which attempts to create an acoustic environment that differs from that of the source material. Audio in a recording environment is usually recorded 'dry', that is, without any external ambience or effect. A dry signal is far more versatile, and doesn't impose its recording environment on other tracks.

The reason that reverb is considered a delay is that it's usually nothing more than calculated reflections and filters from a mathematical model of a room. In a canyon-sized 'room' you'd get an obvious echo, but with a smaller environment you get an almost imperceptible impression of space. As a result, these algorithms are computationally intensive, and it's only relatively recently that native processing has been able to compete with external DSP hardware. Notable reverb effects for Linux include Freeverb and GVerb, and they're both perfect for augmenting a synth such as one created with AMS. Other notable delay-based effects include the phaser and the flanger, both of which use discrete delays to create similar effects. Flanging was famously invented to accommodate John Lennon's dislike of double-tracking his vocals, using two tape machines and varying the playback speed of one by holding a finger on the machine's reel flange to produce timing fluctuations.
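For a flavour of how little a delay-based effect actually needs, here's a toy flanger in Python, sweeping a short delay with a slow LFO. It's a sketch only: it uses an integer-sample delay, where real implementations interpolate between samples:

    import numpy as np

    def flanger(signal, rate, max_delay_ms=3.0, lfo_hz=0.5, depth=0.7):
        """Mix the signal with a copy whose short delay is swept by an LFO --
        the digital stand-in for a finger on the tape reel's flange."""
        max_delay = int(rate * max_delay_ms / 1000)
        n = np.arange(len(signal))
        # the delay sweeps between 0 and max_delay samples
        delay = (max_delay / 2) * (1 + np.sin(2 * np.pi * lfo_hz * n / rate))
        idx = np.maximum(n - delay.astype(int), 0)
        return signal + depth * signal[idx]

    rate = 44100
    noise = np.random.uniform(-1, 1, rate)  # one second of white noise
    wet = flanger(noise, rate)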

The main function for filters, at least at the mastering stage, is equalisation. In a similar way to those gigantic graphic equalisers on the front of a 1980s hi-fi, they can either amplify or reduce the audio signal at varying frequencies. The difference in the studio is that the frequency range can usually be configured, as can the range and width of the effect. Early digital designs were aimed at creating the perfect equaliser, but it was soon discovered that there was always something missing from their character, in the same way as with the filters for synthesisers. This has resulted in modern designs trying to incorporate some of the imperfections of those older hardware devices.

The least obvious, but perhaps most important, group of effects are those that change the dynamics of the audio. The simplest such device is a noise gate, which mutes the sound when the level falls below a certain threshold; this is especially useful for eliminating the background noise from a typical guitar amplifier. Another dynamic effect is the 'compressor', otherwise known as 'the bane of modern music'. When misused, and it often is, it forces a uniform signal level across a whole track. Music shouldn't be this way, but that doesn't mean a compressor doesn't have its uses. In fact, it's pretty much the only way to bring out some of the detail that often lies just under the surface of a recording. This is most obvious when recording vocals: a look at the waveform shows that there's often a wide variation in the level of the signal (imperceptibly compensated for when listened to in isolation), which can make the track difficult to hear when combined with the other tracks in a project. That's where the compressor comes in. As its name suggests, it compresses the dynamic range (the gap between the loudest and the quietest parts), generating a more consistent level. In simple terms, it attenuates the loudest parts while amplifying the quieter ones.
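The gate, at least, is simple enough to sketch. This illustrative Python version (not any particular plugin) follows a crude RMS envelope and mutes everything under the threshold; a real gate would also ramp smoothly in and out to avoid clicks:

    import numpy as np

    def noise_gate(signal, rate, threshold=0.02, window_ms=10.0):
        """Mute the signal wherever its short-term RMS level falls below
        the threshold -- e.g. amp hiss between guitar phrases."""
        window = max(1, int(rate * window_ms / 1000))
        padded = np.pad(signal ** 2, (window - 1, 0), mode="edge")
        rms = np.sqrt(np.convolve(padded, np.ones(window) / window, mode="valid"))
        gated = signal.copy()
        gated[rms < threshold] = 0.0   # hard mute; real gates fade in and out
        return gated

    rate = 44100
    audio = 0.005 * np.random.randn(rate)        # quiet background hiss
    audio[rate // 2:rate // 2 + 2000] += 0.5     # a short loud phrase
    clean = noise_gate(audio, rate)              # hiss muted, phrase kept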

The whole point of covering these processes in some detail is that they're all involved in the mastering stage of a project. Using a compressor with equalisation on a vocal track is often essential, and the same is true, to a greater or lesser extent, of the other components in a project. Bass tracks rely on compression to create the pumping punctuation of a typical dance track as much as compression on a vocal can change the emphasis of a performance, so it's not all about volume. If you've managed to take all this in, we can now move on to the mastering stage. Linux features a dedicated mastering application which, in keeping with other dreadfully named Linux audio utilities, is called 'Jamin'. It's basically a hard-wired group of mastering effects, arranged as you might typically use them for finalizing a project. It's meant to be the last process in the audio chain before the track is burned to disc (actually, the very last should be a dither algorithm, for making best use of your native 32-bit sound files when they're converted to 16 bits for CD).


Master and servant

In Ardour, there is a section on the master bus for processors that need to occupy the final stage of an audio chain: the area below the master channel's fader (labelled 'post-fader inserts, sends & plugins'). As Jamin is a standalone application rather than a plugin, Jack connections need to be made from this point to the Jamin process. Right-clicking in the white box below the master channel's fader presents you with the option of inserting a new plugin, a send or an insert. The distinction between the three is that a plugin is for internal LADSPA effects, a send simply pipes the output from the channel to an external process, while an insert does the same but expects the signal to be directed back into the channel. It's this last one we need, so that the output from Jamin comes back to Ardour's master channel.


Taking Control

External controllers and Linux

Getting the volume levels right for a project with 11 tracks of audio is difficult at the best of times, but moving virtual sliders with a mouse button makes it even harder. Mixing consoles give an engineer instant access to a track's volume through its corresponding fader, and unlike a mouse, several can be moved at the same time. The only way to achieve the same thing with a mouse is to use automation, recording the movement of each channel's fader over several passes of the piece of music. There are two possible solutions to this problem. One is to output each individual track from a well-specified hardware interface into an external mixer. The other is to use an external hardware controller that's interfaced to the computer in some way.

While you might think that these controllers are the domain of other, more popular operating systems, this isn't necessarily so. Because most of them use MIDI as their transport protocol, nearly any generic interface is compatible.

What's more, there are now controllers that accept incoming MIDI, which allows them to update their control surfaces to reflect the on-screen version.
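At the protocol level there's nothing exotic going on: a fader just sends (and, on motorised surfaces, receives) MIDI control-change messages. A minimal Python sketch, assuming the third-party mido library and entirely hypothetical CC assignments:

    import mido  # third-party MIDI library; any MIDI API would do

    FADER_CCS = {7: "track 1", 8: "track 2"}  # hypothetical CC-to-track map

    with mido.open_input() as port:           # first available MIDI input
        for msg in port:
            if msg.type == "control_change" and msg.control in FADER_CCS:
                gain = msg.value / 127.0      # 7-bit MIDI value -> 0.0..1.0
                print(f"{FADER_CCS[msg.control]} fader -> {gain:.2f}")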


Once an insert has been created, Jamin itself needs to be started. By default, it automatically connects itself to the hardware outputs in Jack. This will create a problem on playback, as you'd get a duplication of the signal, one copy from Ardour, the other from Jamin. To prevent this, just disconnect Jamin's output from within qjackctl's connection window. The next step is to wire Jamin into the audio chain. While this can be done from qjackctl's connection window, Ardour has a considerably simpler interface and is often far easier to use than the spaghetti wiring you often get with qjackctl. To make the connection from within Ardour, right-click on the newly created insert and select 'Edit'. This brings up Ardour's own connection window, with the inputs on the left and the outputs on the right. At the moment, there's only a single output enabled - the equivalent of sending audio in mono. This is useful for some effects, but would be catastrophic for the master track, so to add the other channel, just click on 'Add Output'. To make the connection, select the Jamin tab for both the input and output channels, then click on the Jamin ports as they appear in the list. This should move them to the box on the left of each section and at the same time make the connection within Jack. Finally, the insert needs to be enabled, either from the menu or by middle-clicking on the insert (this should remove the brackets).
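The same wiring can also be scripted. Here's a sketch using the third-party JACK-Client Python package; the port names are purely illustrative, since Ardour and Jamin versions name their ports differently, so check the real ones in qjackctl's connection window first:

    import jack  # the JACK-Client Python package, an assumption here

    client = jack.Client("wiring")

    # All port names below are illustrative; check yours in qjackctl.
    client.connect("ardour:master/insert 1/out 1", "jamin:in_L")
    client.connect("ardour:master/insert 1/out 2", "jamin:in_R")
    client.connect("jamin:out_L", "ardour:master/insert 1/in 1")
    client.connect("jamin:out_R", "ardour:master/insert 1/in 2")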

If all has gone well, when you next play through the project the output should be routed first to Jamin and from there back into Ardour. You can tell that Jamin is working when there's lots of activity in its window. That window is a little intimidating at first, but it's really only providing access to a combination of three compressors and a parametric equaliser, plus a couple of boost and limit sliders. Why three compressors? Well, each is frequency-dependent, meaning each processes a separate frequency range. To help with this, the upper area of Jamin's display houses a spectrum analyser, which maps volume on the Y axis and frequency on the X. Lower frequencies are on the left, higher on the right, and the frequency at the cursor position is displayed underneath the window.

Parametric equalisation can be edited from either the spectrum analyser or the more familiar slider interface presented under the '30 band EQ' tab. From the spectrum analyser, the equalisation curve can either be drawn by hand or edited by dragging the curve anchor points shown in yellow. Points above the middle line boost the signal while those below reduce it, and the width of the boost can be altered by broadening the curve. A tight notch at 50 or 60 Hz, for example, could reduce hum from an electrical supply, a broad gain at around 2-5 kHz would lift a vocal slightly, and further gain in the 15 kHz+ region can introduce an almost imperceptible 'air' to a recording.
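Those moves translate directly into standard peaking-EQ maths. Here's a Python sketch using the widely published RBJ 'cookbook' biquad coefficients; it's nothing Jamin-specific, just the textbook filter:

    import numpy as np

    def peaking_eq(x, rate, f0, gain_db, q=1.0):
        """RBJ-cookbook peaking EQ: boost (positive gain_db) or cut
        (negative gain_db) a band centred on f0."""
        a_lin = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / rate
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
        a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
        b, a = b / a[0], a / a[0]
        xp = np.concatenate(([0.0, 0.0], x))   # two samples of history
        y = np.zeros(len(xp))
        for n in range(2, len(xp)):            # direct-form difference equation
            y[n] = (b[0] * xp[n] + b[1] * xp[n - 1] + b[2] * xp[n - 2]
                    - a[1] * y[n - 1] - a[2] * y[n - 2])
        return y[2:]

    rate = 44100
    x = np.random.uniform(-1, 1, rate)
    x = peaking_eq(x, rate, 50, -20, q=8.0)   # tight notch on mains hum
    x = peaking_eq(x, rate, 3000, 3, q=0.8)   # broad lift around 3 kHz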


Compressed Ardour

The green and red vertical bars are the 'crossover' points for the three compressors. Any frequencies that fall between the left border and the green bar are sent to the low (or bass) compressor, those between the two bars go to the mid-range compressor, and the remaining area to the right of the red bar is sent through the high-frequency compressor. Each compressor shares the same controls, labelled ARTrKM across the top of each section. These letters represent attack, release, threshold, ratio, knee and makeup gain, and they're common to nearly every compressor.

Beneath each compressor's sliders are two level indicators: the top one displays the level of that compressor's frequency range, and the lower one shows the gain reduction being applied to it. All these elements are brought together in the curves shown in the corresponding compressor tabs. These are basically a graphical representation of the parameters below, and show the level of attenuation for any given input level. Unaltered volume would be a straight line from the lower-left corner to the right-hand side of the horizontal zero-gain line (values above this line represent gain).

In a typical session, you would move the threshold to the point where the gain reduction indicator starts to bounce, then alter the attack and release depending on the material and the frequency range. Slower material obviously benefits from slower envelopes, while punchier material needs the faster recovery times of a quicker envelope. It's important to hear the effect of each change, and this is made much easier using the solo switch, which isolates the audio that's sent to the current compressor. The most important parameter, though, is the ratio: as you can see when changing this value on the graph, it sets the relationship between the input volume and the amount of attenuation.
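The curve itself is simple to compute. Here's a sketch of the static transfer function in Python, with hypothetical settings, ignoring the knee and the attack/release smoothing:

    import numpy as np

    def comp_curve(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
        """Static compressor curve: below the threshold, output follows input
        (the straight line); above it, each extra dB in gives 1/ratio dB out.
        Knee smoothing and attack/release envelopes are omitted."""
        over = np.maximum(level_db - threshold_db, 0.0)
        return level_db - over * (1.0 - 1.0 / ratio) + makeup_db

    for level in (-40.0, -20.0, -10.0, 0.0):
        print(f"in {level:6.1f} dB -> out {comp_curve(level):6.1f} dB")

With a 4:1 ratio, the last 10 dB rise in input (from -10 to 0 dB) yields only a 2.5 dB rise in output, which is exactly the flattening you see in the tab's graph.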

Once the project has been tweaked within Jamin, the next thing to do is 'render', or record, the final output to an audio file. This makes it relatively easy to either burn the track to a CD or upload your masterpiece to a website. From Ardour, open the Export Session window (Session->Export->Export Session to an audio file); the first thing that needs to be changed is the filename. The rest is project-dependent; for CD you would obviously need to make sure that the sample format is set to 16 bits and the frequency to 44.1 kHz.


The show must go on

If you have been working with high sample rates and bit depths (and as Ardour uses 32-bit floats for internal processing, you probably have), the output quality can be improved by using dither. Dither in audio works in exactly the same way as dither in video, and is a great way of hiding the imperfections of digital stepping behind a veil of noise. For best results, try setting the dither option to Shaped Noise, though depending on your material one of the other dither algorithms may be better suited. Finally, make sure the master outputs are selected in the Output section and click on 'Export'. Ardour will then run through the whole project and render the output to an audio file.
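To make the principle concrete, here's a sketch of plain triangular (TPDF) dither before 16-bit quantisation. Ardour's 'Shaped Noise' option additionally filters that noise towards less audible frequencies, which this doesn't attempt:

    import numpy as np

    def dither_to_16bit(x):
        """Quantise float samples (-1.0..1.0) to 16-bit integers, adding
        TPDF dither first so that quantisation error turns into benign,
        signal-independent noise instead of audible stepping."""
        scale = 2 ** 15 - 1
        tpdf = (np.random.uniform(-0.5, 0.5, len(x))
                + np.random.uniform(-0.5, 0.5, len(x)))
        quantised = np.round(x * scale + tpdf)
        return np.clip(quantised, -scale - 1, scale).astype(np.int16)

    rate = 44100
    quiet = 0.0001 * np.sin(2 * np.pi * 440 * np.arange(rate) / rate)
    samples = dither_to_16bit(quiet)   # the tone survives below 1 LSB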

And that's it. You should now have the finished product in the shape of an audio file on your system. Over the last few pages, this series of tutorials has covered the basics of Linux audio, starting with the fundamentals of ALSA and Jack, followed by synthesis and sound generation, before reaching the final stages of mastering a project. While in many ways this has been only a brief overview of the many audio possibilities available to Linux users, the main intention has been to whet your appetite enough to push you into actually doing something, and if you do, you know where you can send the results!
