martes, 27 de mayo de 2014

Riding the Fader



Hello, I'm Carlos Devizia from Argentina.
Today I will explain the technique called "Riding the Fader".
Sometimes you have a performance that you like, but some parts are too loud and others are too quiet. This often happens with vocals. In this example, and for the safety of your ears, I won't sing. Instead, I'll demonstrate this technique on an instrumental performance. I'll use Zynewave Podium Free, but you can replicate this technique with your favourite DAW. You may encounter some differences from DAW to DAW, but the main concepts are the same.

The concept behind "riding the fader" is simple: when the volume of the sound goes up, we turn the fader down; when the volume goes down, we turn the fader up. Just like that. What we are actually doing is a kind of manual compression. In fact, every compressor does this kind of job: first it analyzes the signal, then it compensates the levels in the desired way.
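The idea can be sketched in a few lines of plain Python. This is not Podium's implementation, just a toy illustration: a hand-drawn gain envelope (pairs of sample index and gain, linearly interpolated) is applied to a mono signal, taming the loud half and lifting the quiet half. All sample values and automation points are invented.

```python
def ride_fader(samples, points):
    """Scale each sample by a gain envelope given as (index, gain) points,
    linearly interpolated between consecutive points."""
    out = []
    for i, s in enumerate(samples):
        gain = points[-1][1]  # fallback: hold the last gain value
        for (i0, g0), (i1, g1) in zip(points, points[1:]):
            if i0 <= i <= i1:
                # interpolate the gain within this envelope segment
                t = (i - i0) / (i1 - i0)
                gain = g0 + t * (g1 - g0)
                break
        out.append(s * gain)
    return out

# A loud first half and a quiet second half...
signal = [0.8, 0.8, 0.8, 0.8, 0.1, 0.1, 0.1, 0.1]
# ...ridden with a fader that starts low and ends high.
envelope = [(0, 0.5), (3, 0.5), (4, 4.0), (7, 4.0)]
print(ride_fader(signal, envelope))  # loud part tamed, quiet part lifted
```

The result is a signal whose loud and quiet passages end up much closer in level, which is exactly what riding the fader is for.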

OK, let's work! We have our audio in an audio track. It is a short clarinet performance with some noticeable differences in volume, partly due to the difference between the registers of the instrument, but also because I played with more strength in some parts than in others.

We can listen to the original performance.







We right-click on the track, select "Automate parameter", and then choose "Level".
 


This will create an automation child track attached to our clarinet track.




By double-clicking on the level track, we'll see the following:









This is our clarinet part, and we can begin automating here. Select the "pencil" tool and draw the automation envelope following the wave.






After doing that you may notice that the overall level is too low. You can correct it by selecting all the points (Ctrl + A) and dragging them with the arrow tool to your desired level. You can even move individual points if needed.









One of the good things about Zynewave Podium is that these automation changes do not affect the main fader of the track, so you can still increase or decrease the volume of the whole track by adjusting the gain fader.



In other DAWs this is not possible, and you'll see the fader move up and down as the music plays, so if you want to increase or decrease the overall volume of the track you'll be in trouble. To solve this, you can add a gain VST plugin (there are lots of them, some of them freeware, like the one shown below), and then ride the gain fader of the plugin, leaving the track fader free to adjust the global volume of the track.




After finishing our work we can listen to the performance, which now has a narrower dynamic range: the loud parts and the quiet parts are closer together.





Then we can normalize our audio.
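Normalization can be sketched the same way in plain Python: find the loudest sample and scale the whole signal so that it reaches the target peak (1.0 in linear terms, i.e. full scale). The sample values below are invented for illustration.

```python
def normalize(samples, peak=1.0):
    """Scale samples so the largest absolute value equals `peak`."""
    loudest = max(abs(s) for s in samples)
    if loudest == 0:
        return list(samples)  # pure silence: nothing to scale
    factor = peak / loudest
    return [s * factor for s in samples]

print(normalize([0.2, -0.5, 0.4]))  # the loudest sample (-0.5) becomes -1.0
```

Because riding the fader already evened out the levels, normalization now raises the whole performance without any single peak dominating.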



The conclusion of this small exercise is that this technique lets us manipulate volume levels in a simple way. And though it is most helpful when used as designed (the manual compression we talked about), it leaves the door open to experimentation: it would be interesting to ride the faders in unusual ways too.
Finally, it is important to point out that every time you alter audio you are making a decision. In this case, we should ask ourselves how much we want to ride the fader to even out the volume levels, since the more we even them out, the less dynamic range we will have in the end. Recent years have seen the rise of the "loudness war", with music losing dynamic range. I think it is a matter of balance, but this is up to every musician and every producer.




miércoles, 21 de mayo de 2014

Categories of Effects



Hello, I am Carlos Devizia, a musician from Argentina. Today I will write about categories of effects, giving a short definition of each one and providing some audio clips as examples of how they work. Please note that these examples are not intended to be musical; they aim to show the effect itself clearly. Also be aware that, depending on how you tweak the parameters of the effects processors, you may obtain drastically different results. And that's a great advantage in the creative field, if we know how to use it.
We can associate different effects with different properties of sound.

Dynamic Effects

These effects are related to amplitude, and they can control volume in different ways.
In this category we find compressors, limiters, gates and expanders.

Compressor
Essentially, this effect reduces the dynamic range of a piece by reducing the volume when it gets too loud. In this case, "too loud" is relative to a point called the threshold or ceiling. Everything that goes above it is reduced by a desired amount.

Limiter
This is basically a compressor with a fast attack and a high ratio.

Gate
Also known as a noise gate, this effect attenuates any signal below a determined point. We can also say that a gate basically allows a signal to flow only when it is above a certain level, stopping everything below that level.

Expander
This effect increases the difference between the loud parts and the quiet parts in an audio signal.
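The dynamics curves described above can be sketched as simple static functions of the signal level. This is a toy model in plain Python (real units add attack, release and make-up gain); the thresholds and ratios are invented for illustration.

```python
def compress(level, threshold=0.5, ratio=4.0):
    """Above the threshold, reduce the excess by `ratio`."""
    if level <= threshold:
        return level
    return threshold + (level - threshold) / ratio

def gate(level, threshold=0.1):
    """Let the signal through only when it is above the threshold."""
    return level if level >= threshold else 0.0

print(compress(0.9))  # 0.9 exceeds 0.5 by 0.4 -> reduced toward 0.6
print(gate(0.05))     # below the gate threshold -> silenced
```

A limiter would be the same `compress` function with a very high ratio, and an expander would do the opposite: increase, rather than reduce, the distance from the threshold.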

In the following example you'll hear the clean sound, and then the same sound processed with a compressor, a limiter, a gate and an expander (in that order).




Delay Effects

These add slight delays to the signal, and they are used to suggest the space the listener is in. In this category we find delays, flangers, phasers, choruses and reverbs.

Delay
This effect takes an audio signal and reproduces it a certain time later, mixing it with the original signal. The number and strength of the repetitions can be adjusted by the musician, producer or sound engineer.
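A minimal sketch of this idea in plain Python: mix each sample with a scaled copy of the signal from a fixed number of samples earlier. In a real DAW the delay time would be set in milliseconds and a feedback control would produce repeated echoes; the values here are invented.

```python
def delay(samples, time=2, strength=0.5):
    """Mix each sample with a scaled copy from `time` samples earlier."""
    out = []
    for i, s in enumerate(samples):
        # before `time` samples have elapsed there is nothing to echo
        echo = samples[i - time] if i >= time else 0.0
        out.append(s + strength * echo)
    return out

print(delay([1.0, 0.0, 0.0, 0.0]))  # the echo appears two samples later
```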

Flanger
This effect is achieved by taking two identical audio signals, leaving one of them intact while delaying the second one by a gradually changing period.

Phaser
It is very similar to the flanger, in that this effect takes two signals, leaving one untouched while processing the other with a small delay and altering its phase. When both signals are mixed, the frequencies that are out of phase cancel each other.

Chorus
Once again we have two similar signals. One of them is slightly delayed and usually modulated with an LFO. This gives the impression of a thicker sound and we can perceive some kind of movement in the resulting sound.

Reverb
This effect is related to the reflections of sound off various surfaces before it reaches the listener's ear. With our reverb units (hardware or software) we can re-create an existing space, enhance certain sounds (a vocal performance, for example) or create a totally weird ambience.

Now you'll hear the audio processed with delay, flanger, phaser, chorus and reverb (in that order).




Filter Effects

These effects control the timbre of sound. Among them we find high pass, low pass and band pass filters, and EQs.

High pass filter
This filter, as its name suggests, lets frequencies above a determined cutoff pass, attenuating those below it.

Low pass filter
It is exactly the opposite of the high pass filter. It lets frequencies below a certain point pass, attenuating those above it.


Band pass filter
It lets the frequencies within a determined band pass, attenuating those below or above that range.
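The pass filters above can be sketched with a one-pole smoother in plain Python: a low pass keeps the slow-moving average of the signal, and a high pass is simply the input minus that average. The coefficient `a` is an invented stand-in for the cutoff control, not any particular DAW's parameter.

```python
def low_pass(samples, a=0.5):
    """One-pole low pass: each output moves a fraction `a` toward the input."""
    out, state = [], 0.0
    for s in samples:
        state = state + a * (s - state)  # smooth toward the current sample
        out.append(state)
    return out

def high_pass(samples, a=0.5):
    """High pass as the residue the low pass removed."""
    return [s - lp for s, lp in zip(samples, low_pass(samples, a))]

# A constant (0 Hz) signal passes the low pass and is blocked by the high pass.
steady = [1.0] * 8
print(low_pass(steady)[-1])   # approaches 1.0
print(high_pass(steady)[-1])  # approaches 0.0
```

A band pass can then be sketched as a low pass applied to the output of a high pass, keeping only the frequencies between the two cutoffs.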

EQ
The EQ affects the timbre of the sound by enhancing or attenuating certain frequencies (boosting or cutting them). It is a vital part of music production. It can be used both as a corrective tool and as a creative tool. You can find different types of equalizers: graphic, parametric and paragraphic.

In the following example you´ll hear the audio processed with a high pass filter, a low pass filter, a band pass filter and an EQ filter (in that order).




Conclusion


As musicians, producers or sound engineers, it is vital for us to understand each category of effects, how they are produced and what impact they have on an audio source. This gives us the opportunity to use them in a creative way, knowing exactly what we are doing and achieving the results we want instead of random ones.

miércoles, 7 de mayo de 2014

Sound: Propagation, Amplitude, Frequency and Timbre



Welcome to this blog everybody! My name is Carlos Devizia and I am a musician from Argentina. You can find more about me and my music in this blog.
Today, I want to write about sound and its properties.

We hear sounds all around us, all the time. But why do we hear them, and how can we tell one sound from another?

Well, this is because sound has certain properties, and we'll talk about them here. We'll also see why it is important for us, as musicians, to know about them.

Propagation

First of all, sound needs a medium to travel through. Basically, sound is produced by a vibratory movement and transmitted through some medium to our ears. The medium can be many different things: air, water, metal. Some media are great conductors of sound and others are very poor ones. In a vacuum sound cannot travel; that's why in space no one can hear you scream :), as you probably know.


But, back to Earth: sound produces a movement, and this movement consists of two parts, compression and rarefaction. The movement thus produced travels from the origin of the sound towards our ears, following the same direction as the sound. This means that the motion of compression and rarefaction is parallel to the direction of the sound, not perpendicular to it.


Several factors determine the speed of sound. The medium itself has an influence: for example, sound travels faster through metal than through air. You have probably seen lots of movies in which someone puts an ear down to the railroad track to hear the train coming. The more elastic the medium is, the better the conduction of sound will be. On the contrary, soft and porous media are very bad conductors of sound.

In air, sound travels at a speed of about 340 meters per second. However, this is not an exact number. Factors like temperature or elevation have an effect on the speed of sound. It is estimated that for each degree the temperature increases, the speed of sound increases by 0.6 meters per second.

As rules of thumb, we say that the speed of sound is roughly:

340 meters/second


1 foot/millisecond


1 kilometer/ 3 seconds


1 mile/5 seconds
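The rules of thumb above translate directly into code. A classic use is estimating how far away a storm is from the delay between a lightning flash and its thunder, assuming a speed of roughly 340 meters per second.

```python
SPEED_OF_SOUND = 340.0  # meters per second in air, a rule-of-thumb value

def distance_meters(delay_seconds):
    """Distance covered by sound in the given time."""
    return SPEED_OF_SOUND * delay_seconds

# Thunder heard 3 seconds after the flash: about 1 kilometer away,
# matching the "1 kilometer / 3 seconds" rule above.
print(distance_meters(3.0))
```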



As musicians, producers or sound engineers we can take advantage of this characteristic of sound. When you listen to a sound, you can perceive its distance and origin. Hearing someone speak in a low voice beside us is not the same as hearing someone speak the same way several meters away. We can also perceive where the origin of the sound is located. Is it right in front of us? Behind us? Slightly to the left? Completely to the right?

By using different effects we can reproduce a natural environment or create a totally strange atmosphere, working on the propagation of sound and the way we perceive it.

The effects we'll use to achieve this goal are flanger, delay and reverb.

Amplitude

There are several types of waves. As we said before, they move in the direction of the sound, but we represent them by different means in order to visualize them. This is important for understanding some sound phenomena. In the following examples we'll use the image of an oscilloscope, which is a valuable tool for analyzing sound.










We said that sound involves a process of compression and rarefaction. The intensity of this process is what we call amplitude. Our perception of amplitude is loudness. It is important to point out that, though these words may seem like synonyms, they are not.

In practical terms, the greater the amplitude, the louder the sound we perceive (compare the graphic examples of a soft sound and a loud sound and listen to them).












To measure amplitude we use a unit called the decibel (dB), which is, in fact, a relative unit. Now, this is a tricky word, as it is used in different contexts. If we are talking about decibels in the air, we talk of dB SPL (decibels of sound pressure level). 0 dB is the quietest sound we can hear, and values go up from there. The point at which our ears suffer is called the "threshold of pain".

Some examples of approximate dB levels:

Whispers - 12 dB


Soft music - 25 dB


Quiet street - 43 dB


Conversation - 60 dB


Car - 68 dB


A factory - 75 dB


A traffic jam - 83 dB


Powerful Hi-Fi system - 90 dB


Symphonic orchestra at full level - 102 dB


Thunder - 109 dB


Rock concert - 115 dB


Plane - 120 dB


Threshold of pain - 130 dB

But some lines above we said that "decibel" was a tricky word, and this is because it is also used in the digital domain as dBFS (decibels relative to full scale). Here 0 dBFS is the loudest level the computer can represent, and from there the values go down as negative numbers.
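The relation between linear sample amplitude and dBFS can be sketched with the standard conversion formula, 20 times the base-10 logarithm of the amplitude: full scale (1.0) is 0 dBFS and everything quieter is negative.

```python
import math

def to_dbfs(amplitude):
    """Linear amplitude (0 < amplitude <= 1.0) to decibels full scale."""
    return 20.0 * math.log10(amplitude)

print(to_dbfs(1.0))  # full scale -> 0.0 dBFS
print(to_dbfs(0.5))  # half amplitude -> about -6 dBFS
```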

It is useful to understand the concept of amplitude in relation to audio gear, for example. Here we find what we call dynamic range, which is the range between the quietest sound a device can handle (below that level there is only hiss) and the loudest level the device is capable of dealing with (above that there is distortion, crackling and unwanted upper harmonics).

It is also useful to understand the concept of amplitude in order to use it creatively. Once again we can use effects to process amplitude in a creative way: our tools here will be compressors, limiters, gates and expanders.

Frequency and timbre

As we said, there are waves of compression and rarefaction. The number of those waves per unit of time determines the frequency of a sound. We measure it using a unit called the hertz; one hertz is one cycle per second.

We perceive this as pitch. Once again, like amplitude and loudness, frequency is a measurable term, while pitch is our perception of that frequency.

The higher the frequency, the higher the pitch we perceive. So a sound of 440 hertz will sound lower than a sound of 8000 hertz, for example.
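To make frequency concrete, here is a small sketch that generates samples of a sine wave at a given frequency and sample rate; 440 hertz is the common tuning A. The sample rate and sample count below are arbitrary choices for illustration.

```python
import math

def sine_wave(freq_hz, sample_rate=44100, n_samples=100):
    """Samples of a sine wave at `freq_hz`, taken `sample_rate` times per second."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_samples)]

wave = sine_wave(440.0)
print(wave[0])  # a sine wave starts at 0.0
```

A higher `freq_hz` packs more cycles into the same number of samples, which is exactly the difference the oscilloscope images show.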

In the oscilloscope images we can see a low frequency wave and a higher frequency wave. You'll notice that the amplitude does not change. This is because the properties of sound are independent of one another.














Usually we say that human beings perceive frequencies between 20 hertz and 20,000 hertz, which is not totally true. More likely, we are able to perceive up to about 18,000 hertz. And not all ears behave in the same way. We tend to lose perception of high frequencies as we age. Also, women tend to hear high frequencies better than men do.



Now, let's think for a moment: if two sounds were played at the same frequency with the same amplitude, we should perceive them in the same way. But here another element makes its appearance: timbre. When a sound is produced, we hear a fundamental frequency, but also lots of other frequencies that are present in that sound. These secondary frequencies are called partials, and if they relate to the fundamental in simple mathematical ratios they are called harmonics. These frequencies colorize the wave and make it less pure, giving each sound its own voice. This is why a piano sounds like it does, a cello has its own sound, and so on. Let's take a look at the way the oscilloscope shows the sound of some instruments.


























Knowing this is useful, for example, to wind players, who can obtain new sounds by emphasizing harmonics while they play, and thus obtain different notes from a single fingering position.

We can use different filters to work on timbre and frequency. EQ is our main tool in this respect. We can use it in two ways: to correct audio files that need some changes in their frequencies to sound more natural, boosting some of them and cutting others; or we can use the EQ creatively, boosting and cutting frequencies to achieve results that are different from anything heard in the real world.

Conclusion:

Knowledge of sound and its properties will benefit our creative work as musicians. Sometimes musicians tend to forget that they are working with sound. Knowing its properties will help us to manipulate it creatively, to reproduce a real environment, to create a completely strange atmosphere, to understand why the song we are working on does not sound the way we want, and to choose the proper effect in every situation.



Instrument photos taken from Wikimedia Commons:
Drums: Stephan Czuratis 
Nylon Guitar: James Anderson
Clarinet: Ratigan
Alto Sax: Jana C.