noamkrief

newbie - dual mono vs stereo

Jan 23, 2014 12:43 AM

Tags: #audition #stereo #mono

Hi everyone.

 

Today I got myself two more microphones to record my piano (a total of 4 mics).

 

I also used Audition for the first time and ran into a strange situation:

 

In multitrack view, when I record 2 microphones as two separate mono tracks, I get a very "thin" sound out of my piano. I think it is because the sound reaching each microphone is different. The farther I space out the microphones, the more I notice that the sound is not 100% synchronized. It's almost an echo effect, probably a 1 ms delay between each microphone. This makes sense.

 

I thought that's what stereo was all about. 2 different sound sources.

 

But when I select track 1 to be a stereo track, I record the same 2 mics at the same mic placement, and the sound becomes much fuller, warmer, and in sync.

 

 

Can anyone make sense of this for me? Is Adobe Audition synchronizing the time delay when recording stereo vs recording 2 separate monos?

 

Hope my question makes sense, and thank you in advance for any replies.

 

Noam

 
Replies
  • SteveG(AudioMasters)
    5,610 posts
    Oct 26, 2006
    Jan 23, 2014 2:41 AM   in reply to noamkrief

    noamkrief wrote:

     

     

    In multitrack view, when I record 2 microphones as two separate mono tracks, I get a very "thin" sound out of my piano. I think it is because the sound reaching each microphone is different. The farther I space out the microphones, the more I notice that the sound is not 100% synchronized. It's almost an echo effect, probably a 1 ms delay between each microphone. This makes sense.

     

    I thought that's what stereo was all about. 2 different sound sources.

    It is - although the science of stereo is a good deal more complicated than that. But I think that your problem may not be quite what you think...

     

    If you record two mono tracks from microphones that are very close to each other and you get a really thin sound, then it's more likely that the sound from each mic, added together, is cancelling the other one out. In other words, you may have the mics' polarity reversed with respect to each other: for the same sound source, one mic's signal is going positive while the other goes negative. This would give rise to exactly what you're experiencing - the further apart the mics get, the more difference there is, and the less cancellation there is.

     

    In terms of the time delay, you're not far out. Generally we reckon that it takes sound about a millisecond per foot to travel in air, and it doesn't take much more than this to make the signals sufficiently different from each other not to cancel out. It's still not going to give you a representative sound though - the mics really ought to be in phase with each other. With a balanced mic connection it simply means that the balanced pair of cables inside the shield is connected one way around on one mic and the other way around on the other.
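    Steve's point about cancellation and the roughly millisecond-per-foot delay can be sketched numerically. This is purely an illustration in NumPy (nothing Audition does internally); the 440 Hz test tone and 48 kHz sample rate are made-up examples:

```python
import numpy as np

# One second of a 440 Hz tone standing in for a piano partial.
sr = 48000
t = np.arange(sr) / sr
mic_a = np.sin(2 * np.pi * 440 * t)

# The same signal picked up by a second mic wired with reversed polarity.
mic_b = -mic_a

# A mono sum of the two tracks cancels completely.
summed = mic_a + mic_b
print(np.max(np.abs(summed)))        # prints 0.0 - total cancellation

# Spacing the mics about a foot apart adds roughly 1 ms of arrival-time
# difference, which breaks the perfect cancellation (though it still
# leaves frequency-dependent dips, i.e. comb filtering).
delay_samples = int(sr * 0.001)      # 1 ms at 48 kHz
mic_b_spaced = -np.roll(mic_a, delay_samples)
print(np.max(np.abs(mic_a + mic_b_spaced)))  # clearly nonzero
```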

     

    Is there an easy way to find out? Yes there is, and it's gloriously simple. Assuming that you're using CS6 or CC, then there's a switch at the top of the mixer channel for each track (circle with a stroke through it) marked 'polarity reverse'. Flick this for one channel, and if the results suddenly magically improve, then that's your problem.

     

    How many mics you need to get a good piano sound depends entirely on the piano and the room it's in. With a decent grand in a resonant room, I probably wouldn't use more than two anyway as a stereo pair - but I'd spend some time making sure that they were in the right place. A lot of people use two pairs, with the second pair in a line with the first, but further away. If you don't have them in a line, then you are likely to experience time difference cancellation in the stereo field - but this is very room-dependent. As far as recording in multitrack mode is concerned, if you have a stereo pair (wired correctly in phase), then record this as such - as a stereo track.

     

    Is Adobe Audition synchronizing the time delay when recording stereo vs recording 2 separate monos?

    No, Audition doesn't do this. It only ever does what you tell it to.

     
  • Jan 23, 2014 4:57 AM   in reply to SteveG(AudioMasters)

    Silly question, but when you had two mono tracks, did you pan one hard left and the other hard right? If you do have a phase problem, leaving them both mono in the middle would show up the problem far worse than panned tracks or a stereo track.

     
  • SteveG(AudioMasters)
    5,610 posts
    Oct 26, 2006
    Jan 24, 2014 2:53 AM   in reply to noamkrief

    noamkrief wrote:

     

    Steve, I am using Adobe Audition V3 so I didn't see the option to change the polarity.

    Ah, different mixer and I don't have that installed anywhere at the moment. The alternative approach that works with all versions is to invert one track in Edit view - I'm pretty sure that in the main effects menu there's an 'Invert' option, and that will do what's necessary. Just double-click on the track in multitrack, it will open in Edit view, you invert the track and then click on multitrack again - it will now be inverted.
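    In sample terms, 'Invert' just flips the sign of every sample. A tiny sketch of the concept (illustrative only, not Audition's actual code - the sample values here are made up):

```python
import numpy as np

# A few hypothetical sample values from one track.
track = np.array([0.5, -0.25, 0.1, -0.9])

# Polarity inversion: multiply every sample by -1.
inverted = -track
print(inverted)    # each sample sign-flipped
```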

     

     

    So if I understand it right, stereo is like a 3D image. Each eye sees differently, but if each eye saw its own image and, in addition, the other eye's image, it would be a blurred image.

     

    So the left ear has to hear 1 mono mic, and the right ear has to hear the 2nd mono mic. Recording in stereo creates that bias automatically.

     

    If I am correct, it leaves me thinking - then how can a stereo recording sound good from speakers? You cannot guarantee that the left speaker's sound will only be heard by your left ear and the right speaker's by your right ear.

     

    And what happens to stereo sound when played through a single speaker playback device? Does it combine both sides? Will we have the same blurred, delayed, thin effect?

    Well I did say that it's more complicated, and you're beginning to ask the pertinent questions...

     

    Reams of stuff have been written about this, and it's going to be difficult to digest this down to a single post, and even harder because it's not possible to illustrate it. So what you'll get is a rather limited set of generalisations, I'm afraid! First thing you have to realise is that stereo from loudspeakers is a very limited illusion - certainly not 3D, because you can't record any depth information from just two microphones that speakers could reproduce - essentially air mixing will destroy that information if it's not presented to the ears directly (although it's not quite that simple in the real world). If you put two mics somewhere near your ears, record something, and then reproduce the results from them into a pair of headphones, you remove all that air mixing, and just present the results from the mics to your ears. And it will sound as though it has depth, and you may even hear things coming from behind you.

     

    The only reason that this works is because your head is present when you make the recording, and the masking effect of this on the sound you hear is inevitably encoded with the recorded information. Everybody's head and ears are different, and have a different effect on the recorded sound, so what reproduces correctly for you won't necessarily work for anybody else - you have a unique Head-Related Transfer Function (HRTF). This method of recording is called Binaural recording, and it isn't really 'stereo' as such, although the terms are often mixed up. The Wiki article about it here is quite accurate, and has some other useful links in it. You can sort-of fix binaural recordings to reproduce over loudspeakers by adding 'crosstalk' to the signals, but that's not the best way to record for them, quite frankly - the mic positioning is quite inappropriate.

     

    Loudspeaker-reproduced stereo relies on both time and phase differences detected by your ears, to work at all. And yes, it really is both. One huge problem with it is that you can't put the mics where your ears are and expect reasonable results - because your ears are on the sides of your head, not several feet away in a room! So inevitably you only get decent results from loudspeaker stereo if the mics are a lot closer to the sound source. What speakers are attempting to do is reproduce the wavefront that the sound source is creating, from about the same place that it's happening, and that's not the same in any way as reproducing what your ears would hear from a suitable listening place.

     

    And it's this whole business of getting mics in the right place for decent loudspeaker reproduction that's the killer - and a heck of a lot of people simply get it quite wrong, for all sorts of reasons (including, these days, the BBC, which is shameful; I could unfold some tales...). For a start, there are umpteen different ways of configuring microphones, although they all fall into a relatively small number of generic groupings. The basic groupings are coincident and spaced, and within each there are a number of options to do with types of microphone, and different placements. If you put 'stereo microphone techniques' into Wiki's search, you'll find a lot of articles and links about this. None of them will tell you how to get it right though, as that's almost a trade secret (hehe!), although I have given you clues...

     

    But as far as your 'thin' effect is concerned, without actually looking at and listening to the signals, I can't be definitive about what's causing it. If you want to post a short sample of a recording though, I can tell you quite easily what's happening. There's no way to post audio on this forum (thanks, Adobe...) - you'd need something like a Dropbox public link (you can do this for free with a basic Dropbox account) to post a link to.

     

    There's a whole raft more to say about stereo recording, and it really won't fit here. Really, if you want to find out more, you need to read a decent book about it, and if you want to, I can recommend a few.

     
  • Jan 24, 2014 3:51 AM   in reply to SteveG(AudioMasters)

    Just checked AA3 and its mixer does have the Phase Reverse switch icon as described earlier.

     
  • SteveG(AudioMasters)
    5,610 posts
    Oct 26, 2006
    Jan 24, 2014 4:25 AM   in reply to ryclark

    That makes sense, but you may have to explain to the OP exactly where it is, because I don't remember...

     
  • Jan 24, 2014 9:29 AM   in reply to SteveG(AudioMasters)

    Near the top of each Mixer channel to the right of the track level control.

     

    [Screenshot: Mixer channel showing the Phase Reverse switch]

     
  • SteveG(AudioMasters)
    5,610 posts
    Oct 26, 2006
    Jan 24, 2014 12:03 PM   in reply to noamkrief

    I'll have a listen later.

     
  • SteveG(AudioMasters)
    5,610 posts
    Oct 26, 2006
    Jan 24, 2014 2:32 PM   in reply to SteveG(AudioMasters)

    Well I'd say that this was all quite consistent. Couldn't see an easy way to actually download the Soundcloud file, but that wasn't a problem because I can record them direct anyway - that way I could examine exactly what was happening.

     

    Essentially, listening to the dual mono file in stereo was virtually identical to listening to the stereo one with the mono button pressed on the monitor system - which is as it should be. What you've got is the result of two mics, relatively close to each other, with a relatively close sound source, but one where the individual strings are all at slightly different distances from each mic. This means that some wavelengths of sound will reinforce at each mic, and others will cancel out.

     

    The audible result under these circumstances is known as a comb filtering effect - the corresponding frequencies add and subtract, and this will affect the sound of each string. Consequently the harmonic responses will inevitably be distorted. It's called comb filtering because the actual frequency response, if drawn on paper, looks a bit like the teeth of a comb. And yes, it tends to sound 'thin' - so that's the answer to why. It was the next thing I was coming to after eliminating the polarity error, anyway - not exactly rare, but not often explained, either.
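    The comb response can be written down directly: summing a signal with a copy of itself delayed by tau gives a magnitude response |1 + e^(-j*2*pi*f*tau)|, with nulls at odd multiples of 1/(2*tau). A quick numerical check (my own sketch; the 1 ms delay is just an example path difference, not measured from the posted recording):

```python
import numpy as np

tau = 0.001                                   # 1 ms inter-mic delay
f = np.array([250.0, 500.0, 1000.0, 1500.0])  # test frequencies in Hz

# Magnitude of H(f) = 1 + exp(-j*2*pi*f*tau), which equals 2*|cos(pi*f*tau)|.
# Nulls land at 500 Hz, 1500 Hz, ...; in-between frequencies are boosted.
mag = np.abs(1 + np.exp(-2j * np.pi * f * tau))
print(np.round(mag, 3))
```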

     

    If you want a mono result that won't do this, then you have to use a coincident mic technique. This would mean that the arrival time of all sources at both mics was identical, and that the stereo effect would be derived from amplitude differences only. So for instance, you'd have two small-diameter cardioid mics with the capsules aligned one directly above the other, and pointing left and right, with an included angle somewhere between 90 and 135 degrees, depending upon how wide you want the stereo field. It's possible to eliminate all phase issues up to a pretty high frequency this way - most of the way up to 20 kHz - and it's not really going to be noticeable at that sort of frequency anyway; there's very little signal up there.
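    To see how a coincident pair encodes direction as level alone, here's a sketch using the ideal cardioid polar pattern (a textbook idealisation - real capsules deviate from it, and the 90-degree included angle is just one of the options mentioned above):

```python
import numpy as np

def cardioid_gain(off_axis_deg):
    """Ideal cardioid pickup: 0.5 * (1 + cos(angle off the mic's axis))."""
    return 0.5 * (1 + np.cos(np.radians(off_axis_deg)))

# X-Y pair, 90-degree included angle: left mic aimed at -45 deg,
# right mic at +45 deg. Arrival times at the coincident capsules are
# identical, so the only stereo cue recorded is this level difference.
for source_deg in (-45, 0, 45):              # -45 = hard left
    left = cardioid_gain(source_deg + 45)
    right = cardioid_gain(source_deg - 45)
    diff_db = 20 * np.log10(left / right)
    print(f"source at {source_deg:+d} deg: L-R difference {diff_db:+.1f} dB")
```

A source hard left comes out about 6 dB louder in the left channel than the right, a centred source identical in both - and summing the pair to mono never produces comb-filter nulls, because there is no time offset to cancel.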

     

    This used to be the way most broadcasters recorded concerts, just so that they'd play more or less correctly on mono radios. But as I mentioned earlier, organisations like the BBC seem to have forgotten how to do this, for some reason, and now they produce pretty dreadful sound from most of their concerts - woe betide anybody having to listen in mono, because they hear a very distorted sound.

     

    There are some placement methodologies that are regarded as some sort of compromise, and allow you to use a spaced technique with sort-of acceptable mono compatibility, although I'm less than convinced myself - if you want to know more about that, then look up ORTF and DIN microphone techniques. One side effect of these, though, is less positional accuracy in the reproduced sound. With a piano you don't really want positional accuracy from the strings anyway - you want an overall sound that fits in the environment it's in, without being distracted by a sharply focussed sound - so generally if it's close-miked, then spaced mics are the way to go, but you have to forgo the mono compatibility.

     
  • Jan 24, 2014 7:13 PM   in reply to SteveG(AudioMasters)

    I'm a big fan of the X-Y technique and tend to use it on a lot of things, ranging from choirs and choruses to small classical ensembles.  It works well for me--generally I'll use a small diaphragm condenser like the AKG C451 or 391.  There are special mounting brackets to hold the mics in the right position but I just tend to use two stands, fight with getting them in the right position, resolve to buy a bracket then forget it until the next time.

     

    A second thing: if you use a "spaced" technique, there's a rule of thumb that the mics should be at least 3 times as far apart as their distance to the sound source. As with any rule of thumb there are lots of caveats and exceptions, but at least it gives a starting point.
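    The arithmetic behind the 3:1 rule of thumb, assuming simple inverse-distance level fall-off in a free field (real rooms will be messier):

```python
import math

# If the 'wrong' mic is at least 3x farther from a source than the
# 'right' one, inverse-distance fall-off puts its spill down by at least
# 20*log10(3) dB, which keeps the comb-filtering contribution small.
spill_db = 20 * math.log10(3)
print(round(spill_db, 1))   # prints 9.5
```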

     
  • SteveG(AudioMasters)
    5,610 posts
    Oct 26, 2006
    Jan 25, 2014 2:52 AM   in reply to Bob Howes

    Bob Howes wrote:

     

     

    A second thing: if you use a "spaced" technique, there's a rule of thumb that the mics should be at least 3 times as far apart as their distance to the sound source. As with any rule of thumb there are lots of caveats and exceptions, but at least it gives a starting point.

    On an orchestra, that would put them outside the room!

     

    I think that the rule actually states that no sound source should be closer to the mics than three times the distance between them. And the reason for this is precisely because of the comb-filtering effect. Most spaced-mic techniques (with one notable exception) use spacings up to about 26cm max - that's the spacing in the current version of the Faulkner array, not the widely published one, which has them closer. The notable exception is of course the Decca Tree, where they can be anything up to about 4ft apart - but that only works because of the centre mic.

     

    I've not ever tried it, but the chances are that a small Decca Tree on a large concert grand in a sensible size hall might actually work rather well, as long as you don't mind rather average mono compatibility from the combined signal. You might think that just using the mono feed from the centre mic would give a better result, but probably not; it's only used as a centre fill, and would give a very 'forward' sound if used alone.

     
