Sound quality - the science.
Posted on: 05.11.2012 by Tatum Ansaldo | Just thought I'd share this on the topic of sound quality, why mp3s are evil, etc. I found myself watching the TED talk video below by Tony Andrews of Funktion One, and started to believe a lot of what he was saying sounded like bullshit. However, I don't know the science well enough to justify that statement, so I did some research and sent the link to a good friend of mine who's a physics graduate and electronics engineer specialising in signal processing and analysis (and a DJ). I'll post his response later, but first the video and a post I found on another community which takes the bit depth issue into account. Now the post from head-fi by a guy called Gregorio. I'm interested to hear the responses from those of you who can make it through the whole thing. It gets a bit scientific but I believe it's very well explained: |
Nikole Resende 06.11.2012 |
Originally Posted by Sample Seven
1. A musical octave is defined as a doubling in frequency. 2. 20 kHz is only the theoretical maximum of human hearing. In reality only very young children might be able to actually hear such high frequencies. Human hearing degrades over time; by the time they turn 20, most people can't hear much above 16 kHz. This means you can set the cutoff frequency much lower, e.g. at 18 kHz, cutting out much more of the content above 22 kHz without any noticeable effect on the audible sound. 3. Many producers use a high-cut at about 20 kHz because the high frequencies, even though they are inaudible, use up headroom and thus "suck out energy" from the track (read: keep mr. Superproducer from winning the loudness war). "Stacking" (or rather chaining) filters leads to their slopes adding up, i.e. if the producer used a 24dB/octave filter and the filter used in the reproduction of the sound also has a 24dB/octave slope, the actual slope will be 48dB/octave, meaning a much steeper attenuation of frequencies above the cut-off point. |
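The slope-adding point above is easy to check numerically. Here's a minimal Python sketch (my own illustration, not from the thread) assuming ideal Butterworth low-pass responses: when two filters are chained, their magnitude responses multiply, so their attenuations in dB add.

```python
import math

def butterworth_mag_db(f, fc, order):
    """Magnitude response in dB of an ideal Butterworth low-pass filter
    with cutoff fc; a filter of order n rolls off at ~6n dB/octave."""
    return -10 * math.log10(1 + (f / fc) ** (2 * order))

fc = 20_000  # hypothetical producer high-cut at 20 kHz
# One octave above cutoff, a single 4th-order (24 dB/octave) filter:
single = butterworth_mag_db(2 * fc, fc, order=4)
# Chaining two such filters: responses multiply, so dB values add.
chained = 2 * single
print(round(single), round(chained))  # roughly -24 and -48
```

The same arithmetic explains why a producer's 24 dB/octave high-cut plus a 24 dB/octave reconstruction filter behaves like a single 48 dB/octave filter.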
Nancey Inderlied 06.11.2012 |
Originally Posted by Sample Seven
Fun fact, the end-of-chain sample rate (on a DJ mixer, for example) also directly affects headroom, due to relaxed (or essentially no) anti-aliasing filtering on the end output. This is one of the things that contributes to the newer DJMs' (96 kHz) absurd 18 dB of headroom, even though they upsample a CDJ's 44.1 kHz signal. |
Julius Schoenhofer 05.11.2012 |
Originally Posted by Audeo
My potentially interesting technical explanation: When sampling at 44 kHz, the highest frequency that can be accurately reproduced is 22 kHz. Any frequency above 22 kHz will be aliased and sound like a lower frequency. Humans hear up to about 20 kHz. Therefore, when sampling at 44 kHz, one has to filter out frequencies above 22 kHz using a low pass filter. Low pass filters, both analog and digital, cannot just completely attenuate all frequencies above a certain cutoff frequency. They gradually attenuate frequencies above whatever the desired cutoff frequency is. How much the filter attenuates frequencies beyond the cutoff frequency is called the filter's slope, and is commonly measured in dB per octave or decade (octave = doubling of frequency, the same doubling that defines a musical octave; decade = 10x the frequency). Here's a good illustration, showing a -20 dB/decade low pass filter: You can see the filter gradually attenuates the frequencies above the cutoff frequency. So, back to our 44 kHz sampling scenario. If you put a lowpass filter with the cutoff frequency right at 22 kHz, it will still let some frequencies greater than 22 kHz through and cause distortion. To avoid this, one can shift the filter to a lower corner frequency and reduce the amount of signal above 22 kHz coming through. This will reduce distortion, but it will start to affect the desirable (<22 kHz) high frequencies in the signal. Now, instead, consider sampling at 88 kHz. The highest frequency that can be accurately reproduced is then 44 kHz, which is way higher than humans can hear. So we have this entire range of 20 kHz - 44 kHz in which to implement the low pass filter, since humans can't hear anything in that range.
That is, we can put the low pass filter much higher than the 20 kHz cutoff of human hearing, so it won't affect the desirable part of the signal, but we can also put it far below the maximum reproducible frequency (44 kHz), so it will attenuate more of the excessively high (>44 kHz) parts of the signal that would distort. This means that we can filter much more of the high frequencies that can't be reproduced at the given sampling rate, while leaving more of the desirable frequencies untouched.
Originally Posted by fullenglishpint
Check it out, it's the little thing above the typical Ti dome tweeter: |
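The aliasing behaviour described in the post above (a tone above the Nyquist limit "sounding like a lower frequency") can be sketched with a few lines of Python. This is my own illustration of the folding arithmetic, not something from the thread:

```python
def alias_frequency(f, fs):
    """Frequency (Hz) an input tone appears at after sampling at rate fs
    with no anti-aliasing filter: the spectrum repeats every fs, and
    tones above fs/2 fold back down around the Nyquist frequency."""
    f = f % fs             # spectrum is periodic in fs
    return min(f, fs - f)  # fold around fs/2

fs = 44_100
print(alias_frequency(30_000, fs))  # a 30 kHz tone aliases to 14100 Hz
print(alias_frequency(10_000, fs))  # below Nyquist: passes unchanged
```

This is exactly why the anti-aliasing low pass filter has to remove everything above fs/2 before sampling: once a 30 kHz tone has folded down to 14.1 kHz, it is audible distortion that no later filtering can undo.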
Tera Baragan 05.11.2012 |
Originally Posted by fullenglishpint
|
Linda Chavda 07.11.2012 | Tony Andrews is a right goon. |
Corine Ammon 06.11.2012 | I'll omit what I wrote about bit depth, as this seems to have been covered already in better detail than I managed. Obviously, over-compression and the loudness wars are evil. But adding a few extra bits of depth resolution isn't going to help that problem - it just raises the ceiling that commercial producers will be obliged to drive their waves into. People will have to turn down their devices - amusing, because the average volume control potentiometer has better channel balance at higher volumes. Next up: sampling theory. It's a bit harder to nail down - but the essential point is that a digital system is considered completely transparent when sampling at a rate >2x that of the highest frequency you wish to reproduce. This is why a standard CD is sampled at ~44.1kHz. Whilst in theory this process turns a sine wave into a triangle wave at the highest reproducible frequency, in practice that "triangle wave" will be reproduced as a sine wave due to the fundamental limitations of the rest of the system. A 96k file can reproduce a sound up to 48kHz, well beyond the hearing of anyone who's not a dog or bat etc. There is a claim that, much in the way infrasound (sub 10Hz) is not perceived as a tone and yet adds to music, ultrasound can also do this - despite a lack of even a proposed physical mechanism for how we would notice it or, y'know, any kind of scientific evidence. The argument made by the proponents is that even if it turns out they're wrong, there's no harm in it, right? And producers use high sample rates! The fact is that there is harm in it, and production is a different kettle of fish altogether. That is not to say that there are no benefits to a higher sample rate. A higher sample rate can make it easier to implement analogue anti-aliasing, increase bit depth (virtually, with some clever maths) and reduce noise.
We now use digital filters in digital systems and I've dealt with bit depth earlier - the noise issue is however valid for consumer audio. I'll come back to this later, because now it's time to talk about the drawbacks of oversampling. In order to understand this, we should first clear up some issues around distortion - the video gets as far as distortion = bad, which whilst somewhat true, is less than helpful. Distortion can be thought of as any deviation from the desired signal. There are two kinds of distortion - linear and non-linear. In linear distortion, as the power of the wanted signal is increased, the distortion increases at a linear rate (wanted signal level is proportional to unwanted level.) In non-linear distortion the power contained in the portion of the signal we call distortion rises more quickly than the wanted portion (wanted signal level raised to a power is proportional to the unwanted signal level.) The main kind of non-linear distortion we worry about is called intermodulation distortion (harmonics are a special case of this mechanism.) Two tones can interact to generate new tones either side of the original two. For instance, if you have tones at 1MHz and 2MHz, you will see shrinking intermod tones in either direction with a spacing of 1MHz. The power represented in these tones depends on the amount of non-linear distortion. All amps have non-linear distortion - it's merely a question of how much. There are a few issues with this extra representable frequency range. The first is that if it is at all populated, intermod noise will be scattered throughout the audible spectrum - amps tend to have worse non-linear characteristics the further from their optimal zone they get, and it should go without saying that no-one optimises an audio amp to ultrasound frequencies. Tony Andrews mentions that he only listens to distortion free music (an obvious lie if you're familiar with what distortion is), and we'll assume that he means clipping. 
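The 1 MHz / 2 MHz intermodulation example above can be enumerated directly. A minimal Python sketch (my own, under the usual simplification that an nth-order non-linearity produces products at |m·f1 ± n·f2| with m + n up to the order):

```python
def intermod_products(f1, f2, max_order=3):
    """Frequencies (Hz) of intermodulation products |m*f1 +/- n*f2|
    up to a given order (m + n), excluding DC and the two input tones."""
    out = set()
    for m in range(max_order + 1):
        for n in range(max_order + 1):
            if 2 <= m + n <= max_order:
                out.add(abs(m * f1 + n * f2))
                out.add(abs(m * f1 - n * f2))
    return sorted(out - {0, f1, f2})

# The example from the text: tones at 1 MHz and 2 MHz produce new
# tones spaced 1 MHz apart around the originals.
print(intermod_products(1_000_000, 2_000_000))
# → [3000000, 4000000, 5000000, 6000000]
```

Feed the same function two audible tones and the lower-order difference products land inside the audible band, which is the mechanism the post describes for ultrasonic content scattering noise through the audible spectrum.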
Producers will recognise clipping as "overdrive" distortion, present in pretty much all rock music for a start. The process of clipping is equivalent to squaring off the top of a wave. A square wave is made by adding successively higher harmonics of a fundamental sine wave together, and an infinite number of these harmonics summed will give a perfect square wave - therefore, squaring off a wave is the equivalent of adding high frequency harmonics. The second issue is similar, in that ultrasonic tones will be generated by the intermodulation of the audible tones. Thus, if that frequency range is available it will be filled with noise - how much depends on the amp quality amongst other things (it's worth noting that audio amplifiers tend to be optimised to have low distortion around 4kHz. The further from that frequency you go, the worse the distortion becomes.) Whilst it is true that limiting the sample rate detracts from square wave reproduction for the reasons in the above paragraph, only a few harmonics are needed to make a better square wave than the speaker drivers can actually reproduce. In fact, it is current driver technology that acts as the main limitation. Any given driver is going to have a frequency response curve - and the better its characteristics in certain parts of the curve, the worse they will be in others. The traditional way round this is to optimise it for a certain frequency range and then only drive it with those frequencies using a crossover. The average audio tweeter, for obvious reasons, doesn't even extend into the +18kHz region that well - just like the average woofer won't touch 20Hz. When a driver is driven with a wanted signal, and an "unwanted" signal outside of its operating range, one of two things happens: 1. The driver is able to reproduce the unwanted signal, but not well. The signal comes out distorted, making the speaker sound rubbish. 2.
The driver is not able to reproduce the unwanted signal in any significant quantity relative to the wanted signal because of its response curve. You can probably imagine what happens to the wanted signal when the voice coil is constantly trying and failing to move the driver faster than it can physically move! Degradation of wanted signal quality would be putting it lightly. The second case is what will happen when you try and reproduce ultrasound with a standard tweeter. So clearly, we need to filter this sound from the tweeter. However, apparently this ultrasound increases our enjoyment - so we're actually going to want a crossover and a full-on ultrasound tweeter. Good luck finding any speakers with that built in. You also have to bear in mind that any filtering (this includes crossovers) will introduce aberrations of its own - filter theory depends on reactance, which is a non-linear phenomenon, therefore any filtering adds non-linear distortion. Thus, whilst oversampling does have uses, in consumer audio the sample rate decision is a trade off between noise (linear distortion) and intermodulation (non-linear distortion). So essentially, for a standard system, as soon as you start sampling higher than audible frequencies, you're looking at spending a lot of money on non-consumer components in order to make your sound *only* a little shitter. Something that the presentation very much lacked was peer reviewed, double blind studies about how audible the effects he describes are, so I'll try and do that myself. This one: http://www.aes.org/e-lib/browse.cfm?elib=14195 is quite good. They spent a year putting a 16bit, 44kHz ADC-DAC process into various signal chains and seeing if people could notice the difference when compared to a "full quality" signal.
It turns out that no matter what the system, no matter who the listener (including self-professed golden-eared audiophiles and professional mastering engineers), no-one can spot the difference (unless you have it on silent and turned right up - apparently some noise was discernible). Frankly, if you believe that background noise 50dB down from a signal makes a difference, you'd be better off spending your money on an anechoic chamber rather than the next sound system up. The issues raised are real - but as soon as the digital technology can reproduce the desired signal more accurately than the rest of the system, no further benefits will come from a higher sample rate. The highest end of current systems cannot render the difference between a 44.1kHz 16bit scheme and a full production quality 192kHz 24bit scheme in such a way that the difference is noticeable (without foreknowledge of which scheme is being used.) I've seen people say that interactions involving ultrasound waves generate subtle, audible tones. This is quite true, however it doesn't mean we have to record and reproduce the inaudible component sounds in order to hear their audible results. If the ultrasound interaction is "genuine" and desired, then its audible results will have been recorded from the source or generated in software (production should be done with higher sample rates, largely for this reason.) If its results weren't recorded, and the component sounds need to be combined in a non-linear device in order to hear the result, that's distortion. One definite benefit that was mentioned earlier in the thread was the ability to use a gentler low pass filter slope - as I mentioned earlier, filtering introduces distortion of its own. As with everything in engineering, it's a tradeoff. Improvement in one area will spoil another - this is why this sort of thing has to be constantly re-referenced to the actual capabilities of the human ear, demonstrated in double-blind studies. |
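The square-wave point in the post above ("only a few harmonics are needed to make a better square wave than the drivers can reproduce") can be seen from the Fourier series. A small Python sketch of my own, summing the first few odd harmonics:

```python
import math

def square_wave_partial(t, f0, n_harmonics):
    """Fourier-series square wave: (4/pi) * sum of sin(2*pi*h*f0*t)/h
    over the first n odd harmonics h = 1, 3, 5, ..."""
    total = 0.0
    for k in range(n_harmonics):
        h = 2 * k + 1  # odd harmonic number
        total += math.sin(2 * math.pi * h * f0 * t) / h
    return 4 / math.pi * total

# At a quarter period the ideal square wave equals +1; the partial sum
# gets closer as harmonics are added.
f0, t = 1.0, 0.25
for n in (1, 5, 50):
    print(n, square_wave_partial(t, f0, n))
```

With a 44.1 kHz sample rate, a 5 kHz "square" tone keeps only its 1st and 3rd odd harmonics (5 kHz and 15 kHz) below Nyquist, which is already more than most tweeters render cleanly at the top end.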
Nedra Cheff 06.11.2012 | I believe higher sample rates are useful in audio processing situations for production etc. But once all is said and done, 16bit 44.1khz is perfectly acceptable as an industry standard. The problem in my eyes is that MP3 seems to have become the industry standard for audio distribution. JM2P |
Romelia Stankard 05.11.2012 | Oh god, not this guy again. Tony Andrews makes some good speakers but he is totally out of his mind and all of his talks are filled with pseudoscience. TED has gone downhill a lot and become a joke, but TEDx really brings out the crazies. I believe this talk is relevant: it really explores how the human mind works and perceives differences in audio, dispels a lot of the audio myths that are rampant, and gets into lossless sampling rates and recording formats. |
Venetta Cawyer 05.11.2012 | Not a double blind test, no. The teacher knew which file was playing. And about the spec sheet, that's just the range they can guarantee, not "just" the range it can produce. Even the DS210 from Funktion One goes to 18 kHz, but in reality they go way higher. Next time I have a measurement mic handy (one that goes REALLY really high) I'll take a measurement. |
Leeanna Ayla 05.11.2012 | tldr blah, blah, blah Tony Andrews can kiss my AAC's |
Tatum Ansaldo 05.11.2012 | According to Meyer themselves, the MSL-4 only has a frequency response up to 18kHz... http://www.meyersound.com/pdf/produc...s/msl-4_ds.pdf Was it a double blind test? |
Venetta Cawyer 05.11.2012 | Oh, we weren't listening on your average home stereo system. We got six Meyer MSL4 tops and six 700-HP subs installed with all the right processing, bells and whistles. That's about 21,840 watts of seriously high quality sound. |
Tatum Ansaldo 05.11.2012 | There will be more on this when I have the full write up of the response from my friend (he decided to edit it so it was a little more informative and less libellous ) but I find it very hard to believe that pushing the sample rate beyond 44.1kHz makes an audible difference. No consumer audio speaker is designed to (or able to) accurately reproduce frequencies much above 20kHz. The physical limitations of the size of the average tweeter mean that it simply won't move that fast, so you'd need a dedicated ultrasound driver. On top of that, there's no scientific evidence to even suggest that the human ear can pick up ultrasound in any way. |
Venetta Cawyer 05.11.2012 | The Funktion One guy, Tony, really sounds like he did a bit too much LSD in his time and is really frustrated by the mainstream studio mixers. All I hear him do is rant about things, with nothing about Mike Shipley and studio engineers like him, who make gorgeous mixes. What Gregorio says is true when he talks about the useless dynamic range of 24bit audio. But what he doesn't cover is the extra frequency sample range, and that's the thing that DOES add more enjoyment. At school we recorded a saxophone at 44.1 kHz and at 88.2 kHz. Of course there was no way to let the saxophone sound as good as when it was played live, but the difference between 44.1 kHz and 88.2 kHz was pretty amazing. It added so much more air and depth. Even though we can't "translate" frequencies above 20 kHz with our ears, it still contains information that is somehow perceived and does make a difference. |
Tatum Ansaldo 05.11.2012 | Cont.
1 = Actually these days the process of AD conversion is a little more complex, using oversampling (very high sampling frequencies) and only a handful of bits. Later in the conversion process this initial sampling is 'decimated' back to the required bit depth and sample rate.
2 = The concept of the perfect measurement or of recreating a waveform perfectly may seem like marketing hype. However, in this case it is not. It is in fact the fundamental tenet of the Nyquist-Shannon Sampling Theorem on which the very existence and invention of digital audio is based. From WIKI: “In essence the theorem shows that an analog signal that has been sampled can be perfectly reconstructed from the samples”. I know there will be some who will disagree with this idea; unfortunately, disagreement is NOT an option. This theorem wasn't invented to explain how digital audio works, it's the other way around. Digital audio was invented from the theorem, so if you don't believe the theorem then you can't believe in digital audio either!! 3 = In actual fact these days there are a number of different types of dither used during the creation of a music product. Most are still based on the original TPDF (triangular probability density function) but some are a little more 'intelligent' and re-distribute the resulting noise to less noticeable areas of the hearing spectrum. This is called noise-shaped dither. 4 = Dynamic range is the range of volume between the noise floor and the maximum volume. |
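The TPDF dither mentioned in footnote 3 is simply the sum of two independent uniform random values, which gives a triangular probability density spanning ±1 LSB. A minimal Python sketch of my own (the seed and sample count are arbitrary choices for the illustration):

```python
import random

random.seed(42)  # deterministic for the example

def tpdf_dither(n, lsb=1.0):
    """Generate n TPDF dither samples: each is the sum of two
    independent uniform values in [-lsb/2, +lsb/2), giving a
    triangular distribution over [-lsb, +lsb)."""
    return [random.uniform(-lsb / 2, lsb / 2) + random.uniform(-lsb / 2, lsb / 2)
            for _ in range(n)]

samples = tpdf_dither(100_000)
print(min(samples) > -1.0, max(samples) < 1.0)  # bounded by +/-1 LSB
print(abs(sum(samples) / len(samples)) < 0.01)  # zero mean on average
```

Added before truncation to a lower bit depth, noise with this shape decorrelates the quantisation error from the signal, which is why the error comes out as benign noise rather than distortion.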
Tatum Ansaldo 05.11.2012 | Gregorio:
It seems to me that there is a lot of misunderstanding regarding what bit depth is and how it works in digital audio. This misunderstanding exists not only in the consumer and audiophile worlds but also in some education establishments and even some professionals. This misunderstanding comes from supposition of how digital audio works rather than how it actually works. It's easy to see in a photograph the difference between a low bit depth image and one with a higher bit depth, so it's logical to suppose that higher bit depths in audio also means better quality. This supposition is further enforced by the fact that the term 'resolution' is often applied to bit depth and obviously more resolution means higher quality. So 24bit is Hi-Rez audio and 24bit contains more data, therefore higher resolution and better quality. All completely logical supposition but I'm afraid this supposition is not entirely in line with the actual facts of how digital audio works. I'll try to explain:
When recording, an Analogue to Digital Converter (ADC) reads the incoming analogue waveform and measures it so many times a second (1*). In the case of CD there are 44,100 measurements made per second (the sampling frequency). These measurements are stored in the digital domain in the form of computer bits. The more bits we use, the more accurately we can measure the analogue waveform. This is because each bit can only store two values (0 or 1); to get more values we do the same with bits as we do in normal counting. I.e. once we get to 9, we have to add another column (the tens column), and we can keep adding columns ad infinitum for 100s, 1000s, 10,000s, etc. The exact same is true for bits, but because we only have two values per bit (rather than 10) we need more columns; each column (or additional bit) doubles the number of values we have available. I.e. 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 ... If these numbers appear a little familiar it is because all computer technology is based on bits, so these numbers crop up all over the place. In the case of 16bit we have roughly 65,000 different values available. The problem is that an analogue waveform is constantly varying. No matter how many times a second we measure the waveform or how many bits we use to store the measurement, there are always going to be errors. These errors in quantifying the value of a constantly changing waveform are called quantisation errors. Quantisation errors are bad, they cause distortion in the waveform when we convert back to analogue and listen to it.
So far so good, what I've said until now would agree with the supposition of how digital audio works. I seem to have agreed that more bits = higher resolution. True; however, where the facts start to diverge from the supposition is in understanding the result of this higher resolution. Going back to what I said above, each time we increase the bit depth by one bit, we double the number of values we have available (e.g. 4bit = 16 values, 5bit = 32 values). If we double the number of values, we halve the amount of quantisation error. Still with me? Because now we come to the whole nub of the matter. There is in fact a perfect solution to quantisation errors which completely (100%) eliminates quantisation distortion; the process is called 'dither' and is built into every ADC on the market. Dither: essentially, during the conversion process a very small amount of white noise is added to the signal; this has the effect of completely randomising the quantisation errors. Randomisation in digital audio, once converted back to analogue, is heard as pure white (un-correlated) noise. The result is that we have an absolutely perfect measurement of the waveform (2*) plus some noise. In other words, by dithering, all the measurement errors have been converted to noise (3*). Hopefully you're still with me, because we can now go on to precisely what happens with bit depth. Going back to the above, when we add a bit of data we double the number of values available and therefore halve the quantisation error. If we halve the quantisation error, the result (after dithering) is a perfect waveform with half the amount of noise. To phrase this using audio terminology, each extra bit of data moves the noise floor down by 6dB (half). We can turn this around and say that each bit of data provides 6dB of dynamic range (4*). Therefore 16bit x 6dB = 96dB. This 96dB figure defines the dynamic range of CD. (24bit x 6dB = 144dB).
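Gregorio's ~6 dB-per-bit rule can be verified numerically. A small sketch of my own (not his): quantise a full-scale sine to a given bit depth and measure the signal-to-quantisation-noise ratio, which theory puts at roughly 6.02 × bits + 1.76 dB.

```python
import math

def quantization_snr_db(bits, n_samples=100_000):
    """Quantise a full-scale sine to `bits` bits (rounding) and return
    the measured signal-to-quantisation-noise ratio in dB."""
    levels = 2 ** (bits - 1)  # quantisation steps per polarity
    sig_pow = err_pow = 0.0
    for i in range(n_samples):
        # 997 cycles over the window: a tone incommensurate with the
        # sample count, so the error decorrelates from the signal.
        x = math.sin(2 * math.pi * 997 * i / n_samples)
        q = round(x * levels) / levels
        sig_pow += x * x
        err_pow += (q - x) ** 2
    return 10 * math.log10(sig_pow / err_pow)

for bits in (8, 16):
    print(bits, round(quantization_snr_db(bits), 1))  # ~49.9 and ~98.1 dB
```

The measured 16-bit figure lands near the 96-98 dB range Gregorio quotes for CD; each extra bit adds about 6 dB.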
So, 24bit does add more 'resolution' compared to 16bit, but this added resolution doesn't mean higher quality, it just means we can encode a larger dynamic range. This is the misunderstanding made by many. There are no extra magical properties, nothing which the science does not understand or cannot measure. The only difference between 16bit and 24bit is 48dB of dynamic range (8bits x 6dB = 48dB) and nothing else. This is not a question for interpretation or opinion, it is the provable, undisputed logical mathematics which underpins the very existence of digital audio. So, can you actually hear any benefits of the larger (48dB) dynamic range offered by 24bit? Unfortunately, no you can't. The entire dynamic range of some types of music is sometimes less than 12dB. The recordings with the largest dynamic range tend to be symphony orchestra recordings, but even these virtually never have a dynamic range greater than about 60dB. All of these are well inside the 96dB range of the humble CD. What is more, modern dithering techniques (see 3 below) perceptually enhance the dynamic range of CD by moving the quantisation noise out of the frequency band where our hearing is most sensitive. This gives a perceivable dynamic range for CD of up to 120dB (150dB in certain frequency bands). You have to realise that when playing back a CD, the amplifier is usually set so that the quietest sounds on the CD can just be heard above the noise floor of the listening environment (sitting room or cans). So if the average noise floor for a sitting room is say 50dB (or 30dB for cans), then the dynamic range of the CD starts at this point and is capable of 96dB (at least) above the room noise floor. If the full dynamic range of a CD was actually used (on top of the noise floor), the home listener (if they had the gear) would almost certainly cause themselves severe pain and permanent hearing damage. If this is the case with CD, what about 24bit Hi-Rez?
If we were to use the full dynamic range of 24bit and a listener had the gear to reproduce it all, there is a fair chance, depending on age and general health, that the listener would die instantly. The most fit would probably just go into a coma for a few weeks and wake up totally deaf. I'm not joking or exaggerating here; think about it, 144dB plus say 50dB for the room's noise floor. 180dB is the figure often quoted for sound pressure levels powerful enough to kill, and some people have been killed by 160dB. However, this is unlikely to happen in the real world as no DACs on the market can output the 144dB dynamic range of 24bit (so they are not true 24bit converters), almost no one has a speaker system capable of 144dB dynamic range and, as said before, around 60dB is the most dynamic range you will find on a commercial recording. So, if you accept the facts, why does 24bit audio even exist, what's the point of it? There are some useful applications for 24bit when recording and mixing music. In fact, when mixing it's pretty much the norm now to use 48bit resolution. The reason it's useful is due to summing artefacts, multiple processing in series and mainly headroom. In other words, 24bit is very useful when recording and mixing but pointless for playback. Remember, even a recording with 60dB dynamic range is only using 10bits of data; the other 6bits on a CD are just noise. So, the difference in the real world between 16bit and 24bit is an extra 8bits of noise. I know that some people are going to say this is all rubbish, and that “I can easily hear the difference between a 16bit commercial recording and a 24bit Hi-Rez version”. Unfortunately, you can't; it's not that you don't have the gear or the ears, it is not humanly possible in theory or in practice under any conditions!! Not unless you can tell the difference between white noise and white noise that is well below the noise floor of your listening environment!!
If you play a 24bit recording and then the same recording in 16bit and notice a difference, it is either because something has been 'done' to the 16bit recording, some inappropriate processing used or you are hearing a difference because you expect a difference. |