VideoHelp Forum
  1. I'm a big supporter of DTS and use it in all my film projects. However, I decided recently to do some comparisons between Dolby Digital and DTS by examining the frequency response of each format.

    I tested a few DVDs, which all yielded similar results, but as an example I will use M. Night Shyamalan's "The Village".

    Here are the Dolby Digital and DTS spectrograms, respectively: [spectrogram images]

    Now, the Dolby Digital spectrogram shows data up to around 20 kHz, with some thicker bands of purple around 16 kHz. The DTS track has response up to around 19 kHz, but lacks the concentration seen in the 16 kHz band of the Dolby spectrogram. However, there is a solid gap around the 15 kHz mark. Both formats show content reaching 19 kHz towards the right, with thicker bands of purple there than elsewhere.
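
    For reference, here is a minimal sketch of how such a spectrogram can be produced, assuming the track has first been decoded to a WAV file (the filename below is only a placeholder):
    Code:
    # Minimal spectrogram sketch; assumes a decoded WAV, filename is a placeholder.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    rate, data = wavfile.read("village_dd_front_left.wav")
    if data.ndim > 1:
        data = data[:, 0]  # analyse one channel at a time
    f, t, Sxx = spectrogram(data.astype(np.float64), fs=rate, nperseg=4096)
    plt.pcolormesh(t, f / 1000.0, 10 * np.log10(Sxx + 1e-12))  # dB scale, kHz axis
    plt.xlabel("Time (s)")
    plt.ylabel("Frequency (kHz)")
    plt.show()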

    To me, this looks as though the Dolby Digital track should have the better sound. But DTS is always touted as the better format, which I believe it is overall.

    DTS have said that comparisons between DTS and Dolby Digital are not really relevant, and that comparisons should instead be made between the original audio master and either codec. So does Dolby Digital actually add information to the encoded sound? Is the DTS track a better representation of the original master? DTS is known for hiss-free quiet passages, so does the lack of information at 15 kHz show where hiss on the master was eliminated during encoding?

    But my questions are:

    - What do the thick bands represent on the spectrograms? And why does DTS seemingly lack information around the 15 kHz mark?
    - Dolby Digital's lack of transparency is a known fact. The format "joins channels" above 15 kHz at 448 kbit/s and above 10 kHz at 384 kbit/s. What is meant by "joining channels"? (A guess at the idea is sketched below.)
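
    (My rough understanding, as a toy numpy sketch; this is not the actual AC3 coupling algorithm, which I gather uses per-band "coupling coordinates":)
    Code:
    # Toy sketch of "channel coupling": above a cutoff, keep ONE shared high-band
    # signal plus a single gain per channel. NOT the real AC3 algorithm.
    import numpy as np

    def toy_couple(left, right, rate, couple_hz=15000.0):
        n = len(left)
        freqs = np.fft.rfftfreq(n, d=1.0 / rate)
        L, R = np.fft.rfft(left), np.fft.rfft(right)
        hi = freqs >= couple_hz
        coupled = 0.5 * (L[hi] + R[hi])  # the shared "coupling channel"
        power = np.sum(np.abs(coupled) ** 2) + 1e-12
        gain_l = np.sqrt(np.sum(np.abs(L[hi]) ** 2) / power)  # crude stand-ins
        gain_r = np.sqrt(np.sum(np.abs(R[hi]) ** 2) / power)  # for coupling coords
        L[hi], R[hi] = gain_l * coupled, gain_r * coupled
        return np.fft.irfft(L, n), np.fft.irfft(R, n)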

    I hope someone knows what I'm on about; I'm very interested in the workings of these formats.
  2. Member 3dsnar (joined Jan 2006, Proxima Centauri)
    Analysing spectrograms is not the right approach to deciding which format has the better quality.
    The only reasonable way is to perform subjective listening tests on a representative group of experts (listeners).
    Cheers, 3d.
  3. I think looking at the information encoded is a great way to do it. What's wrong with it?

    As for listening tests, you don't need to be an expert, I feel. Anyone can listen, right?
  4. Member BJ_M (joined Jul 2002, Canada)
    3dsnar is right -- since both are lossy formats (i.e. they throw out data), an FFT or spectrogram is not going to reveal much unless you take a difference with the original (and even that is not going to tell you which sounds best).
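
    Something like this would do the difference, assuming you have the original PCM master and the decoded codec output as sample-aligned WAV files (the filenames are placeholders). As said, the number it prints tells you nothing about which one sounds best:
    Code:
    # Difference against the original; assumes sample-aligned, equal-rate WAVs.
    import numpy as np
    from scipy.io import wavfile

    rate0, original = wavfile.read("master_pcm.wav")    # placeholder name
    rate1, decoded = wavfile.read("decoded_track.wav")  # placeholder name
    assert rate0 == rate1, "sample rates must match"
    n = min(len(original), len(decoded))
    diff = original[:n].astype(np.float64) - decoded[:n].astype(np.float64)
    print("RMS difference:", np.sqrt(np.mean(diff ** 2)))
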
    "Each problem that I solved became a rule which served afterwards to solve other problems." - Rene Descartes (1596-1650)
  5. Member (joined Mar 2002, The Coal Region)
    Now, the Dolby Digital spectrogram shows data up to around 20 kHz, with some thicker bands of purple around 16 kHz. The DTS track has response up to around 19 kHz, but lacks the concentration seen in the 16 kHz band of the Dolby spectrogram. However, there is a solid gap around the 15 kHz mark.
    The dark bands you see (~15.75 kHz, almost certainly the 15.734 kHz NTSC horizontal scan frequency leaking in from video equipment) should be notch filtered out; you really do not want that in your audio. Several audio CDs suffer from this same problem. Consider it an error and not a measure of frequency response: the small loss of audio from the notch is well worth being rid of the annoying whine it would otherwise reproduce on the loudspeakers.

    Most people cannot hear that high of a frequency, anyway.
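
    If you wanted to notch it out yourself, a minimal sketch with scipy (assuming 48 kHz PCM samples in a numpy array) could look like this:
    Code:
    # Minimal notch-filter sketch: remove the ~15.734 kHz line-frequency whine.
    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    rate = 48000
    b, a = iirnotch(w0=15734.0, Q=30.0, fs=rate)  # narrow notch at the whine
    audio = np.random.randn(rate)    # stand-in for one channel of real PCM
    cleaned = filtfilt(b, a, audio)  # zero-phase filtering, adds no phase shift
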
  6. Originally Posted by BJ_M
    3dsnar is right -- since both are lossy formats (i.e. they throw out data), an FFT or spectrogram is not going to reveal much unless you take a difference with the original (and even that is not going to tell you which sounds best).
    But if DTS is better, it should look more like the original signal, yes? But how can it if it is missing stuff?
  7. Banned (joined Oct 2004, Freedonia)
    This is my understanding. DTS is arguably "lossier" in that it basically throws away audio information above 17 kHz, because few adults can hear sounds above that frequency. This explains why you see a greater loss in DTS at higher frequencies.

    However, DTS does use much higher bit rates than AC3. This could, in theory, result in some improved fidelity. DTS also encodes at a louder volume. I believe this was done deliberately by the designers who knew that most people, correctly or not, will perceive a louder source to be "better" than a softer one, even if they are the exact same source.

    I think AC3 has been unfairly trashed by a lot of people. It is a fine codec that very faithfully reproduces high quality sound at low bit rates, which is exactly what it was designed to do. Professional AC3 encoders offer several compression levels, so you can apply more or less compression. I would venture a guess that at the lowest compression level, AC3 might actually be more faithful to the original sound than DTS, although DTS might sound better to most people, given the way it works.
  8. Originally Posted by jman98
    This is my understanding. DTS is arguably "lossier" in that it basically throws away audio information above 17 kHz, because few adults can hear sounds above that frequency. This explains why you see a greater loss in DTS at higher frequencies.
    No: DTS at half rate has response up to 19 kHz, and at full rate up to 24 kHz.

    However, DTS does use much higher bit rates than AC3.
    Half rate is 768 kbit/s; full rate is 1536 kbit/s.

    DTS also encodes at a louder volume. I believe this was done deliberately by the designers who knew that most people, correctly or not, will perceive a louder source to be "better" than a softer one, even if they are the exact same source.
    Not true. DTS retains the original audio levels of the master tracks. Dolby Digital sets dialogue normalisation metadata during encoding that lowers playback volume, which is why most assume DTS is encoded "louder" when it is in fact at the same level as the masters.

    I would venture a guess that at the lowest compression level, AC3 might actually be more faithful to the original sound than DTS, although DTS might sound better to most people, given the way it works.
    Yes, Dolby is a good technology. But in my eyes, Dolby is about "good sound" and "saving the most bandwidth", while DTS is about "great sound" and "artistic integrity". On the whole, DTS sounds better to me when I compare tracks on DVDs. And in a few ways, Dolby Digital is less faithful to the original masters.
  9. Member 3dsnar (joined Jan 2006, Proxima Centauri)
    Hi Gav, by "expert" I mean any person who takes part in the listening test.
    In other words, it can be anybody. Of course, during the test the "experts" are verified in terms of the consistency of the answers they give.
    -----------------------------------------------
    Checking waveform similarity is not so straightforward, because humans perceive sound in a specific way.
    For example, since we cannot hear sounds below 20 Hz or above 20 kHz, you could add to a sinusoid a high-amplitude component of, say, 5 Hz. This would result in a significant waveform disturbance (deformation), while you would (theoretically) not perceive any difference in quality.
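
    A quick numpy sketch of that thought experiment (my own illustration):
    Code:
    # A large 5 Hz component visibly deforms the waveform yet is inaudible.
    import numpy as np

    rate = 48000
    t = np.arange(rate) / rate
    audible = np.sin(2 * np.pi * 1000 * t)        # 1 kHz tone we can hear
    infrasonic = 2.0 * np.sin(2 * np.pi * 5 * t)  # 5 Hz at twice the amplitude
    combined = audible + infrasonic
    print("peak waveform deviation:", np.abs(combined - audible).max())  # ~2.0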

    Anyway, human perception is a much more complex thing than just a frequency range, and in fact there are no reliable simulators that can evaluate sound quality (there is of course ongoing and interesting research on the subject, though).

    To summarize: the best way to compare quality is listening. If you find one codec sounds better than the other, you have every right to say so publicly and argue that you listened to it and this is your opinion 8)
  10. Some parameters that can easily be heard but would be difficult to see on a waveform graph:

    Phase distortions: Different frequencies may have different phase delays. This can destroy stereo imaging.

    Harmonic distortion: a few percent harmonic distortion is audible but would be very difficult to see in a waveform graph.
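
    To put a toy number on the phase point (my own sketch, not from any codec): an interchannel delay of a fraction of a millisecond audibly shifts the stereo image, yet each channel's waveform plot looks exactly the same:
    Code:
    # A 0.2 ms interchannel delay audibly shifts the stereo image, but each
    # channel is still the same sine; a ~10-sample offset is invisible on a plot.
    import numpy as np

    rate = 48000
    t = np.arange(rate) / rate
    left = np.sin(2 * np.pi * 440 * t)
    delay = int(round(0.0002 * rate))  # ~10 samples at 48 kHz
    right = np.roll(left, delay)       # same waveform, 0.2 ms later
    print("delay in samples:", delay)
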
  11. Member 3dsnar (joined Jan 2006, Proxima Centauri)
    Yeah, indeed, nonlinear distortion can be heard but is difficult to see.
    Can you provide an example of a signal with disturbed phase where the waveform shape does not reflect the disturbance?
    (This may be true, but only for very low amplitude components; is that what you meant?)
  12. A phase delay that varies with frequency might cause a "tilt" in the waveform graph. A small phase delay would be easily heard but the tiny tilt would be all but undetectable in the graph.

    It's the same with harmonic distortion. 1 percent harmonic distortion can be heard, but can you see a 1 percent "bump" in the graph? Maybe with test tones, but not with normal broad-spectrum audio.
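
    A toy sketch of that 1 percent figure (my own illustration):
    Code:
    # 1% second-harmonic distortion: a deviation of only ~0.01 on a
    # unit-amplitude waveform, essentially invisible in a graph.
    import numpy as np

    rate = 48000
    t = np.arange(rate) / rate
    clean = np.sin(2 * np.pi * 440 * t)
    distorted = clean + 0.01 * np.sin(2 * np.pi * 880 * t)  # 1% 2nd harmonic
    print("max deviation:", np.abs(distorted - clean).max())  # ~0.01
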
  13. Originally Posted by 3dsnar
    For example... we cannot hear sounds... above 20 kHz
    Surely this isn't fact? Everyone has different hearing.
  14. Member Cornucopia (joined Oct 2001, Deep in the Heart of Texas)
    Yes, everyone has different hearing, but 20kHz is a good statistical global average for the point beyond which human hearing sensitivity drops off precipitously.

    Note: Hearing in the general sense can also include "feeling" and "location/imaging", and that is where higher sample rates have a more noticeable impact. Studies are still being done in this area, so unanimity hasn't arrived yet. It's a little premature to talk "conclusively" about this, but having witnessed the differences myself, I believe there is a concrete improvement with higher bit depth and higher sample rate material.

    Now, as for the original topic:

    #1- You need an original PCM track to compare against.
    #2- You need multiple areas of quality on which to test (PSNR, THD/IMD, ringing/delay, group phase).
    #3- You need many more means of describing/graphing these differences (waveform, spectrograph, Lissajous, echogram).
    #4- You need to do side-by-side comparisons at comparable bitrates/efficiencies and at comparable volume levels (and YES, include an ABX listening component; a bare-bones sketch follows this list).
    #5- The Dolby engineers would probably take issue with your impression of their focus. I was never dissatisfied myself (when I was doing the encoding).
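
    A bare-bones ABX trial loop, just to show the idea (the play_clip() helper is hypothetical):
    Code:
    # Bare-bones ABX loop: X is randomly A or B and the listener must say which.
    # Enough correct answers out of 16 rules out guessing. play_clip() is a
    # hypothetical stand-in for whatever actually plays the audio.
    import random

    def abx_test(clip_a, clip_b, play_clip, trials=16):
        correct = 0
        for _ in range(trials):
            x_is_a = random.random() < 0.5
            for label, clip in (("A", clip_a), ("B", clip_b),
                                ("X", clip_a if x_is_a else clip_b)):
                print("Playing", label)
                play_clip(clip)
            answer = input("Is X the same as A or B? ").strip().upper()
            correct += (answer == "A") == x_is_a
        print(correct, "/", trials, "correct")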

    Scott
  15. Originally Posted by Cornucopia
    Yes, everyone has different hearing, but 20kHz is a good statistical global average for the point beyond which human hearing sensitivity drops off precipitously. Hearing in the general sense can also include "feeling" and "location/imaging", and that is where higher sample rates have a more noticeable impact.
    True, yes, I agree.

    The Dolby engineers would probably take issue with your impression of their focus.
    They have gone on record saying that their technology is all about saving bandwidth. I read it very recently, but I can't for the life of me remember where.

    What's ABX?

    GAV
  16. #6- And most importantly, you need double-blind tests.
  17. Member Cornucopia (joined Oct 2001, Deep in the Heart of Texas)
    ABX is a comparison test often used in double-blind testing.

    http://www.pcabx.com/

    and

    http://www.provide.net/~djcarlst/abx_new.htm

    Scott


