Is there one?
I encoded a DVD with a 300+ kbps AC3 stream, and my DVD player was hit-or-miss on detecting the DVD in the drive.
When I encoded at 224kbps, both DVDs (I always make two) worked okay.
Coincidence, or is there a limit?
PS: The combined bit rate did not exceed the DVD max bitrate of ~9000kbps.
-
Hi,
448 is the max dolby digital bitrate.
-
Originally Posted by yoda313
640 is the max bit rate; 448 is the max suggested bit rate for DVDs. -
Originally Posted by BJ_M
Really??? I've only ever seen 448. Where would 640 be used? On a computer hard drive?
-
640 is used in many hardware-based devices for fixed-system playback in themed entertainment and some cinema .. like the Alcorn McBride DVMHD
Though D-Cinema's QuVis systems use uncompressed (up to) 12-channel PCM .. or outboard DTS in some cases. -
I was told not to dig out old threads, but I feel ashamed opening a new thread for every tiny question, over and over again in one evening. So here is what concerns me.
Avidemux 2.6.6 offers a maximum of only 448kbps. I was under the impression that one should use a maximum of 384kbps, because every player can handle that, and that higher bitrates could cause problems. There are several sites that mention 384kbps as the maximum.
This http://www.gromkov.com/faq/general/dvd_audio_formats.html is not the only one. -
-
@MovingParts:
448kbps is the maximum bitrate allowed for DVD-Video.
640kbps is the maximum value allowed for Blu-ray authoring, and it is the maximum in the AC-3 spec itself as well.
Speaking in general, 384kbps gives too low quality for 5.1 audio.
In fact, even 448kbps can be "not good enough" for 5.1 audio,
depending on the source, on the equipment used, and on the listener's ears as well. -
-
well I posted here 10 years ago .... time flies!!
"Each problem that I solved became a rule which served afterwards to solve other problems." - Rene Descartes (1596-1650) -
It'd still apply if you want multi-channel audio too (you can create multi-channel wave files), but it's not the "uncompressed" part so much, rather the fact it's lossless. You can compress lossless audio losslessly using a format such as flac, but it doesn't compress down anywhere near as much as a lossy format such as AC3.
I'm not 100% sure what the AC3 limit is for stereo audio (to be compliant with the DVD spec) but I think it's 384kbps. It might be 448kbps, but even at 384kbps the quality should be pretty high. -
If stereo, I would leave it as PCM anyway.... even at 384 (448 is also the limit for stereo) or 448 the difference is noticeable - though people used to mp3s would not notice a difference. -
There's certainly something to be said for the placebo effect. Some people also think digital music sounds different depending on the brand/model of hard drive it's stored on. Seriously. And now we can all benefit from the improvement in sound when listening to digital audio over expensive ethernet cables. Seriously.
Audibility of a CD-Standard A/DA/A Loop Inserted into High-Resolution Audio Playback
http://drewdaniels.com/audible.pdf
The Emperor's New Sampling Rate
24/192 Music Downloads ...and why they make no sense
All it'd take is one credible ABX test proving someone can definitely tell the difference in the studio and I'd jump straight on the "okay, there must be an audible difference" bandwagon. Just one person and just one test.
Who's we? Which studio? Which lossy formats? Which encoders have you tested? -
It's been a while since I've seen a 10 year old thread bumped...
I usually strip the embedded AC3 audio from the original Blu-ray disc.
TrueHD tracks have AC3 embedded; DTS-HD may not in most cases. -
This is only partially true - you can't ignore the fact that more and more audio systems perform digital level control, which means the overall audio bit depth needs to be significantly higher (at least 2 times) than the bit depth of the native material.
As the DAC bit depth can't simply be doubled (in the analog domain the practical limitation is around 110 - 120dB), you need to go for significant oversampling... this is the cost of eliminating high-quality analog electronics (precision, temperature-stable, low-noise metal film resistors, high-quality potentiometers, etc.)... -
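To put a rough number on that point (my own back-of-the-envelope sketch, not from the post above): each ~6dB of digital attenuation throws away roughly one bit of effective resolution, which is why the processing path wants more bits than the source material.

```python
# Rough rule of thumb: ~6.02 dB of dynamic range per bit.
def effective_bits(source_bits: float, attenuation_db: float) -> float:
    """Effective resolution left after digital volume attenuation."""
    return source_bits - attenuation_db / 6.02

for att_db in (0, 12, 24, 48):
    print(f"16-bit source, {att_db:>2} dB digital attenuation -> "
          f"~{effective_bits(16, att_db):.1f} effective bits")
```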
The maximum audio bitrate on DVD is determined by the frame size of the audio codec.
The DVD specs mandate a certain sized buffer for audio frames and the frames have to fit inside that buffer.
A DTS frame has a duration of 10.6666... ms, whereas AC-3 has a frame duration of 32ms, which is why DTS is allowed a higher bitrate.
At least that's what I read.
Info:
http://stnsoft.com/DVD/dtshdr.html
http://stnsoft.com/DVD/ac3hdr.html
http://stnsoft.com/DVD/ass-hdr.html
The maximum bitrate for DTS is 1509.75kbps, so the maximum for AC-3 should be about a third of that.
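If I follow that reasoning, the numbers work out roughly like this (a quick sanity check using only the figures above, not anything pulled from the spec itself):

```python
# AC-3 frames are ~3x longer than DTS frames, so if the same buffer must hold
# one whole frame, AC-3's bitrate ceiling is roughly a third of DTS's.
dts_max_kbps = 1509.75     # max DTS bitrate on DVD, from the post above
dts_frame_ms = 32 / 3      # 10.666... ms
ac3_frame_ms = 32.0

ac3_ceiling = dts_max_kbps * dts_frame_ms / ac3_frame_ms
print(f"Implied AC-3 ceiling: {ac3_ceiling:.2f} kbps")      # ~503 kbps

# Highest standard AC-3 rate that still fits under that ceiling:
ac3_rates = [192, 224, 256, 320, 384, 448, 512, 640]
print("Highest rate that fits:", max(r for r in ac3_rates if r <= ac3_ceiling))  # 448
```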
-
Now that makes sense to me. I'm sceptical as to how audible a difference there'd be for the majority of us mere mortals, but it's logical and makes sense.
Question though.... would upsampling 16bit audio to 24bit prior to any sort of digital manipulation be likely to produce an audibly different result than if the audio was natively 24bit? If you go with the theory that 16bit/44.1k is enough to reproduce audio perfectly accurately, I'm trying to get my head around whether 24bit should be better than 16bit in respect to digital manipulation if you upsample the 16bit audio first.
Do you know much about null tests? (For those who don't know: if you take two copies of an audio track and invert one, they cancel each other out and you get perfect silence, whereas performing the same test between a lossless source and a lossy encode never gives perfect silence.... and that's a null test)
I ask, because I was messing around a little earlier, and if the quality of a lossy encode directly relates to how close to silence a null test produces, AC3 does better than I expected. I don't know if it's that simple, but I was using the AFTEN AC3 encoder and I'm not sure if it's especially high quality. I don't do much AC3 encoding. Anyway...... I just picked a random CD track and a small section from it and tried some null testing.
There's a click at the beginning of the 448k AC3 test. I think it's due to using the AFTEN "no padding" option when encoding but it was the only way I could ensure Audacity would import the AC3 correctly. I wanted to try the FhG AAC encoder as well as LAME MP3, but for some reason those encodes weren't lining up with the source correctly in Audacity despite the fact foobar2000 showed they had the exact same number of samples. I assume that's some sort of issue with encoder padding when importing, but I was running out of time to keep messing about at that stage. Maybe later.
I've no idea what this proves or if it's directly related in any way to differences we might hear between a source and a lossy encode, but I found it interesting. If it does relate to differences we can potentially hear, AC3 seems to do as well as AAC at high bitrates. You can get a rough idea just looking at the file sizes. The smaller the size, the closer the null test probably was to producing silence.
Edit: I thought I'd add the bitrates of the lossy VBR encodes for completeness.
fdkaac m4 - 129kbps
fdkaac m5 - 213kbps
neroaac q0.5 - 181kbps
neroaac q1.0 - 410kbps
qaac q91 - 215kbps
qaac q127 - 352kbps
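For anyone who wants to try the same thing, this is roughly what the null test boils down to in script form (a minimal sketch only - it assumes both files are 16-bit PCM WAVs that have already been decoded, have the same length and channel count, and are sample-aligned; the file names are just placeholders):

```python
import wave
import numpy as np

def read_pcm16(path):
    """Read a 16-bit PCM WAV into floats in the range [-1.0, 1.0)."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return np.frombuffer(frames, dtype=np.int16).astype(np.float64) / 32768.0

source = read_pcm16("source.wav")          # placeholder file names
encode = read_pcm16("ac3_decoded.wav")

# Inverting one copy and mixing it with the other is the same as subtracting.
residual = source - encode
rms = np.sqrt(np.mean(residual ** 2))
print(f"Residual level: {20 * np.log10(max(rms, 1e-12)):.1f} dBFS")
```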
-
It's worth pointing out that there are no objective studies on perception above 20kHz as it relates to lossy coding - at the very least, no model of perceived loudness exists for that range.
Personally I have nothing against high bit depth, high sample rate PCM audio (SACD or DSD is a different topic, and I'm in favour of rejecting SACD as an audio format when compared to PCM). Nowadays we have no problem capturing and storing UHQ audio even if the average listener can't perceive the ultrasonic range. I can understand the concerns about phase distortion expressed by enthusiasts of UHQ audio.
Personally I'm aware that my own perception ends around or slightly above 16kHz (I can still hear the horizontal deflection of a regular CRT TV, which is 15625Hz) - I mention this to make clear that I'm not biased by any imaginary ultrasonic perception of my own.
From a mathematical perspective, a digital level change just multiplies the sample value by a constant - even if this is performed with truncation it can be perceived as a lossless operation: multiplying by a constant coefficient will not change the nature of the sample, so it still has 16-bit accuracy (carried in 24-bit resolution - resolution and accuracy are different things).
In the analog world a similar operation is used for differential mode (symmetrical, balanced, etc.) - it is also used in real audio hardware to improve system dynamics: two or more DACs (or ADCs), run in parallel to reduce noise, are fed with differential data (signals).
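As a rough numerical illustration of that (my own sketch, not pandy's): apply an arbitrary gain to 16-bit samples inside a 24-bit container, truncate at the 24-bit LSB, and the error stays a tiny fraction of one 16-bit step - in other words the samples keep their 16-bit accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
x16 = rng.integers(-32768, 32768, size=100_000)    # 16-bit sample values
gain = 0.8                                         # arbitrary digital level change

# Scale into a 24-bit container and truncate at the 24-bit LSB...
x24 = np.floor(x16 * 256 * gain)
# ...then undo the gain and measure the error relative to one 16-bit step.
worst = np.max(np.abs(x24 / (256 * gain) - x16))
print(f"Worst-case error: {worst:.4f} of one 16-bit step")   # ~0.005
```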
The problem with codecs is that without knowing exactly how they are designed you can't predict their phase distortions, and you can't control the sample positions... Imagine you have a perfect codec, but it produces samples shifted by some offset from the reference - subtracting such samples will produce an error that doesn't really exist (unless absolute time position is important to you - in audio it is not critical as long as it stays constant).
In most codecs that work in the frequency domain (DCT-based, for example) you can't ignore this problem...
Also, FIR filters introduce delay - the better the filter, the longer the delay - and the results returned by a filter can be inaccurate for some time. All the math used in codecs assumes the audio signal is a continuous function of time, so there is a "past", a "present" and a "future", and sample values are produced using all three (sometimes only past and present) - you can perceive this as pre-echo and post-echo. You can reduce the effect by overlapping analysis windows, but that is not free, and higher frequency accuracy leads to worse time accuracy...
So even manually aligning samples may not be optimal or sufficient for a comparison: imagine a constant shift by π/2, i.e. 90 degrees - such a signal will be perfect in the analog domain, but in the digital domain it will have completely different sample values and as such can't be compared directly. Samples are not everything - it's the analog signal after the reconstruction filter that matters. -
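To illustrate that last point (a toy example of my own, not something from pandy's post): a 1kHz tone and the same tone shifted by 90 degrees sound identical, yet subtracting them sample-by-sample, as in a null test, leaves a residual as loud as the tones themselves.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                        # one second at 48kHz
a = np.sin(2 * np.pi * 1000 * t)              # 1kHz reference tone
b = np.sin(2 * np.pi * 1000 * t + np.pi / 2)  # same tone, shifted 90 degrees

residual = a - b                              # "null test" by subtraction
print("RMS of each tone: %.3f" % np.sqrt(np.mean(a ** 2)))         # ~0.707
print("RMS of residual:  %.3f" % np.sqrt(np.mean(residual ** 2)))  # ~1.000
```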
So the upshot of what you're saying is I shouldn't pay too much attention to the results of the null tests because they don't really reflect any differences between the source and the encode we might hear?
I figured that was likely to be the case, but I wasn't sure. -
My point is that we can't assume that lossless PCM and a lossy codec are equal, especially where complex, non-stationary signals are involved. They will only match for single tones such as a sine wave, and only under some conditions.
PCM is a sampled representation of some real signal - lossy coding is more like a music synthesizer: we create/generate a signal in a particular way, and our goal is for the generated signal to be similar to the original. -
http://www.avsforum.com/forum/150-blu-ray-software/1369154-dolby-truehd-7-1-vs-dts-ma-...d-7-1-a-4.html
Just a note that when it came to DVD, the bitrate limitation to 448 kbps was not for lack of Dolby wanting it to support 640 kbps.
DVD players are able to store one sector of disc audio data in memory for the jump/branch function. Therefore, one audio frame must completely fit within one sector. The DVD memory can hold 2048B, or 16kb.
One frame of DD audio is 32 ms long, and 16kb/32ms = 500kb/s. The 448 kb/s rate therefore fits, but the higher DD rates of 512 and 640 kb/s exceed the memory size.
In contrast, DTS uses a 10ms frame size, so 16kb/10ms = 1.6Mb/s. Hence they can use the 1.536Mb/s rate of 48kHz.
One might ask, so why did Dolby not use a shorter frame size like DTS? It reduces coding efficiency. The frame size on Dolby's film version of AC-3 is actually 10 ms, as that is the perf rep rate. But to improve the sound quality for consumer applications of AC-3 (the first being HDTV), the frame size (duration) was increased.
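Following that post's own figures (one 2048-byte sector buffer per audio frame, and counting "16kb" as 16,000 bits the way the quote does - this is just my reading of it, not anything taken from the DVD spec):

```python
# One audio frame has to fit completely inside one sector-sized buffer.
buffer_bits = 16_000       # "16kb" per the quote above
dd_frame_s = 0.032         # Dolby Digital frame duration (32 ms)
dts_frame_s = 0.010        # DTS frame duration (~10 ms)

print(f"DD ceiling:  {buffer_bits / dd_frame_s / 1000:.0f} kb/s")   # 500 kb/s: 448 fits, 512/640 don't
print(f"DTS ceiling: {buffer_bits / dts_frame_s / 1e6:.1f} Mb/s")   # 1.6 Mb/s: 1.536 Mb/s fits
```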