Guys, I wanted to know whether it is possible to encode an audio track in DTS with the (dca) codec, but I'm having difficulty: I can only get stereo, not multi-channel (5.1, i.e. 6 channels).
My belief was that it could be done, but maybe I was wrong or I'm missing something. Thanks.
I don't know how ffmpeg uses libdcaenc, so all I can say is: give ffdcaenc.exe itself a try.
Download it from https://github.com/filler56789/ffdcaenc-2/releases
[C:\] => ffdcaenc -h
FFDCAENC --- experimental 'Coherent Acoustics' compressor.
Usage: ffdcaenc -i <input.wav> -o <output.dts> -b <bitrate_kbps>
Optional:
  -l  Ignore input length, can be useful when reading from stdin
  -e  Switch output endianness to Little Endian (default is: Big-Endian)
  -r  Reduced Bit Depth for DTS CD format (default is: Full Bit-Depth)
  -h  Print this help screen
  -c  Overwrite the channel configuration (default is: auto-selection)
  -f  Add an additional LFE channel (default: used for 6-channel input)
  -m  Multiple Mono input files (default: -i for multi-channel input file)
      Use -0 <input.wav> -1 <input.wav> etc. (up to -5)
      Channels are in ITU order: 0,1,2,3,4,5 --> LF, RF, C, LFE, LS, RS
      The following mono input file combinations are supported:
      1.0  -2 center.wav
      1.1  -2 center.wav -3 lfe.wav
      2.0  -0 left.wav -1 right.wav
      2.1  -0 left.wav -1 right.wav -3 lfe.wav
      3.0  -0 left.wav -1 right.wav -2 center.wav
      3.1  -0 left.wav -1 right.wav -2 center.wav -3 lfe.wav
      4.0  -0 left.wav -1 right.wav -4 ls.wav -5 rs.wav
      4.1  -0 left.wav -1 right.wav -4 ls.wav -5 rs.wav -3 lfe.wav
      5.0  -0 left.wav -1 right.wav -2 center.wav -4 ls.wav -5 rs.wav
      5.1  -0 left.wav -1 right.wav -2 center.wav -4 ls.wav -5 rs.wav -3 lfe.wav
  -v  Show version info
REMARKS:
  The input or output filename can be "-" for stdin/stdout.
  The bitrate is specified in kilobits per second and may be rounded up --
  use floating-point values for bitrates that are not a multiple of 1 kbps.
  Because the encoder uses a 4-byte granularity, i.e., 32 bits per audio
  frame (with 512 samples/frame), the ACTUAL bitrate will always be a
  multiple of:
    3         kbps for 48 kHz
    2.75625   kbps for 44.1 kHz
    2         kbps for 32 kHz
    1.5       kbps for 24 kHz
    1.378125  kbps for 22.05 kHz
    1         kbps for 16 kHz
    0.75      kbps for 12 kHz
    0.6890625 kbps for 11.025 kHz
    0.5       kbps for 8 kHz
  -- NOTICE: the values 377.25, 503.25, 754.5 and 1509.75 AT _48kHz_ are exceptions.
* Available channel-layouts:
  -  1: A
  -  2: A, B
  -  3: L, R
  -  4: (L+R), (L-R)
  -  5: Lt, Rt
  -  6: FC, FL, FR
  -  7: FL, FR, BC
  -  8: FC, FL, FR, BC
  -  9: FL, FR, BL, BR
  - 10: FC, FL, FR, BL, BR
  - 11: CL, CR, FL, FR, BL, BR (not supported)
  - 12: FC, FL, FR, BL, BR, OV (not supported)
  - 13: FC, BC, FL, FR, BL, BR (not supported)
  - 14: CL, FC, CR, FL, FR, BL, BR (not supported)
  - 15: CL, CR, FL, FR, SL1, SL2, SR1, SR2 (not supported)
  - 16: CL, FC, CR, FL, FR, BL, BC, BR (not supported)
* Valid sample rates (in kHz):
  8  11.025  12  16  22.05  24  32  44.1  48
* Transmission bitrates (in kbps):
  32 56 64 96 112 128 192 224 256 320 384 448 512 576 640 768 960
  1024 1152 1280 1344 1408 1411.2 1472 1536 1920 2048 3072 3840
  open  VBR  LOSSLESS
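The bitrate-granularity remark in the help text above can be checked with a few lines of arithmetic (a sketch only; the 512-samples-per-frame and 4-byte-step figures come from the help text, the function name is mine):

```python
# DTS frames hold 512 samples and the encoder works in 4-byte (32-bit)
# steps, so achievable bitrates are multiples of
# 32 bits * (sample_rate / 512 samples) = sample_rate / 16 bits per second.
def bitrate_step_kbps(sample_rate_hz):
    frames_per_second = sample_rate_hz / 512
    return 32 * frames_per_second / 1000  # granularity in kbps

print(bitrate_step_kbps(48000))  # 3.0
print(bitrate_step_kbps(44100))  # 2.75625
print(bitrate_step_kbps(8000))   # 0.5
```

This reproduces the per-sample-rate granularities listed in the help screen (48 kHz -> 3 kbps, 44.1 kHz -> 2.75625 kbps, and so on).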
ffmpeg can encode 5.1 DTS. It uses the number of input channels whenever possible unless you explicitly set something else, so either you set ffmpeg to down-mix to stereo or your source is only stereo. While it is possible to convert to 5.1, I highly recommend you don't. If you want more help, post MediaInfo of the source, the complete ffmpeg command and log, and why exactly you think you need DTS.
Oooops, the info below is important as well, because ffmpeg's dts encoder is libdcaenc:
Originally Posted by myself
ffmpeg supports a DTS encoder - below are its capabilities:
Encoder dca [DCA (DTS Coherent Acoustics)]:
    General capabilities: exp
    Threading capabilities: none
    Supported sample rates: 8000 16000 32000 11025 22050 44100 12000 24000 48000
    Supported sample formats: s32
    Supported channel layouts: mono stereo quad(side) 5.0(side) 5.1(side)
DCA (DTS Coherent Acoustics) AVOptions:
  -dca_adpcm         <boolean>    E...A... Use ADPCM encoding (default false)
I have a DVD movie with an audio track in DTS:
Audio #2
ID                          : 189 (0xBD)-137 (0x89)
Format                      : DTS
Format/Info                 : Digital Theater Systems
Mode                        : 16
Format settings, Endianness : Big
Muxing mode                 : DVD-Video
Duration                    : 37 min 26 s
Bit rate mode               : Constant
Bit rate                    : 1 510 kb/s
Channel(s)                  : 6 channels
Channel positions           : Front: L C R, Side: L R, LFE
Sampling rate               : 48.0 kHz
Frame rate                  : 93.750 FPS (512 SPF)
Bit depth                   : 24 bits
Compression mode            : Lossy
Stream size                 : 404 MiB (39%)
I perform the conversion, setting the command line with the audio part this way, using the (dca) codec:
... -map 0:v -map i:0x89 -strict -2 -c:a dca -ac 6 -b:a 384k -ar 48000 -hide_banner "Pulp_Fiction.mp4"
OK, then I edit it this way and insert "copy" so that it copies the audio codec of the selected track, still with 6 channels (5.1):
... -map 0:v -map i:0x89 -strict -2 -c:a copy -ac 6 -b:a 384k -ar 48000 -hide_banner "Pulp_Fiction.mp4"
Audio
ID                          : 2
Format                      : DTS
Format/Info                 : Digital Theater Systems
Mode                        : 16
Format settings, Endianness : Big
Codec ID                    : mp4a-A9
Duration                    : 2 h 28 min
Bit rate mode               : Constant
Bit rate                    : 1 510 kb/s
Channel(s)                  : 2 channels
Channel(s)_Original         : 6 channels
Channel positions           : Front: L C R, Side: L R, LFE
Sampling rate               : 48.0 kHz
Frame rate                  : 93.750 FPS (512 SPF)
Bit depth                   : 24 bits
Compression mode            : Lossy
Stream size                 : 1.56 GiB (49%)
Default                     : Yes
AlternateGroup/String       : 1
2) ffmpeg still sucks at using libdcaenc;
Anyway: 384000 bits/second for six audio channels?
Many people swear that even 754.5 kbps is not good enough for six audio channels...
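For scale, here is the naive per-channel budget implied by those numbers (a back-of-envelope sketch; the function name is mine, and real encoders of course do not split bits evenly across channels):

```python
# Naive per-channel budget: total bitrate divided by channel count.
def per_channel_kbps(total_kbps, channels):
    return total_kbps / channels

print(per_channel_kbps(384, 6))    # 64.0 kbps per channel
print(per_channel_kbps(754.5, 6))  # 125.75 kbps per channel
```

Even the 754.5 kbps mode that "many people swear is not good enough" leaves roughly twice the per-channel budget of a 384 kbps stream.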
If the problem is ffmpeg and its internal dca codec, I'll have to do without it. At this point I don't care about a DTS mono or stereo audio track; it is not convenient to use for conversions.
Welcome to My UsersListNotConsidered.
I think the problem is the internal (dca) codec in ffmpeg; I hope it will be solved in the future. For the moment, if no one else has a valid answer to the problem, I think it is useless to clog the thread with useless answers. Thanks
Your command line should be like this:
ffmpeg -hide_banner -i .... -map 0:v -map i:0x89 -c:a copy -movflags faststart -f mp4 "Pulp_Fiction.mp4"
or for AC-3 like this:
ffmpeg -hide_banner -i .... -map 0:v -map i:0x89 -c:a ac3 -b:a 448k -movflags faststart -f mp4 "Pulp_Fiction.mp4"
According to Selur at Doom9's forum, 670 kbps is the minimum bitrate accepted by libdcaenc for compressing 5.1 audio at 48 kHz.
Last edited by Marsia Mariner; 7th Dec 2017 at 07:20.
On DVD-Video, the purpose of DTS was mainly to provide a multi-channel audio stream at roughly the same bitrates as stereo or mono 16-bit LPCM, with only a little quality loss and an almost full frequency spectrum (so I was told by books when I worked in a DVD authoring studio; unfortunately I forgot which...). The bitrates on DVD-Video (at 48.0 kHz) are 754.5 kbps (just below 768, like mono LPCM) or 1509.75 kbps (just below 1536, like stereo LPCM). While the distortions are nearly transparent at the HQ bitrate, they are clearly noticeable at the LQ bitrate.
Dolby Digital AC-3, typically at 384 or 448 kbps for discrete multi-channel audio on DVD-Video (the core specs go up to 640 kbps), has a lower bitrate than DTS. But that does not imply lower quality. Besides a Fourier-like transformation of discrete samples into spectral bands per audio frame, which already helps compressibility, the psycho-acoustic filtering of the spectrum and good channel coupling (like Mid/Side coding of stereo, just with more channels) increase the efficiency even more. If DTS has an advantage over AC-3, it is mainly in preserving frequencies that are probably hardly noticeable.
But the real advantage of Dolby Digital AC-3 over DTS is in the dynamic range: while DTS is as limited as 16-bit integer LPCM (in fact, it has even fewer significant bits), AC-3 can be superior if the encoder had a high-resolution master (24-bit integer or even floating-point audio samples) as its source, and the impact is most obvious in rather quiet scenes (16-bit integer samples have a resolution of 16 bits only at full volume, whereas floating-point samples always keep a precision up to that of their mantissa, limited only by the precision of the microphones).
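The dynamic-range argument can be made concrete with the standard 20·log10(2^N) rule for N-bit integer PCM (a sketch of the textbook figures only; exact numbers depend on dither and measurement convention, and the function name is mine):

```python
import math

# Theoretical dynamic range of N-bit integer PCM, in dB.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB
print(round(dynamic_range_db(24), 1))  # ~144.5 dB
```

So a 24-bit master carries roughly 48 dB more headroom than the 16-bit range a DTS core stream is limited to, which is why quiet scenes show the difference first.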
tl;dr: Despite lower bitrates, AC-3 can have better subjective quality than DTS (core specs, compatible with DVD-Video, or only a little beyond).
Yep - depending on the sample rate, DTS can have two bitrates - in this discussion we can call them HQ and LQ. The HQ bitrate is very close to the PCM bitrate (simple math: 2 channels, 44100 Hz sample rate, 16-bit linear PCM = 2 × 16 × 44100 = 1411.2 kbps; for a 48000 Hz sample rate it is 1536 kbps), and LQ mode is half of the HQ bitrate.
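The "simple math" above, spelled out (a sketch; the helper name is mine):

```python
# Uncompressed LPCM bitrate: channels * bit depth * sample rate.
def pcm_bitrate_kbps(channels, bits, sample_rate_hz):
    return channels * bits * sample_rate_hz / 1000

print(pcm_bitrate_kbps(2, 16, 44100))  # 1411.2 -> HQ DTS bitrate at 44.1 kHz
print(pcm_bitrate_kbps(2, 16, 48000))  # 1536.0 -> HQ DTS at 48 kHz; LQ is half
```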
DTS also supports the possibility of using only 14 of the 16 bits in the encoded stream, to reduce the hypothetical risk of destroying tweeters (so effectively 87.5% of the bits are used to carry data).
HQ DTS can be superior to AC-3 (as an MDCT-based codec, AC-3 is prone to pre-echo).
Excerpt from: https://tech.ebu.ch/docs/tech/tech3324.pdf
The general conclusion of the EBU evaluations is that the quality performance cannot be achieved
if the bitrates used are not sufficient. This conclusion applies to both old and new codecs. If the
quality performance requirement for broadcasters is that none of the test sequences resulted in a
quality lower than “Excellent” (i.e. 80 points on MUSHRA scale), then relatively high bitrates are
required. For example, consider Dolby Digital (DD) or DTS which have been in the market for more
than 10 years: Dolby Digital requires 448 kbit/s and DTS still requires around 1.5 Mbit/s for
"Excellent" quality. The newer codecs, such as Dolby Digital Plus or Windows Media provide
"Excellent" quality only if operating at 448 kbit/s or above.
It is interesting to note that broadcasters who were using Dolby Digital at a bitrate of
448 kbit/s several years ago made the right decision, although it was not made on scientific basis
but was based merely on practical "trial-and-cut" experience. Today, Dolby Digital still represents a
good compromise between bit rate and quality. Broadcasters who use DD at 448 kbit/s for 5.1
multi-channel audio are able to offer excellent multi-channel audio quality. This conclusion is
equally true for standard TV, HDTV and radio broadcasts.
Forgot to add:
Those less-significant bits are a special case where the DTS compressed stream uses only 14 of its 16 bits - this special mode does not affect audio dynamics; it can be used to prevent tweeter damage when a DTS track is accidentally played back as PCM.
Last edited by pandy; 7th Dec 2017 at 17:08.