Solving the problems with my video clips by trial and error is just too frustrating; I wish I knew more about it, but I can't seem to find info which is comprehensible at my very modest level of knowledge. For instance: I know more or less what a 'codec' is supposed to do, but many practical questions remain, for instance:
1. Are codecs installed under the OS, for common use? Or do video players come with their own codecs? Or is it a combination of the two? (I seem to have read, for instance, that VLC player installs its own codecs. If so: why?) This question is prompted by the fact that, on my notebook, WMP plays a certain video well, VLC plays it jerkily, while Irfanview says it's missing a codec.
2. As I would expect, some video formats seem to require more processing power to play than others. However, when I compare different computers, I can't make head or tail of the results: an ageing notebook plays almost no HD format well, except for m2t files, which can be played flawlessly with Windows Media Player (but not with other players). Our mini-notebook has no trouble handling MTS files (of which I read that they require lots of power) but plays the m2t files very badly. Our (Android) tablet plays MTS files jerkily and without sound, but handles m2t files fine. I used the same files on all 3 systems, of course, and tried lots of other formats, but you'll get the idea. Quite obviously, these differences cannot be explained by mere differences in CPU power. So what else could be the explanation?
1 - nowadays a lot of systems have very limited codec support - common practice is to avoid codecs that require a license fee - common codecs covered by royalties are MPEG-2, H.264, H.265, MP3, etc. Sometimes the OS vendor pays this license fee and the OS user pays for the OS.
Of course there is a large number of more or less free codecs - they usually don't gain high popularity due to the lack of wide industry acceptance.
2 - this is more related to the HW capabilities of your decoder - pure software decoders use the CPU and usually suffer from the symptoms you describe; nowadays a lot of modern SoCs support some HW decoding acceleration, which is usually the best possible option from the customer's perspective - common HW decoders supported by today's SoCs handle MPEG-2 and H.264. In addition to the HW capabilities, you need software capable of using the HW decoder itself, and that is a completely different problem, as those HW decoders are usually proprietary and there is usually no freely available documentation. Intel uses its QSV technology, NVIDIA its NVDEC, etc.
My advice is simple - use software capable of exploiting the HW acceleration your HW offers.
Thanks a lot for replies.
@Pandy: If I understand you correctly, you say it's all in the interplay between codecs and specific HW. No doubt you've got a point there, but as an answer to my question I find this hard to understand. I mentioned several video formats (e.g. MTS, m2t, m2ts) and, as far as I can find out, these formats ALL contain streams compressed with the same MPEG-2 codecs. If it were only HW support for this particular compression that makes the difference, our computers would consistently stack up in the same order, but, as I described, they do NOT.
How can that be explained? I think there's more to it than just hardware assistance.
@Jagabo "It's a jungle"
So, now you tell me! Thanks a lot - I'm already lost in that jungle!
But anyway, your very helpful reply confirms some things I'd come to suspect. And am I right to understand that reading a container format is a function of the media player? That this has nothing to do with codecs and hardware acceleration? This would make sense to me, as my limited knowledge tells me that 'codecs' are bits of software which take on the task of compressing/decompressing video or audio, and for some of these the hardware, notably the graphics processor, may contain special functions to speed things up. Correct?
(Or is reading a container format a 'shared' function of some OSes (notably Windows) too? I can't seem to find out.)
Another thing I learned from your reply is that graphics drivers are something of a surprise factor: while I can easily make sure I have the latest versions of player, codec and OS, and can also, albeit with some more effort, read up on bugs in each of those, the bugs (or 'updates') in video drivers are a very dark area, and I had not sufficiently realised their role in all this.
With all this in mind, though, I still struggle to explain the differences between my computers, since on all of them I use the same software to play the videos (WMP, VLC and Irfanview).
It's all become way too complicated, I feel. Part of the problem, for me at least, is the lack of understandable documentation, plus internet articles which (partly) contradict each other.
For instance, the MTS files my camera produces usually play lousily; they do much better when rendered in the m2t format. Why? Wikipedia redirects from m2t directly to MTS and treats them like the same format. But they obviously are not, although both contain mpeg-2 encoded streams.
I have one more question. I would expect it to be easy to find out what codecs are installed under the OS. But the program I now use, Gspot, is quite often unable to determine whether or not a certain codec is installed. How can this be? Is Gspot's maker a dummy (I suspect not) or is the Windows approach just such a mess?
The .m2t extension is used with older HDV files, which are encoded as Mpeg2.
For an equivalent quality file, the mpeg2 files will be larger than the MTS files. Mpeg4 is a more highly compressed format, which demands more computing power to decode.
Hence the reason why the older, larger, but less compressed .m2t files play more smoothly. They need a lot less computing power.
To see what codecs you have installed on your machine, try this little program: http://www.updatexp.com/sherlock-codec-detective.html
It's free and it doesn't need to be installed. Just run the .exe.
Starting on the left you have a file reader, followed by a file splitter (aka demultiplexer), then audio and video decompressors and renderers. Sometimes the reader and splitter are the same filter. Sometimes other filters are added to overlay subtitles, convert colorspace, etc. For system-installed filters there is a priority system that specifies which filters take precedence when you have multiple filters that can perform the same function. There are also programs that let you change those priorities.
The idea is to have a flexible, modular system where anyone can create their own containers, codecs, and renderers and have them play on any Windows computer simply by installing the filters. Unfortunately, along with that flexibility comes complexity.
GSpot is one, but there are also filter editors that will show you installed filters, their priorities, etc. But they can only show system-installed filters. They will not show you filters built into various players and editors, which are not available to other programs.
Some players, like MPCHC, will let you see what filters are being used during playback.
Gspot didn't help either: it gives the file type as "MPEG-2 Transport Stream" and the MIME type as "video/mp2t" but says, to my dismay, nothing at all about the contents of this container.
By now I am aware that all this is just about the container and, reading more closely, I now see that it can indeed contain MP4-encoded video, and my files probably do, as the makers of amateur-class video cameras no doubt strive to get as much video on the memory card as possible.
However, although your answer does take away some of my question marks, I still struggle to see how it can fully explain the differences I observed between the various computers I tried and the various players I used. But maybe I should give up on my question now that I seem to be getting closer to practical solutions for my practical problems.
PS: The "sherlock-codec-detective" utility you mentioned seems to work fine on my Windows 7 computer - quite illuminating indeed! And thanks for mentioning it. Yet the website clearly states the program is meant for Windows XP only, so when it comes to explaining inexplicable malfunctions, I doubt whether I can rely on this utility. We'll see.
Thanks again for your responses and the chart; it already helped me to solve some problems by experimenting in a somewhat more systematic way.
Also, just for clarity, file extensions (avi, mpg, mts, mp4, m2t, etc...) indicate containers ONLY, and do not necessarily tell you what video/audio streams are contained within. As zing269 says, MediaInfo will tell you more accurately what the A/V streams inside the container are. MPEG-2, MPEG-4, AVC, MPEG audio, Dolby Digital audio, AAC audio, etc... are different A/V streams that can be put within said containers. Subtitle streams of various types can also be added. And things can start to get complicated...
Sent from my 831C using Tapatalk
Also, anyone can change the file extension. So the extension doesn't necessarily reflect the container. Most media players will examine the contents of the file to determine what the true container is. The term "container" here means how the data is organized internally. Different containers have different ways of organizing the data within them.
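To make the "examine the contents" idea concrete, here is a minimal Python sketch of container sniffing by "magic bytes". The signatures shown are the real ones for these formats, but a genuine player probes far more deeply and handles many more containers than this toy does:

```python
def sniff_container(data: bytes) -> str:
    """Guess a container format from its first bytes.
    A rough sketch: real demuxers verify much more than this."""
    if len(data) >= 12 and data[4:8] == b"ftyp":
        return "MP4 family"             # ISO base media file format
    if data[:4] == b"\x1a\x45\xdf\xa3":
        return "Matroska/WebM"          # EBML header
    if data[:4] == b"RIFF" and data[8:12] == b"AVI ":
        return "AVI"
    if data[:1] == b"\x47" and len(data) > 188 and data[188] == 0x47:
        return "MPEG transport stream"  # 0x47 sync byte every 188 bytes
    return "unknown"

# Renaming a .mkv file to .mp4 changes nothing here: the bytes decide.
```

Which is exactly why the extension alone can mislead: the file's own bytes, not its name, determine the true container.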
@Zing264. Thanks a lot; I am now using MediaInfo which indeed provides a lot more info and is very helpful to me.
@Southstorm and @jagabo: I think I do understand the concept of a container; my original posting made that clear, I hope. And by now I do understand that an MPEG-2 Transport Stream (MTS) can contain MP4-encoded video, much like an MP4 file, but differently organised. Also, it's not just with videos that the file extension doesn't adequately describe the file type.
My original questions were in fact partly about these differences in the organisation of the container. Is it true that one container is more difficult to read than another (supposing they contain the same streams, of course)? Notably, is there any truth in the claim I read in numerous places, that the MTS files of modern amateur-class video cameras are 'notoriously difficult to read' and 'require lots of CPU power, which many devices don't have'? And if that is true, wouldn't remuxing to a more easy-to-read format be profitable? And why would camera makers choose such a difficult-to-read format?
These questions still stand, but I have to add: what prompted them was the difference in behaviour between the original MTS files and the MP4 files produced by a converter, which otherwise seemed inexplicable to me, as both files seemed to contain the same video streams. By now I have learned (thanks to another thread I started under the heading 'MP4 headaches', since the heading of this thread didn't seem appropriate anymore) that the files that did badly contained an interlaced stream, while the files that did well contained a non-interlaced (25p) stream. Is that indeed the explanation for their different behaviour? I'd like to think so, but nobody really confirmed this.
Moreover, I do not really understand WHY. I can see how real-time de-interlacing can take lots of CPU-time, but I'm at a loss to see why de-interlacing is even necessary. (But with these last questions I am duplicating what I asked in the other thread so I'll stop here.)
I'm still curious to find answers to the above questions about MTS though, since they mean the difference between finding solutions in faster hardware or in other video-players or in doing some pre-processing to make files more easily readable. Meanwhile, lots of thanks to this forum and the people on it!
MTS files are only difficult to read because they can contain various A/V types. This means any hardware that reliably reads them MUST support all the various A/V types. This is not practical on most hardware, hence the headaches. The same applies to software players on PCs, though it is easier to deal with on a PC.
Regarding interlaced and progressive, this has a long history and is the cause of many problems when editing video as it's easier to edit a complete (progressive) frame than a separated (interlaced) frame. There is nothing good about interlacing. It is an old technology that has remained for backward compatibility with older systems. But since it still exists, we deal with it.
Yet the rest of what you're saying I do not really understand, although I have read many remarks in the same vein on the web. But never with an explanation. Please take into account this is the forum for newbies!
As I understand it, 'interlacing' was invented to make movements seem more fluid while using the same bandwidth and picture resolution. It was a great improvement at the time. My point is that I cannot really see what has changed since that time. We still struggle with bandwidth when watching video online; all the more when everyone is moving to 'the cloud'. No internet connection will even try to provide you with 1920x1080 movies. Or, when watching videos on a computer or tablet, we struggle with CPU/GPU power, which comes down to the same thing. Therefore, interlacing video frames still seems to me a good idea, for the very same reasons it was invented.
I know I'm asking a lot, but PLEASE explain to me WHY you think "there is nothing good about interlacing" and that it has remained with us only for backward compatibility. I mean: what are the technical advances which have made interlacing superfluous and/or undesirable?
Last edited by Mabel; 9th Jan 2016 at 16:16.
Well, an interlaced frame is NOT always a complete frame. If there is high motion, you can have combing artifacts.
The short story is, interlacing was used to reduce bandwidth on old broadcast systems. The argument for "smoother motion" is moot with the advent of newer standards. If smoother motion is needed now, a higher frame rate is used. My local broadcast system is ATSC. It allows for 1080@30p, 720@60p and other lower combinations. The old NTSC system was interlaced only, with resolutions of 720x480 and lower.
The difference is night and day.
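To make the combing claim above concrete, here is a toy pure-Python sketch (tiny made-up 4x4 "frames", rows as lists of pixel values) showing how weaving two fields captured at different instants produces comb artifacts on motion, while a static scene weaves back cleanly:

```python
def split_fields(frame):
    """Split a frame into its top field (even rows) and bottom field
    (odd rows), the way an interlaced camera samples them."""
    return frame[0::2], frame[1::2]

def weave(top, bottom):
    """Interleave two fields back into one full frame - the simplest
    de-interlace, perfect only when nothing moved between the fields."""
    frame = []
    for t, b in zip(top, bottom):
        frame.extend([t, b])
    return frame

# A vertical bar at column 1 (time t0), then moved to column 3 (time t1):
frame_t0 = [[0, 1, 0, 0]] * 4
frame_t1 = [[0, 0, 0, 1]] * 4

top, _ = split_fields(frame_t0)     # even rows sampled at t0
_, bottom = split_fields(frame_t1)  # odd rows sampled at t1

woven = weave(top, bottom)
# Alternating rows now disagree about where the bar is: combing.
# Weaving two fields from the SAME instant reproduces the frame exactly.
```

So "an interlaced frame is a complete frame" and "motion causes combing" are both true: the frame is spatially complete, but its two halves can belong to different moments in time.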
No, a straight interlaced frame is always a complete frame. I think what you're trying to say is that a telecined frame may not represent the complete frame of the original source. But in computer files there is no such thing as a partial/incomplete frame. Such streams should rightly be considered corrupted.
Interlacing has been carried over into all but the very newest/highest standards, so it is NOT a moot point.
ATSC still to this day includes common support for interlacing and many stations showing HD are doing so using 1080i. Still pertinent.
And any worthwhile editing program should know how to properly handle interlaced frames just as easily as with progressive frames.
And there's no need to sidetrack the thread. The OP never mentioned any problems specific to interlacing.
The way I see it, the more open and option-rich the container is, the more versatile it is for multiple uses, but the more the burden is placed on the decoding app/device, to the point where there is a fracturing into tiers of capabilities. And a mismatch between those tiered apps/devices and the tiered sources is the cause of all the problems you see.
Last edited by Cornucopia; 9th Jan 2016 at 18:42.
Of course, the fact that MTS can contain different codecs is not the main cause. AVI, MP4, and MKV can contain a much wider array of codecs and they are much less problematic.
I suspect the major problem is that most free, open source tools use ffmpeg as the source filter. And ffmpeg has problems with some transport streams.
That, plus the spec allows for different packet sizes in the container (dependent on the application, and not mutually compatible between devices).
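For what it's worth, that packet-size difference can be detected mechanically. The 188-byte plain TS packet and the 192-byte M2TS/AVCHD variant (4 extra header bytes per packet), both synchronised on the 0x47 byte, are from the specs; the detection heuristic below is my own simplified sketch:

```python
def ts_packet_size(data):
    """Guess whether a transport stream uses plain 188-byte packets or
    the 192-byte M2TS variant (4 extra header bytes per packet).
    Checks that the 0x47 sync byte recurs at the candidate spacing."""
    for size, offset in ((188, 0), (192, 4)):
        positions = [offset + i * size for i in range(4)]
        if all(p < len(data) and data[p] == 0x47 for p in positions):
            return size
    return None
```

A real demuxer scans for the sync pattern anywhere in the file (streams can start mid-packet) and validates many more packets, but the principle is the same.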
I've read various responses with much interest, although I'm not sure I really understand them. I am left with some questions, though:
@smrpix and @Cornucopia: you say "an interlaced frame IS a complete frame." I can see this is true in the sense that two halves make for one whole, but your comment regarded SouthStorm's contention that "this has a long history and is the cause of many problems when editing video as it's easier to edit a complete (progressive) frame than a separated (interlaced) frame." I cannot see what's wrong with SouthStorm's statement. I do not know of any video editor able to work with the 'half-frames' of an interlaced format; as far as I know they are always de-interlaced to 'complete frames'.
As I now understand, a number of de-interlacing methods can be used, each with specific advantages and disadvantages. I understand that some methods give better results when camera or subject are more-or-less stationary, while other methods give a better result when the camera or the subject is moving swiftly.
This I can understand: the half-frames of an interlaced format may differ a lot when something is moving and this may well require de-interlacing methods which are different from those one would use when both half-frames were taken from a more stationary subject.
There's a lot I do NOT understand, however:
Thus, I still fail to see why interlacing is outmoded. Yet it seems to me that computers often struggle to play interlaced video. I gather this is (at least partly) because the media player has to de-interlace the video in real time before showing it. That would at least explain why media players and CPUs are struggling, but it doesn't explain why this de-interlacing is necessary in the first place.
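On the real-time cost question: one common quick method is 'bob' de-interlacing, where each field is line-doubled into a frame of its own. A toy pure-Python sketch (rows as lists; real players run this per field, 50 or 60 times a second, usually on the GPU and with smarter interpolation than plain line repetition):

```python
def bob(field):
    """Line-double one field into a full-height frame.
    Each field becomes its own output frame, so 50i in -> 50 frames/s out,
    which hints at why real-time de-interlacing eats CPU/GPU time."""
    frame = []
    for row in field:
        frame.extend([row, row])  # naive interpolation: repeat each line
    return frame

top_field = [[0, 1], [1, 0]]      # a 2-row field from a 4-row frame
full = bob(top_field)             # 4 rows, but half the vertical detail
```

Bob avoids combing on motion (each output frame comes from a single instant) at the price of vertical resolution, which is the basic trade-off motion-adaptive de-interlacers try to beat.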
I applaud you for thinking about these things, and encourage you to learn more about them.
It is 2016. A New Year is upon us, and it is probably worth contemplating the future even if that means reflecting on the past.
The only group of which I am aware that still champions interlaced over progressive content are broadcasters and for one, simple, unrelated reason: $$$$ (is there ever really any other reason?).
But, we live in a native progressive world. Nothing about the biology of our eyeballs is interlaced. Modern computer monitors are native progressive. Modern TVs are native progressive. Movies aren't interlaced. Neither are Youtube or photographs; and while the list goes on and on, more and more content is being authored as native progressive. I see no benefit of carrying the broadcaster torch other than I guess it makes you sound intelligent because interlacing is arcana. I suspect interlacing will go the way of the dodo soon and not be remembered fondly by anyone. Many countries have already completely stopped broadcasting interlaced content.
In the meantime, I de-interlace all my content. And I mean ALL of it. SD, HD, it doesn't matter. I won't bring footage into an NLE unless I have first pushed it though an Avisynth script with QTGMC(). Long live QTGMC! While de-interlacing SD content is smooth as butter (especially multi-threaded), HD content is harder. If I ever happen upon interlaced 4K content, I will simply press the delete key.
Remember that the foundations of the video signal were created before electronic computers existed (even relay computers came a decade later than video); the most advanced semiconductor device at the beginning of the video era was a natural semiconductor diode based on galena crystals (with a cat's-whisker junction https://en.wikipedia.org/wiki/Cat%27s-whisker_detector ) - be amazed at how big a discovery interlace is, that it was able to survive almost 100 years - this is not an accident but an outcome of real life - interlace is simply very well matched to our eyeball and cortex capabilities...
Regarding technology precedence and cats, I am going to go out on a limb and state, in reference to the chicken or the egg, that progressive came first.
I don't feel that discussing the process of chicken and egg production will lead us to interlace.
Once again - my point is: interlace is very well matched to the human visual perception system and it can be considered a form of lossy compression. I never said that interlace is better than progressive.