VideoHelp Forum
  1. Member
    Join Date
    Dec 2015
    Location
    Amsterdam, The Netherlands
    Search PM
    My (Sony) video camera produces MTS files and can of course be set to different video qualities; the highest-quality choices are 50i and 50p.
    The 50p format seems to provide the highest quality my camera can deliver (and it produces the largest files). But these videos do not play well on any of our Windows or Android devices (jerky image, broken sound), so I conclude this is too much to ask of the CPU. Or is it the GPU? Or is it, in fact, the software? I'd like to know.
    For the time being I'd be well content to store these videos for later and make do with 25p MPEG-4v2 video instead, as that proved to be a format which plays well on all our devices. But surprisingly, 25p (or 30p for that matter) is not among the choices offered. Surely a device capable of recording 50p movies is also capable of recording 25p movies, so why does it fall back to 50i for recording at 25 fps?
    These theoretical questions aside, my practical question is this:
    How do I get the best-quality 25p, MPEG-4v2-encoded videos from my camera? Would it be by de-interlacing the 50i video? Or is there a program to convert 50p to 25p (for instance by throwing away every odd frame) which gives better results? I am of course aiming for a lossless conversion here.
    Mabel
    Quote Quote  
  2. Formerly 'vaporeon800' Brad's Avatar
    Join Date
    Apr 2001
    Location
    Vancouver, Canada
    Search PM
    Your playback issues could be either CPU or GPU, depending on which is being tasked to decode the video. A GPU upgrade should solve it, but only in programs that support GPU-accelerated decoding.

    A truly lossless conversion isn't possible, unless you want to keep extremely large archive files that can only be played on PC. But I assume you don't mean lossless in the technical sense.

    Dropping every other frame is preferable to deinterlacing and a lot faster as well. Note that both methods will produce a video that stutters when compared to a 50p or 50i recording, if the shutter is above 1/25.
    Quote Quote  
  3. In addition, the problem may be neither the CPU nor the GPU but a player/demuxing problem. MTS files are often difficult to deal with. Try different players with different settings. Try remuxing the video into another container like MKV.
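    For example (just a rough sketch; the filenames are placeholders), an ffmpeg remux that copies the streams into a Matroska container without re-encoding looks something like this:
    # copy video and audio as-is into MKV (no re-encode, no quality loss)
    ffmpeg -i clip.MTS -c copy clip.mkv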
    Quote Quote  
  4. Instead of 1080p25 as your distribution format you might also want to consider 720p50. That is compatible with a lot of devices while still preserving the full temporal resolution. 25 vs. 50 Hz make a much bigger difference than 1080 lines vs. 720 lines. The latter will be unnoticeable to most people.
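    If you go the ffmpeg route, a minimal sketch of that downscale (the CRF value and filenames are only examples) would be:
    # 1080p50 in, 720p50 out: keep all 50 frames per second, just lower the resolution
    ffmpeg -i in_1080p50.mp4 -vf scale=1280:720 -c:v libx264 -crf 20 -preset slow -c:a copy out_720p50.mp4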
    Quote Quote  
  5. Originally Posted by sneaker View Post
    Instead of 1080p25 as your distribution format you might also want to consider 720p50. That is compatible with a lot of devices while still preserving the full temporal resolution. 25 vs. 50 Hz make a much bigger difference than 1080 lines vs. 720 lines. The latter will be unnoticeable to most people.
    +1 (unless of course you are viewing on a really big screen!)
    Quote Quote  
  6. Originally Posted by Mabel View Post
    My (Sony) video camera produces MTS files and can of course be set to different video qualities; the highest-quality choices are 50i and 50p.
    The 50p format seems to provide the highest quality my camera can deliver (and it produces the largest files). But these videos do not play well on any of our Windows or Android devices (jerky image, broken sound), so I conclude this is too much to ask of the CPU. Or is it the GPU? Or is it, in fact, the software? I'd like to know.
    Use 50p whenever possible. A software player with HW acceleration should improve playback.
    Additionally, these 50p files can be converted to lower formats that can be played more easily by limited software/hardware.
    Quote Quote  
  7. Originally Posted by pandy View Post
    Use 50p whenever possible. A software player with HW acceleration should improve playback.
    Or worsen it. A lot of PCs with AMD GPUs will fail to play 1080p50 when using the AMD card but have no problem when falling back to full CPU software playback.
    Quote Quote  
  8. Originally Posted by sneaker View Post
    Originally Posted by pandy View Post
    Use 50p whenever possible. A software player with HW acceleration should improve playback.
    Or worsen it. A lot of PCs with AMD GPUs will fail to play 1080p50 when using the AMD card but have no problem when falling back to full CPU software playback.
    With LAV Filters it works for me.
    Quote Quote  
  9. Member
    Join Date
    Dec 2015
    Location
    Amsterdam, The Netherlands
    Search PM
    Thanks for your responses; I learned a lot from them.
    Some responses, though, seem somewhat off-topic: I am NOT complaining about video playback and I am not about to 'try different players'. On the contrary: my aim is video which will play well on ALMOST ANY odd system. I found that format in an MP4 container with an H.264-encoded stream of 25p video. Now I'm just looking for the best way to 'convert' my camera files to this format - hence my question.
    Originally Posted by vaporeon800 View Post
    A truly lossless conversion isn't possible, unless you want to keep extremely large archive files that can only be played on PC. But I assume you don't mean lossless in the technical sense.
    I meant 'lossless' in the sense that no de-compression/re-compression is necessary. I think I understand by now that 'de-interlacing' cannot be done without de-compression. But I thought that 'throwing away every odd frame' COULD be done without de-compressing. That is why I wondered which of the two methods would give a better result.
    Originally Posted by vaporeon800 View Post
    Dropping every other frame is preferable to deinterlacing and a lot faster as well. Note that both methods will produce a video that stutters when compared to a 50p or 50i recording, if the shutter is above 1/25.
    OK, that answers my question. Do you know software which will simply 'drop every other frame' without any de-compression/re-compression?
    Also I'd appreciate it if you could explain some more, especially about "if the shutter is above 1/25"?
    The sentence 'both methods will produce a video that stutters when compared to a 50p or 50i recording' I think I understand up to the point where you mention 50i. That is where I get lost (not for the first time). Why would 50i display better than de-interlaced 50i, that is, 25p? I asked this question elsewhere but never got a clear answer. Most people say a 50i video is always de-interlaced before display. (Never got an explanation about why this is necessary, though.)
    But anyway: my question was NOT about stuttering video but about the best way to get to 25p video.
    Originally Posted by jagabo View Post
    Try remuxing the video into another container like MKV.
    I tried re-muxing (with ffmpeg and mp4box) extensively but it didn't seem to make much difference. De-interlacing, however, DID make a big difference. (Sadly, ffmpeg made a mess of de-interlacing my MTS files. I use Freemake now.)
    Mabel
    Quote Quote  
  10. Originally Posted by Mabel View Post
    Do you know software which will simply 'drop every other frame' without any de-compression/re-compression?
    It's not possible with most high-compression codecs. Frames aren't compressed as individual frames. The data for most frames only encodes the differences between that frame and some other frames. So most frames can't be reconstructed without first reconstructing other frames, and therefore it's not possible to remove every other frame without a decompress/recompress cycle.

    Only with "all intra" codecs (where each frame is encoded as a standalone object) could you discard every other frame without re-encoding: many of the lossless codecs (HuffYUV, Lagarith, UT Video), MJPEG, all-I-frame MPEG, etc.
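    For a long-GOP source like your camera's 50p AVC files, dropping frames therefore has to go through a decode/re-encode. A minimal ffmpeg sketch (the filenames and CRF value are only placeholders) would be:
    # decode, keep every other frame (50 fps -> 25 fps), re-encode the rest
    ffmpeg -i in_50p.mp4 -vf fps=25 -c:v libx264 -crf 20 -c:a copy out_25p.mp4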

    Originally Posted by Mabel View Post
    Also I'd appreciate it if you could explain some more, especially about "if the shutter is above 1/25"?
    Motion blur reduces the flicker you get with low frame rates and high contrast images in motion. Longer exposures give more motion blur.
    Quote Quote  
  11. Formerly 'vaporeon800' Brad's Avatar
    Join Date
    Apr 2001
    Location
    Vancouver, Canada
    Search PM
    Originally Posted by Mabel View Post
    Why would 50i display better than de-interlaced 50i, that is, 25p?
    Because 50i -> 25p isn't just deinterlacing; it's either:
    1. Discarding half of the fields, then interpolating
    2. Interpolating using information from both fields, then discarding half of the result
    "Correctly" deinterlacing it would be 50i -> 50p.

    Most people say a 50i video is always de-interlaced before display. (Never got an explanation about why this is necessary, though.)
    All modern displays are progressive. They can't natively display interlaced content. They will do 50i -> 50p.

    But anyway: my question was NOT about stuttering video but about the best way to get to 25p video.
    To get the "best" 25p video, you will have to balance stutter vs blur.
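    To make the two options concrete, here is how they look with ffmpeg's yadif de-interlacer (only a sketch; the encoder settings and filenames are placeholders):
    # "correct" deinterlace: one frame per field -> 50p, full temporal resolution
    ffmpeg -i in_50i.MTS -vf yadif=1 -c:v libx264 -crf 20 -c:a aac -b:a 160k out_50p.mp4
    # half-rate deinterlace: one frame per field pair -> 25p, half the temporal resolution
    ffmpeg -i in_50i.MTS -vf yadif=0 -c:v libx264 -crf 20 -c:a aac -b:a 160k out_25p.mp4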
    Quote Quote  
  12. 50i is better if it was shot as 50i.
    25p is better if it was shot as 25p.
    The two are shot with different shutter speeds: for 50i the shutter speed is shorter, for 25p longer. But that brings you all kinds of trouble; as a home user you basically do not want to shoot 25p. It is difficult to shoot any motion with a handheld camera/camcorder, and you'd need an ND filter in daylight outdoors. If not, your footage would suck. I shoot 30p and understand very well what I can and cannot shoot. Even with 30p available, I cannot imagine losing another 5 frames per second on top of that and using 1/25. You could even use 1/50 without much blur, but even that would "strobe" a little during motion or jerky movement. With 1/25 you can barely get things sharp and focused while the camera is handheld.

    If you shoot 50i, so far so good playing it as is: TVs handle it (showing 50i as 50p on screen), though some devices may have trouble deinterlacing it well. But as soon as you convert 50i to 25p you discard temporal resolution. Imagine a ball moving from left to right: with 50i you have 50 pieces of information per second about where it is; 25p gives you only 25. And because the shorter 50i shutter was used, the result will look jerky on screen. Understand, there is a difference between video shot as 25p and video transcoded from 50i to 25p; you get different results. You can also deinterlace 50i to 25p by blending fields, but then that ball is rendered on screen with "ghosts": a "quantum physics" effect. The ball is not here or there but somewhere in the middle, or anywhere you could imagine in between.

    You have to convert 50i to 50p to keep the temporal resolution, like TVs do. But then, why not shoot 50p right away?

    You can render/export 1280x720 for tablets, PCs, whatever device; you can always store both 1920x1080 50p and 1280x720 50p next to each other and let the user decide what to stream.
    Last edited by _Al_; 31st Jan 2016 at 14:14.
    Quote Quote  
  13. Member
    Join Date
    Dec 2015
    Location
    Amsterdam, The Netherlands
    Search PM
    Ok, I'm getting thoroughly confused now
    But let me start with the part that now seems to be getting a bit clearer to me. vaporeon800 wrote "Dropping every other frame is preferable to deinterlacing" and nobody contradicted that. Jagabo contended that this cannot be done losslessly, as codecs work by comparing subsequent frames. Together I take this to mean that a lossy conversion is necessary in any case and that converting from 50p to 25p will give me better results than de-interlacing a 50i video. Am I right?
    If so, I only have to find out what conversion program to use for the best results. Any advice on this, anyone?

    Just one question keeps nagging me, though: when rendering a video, I get to choose between a number of 'zones': e.g. 4 zones (fast) or 1 zone (slower, but better compression). The manual explains this as follows: "using multiple zones, every picture is divided into as many strips, which are compressed independently. This is much faster than having to take the whole picture into account but of course the compression algorithm will be less efficient and the resulting file will be larger." Being in no hurry, I set 'zones' to '1' and indeed, the file got considerably smaller without any loss in quality as far as I could make out. My point here is: there is no mention at all of a setting which would compare the pixels of not just the whole frame but also those from the previous (or the next) one. In fact, I have trouble understanding how comparing with another frame would even be compatible with a setting of '4 zones', considering that the whole speed gain of multiple 'zones' seems to be that they are encoded independently. I hope Jagabo will care to comment. But anyway, this was just a nagging afterthought.

    However, just when I thought I got the whole thing sorted out, other considerations were brought up, notably about 'shutter speed'.
    Originally Posted by _Al_ View Post
    [...] the two are shot with different shutter speeds: for 50i the shutter speed is shorter
    I was wondering about that. I can see why in the old days, when cameras were mechanical, shutter speed and frame rate were directly related. But I can't see why this would still be so. On my camera, like on most cameras I suppose, there is no setting for 'shutter speed', so I take it the camera strikes some compromise. In broad daylight, why wouldn't that result in a shutter speed of, say, 1/500th - quite independent of any frame-rate setting?
    In other words: what is the relation between frame-rate and shutter-speed, if any?
    Sure, with a setting of 50p the camera can't use a shutter speed of 1/25th - that is a limitation to be taken into account when shooting under poor lighting conditions. Apart from that, I can't see what the relation is. And even under poor conditions, I'd expect modern cameras to just produce grainier pictures while maintaining 50 or 60 frames per second. Right or wrong?

    Originally Posted by _Al_ View Post
    Understand, there is a difference between video shot as 25p and video transcoded from 50i to 25p; you get different results.
    I think I do understand. My question is: how to get the best results.
    Originally Posted by _Al_ View Post
    25p (...) brings you all kinds of trouble; as a home user you basically do not want to shoot 25p (...) 50i, TVs handle it (showing 50i as 50p on screen), though some devices may have trouble deinterlacing it well. But as soon as you convert 50i to 25p you discard temporal resolution
    Yes, I can see I lose 'temporal resolution' when converting 50i to 25p. But when trying to turn this into practical advice I am greatly hampered by a lack of information.
    Are media-players (I mean computer software) really showing 50i video? Or are they de-interlacing the video on-the-fly and showing 25p instead? And if so, why? And wouldn't it, in that case, be advantageous to start with 25p (or 30p) in the first place?
    And what's the difference between a media player and a TV anyway?
    Do media-players show a 50p video really as 50p (or 60p as 60p)?
    And what about editing: can I edit 50i video without de-interlacing it, that is, without losing 'temporal resolution'?

    Mabel
    Quote Quote  
  14. Originally Posted by Mabel View Post
    Are media-players (I mean computer software) really showing 50i video? Or are they de-interlacing the video on-the-fly and showing 25p instead?
    For modern screens the playback software (or the display itself) needs to de-interlace to either 25p (bad) or 50p (good). Without de-interlacing you will see combing artifacts (ugly).

    Originally Posted by Mabel View Post
    Do media-players show a 50p video really as 50p (or 60p as 60p)?
    Yes


    Again, what you should do:
    Record and edit in 1080p50
    Use 1080p50 as distribution format if target allows it (like youtube)
    Use 720p50 as distribution format if 1080p50 is not allowed on the target player (like BluRay or a slow computer/phone)

    Forget about interlacing.
    Quote Quote  
  15. You can deinterlace 50i to 50p; nowadays people tend to forget 25p was ever a thing. Anything can play 50p, phones as well. If not 1920x1080 50p, then encode 1280x720 50p.

    Forget about the fact that Blu-ray cannot accept 1920x1080 50p; if that is the case, why would you want to lean on those crippled specs? The Blu-ray specs were written with the industry in mind, not people; nobody cares about the everyday Joe or his home video, they just want him buying new discs. And why author a disc at all instead of just keeping the files as data? You can have your movies on a thumb drive, a hard disk, a cloud, whatever. Anyway, soon you will be able to encode 50p within the UHD Blu-ray specs; they allow 1920x1080 50p, in HEVC perhaps, but anyway ...

    As soon as you start to compromise, like:

    converting 50p or 50i to 25p, by whatever method: for 50p, dropping every other frame (jerkiness might be introduced) or blending frames down to 25p; for 50i, blending fields to 25p, or first deinterlacing to 50p and then dropping or blending frames down to 25p, ...

    ... you compromise a lot

    You do not compromise much at all by encoding 1280x720 50p from 1920x1080 50p for your devices. Try it. Or even keep 1920x1080 50p but encoded with fast-decode or otherwise easy settings. Deinterlacing down to 25p is a thing of the past; you can safely think of it that way. That lowered-resolution file is only a version for those tablets anyway; you would not store those 25p deinterlaced versions for good, would you? That would be too harsh.

    TVs and monitors will show 50p; I'm not sure what you are afraid of. If you feed them 50i they have to deinterlace it to 50p anyway; they do not put the picture on screen the way they did 30 years ago, with fields, but with frames. So when half a frame comes in from interlaced footage, they have to construct a whole frame out of it. With a cheap TV you may even see interlacing artifacts in the video, and with a software platform that deinterlaces badly on some device you can see artifacts as well. Stay away from interlacing; it's not as if you have no choice here. Platforms tend to forget about interlacing: I use Kodi to play my old DV-AVI videos, which are interlaced, and it cannot deinterlace them properly. So not even a super TV will help you if your software is what puts the picture on screen. I doubt they are going to fix that; nobody seems to care about interlacing now.

    If deinterlacing, do the double-frame-rate deinterlace to 50p. And if the question is 50i or 50p: then 50p. I'd only be repeating sneaker, just saying it more plainly.
    Quote Quote  
  16. Originally Posted by Mabel View Post
    when rendering a video, I get to choose between a number of 'zones': e.g. 4 zones (fast) or 1 zone (slower, but better compression). The manual explains this as follows: "using multiple zones, every picture is divided into as many strips, which are compressed independently. This is much faster than having to take the whole picture into account but of course the compression algorithm will be less efficient and the resulting file will be larger." Being in no hurry, I set 'zones' to '1' and indeed, the file got considerably smaller without any loss in quality as far as I could make out.
    I don't know what encoder you're using, but I see very little difference in quality/size between one thread and several in x264. (Note that the --zones setting in x264 is something different. What you are calling zones is controlled by the number of threads in x264, --threads. The frame is broken up into strips with each thread handling a strip.)

    Originally Posted by Mabel View Post
    My point here is: there is no mention at all of a setting which would compare the pixels of not just the whole frame but also those from the previous (or the next) one. In fact, I have trouble understanding how comparing with another frame would even be compatible with a setting of '4 zones', considering that the whole speed gain of multiple 'zones' seems to be that they are encoded independently.
    That is how multithreading is implemented in most h.264 encoders. With one thread only one core of an N-core CPU will be working on the video, working on the entire frame. The other N-1 cores will be sitting around doing nothing (or working on some other program). With N threads the frame is split into N strips with a different core working on each strip. Since the work is done in parallel you get much faster encoding. But this also means that the encoder finds fewer motion vectors, resulting in less effective compression.
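    For reference, the x264 command line exposes both styles of threading explicitly (just a sketch; the input/output names and CRF are placeholders):
    # default frame-based threading across 4 cores
    x264 --threads 4 --crf 20 -o out.mkv in_50p.y4m
    # slice-based threading: each thread encodes a horizontal strip of every frame
    x264 --threads 4 --sliced-threads --crf 20 -o out.mkv in_50p.y4m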
    Quote Quote  
  17. Member
    Join Date
    Dec 2015
    Location
    Amsterdam, The Netherlands
    Search PM
    Originally Posted by jagabo View Post
    With N threads the frame is split into N strips with a different core working on each strip. Since the work is done in parallel you get much faster encoding.
    My point exactly. So I'll ask again: how is this compatible with your claim that the frames are compressed using info from preceding and subsequent frames? Remember that this is what you provided as a reason why a reduction of 50p to 25p video could never be done by simply 'throwing out every other frame' but would always involve de-compressing/re-compressing. Reading your comment I see nothing about 'previous and subsequent frames'; quite the contrary: what you say seems to confirm that the compression algorithm is applied to one whole frame at the most and so I'm still somewhat doubtful as to the validity of your assertion.

    Originally Posted by _Al_ View Post
    You can deinterlace 50i to 50p; nowadays people tend to forget 25p was ever a thing. Anything can play 50p, phones as well. If not 1920x1080 50p, then encode 1280x720 50p.
    Your advice is clear and I thank you for it. @sneaker, too, advised using 50p as the 'distribution format'. Yet my practical experience/options bear out something else.
    I used my editor (Sony Vegas 13) to 'render' some videos shot at 50p according to my camera settings: first in 50p (a format indicated as 'matching the input') and then at 25p, both in MP4 format.
    Result: on my, fairly powerful, desktop the '50p' video played with noticeable jerks where the camera moved fast. On my other devices (laptop, tablet, smartphone) it was even worse.
    In contrast, the 25p output played smoothly on all devices.
    Just to be sure I repeated the procedure using my other editor (PowerDirector v. 14) and the results were the same or even more pronounced.
    Heeding your advice, I subsequently tried 1280x720 at 50p.
    The result, to my surprise and possibly yours, was not much different; the 50p video did indeed look somewhat less jerky than the 1920x1080 video, but it still didn't look as smooth as the 25p video did, even at 1920x1080. On the other hand, on a 'native 1920x1080' screen, the 1280x720 video looked noticeably less sharp (as I verified with screenshots).
    Aside: I'm aware that the quality of displaying 1280x720 on a screen with 1920x1080 pixels may depend on video hardware, video drivers and the OS; nevertheless it stands to reason that a genuine 1920x1080 image, which is not already cheating somewhere, will look sharper on a 1920x1080 screen.

    Summarising: I, for one, have not been able to produce a 50p video which looks better, or even as good as, a 25p (or 30p) video, even when reducing the resolution. How can this be explained - did I make a mistake or are we living on different planets?

    Originally Posted by jagabo View Post
    I don't know what encoder you're using, but I see very little difference in quality/size between one thread and several in x264. (Note that the --zones setting in x264 is something different. What you are calling zones is controlled by the number of threads in x264, --threads. The frame is broken up into strips with each thread handling a strip.)
    Call it 'strips' or 'zones', we are clearly talking about the same thing. I told you what encoder I'm working with: Sony Vegas v.13. The difference in size between the settings of '1 zone' and '4 zones' is about 35%, and this stands to reason: a compression algorithm working on one whole frame can be more efficient than one working on just a strip of one quarter of a frame, but it takes more time to be more efficient.
    It seems to me that 'threads' are a completely different issue, having to do with a division of labour between CPU cores. That has to do with speed, but nothing to do with the questions at hand as far as I can see.

    The questions I was most curious about thus still seem to remain without answer.

    In response to the question of converting 50p to 25p without de-compression/re-compression, Jagabo contended this was impossible because the compression uses pixels from subsequent frames, but he didn't answer my objections and described a process which makes it unlikely that this is the reason.

    On the subject of 'interlace' I wondered: do media players show an interlaced video as such, or do they de-interlace the video before showing it? And if so, why? I think I already wrote here or elsewhere on this forum: interlacing is now universally derided, but can anyone explain why? What technological advances make it less desirable now than it was back then?
    @sneaker said "For modern screens the playback software (or the display itself) needs to de-interlace to either 25p (bad) or 50p (good)" but gave no reasons as to WHY this would be the case. I still don't see any technical necessity for de-interlacing and I don't see any advantage. And again: why wouldn't the reasons for introducing interlacing be just as valid today?

    Then: what is the relation between frame rate and shutter speed, if any? The question only came up after _Al_'s post, but I didn't get an answer.

    One more: can I edit 50i video without de-interlacing it, that is, without losing 'temporal resolution'?

    Mabel
    Quote Quote  
  18. Member
    Join Date
    Dec 2015
    Location
    Amsterdam, The Netherlands
    Search PM
    Originally Posted by vaporeon800 View Post
    All modern displays are progressive. They can't natively display interlaced content.
    Sure I can see that 'interlacing' was invented with the old CRT display in mind, but what I can't figure out is why modern displays would not be able to function the same way. As far as I know, modern screens can quite easily display 50 frames per second so I really don't see why they would be unable, or even 'less suited', to display 50 interlaced frames per second.
    The question came up in other threads on this forum and was met with the same hostility toward 'interlacing' but so far I didn't see any valid arguments. What is this aversion against interlacing based on?
    Mabel
    Quote Quote  
  19. Member
    Join Date
    Dec 2015
    Location
    Oregon, USA
    Search Comp PM
    I'll throw my 2 cents in...here's why I don't care for interlacing.

    Interlacing itself is not always a bad thing. It accomplished its purpose in years past by reducing bandwidth needs. The problem I find with interlacing is how often it is improperly handled by others, which makes processing the video more challenging. But that's just my opinion.

    Sent from my 831C using Tapatalk
    Quote Quote  
  20. Originally Posted by Mabel View Post
    I used my editor (Sony Vegas 13) to 'render' some videos shot at 50p according to my camera settings: first in 50p (a format indicated as 'matching the input') and then at 25p, both in MP4 format.
    Result: on my, fairly powerful, desktop the '50p' video played with noticeable jerks where the camera moved fast. On my other devices (laptop, tablet, smartphone) it was even worse.
    In contrast, the 25p output played smoothly on all devices.
    Just to be sure I repeated the procedure using my other editor (PowerDirector v. 14) and the results were the same or even more pronounced.
    Heeding your advice, I subsequently tried 1280x720 at 50p.
    The result, to my surprise and possibly yours, was not much different; the 50p video did indeed look somewhat less jerky than the 1920x1080 video, but it still didn't look as smooth as the 25p video did, even at 1920x1080. On the other hand, on a 'native 1920x1080' screen, the 1280x720 video looked noticeably less sharp (as I verified with screenshots).
    Aside: I'm aware that the quality of displaying 1280x720 on a screen with 1920x1080 pixels may depend on video hardware, video drivers and the OS; nevertheless it stands to reason that a genuine 1920x1080 image, which is not already cheating somewhere, will look sharper on a 1920x1080 screen.

    Summarising: I, for one, have not been able to produce a 50p video which looks better, or even as good as, a 25p (or 30p) video, even when reducing the resolution. How can this be explained - did I make a mistake or are we living on different planets?
    Did not read the whole thread again, but 50p, especially 720p, should be no problem.
    It could be wrong settings, project properties, export properties, clips wrongly interpreted by the video editor (rare), a weak PC, etc. As always: if you post the original clip (5 s) and then the encoded one (5 s), things clear up. Threads like these can go on for millennia, but they always start to make more sense once samples are available.

    There are media players nowadays for about $50 that can play UHD in HEVC (not to mention H.264) at 50p/60p. To think about 25p, especially 50i to 25p or 50p to 25p, is simply wrong. Tomorrow you'll be sorry.
    Last edited by _Al_; 1st Mar 2016 at 18:24.
    Quote Quote  
  21. Originally Posted by Mabel View Post
    Originally Posted by jagabo View Post
    With N threads the frame is split into N strips with a different core working on each strip. Since the work is done in parallel you get much faster encoding.
    My point exactly. So I'll ask again: how is this compatible with your claim that the frames are compressed using info from preceding and subsequent frames? Remember that this is what you provided as a reason why a reduction of 50p to 25p video could never be done by simply 'throwing out every other frame' but would always involve de-compressing/re-compressing.
    Threads working on strips is just a matter of how the compressed frames are produced. It's not directly related to how compressed frames reference earlier and later frames. Very few frames of a compressed video contain all the information required to reconstruct that frame. Most of the compressed frames reference other frames, i.e. they say something like: "to reconstruct frame N, copy this block of pixels from frame N-1 to this frame, then copy the block of pixels at x1,y1 of frame N+1 to x2,y2 in this frame", etc. If frames N-1 and N+1 aren't available, frame N can't be reconstructed. So if you want to convert 60p to 30p by discarding every other frame you need to first decompress the source so that every frame is a complete frame, discard the frames you don't want, then recompress the remaining frames.
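    You can see this in your own files: listing the picture types with ffprobe (a sketch; the file name is a placeholder) will show that the vast majority of frames are P or B frames, i.e. frames that depend on other frames:
    # print one I/P/B entry per frame of the first video stream
    ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv in_50p.mp4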
    Quote Quote  
  22. Originally Posted by _Al_ View Post
    To think about 25p, especially 50i to 25p or 50p to 25p, is simply wrong. Tomorrow you'll be sorry.
    +1
    Quote Quote  
  23. Formerly 'vaporeon800' Brad's Avatar
    Join Date
    Apr 2001
    Location
    Vancouver, Canada
    Search PM
    This thread is ridiculous. OP asks a question, people answer, OP responds with nothing but doubt about the factual accuracy of the answers. Interframe video compression is not some fantasy that jagabo invented.

    https://en.m.wikipedia.org/wiki/Data_compression#Video

    If 720p50 is still stuttery, you may need to verify whether you have your monitor set to 50Hz instead of 60Hz.
    Quote Quote  
  24. Originally Posted by Mabel View Post
    Originally Posted by vaporeon800 View Post
    All modern displays are progressive. They can't natively display interlaced content.
    Sure I can see that 'interlacing' was invented with the old CRT display in mind, but what I can't figure out is why modern displays would not be able to function the same way. As far as I know, modern screens can quite easily display 50 frames per second so I really don't see why they would be unable, or even 'less suited', to display 50 interlaced frames per second.
    The question came up in other threads on this forum and was met with the same hostility toward 'interlacing' but so far I didn't see any valid arguments. What is this aversion against interlacing based on?
    Mabel
    For interlaced video each field is half a frame, but from a different moment in time. Each field consists of every second scan-line, because that's how interlaced CRTs refreshed the screen: first the odd scan-lines, then the even ones. The phosphors light up and slowly dim in a way that's similar to the way our eyes respond to light, so the whole thing seems to flow quite naturally.

    Modern displays are progressive. Even progressive CRT computer monitors refreshed the screen completely, from top to bottom, in each refresh cycle. I'd imagine that even if LCDs could be built to refresh the odd and even scan-lines independently, it probably wouldn't look good; and, thinking about it, the interlaced video would also have to match an interlaced monitor's native resolution, because you can't resize interlaced video without de-interlacing it first.

    When interlaced video is de-interlaced for a progressive display it's generally de-interlaced by the player/TV to the full frame rate,
    i.e. from 25fps interlaced to 50fps progressive, to retain fluidity of motion. When it's de-interlaced to 25fps progressive it tends to look a little less fluid than natively progressive film at 24fps or 25fps, probably due to a lower amount of motion blur (related to shutter speed).

    There are examples in the zip file attached to this post, and some more samples attached to post #20; just a section from a PAL DVD.
    One of the challenges of de-interlacing is to take two fields that are different moments in time and combine them into a progressive frame. It's not always easy. Look at the edge of the desk, or the blind on the rear wall at the top right of the frame as the camera pans.
    Yadif de-interlacing quality is something like the usual quality of hardware de-interlacing on playback. You should notice a fair difference between Yadif at 25fps and Yadif at 50fps.
    QTGMC is a high-quality Avisynth de-interlacer. It de-interlaces to 50fps, but it can output 25fps by dropping every second frame after de-interlacing. It can add motion blur to compensate, but 50fps is going to look smoother.

    I know nothing about your camera, but Level 4.1 support is fairly standard for hardware H.264 decoders, which means if you want 50fps you need to stick to 720p. 50fps at 1080p requires Level 4.2 support. https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC#Levels
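    If you re-encode for such players with ffmpeg, you can pin the profile and level explicitly (only a sketch; CRF, audio bitrate and file names are placeholders, and -level mainly sets the flag, so the other settings still need to stay within Level 4.1 limits):
    # 720p50 in H.264 High profile, Level 4.1, for broad hardware-player compatibility
    ffmpeg -i in_1080p50.MTS -vf scale=1280:720 -c:v libx264 -profile:v high -level 4.1 -crf 20 -c:a aac -b:a 160k out_720p50.mp4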
    Quote Quote  


