VideoHelp Forum




  1. Hi, I'm a ***


    Anyway: what is the best way to convert a 25p source to 50?
  2. The best approach might be to just tell the encoder to fake-interlace it, i.e. keep treating it in AviSynth as 25p and let the encoder flag the stream as interlaced.
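    For illustration, here is a sketch of that approach with ffmpeg and x264. The file names and CRF value are placeholders, and you should check that your x264 build supports the fake-interlaced option:

```shell
# Sketch only: encode the 25p frames unchanged but mark the stream
# as interlaced. x264's fake-interlaced option signals interlacing
# in the bitstream without actually using interlaced (MBAFF) coding.
ffmpeg -i input25p.mkv -c:v libx264 -crf 18 \
       -x264-params "fake-interlaced=1" \
       -c:a copy output.mkv
```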
  3. Best? Usually done with MVTools2. I think you can use QTGMC (which uses MVTools2) as a front end, but here is the basic AVISynth code:

    Code:
    # Convert 25p progressive camera movies to 50i
    loadplugin("C:\Program Files\AviSynth 2.5\plugins\MVTools\mvtools2.dll") 
    
    blocksize=16
    
    source=AVISource("E:\fs.avi")
    
    superfps= MSuper(source,pel=2)
    backward_vec2 = MAnalyse(superfps, isb = true,blksize=blocksize)
    forward_vec2  = MAnalyse(superfps, isb = false,blksize=blocksize)
    MFlowFps(source,superfps, backward_vec2, forward_vec2, num=50, den=1, ml=200)
    SeparateFields()
    SelectEvery(4,0,3)
    output=Weave()
    
    #stackhorizontal(source.separatefields(),output.separatefields())  #For troubleshooting and determining if field order is correct.
    return output
    You might get a little better quality using a block size of 8 instead of 16.

    I adapted this from a 30p-to-60i script I wrote years ago. You might have to change the SelectEvery statement to SelectEvery(4,1,2); someone who knows PAL video better than I do will have to provide guidance.
  4. It's questionable whether interpolated motion is "best". It depends on the particular video. With some it will generate gross distortions. With others it may look fine. It's something to try though.

    Here's a bad case example with simple interlaced encoding on the left, with motion interpolation (johnmeyer's code) on the right.

    Code:
    function Spin(clip v, float angle)
    {
        Rotate(v, angle)
    }
    
    
    BlankClip(width=640, height=30, color=$ffffff)
    black = BlankClip(last, color=$000000).PointResize(width, 2)
    StackVertical(last,black,black,black)
    StackVertical(last,last,last,last)
    StackVertical(last,last,last,last)
    Blur(1.0)
    
    Animate(0,90, "Spin", last,0.0, last,180.0)
    Trim(0, 90)
    ConvertToYV12()
    AssumeFPS(25)
    
    source = last
    blocksize = 16
    superfps= MSuper(source,pel=2)
    backward_vec2 = MAnalyse(superfps, isb = true,blksize=blocksize)
    forward_vec2  = MAnalyse(superfps, isb = false,blksize=blocksize)
    MFlowFps(source,superfps, backward_vec2, forward_vec2, num=50, den=1, ml=200)
    SeparateFields()
    SelectEvery(4,0,3)
    output=Weave()
    
    StackHorizontal(source, output)
    Image Attached Files
    Last edited by jagabo; 18th Jun 2020 at 23:10.
  5. Originally Posted by jagabo View Post
    It's questionable whether interpolated motion is "best". It depends on the particular video. With some it will generate gross distortions. With others it may look fine. It's something to try though.

    Here's a bad case example with simple interlaced encoding on the left, with motion interpolation (johnmeyer's code) on the right.
    I'm usually the one warning people about the artifacts you can get with motion estimation. And, if you create what we engineers call a "pathological case" (something not found in real-life), as Jagabo did, then you can demonstrate the fault of pretty much any approach.

    Moving vertical lines, like a horizontal pan over a picket fence, is the worst thing for motion estimation to handle, so Jagabo's example did indeed show the limitations of any motion estimation algorithm, not just MVTools2, and not just my particular script.

    I will admit that using the word "best" (which is banned over at doom9.org, for this very reason) was not a good choice. Perhaps I should have simply said that my script shows one of the more advanced techniques.

    You can search the web, but you will find that most of the tutorials do recommend what I posted. Here is one such tutorial:

    https://library.creativecow.net/articles/solorio_marco/interlacing_progressive_footage.php

    Finally, while Jagabo did a great job showing how motion estimation can blow up, what he really should have done was to create some other way to go from 25p to 50i and then compare that result with the motion estimation result, both using real-life video. With that comparison, I think you will find that the motion estimation approach holds up pretty darned well.

    Anything which creates the missing points in time and then weaves them together into a 50-field-per-second interlaced video is going to have some sort of artifact. As one example, any approach which blends adjacent frames, then separates fields and weaves, is going to produce a very soft result.
  6. What about simply doubling the frames?
    ffmpeg -i input25.mkv -c:v libx264 -crf 18 -r 50 -c:a copy output50.mkv
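    The AviSynth equivalent of that simple doubling (no interpolation, each frame just shown twice) would be ChangeFPS. A sketch, reusing the source file from the script above:

```avisynth
# Simple doubling: ChangeFPS duplicates frames to reach 50 fps.
# No new motion is synthesized, so movement still updates at 25 Hz.
AVISource("E:\fs.avi")
ChangeFPS(50)
```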
  7. Originally Posted by johnmeyer View Post
    Anything which creates the missing points in time and then weaves them together into a 50-field-per-second interlaced video is going to have some sort of artifact. As one example, any approach which blends adjacent frames, then separates fields and weaves, is going to produce a very soft result.
    Here's a comparison of interpolation (using johnmeyer's script) vs. blending.

    Blending script
    Code:
    ConvertFPS(50)  # blends adjacent frames to reach 50 fps
    SeparateFields().SelectEvery(4,0,3).Weave()
    Image Attached Files
  8. Originally Posted by johnmeyer View Post
    what he really should have done was to create some other way to go from 25p to 50i
    I did. That's what the left half of the frame shows.
  9. Originally Posted by jagabo View Post
    Originally Posted by johnmeyer View Post
    what he really should have done was to create some other way to go from 25p to 50i
    I did. That's what the left half of the frame shows.
    I must be slow this morning because I don't see how the left side of the frame shows 50i video. After all, you input the result of your animation directly into my script ("last" via the "source" variable). My script then creates 50i interlaced video. If "last" were interlaced, then my script would create a mess.

    Since I don't see progressive to interlaced code operating on the "last" variable between the point where it is created, and when it is used in the final "stack" line, I can only conclude that "last" is 25p and the output of my script is 50i.

    This has nothing to do with the artifacts created by my script; those are quite real, and quite expected when fed this torture test.
    Last edited by johnmeyer; 19th Jun 2020 at 10:24.
  10. Originally Posted by johnmeyer View Post
    Since I don't see progressive to interlaced code operating on the "last" variable between the point where it is created, and when it is used in the final "stack" line, I can only conclude that "last" is 25p and the output of my script is 50i.
    Yes, last is 25p but it's encoded as 50i. The OP didn't specify he wanted motion (or any type of) interpolation. He may simply want to append a 25p video to a 50i video.

    We are talking about a cat, you know!
    Last edited by jagabo; 19th Jun 2020 at 10:57.
  11. Increase the blocksize to 32 and johnmeyer's interpolation is almost perfect for jagabo's "pathological" test clip.
  12. That doesn't negate the fact that motion interpolation sometimes messes up badly. I could easily make another pattern that blocksize=32 screws up on. Or blocksize=8. Or any other value. And patterns like this aren't completely unrealistic: you often see shots in movies, especially animated films, that include elements like this. I made that particular pattern because I knew something like it would be particularly obvious, even at full-speed playback.
  13. @jagabo
    Agreed on all points. My 'blocksize=32' was meant to be taken with a grain of salt, not as a general recommendation or a solution to the problem.
    A while ago a member over at doom9 went to great lengths to improve interpolation by introducing masks that decide when to prefer interpolation vs. blending. It helped in some cases; his script and .dll are available here:
    https://github.com/mysteryx93/FrameRateConverter/releases.
    I don't think that I have ever tested it for synthesizing a missing field as discussed here though.
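    For anyone who wants to try it, a minimal sketch. I'm quoting the parameter names from memory, so check the release notes; the source path is just a placeholder:

```avisynth
# Sketch, assuming FrameRateConverter.avsi and its DLL are installed.
# The script masks unreliable motion vectors and falls back to
# frame blending in those areas instead of interpolating.
Import("FrameRateConverter.avsi")
AVISource("E:\fs.avi")                        # 25p source
FrameRateConverter(NewNum=50, NewDen=1)       # interpolate to 50p
SeparateFields().SelectEvery(4,0,3).Weave()   # fold into 50i
```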
  14. Originally Posted by Sharc View Post
    A while ago a member over at doom9 took huge efforts to improve interpolation by introducing masks for deciding when to prefer interpolation or blending etc. It helped in some cases; his script and .dll are available from here
    https://github.com/mysteryx93/FrameRateConverter/releases.
    I don't think that I have ever tested it for synthesizing a missing field as discussed here though.
    Thanks, I'll take a look. Some of the other frame rate changers do that too. Interframe has a Tuning option, for example. It lets you bias the output toward better motion but more distortions vs. simple blending when good motion vectors aren't found.
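    A sketch of an InterFrame call with that Tuning option. InterFrame needs the SVPflow plugins, and the parameter spelling here is from memory, so double-check against its documentation:

```avisynth
# Sketch only: InterFrame (SVPflow-based) interpolating 25p to 50p.
# Tuning biases the tradeoff between sharp interpolated motion and
# safer blend-like output when motion vectors are unreliable.
AVISource("E:\fs.avi")
InterFrame(NewNum=50, NewDen=1, Tuning="Smooth", Cores=4)
SeparateFields().SelectEvery(4,0,3).Weave()   # 50p -> 50i
```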
  15. Oh wow!

    Now I'll run some tests of my own.
  16. These days one would probably prefer RIFE for motion interpolation (synthesizing the missing fields for 25p->50i).
  17. Please post an example script.
  18. You were given sample scripts using RIFE long ago.


