VideoHelp Forum
  1. Originally Posted by poisondeathray View Post
    Originally Posted by celsoac View Post
    I want to turn it progressive first because I want to upload it to YouTube (in fact, a version is already uploaded). I don't want YouTube to convert it to progressive for me.

    The original codec is MPEG2.


    From your description, the actual content probably isn't "25i". It's probably a "25p" speedup from a 23.976p source, just encoded as interlaced to make it compatible with 50Hz broadcast systems. This is essentially 2:2 pulldown, i.e. it's probably already progressive content, and you do not want to deinterlace progressive content, or you will degrade it.

    You probably have to re-encode it anyway for YouTube because of 1) AR issues, and 2) you don't want to upload progressive content flagged as interlaced to YT, because it will apply deinterlacing once it sees the flag.

    If using an NLE, you'd have to remember to interpret it as progressive, using progressive timeline settings and export settings - otherwise the NLE will degrade the footage too.

    You'd have to upload an actual video sample to verify; someone will take a look at it.

    (And this is way off topic for this thread...)
    Thank you. The original source is not 23.976. It's a Spanish documentary shot at 24fps in 1964 and broadcast years later as 25i PAL. Odd and even fields are different [EDIT: in the sense that they don't show the same pixels of the same frame; and one of the fields looks blurrier, the other sharper]. In fact, the MPEG2 I got had the fields mixed wrongly from some computer capture (I didn't do it), so that FieldB of Frame1 + FieldA of Frame2 = FRAME, FieldB of Frame2 + FieldA of Frame3 = FRAME, etc. It's not just a matter of field dominance, but of the source file missing 1 field at the beginning. To fix this, I have deinterlaced the MPEG2, eliminated 1 field, and reinterlaced it. But the issue, then, is what method to use for that.
    Last edited by celsoac; 28th Apr 2019 at 13:52. Reason: Adding a video.
  2. Originally Posted by celsoac View Post
    Thank you. The original source is not 23.976. It's a Spanish documentary shot at 24fps in 1964 and broadcast years later as 25i PAL. Odd and even fields are different [EDIT: in the sense that they don't show the same pixels of the same frame; and one of the fields looks blurrier, the other sharper]. In fact, the MPEG2 I got had the fields mixed wrongly from some computer capture (I didn't do it), so that FieldB of Frame1 + FieldA of Frame2 = FRAME, FieldB of Frame2 + FieldA of Frame3 = FRAME, etc. It's not just a matter of field dominance, but of the source file missing 1 field at the beginning. To fix this, I have deinterlaced the MPEG2, eliminated 1 field, and reinterlaced it. But the issue, then, is what method to use for that.


    If your description is correct, it's "field shifted." It's what jagabo described in the previous post.

    Deinterlacing would usually be the wrong thing to do in general, since it's progressive - you lose about half the effective resolution of a progressive film frame.

    You would either use TFM to field match as he suggested, or separate the fields, trim off the 1st field, then weave them back together - which is similar to what you were doing, but without the degrading deinterlace step.

    But one blurry field and one clear field might indicate additional issues, e.g. from the transfer. For example, if the fields are off a bit, or there was a bit of wobble, you might not be able to recover the film frames perfectly without combing - there might be other things, other filters you might have to apply to clean it up.

    If you upload the original sample somewhere - a sample with motion - someone will examine it and make suggestions.
  3. Originally Posted by poisondeathray View Post
    If your description is correct, it's "field shifted." It's what jagabo described in the previous post.

    Deinterlacing would usually be the wrong thing to do in general, since it's progressive - you lose about half the effective resolution of a progressive film frame.

    You would either use TFM to field match as he suggested, or separate the fields, trim off the 1st field, then weave them back together - which is similar to what you were doing, but without the degrading deinterlace step.

    But one blurry field and one clear field might indicate additional issues, e.g. from the transfer. For example, if the fields are off a bit, or there was a bit of wobble, you might not be able to recover the film frames perfectly without combing - there might be other things, other filters you might have to apply to clean it up.

    If you upload the original sample somewhere - a sample with motion - someone will examine it and make suggestions.
    Thank you. I was about to start another thread and re-explain everything for other people, but since you are so kind as to continue helping here, let me upload the sample (below) and tell you what I did. The source is MPEG2 PAL at 352x576; I converted it to interlaced MP4 H264 with PAR 24:11 to yield 768x576. So there are a couple of issues:

    - Fixing the field shift. For the version I have (which I want to improve), I think I intuitively did what you suggest. I first separated the fields; the JES software I use has these deinterlacing options: Top field only, Bottom field only, Both fields, Blend. Also, you may deinterlace each field at normal height or half height (the real encoded lines). I deinterlaced each field separately as 2yuv, at half height, so the result is actually two 352x288 files (below). I think one of the fields is sharper. I eliminated one frame/field from the Top field file (I think it was from the Top field file, since the MPEG2 is top dominant and the first field in the first frame was from the previous original frame), and reinterlaced them again (I think with the opposite dominance, since now the first field in the Top file was a bottom field). So I restored the frames correctly, but my questions are:
    -should I have used Normal Height output for each field? I guess not, because that "normal" height would be obtained by interpolation, and what we want is to remix the original fields.
    -how should I recombine the fields? I did it by reinterlacing: the program prompts you for two files, one for each field. I saved it as 2yuv.

    - Converting to MP4: I chose "progressive" output in MPEGStreamclip. Right, right?

    - Resolution: in the MP4, I decided to keep the 352x576 resolution and define PAR 24:11. Should I instead encode the MP4 at the final 768x576, 4:3 resolution? What is the difference, and which is better? Since it is better to leave a real interlaced file as interlaced and let the display software deal with deinterlacing, what about horizontal resolution?
    Image Attached Files
  4. Originally Posted by celsoac View Post
    -should I have used Normal Height output for each field? I guess not, because that "normal" height would be obtained by interpolation, and what we want is to remix the original fields.
    half is correct

    -how should I recombine the fields? I did it by reinterlacing: the program prompts you for two files, one for each field. I saved it as 2yuv.
    This sounds correct. How does it look?

    - Converting to MP4: I chose "progressive" output in MPEGStreamclip. Right, right?
    It should be progressive, if it was done correctly. Upload the output file.

    - Resolution: in the MP4, I decided to keep the 352x576 resolution and define PAR 24:11. Should I instead encode the MP4 at the final 768x576, 4:3 resolution? What is the difference, and which is better? Since it is better to leave a real interlaced file as interlaced and let the display software deal with deinterlacing, what about horizontal resolution?
    The next step is to slow it back down.

    Since this is for YouTube, you should actually upscale it properly, with square pixels, to HD. The bitrate distribution is proportionally more favorable for upscaled versions, and the quality is higher. Even when you watch in a small viewer, the quality is higher: fewer artifacts, and details are better. It's one of the very few situations where you can make a strong case for upscaling.

    (It's a bit different now with VP9 - there are slightly different versions that different clients can get served by default - but in general you should still upscale.)
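
    For example, the upscale could be a single resize in a Vapoursynth script - a rough sketch only, assuming the repaired progressive clip is the input and that 1440x1080 / Spline36 are acceptable choices (they are just examples):
    Code:
    from vapoursynth import core
    # "repaired_progressive.mp4" is a hypothetical name for the already field-fixed file
    clip = core.lsmas.LibavSMASHSource(r'F:/repaired_progressive.mp4')
    # one resize to a square-pixel 4:3 HD frame handles both the 24:11 PAR and the upscale
    clip = core.resize.Spline36(clip, width=1440, height=1080)
    clip.set_output()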
  5. Poison (and jagabo), thank you very very much for all the very specific help, answers and explanations -- despite the fact that sometimes my own explanations are poor.

    Below is the previous sample, repaired, at 25fps. Is there a utility that does TFM automatically?

    Originally Posted by poisondeathray View Post
    The next step is to slow it back down.
    I have tried my home method with Lossless Frame Rate Converter, which probably just changes the speed info without re-encoding, but the output doesn't play right in every player. Does anyone know of any utility/app to change the framerate without re-encoding? Or could I edit the video header (I know how to edit with Atom Inspector) to specify fps / sampling interval / whatever? How?

    I tried in MPEGStreamclip and it drops 1 frame per second - bad. FCP X does reframe 24 <=> 25 without inserting or dropping frames, but it re-encodes it.

    Any Best Way to do this?
    Image Attached Files
  6. Originally Posted by celsoac View Post

    Below is the previous sample, repaired, at 25fps

    It looks correct in terms of the field weaving, although there is a missing frame. This might have sync implications if you had multiple edits; it's probably OK if it's only a single one.

    Also, something about your process (MPEGStreamclip?) alters the levels and contrast to the point where it is blown out and clipped - details are missing.



    Is there a utility that does TFM automatically?

    TFM is a specific Avisynth filter. It's a field matcher that has post-processing capability.


    Since you have Vapoursynth running, VFM is the functional equivalent of TFM (but VFM doesn't have built-in comb post-processing; that's done with another filter), or you can trim and combine fields as you have here. And you can slow it down.
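
    A rough Vapoursynth sketch of that idea (not a tested recipe for your file - nnedi3 is just my assumption for the fallback deinterlacer, and the slowdown here is video-only):
    Code:
    from vapoursynth import core
    clip = core.lsmas.LibavSMASHSource(r'F:/VideoSample1.mp4')
    matched = core.vivtc.VFM(clip, order=1)        # field match, TFF; flags leftover combing in the _Combed frame prop
    deint = core.nnedi3.nnedi3(matched, field=1)   # fallback deinterlace (assumes the nnedi3 plugin is installed)

    def pp(n, f):
        # use the deinterlaced frame only where VFM still found combing
        return deint if f.props['_Combed'] else matched

    clip = core.std.FrameEval(matched, pp, prop_src=matched)
    clip = core.std.AssumeFPS(clip, fpsnum=24, fpsden=1)   # slow the video back down to 24p; audio handled separately
    clip.set_output()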

    But Vapoursynth does not officially support audio processing; that's a huge plus in favor of Avisynth IMO. Avisynth's TimeStretch plugin has options to adjust and resample pitch / tempo using a high quality algorithm, just like dedicated audio editors. You could do everything, including the slowdown for both audio and video, in a single .avs script which you encode both audio & video with.

    The benefit of using scripts is that there are no large 2yuv intermediate files for multiple steps to encode and recombine, and no need to "patch" the speed afterwards, so it's faster to process and doesn't need lots of HDD space. It's basically 1 script, 1 step, that you encode with.



    Originally Posted by poisondeathray View Post
    The next step is to slow it back down.
    I have tried my home method with Lossless Frame Rate Converter, which probably just changes the speed info without re-encoding, but the output doesn't play right in every player. Does anyone know of any utility/app to change the framerate without re-encoding? Or could I edit the video header (I know how to edit with Atom Inspector) to specify fps / sampling interval / whatever? How?

    I tried in MPEGStreamclip and it drops 1 frame per second - bad. FCP X does reframe 24 <=> 25 without inserting or dropping frames, but it re-encodes it.

    Any Best Way to do this?

    There are many ways to alter the framerate without re-encoding - not sure if any/all of them are Mac-friendly.

    e.g. you can use mp4box or the lsmash muxer to mux the fixed audio and the elementary video stream, and change the playback framerate.

    Since you are uploading to YouTube and an "mp4" isn't strictly required, mkvtoolnix making an MKV would work too as another option.

    Another way would be to change the timecodes, e.g. using mp4fpsmod.

    Timing information can be found at the video stream level, in metadata, or in container timecodes. Some methods might change one and miss fixing the other(s). It might confuse the playback hardware or software if it only looks in one location or only expects one type - much like AR information can be specified in several places.
  7. this would work in Vapoursynth
    Code:
    from vapoursynth import core
    input=r'F:/VideoSample1.mp4'
    clip = core.lsmas.LibavSMASHSource(input)
    clip = core.std.SetFrameProp(clip, prop="_FieldBased", intval=2) #intval=1 for bff , 2 for tff, 0 for frame based
    fields = core.std.SeparateFields(clip)[1:] #separate fields and cut off first field
    clip = core.std.DoubleWeave(fields, tff = True)[::2]  #doubleweave and deleting every other frame
    clip = clip[:-1]    #cutting off last weird frame
    clip.set_output()
  8. View fields stacked instead of woven (both same sharpness?):

    Code:
    LSmashVideoSource("VideoSample1.mp4") 
    AssumeTFF()
    SeparateFields()
    StackVertical(SelectEven(), SelectOdd())
    Manually recombine fields back to progressive frames (won't adapt to phase changes):

    Code:
    LSmashVideoSource("VideoSample1.mp4") 
    AssumeTFF()
    SeparateFields()
    Trim(1,0)
    Weave()
    Automatically recombine fields back to progressive frames (adapts to phase changes):

    Code:
    LSmashVideoSource("VideoSample1.mp4") 
    AssumeTFF()
    TFM()
    Samples, pixel-for-pixel, with SAR flags:
    Image Attached Files
  9. oo, it looks like Vapoursynth's VFM:
    Code:
    clip = core.vivtc.VFM(clip, 1)
    does not post-process like Avisynth's TFM; the result is the same as separatefields, cut first field, weave

    so to use Avisynth's TFM from within Vapoursynth:
    Code:
    from vapoursynth import core
    input=r'F:/VideoSample1.mp4'
    clip = core.lsmas.LibavSMASHSource(input)
    clip = core.avsw.Eval(
        'AssumeTFF()\n'
        'TFM()',
        clips=[clip], clip_names=["last"])
    clip.set_output()
  10. Note that TFM's behavior can differ a bit depending on the settings you use. TFM(field=0, pp=0) is identical to the manual recombination I did except for a few places where the phase changed or there were orphaned fields.
  11. Originally Posted by _Al_ View Post
    oo, it looks like Vapoursynth's VFM:
    Code:
    clip = core.vivtc.VFM(clip, 1)
    does not post-process like Avisynth's TFM; the result is the same as separatefields, cut first field, weave

    so to use Avisynth's TFM from within Vapoursynth:
    Code:
    from vapoursynth import core
    input=r'F:/VideoSample1.mp4'
    clip = core.lsmas.LibavSMASHSource(input)
    clip = core.avsw.Eval(
        'AssumeTFF()\n'
        'TFM()',
        clips=[clip], clip_names=["last"])
    clip.set_output()
    Thank you both, _Al_ and jagabo, for the help and the samples. It seems that whatever I did to recombine (converting to 2yuv) flattened the color a lot, but that's another issue. In terms of definition, it's hard to tell if there is any difference. And, yes, in this case it seems that both fields are equally sharp, but in other material I have from an NTSC tape it's clear one field is sharper. That was captured as MPEG2.

    Now, where do I input this code? I mean, I installed Hybrid, and when I open a video file and choose plugins etc. it generates a script, which I can edit. So, shall I just open it without any file, define the file path, and paste all of that (the TFM code, I mean)? Or shall I open the file and choose the options in Hybrid? Are they all available?

    One big problem I have is that I just don't understand what all these scripts are, that is, whether they are commands that can be entered from the terminal (I am able to do that) or only from a given piece of software. And I don't understand what core means, etc., so to me all this is like copying Chinese characters. Anyway, I'll explore all this.
    yes, sorry, those scripts are just materializing what poisondeathray suggested

    those scripts could be used outside of GUIs together with command lines. I think Avisynth cannot be used on Mac, only Vapoursynth. In that case, think of a Vapoursynth script as a Python program. It's just that: Vapoursynth script = Python script. So you give that script the extension *.py or *.vpy (which specifies it as a Vapoursynth script, not a Python script) and it would run. Of course it would not output anything by itself, because the output, clip.set_output(), needs to be loaded somewhere; in the usual terminology, the script's stdout needs to be piped as stdin somewhere. That is done with command lines. For example, the executable vspipe comes with Vapoursynth, and running:
    Code:
    vspipe --y4m "my_script.vpy" - | x264 - --demuxer y4m --crf 18 --profile high --level 4.1 --preset slow --output "my_video.264"
    you'd need to type it into your command prompt console on Mac. That "|" means piping vspipe output into x264 input.
    this way you'd get just an h264 stream; you'd need to mux it into MP4 with audio later, or you can use ffmpeg to process audio as well,
    for example, getting ProRes video, without audio, using ffmpeg:
    Code:
    vspipe --progress --y4m "my_script.vpy" - | ffmpeg -f yuv4mpegpipe -i - -c:v prores -an -y "my_video.mov"
    or, using mapping, you could encode the audio from the original, or prepare it beforehand

    etc. not sure if you want to go this way


    I'm betting that using Hybrid you can re-write its generated script. I don't have Hybrid here now. The way you'd go about it is to figure out where the beginning part of the script ends, with your video loaded as a clip - usually a script starts with import lines that load modules, Mac lib modules (not sure about the extensions now), then the video is loaded - and then you'd include the above-mentioned parts of the scripts that just do processing on that clip. At the end you'd include clip.set_output(), that is, to specify what you want to output for encoding. Note, the name "clip" could be anything; it is just a programming script, so any name is good - the name clip is used because it has kind of settled down as the name in manuals. For example, your script could be:
    Code:
    from vapoursynth import core
    sample_from_Mexico=r'F:/VideoSample1.mp4'
    doc_from_Mexico = core.lsmas.LibavSMASHSource(sample_from_Mexico)
    doc_from_Mexico.set_output()
    and that would output exactly the same video for encoding as VideoSample1.mp4
    Last edited by _Al_; 28th Apr 2019 at 21:23.
  13. Originally Posted by _Al_ View Post
    yes, sorry, those scripts are just materializing what poisondeathray suggested

    those scripts could be used outside of GUIs together with command lines.
    ...
    etc. not sure if you want to go this way

    Well, thanks again. I don't think I can learn to do all that by hand. I tried Hybrid and (1) yes, I figured out how to change the script once the video is loaded, though (2) no, the TFM option is not there - I'll add it by copying it in - but (3) it gives me errors for stuff it doesn't find (D2V Witch; I have that in a directory, but it can't find it). So, no problem, I'll figure all this out little by little.

    BTW, it's not a doc_from_Mexico, but a doc_from_Galiza,_Spain_in_1964, "Así es Galicia". The version I already did and uploaded is here:
    https://www.youtube.com/watch?v=8uA-006HpP8

    (the sample segment starts at min. 21'16'' or so), but I want to see if I can improve it now that I understand better all this stuff about interlacing etc.
  14. Spain, ok
    yes, that TFM part needs avsproxy.dll, which needs to be in the Vapoursynth plugin directory or manually loaded in the script - and I have Avisynth on a PC; not sure if it works without Avisynth being installed, on your Mac, at all

    it should not need d2vwitch, which indexes mpeg2 files (and some other stuff); an mp4 file can be loaded with the lsmash source plugin:
    clip = core.lsmas.LibavSMASHSource(r'F:/VideoSample1.mp4')

    If your file is mpeg2, d2vwitch creates a d2v file that contains the indexed data; not sure where Hybrid needs it to be. The working directory, or somewhere in the PATH, would perhaps guarantee that it works.
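
    for example, loading through the d2v index in a script would look roughly like this (assuming the d2vsource plugin is installed; the .d2v name is just an example of what d2vwitch would write):
    Code:
    from vapoursynth import core
    # "VideoSample1.d2v" is an example name for the index d2vwitch writes from the original MPEG2
    clip = core.d2v.Source(r'F:/VideoSample1.d2v')
    clip.set_output()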
  15. Originally Posted by _Al_ View Post
    Spain, ok
    yes, that TFM part needs avsproxy.dll, which needs to be in the Vapoursynth plugin directory or manually loaded in the script - and I have Avisynth on a PC; not sure if it works without Avisynth being installed, on your Mac, at all

    it should not need d2vwitch, which indexes mpeg2 files (and some other stuff); an mp4 file can be loaded with the lsmash source plugin:
    clip = core.lsmas.LibavSMASHSource(r'F:/VideoSample1.mp4')

    If your file is mpeg2, d2vwitch creates a d2v file that contains the indexed data; not sure where Hybrid needs it to be. The working directory, or somewhere in the PATH, would perhaps guarantee that it works.
    Oh, alright, yes, I got that d2vwitch error when opening the original MPEG instead of an uncompressed or ProRes version. No, I don't edit the mp4, that's only for output and tests.
  16. AviSynth works, at least superficially, under Wine.
    If you don't have the inclination to learn Vapoursynth, it's possible in ffmpeg too, if you just want to copy/paste.

    But you have fewer options, it's more difficult to preview in order to adjust options, and it can be buggy if you're using the field match option.

    It has filters, some derived from Avisynth: separatefields, weave, trim. Or fieldmatch is a direct TFM clone, but the latter can be especially buggy in ffmpeg. I think it partially has to do with Avisynth or Vapoursynth source filter indexing being way more accurate, or with the way ffmpeg handles timecodes in some files, especially mpeg2 (which you have). I wouldn't use fieldmatch in ffmpeg; I posted examples demonstrating the issues before.

    The trim method does not "adjust on the fly"; it only works if the field shift is constant. But TFM/VFM do adjust on the fly.

    Fewer options matters because you might want to filter preferentially. When you field match or weave fields, there is often some residual combing (the fields don't "fit" perfectly) on these lower quality transfers, for various reasons. There are more options to handle combing in Avisynth and Vapoursynth.


    A workaround to get rid of the ffmpeg timecodes / framerate issues is to pipe a raw video stream into ffmpeg. It's slower, but it gets rid of the timecodes. This way you can reset everything and interpret the framerate as 24.0 (or whatever framerate you want) in the 2nd ffmpeg instance.

    I notice some methods drop a frame, such as the one you uploaded, and FFmpeg does too. There is an orphan field when you trim 1 field, and it gets dropped. But that field represents 1/2 a frame and it should be a "placeholder" (it should be deinterlaced to make up a frame). If you look at the durations of the files you uploaded, it is 10s 360ms vs. 10s 320ms. 40ms is one 25fps frame duration (1/25s = 40ms, or 41.6667ms if you interpret it as 24.0 fps). The frame count is 259 vs 258. That has potential implications if you had multiple sections/edits to combine - each section would be 1 frame shorter, e.g. 20 edits could be 20 frames shorter.
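
    If you wanted to keep that placeholder frame, a rough Vapoursynth sketch might look like this (not a tested recipe - nnedi3 is just my assumption for interpolating the orphan field up to a full frame):
    Code:
    from vapoursynth import core
    clip = core.lsmas.LibavSMASHSource(r'F:/VideoSample1.mp4')
    clip = core.std.SetFrameProp(clip, prop="_FieldBased", intval=2)   # treat as TFF
    fields = core.std.SeparateFields(clip)
    woven = core.std.DoubleWeave(fields[1:], tff=True)[::2]   # the same trim-and-weave as before
    woven = woven[:-1]
    # instead of dropping the orphan first field, interpolate it up to a full "placeholder" frame
    # (assumes the nnedi3 plugin; field=1 because the orphan is a top field)
    placeholder = core.nnedi3.nnedi3(fields[0], field=1, dh=True)
    placeholder = core.std.AssumeFPS(placeholder, src=woven)   # match the frame rate before splicing
    clip = placeholder + woven
    clip.set_output()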

    Anyways, the command line for the separate fields and trim method would look like this for 352x576, with AR signalled as full frame, interpreted as 24.0 fps:

    Code:
    ffmpeg -i INPUT.mp4 -vf setfield=tff,separatefields,trim=start_frame=1,weave=first_field=bottom -f rawvideo - | ffmpeg -f rawvideo -pix_fmt yuv420p -s 352x576 -r 24 -i - -c:v libx264 -crf 18 -x264opts force-cfr:colorprim=smpte170m:transfer=smpte170m:colormatrix=smpte170m -aspect 4/3 -movflags faststart OUTPUT.mp4
    Or if you wanted to do other manipulations - cover up or crop the bottom noise, or resize - you could do that too. Or maybe you want to fix the flicker, or adjust levels/colors a bit; up to you.
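
    For example, in a Vapoursynth script those touch-ups could be a couple of extra lines (a sketch only - the 8-line figure is a placeholder, not measured from your sample):
    Code:
    from vapoursynth import core
    # "repaired_progressive.mp4" is a hypothetical name for the already field-fixed file
    clip = core.lsmas.LibavSMASHSource(r'F:/repaired_progressive.mp4')
    clip = core.std.Crop(clip, bottom=8)         # crop off the bottom head-switching noise (8 lines is a guess)
    clip = core.std.AddBorders(clip, bottom=8)   # or pad back to the original height with black to "cover" it
    clip.set_output()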


