Thank you. The original source is not 23.976. It's a Spanish documentary shot at 24fps in 1964 and broadcast years later as 25i PAL. Odd and even fields are different [EDIT: in the sense that they don't show the same pixels of the same frame; and one of the fields looks blurrier, the other sharper]. In fact, the MPEG2 I got had mixed the fields wrongly from some computer capture (I didn't do it) so that FieldB of Frame1 + FieldA of Frame 2 = FRAME; and FieldB of Frame2 + FieldA of Frame3 => FRAME, etc. It's not just a matter of field dominance, but of the source file missing 1 field at the beginning. To fix this, I have deinterlaced the MPEG2, eliminated 1 field, and reinterlaced it. But the issue is what method to do that, then.
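The mis-pairing described above can be modelled with plain list slicing; the frame/field labels here are made up for illustration, not taken from the actual file:

```python
# Each progressive film frame is a (top_field, bottom_field) pair.
frames = [("T1", "B1"), ("T2", "B2"), ("T3", "B3"), ("T4", "B4")]

# The field stream as stored: T1 B1 T2 B2 T3 B3 ...
fields = [f for pair in frames for f in pair]

# With the first field effectively missing, weaving pairs the bottom field
# of each frame with the top field of the next one -- mismatched frames:
broken = list(zip(fields[1::2], fields[2::2]))
print(broken)  # [('B1', 'T2'), ('B2', 'T3'), ('B3', 'T4')]

# Dropping one more field at the start re-aligns the pairs:
fixed = list(zip(fields[2::2], fields[3::2]))
print(fixed)   # [('T2', 'B2'), ('T3', 'B3'), ('T4', 'B4')]
```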
Last edited by celsoac; 28th Apr 2019 at 13:52. Reason: Adding a video.
-
If your description is correct, it's "field shifted." It's what jagabo described in the previous post.
Deinterlacing would usually be the wrong thing to do here, since the content is progressive: you lose about half the effective resolution of a progressive film frame.
You would either use TFM to field match as he suggested, or separate the fields, trim off the first field, then weave them back together. That is similar to what you were doing, but without the degrading deinterlace step.
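Weaving two half-height fields back into a full frame is plain row interleaving, with no interpolation involved. A toy sketch in plain Python (not the actual filter):

```python
# Two half-height "fields", each a list of rows (toy 4-pixel rows).
top = [["t0"] * 4, ["t1"] * 4, ["t2"] * 4]  # frame lines 0, 2, 4
bot = [["b0"] * 4, ["b1"] * 4, ["b2"] * 4]  # frame lines 1, 3, 5

# Weave: interleave the rows; every original pixel lands back in place.
frame = []
for t, b in zip(top, bot):
    frame.extend([t, b])

print([row[0] for row in frame])  # ['t0', 'b0', 't1', 'b1', 't2', 'b2']
```

Deinterlacing instead would discard or interpolate half of those rows, which is why field matching or weaving is preferred for progressive content.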
But one blurry field and one clear field might indicate additional issues, e.g. from the transfer. For example, if the fields are off a bit, or there was a bit of wobble, you might not be able to recover the film frames perfectly without combing. There might be other things, other filters you might have to apply to clean it up.
If you upload the original sample somewhere, a sample with motion, someone will examine it and make suggestions. -
Thank you. I was about to start another thread and re-explain everything for other people, but since you are so kind as to continue to help here, let me upload the sample (below) and tell you what I did. The source is MPEG2 PAL at 352x576; I converted it to interlaced MP4 H264 with PAR 24:11 to yield 768x576. So there are a couple of issues:
- Fixing the field shift. For the version I have (which I want to improve) I think that I intuitively did what you suggest. I first separated the fields; that is, the JES software I use has these Deinterlacing options: Top field only, Bottom field only, Both fields, Blend. Also, you may deinterlace each field at normal height or half height (real encoded lines). I deinterlaced each field separately as 2yuv, half height. So the result is actually two 352x288 files (below). I think one of the fields is sharper. I eliminated one frame/field from the Top field file (I think it was from the Top field file, since the MPEG2 is Top dominant and the first field in the first frame was from the previous original frame), and reinterlaced them again (I think with the opposite dominance, since now the first field in the Top file was a bottom field). So I restored the frames correctly, but my questions are:
-should I have used Normal Height output for each field? I guess not, because that "normal" height would be obtained by interpolation, and what we want is to remix the original fields.
-how should I recombine the fields? I did it by reinterlacing: the program prompts you for two files, one for each field. I saved it as 2yuv.
- Converting to MP4: I chose "progressive" output in MPEGStreamclip. Right, right?
- Resolution: in the MP4, I decided to keep the 352x576 resolution and define PAR 24:11. Should I instead encode the MP4 at the final 768x576, 4x3 resolution? What is the difference, which is better? Since it is better to leave a real interlaced file as interlaced and let the display software deal with deinterlacing, what about the horizontal resolution? -
half is correct
- Resolution: in the MP4, I decided to keep the 352x576 resolution and define PAR 24:11. Should I instead encode the MP4 with the final 768x576, 4x3 resolution? What is the difference, what is better? Since it is better to leave a real interlaced file as interlaced, and let the display software deal with deinterlacing, what about with horizontal resolution?
Since this is for youtube, you should actually upscale it properly, square pixels, to HD. The bitrate distribution is more favorable proportionally for upscaled versions , and the quality is higher . Even when you watch in a small viewer , the quality is higher, fewer artifacts, details are better. It's one of very few situations where you can make a strong case for upscaling
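For reference, the arithmetic behind these sizes (the 1080-line HD target here is a common choice for illustration, not something specified in the thread):

```python
from fractions import Fraction

storage_w, storage_h = 352, 576
par = Fraction(24, 11)            # pixel aspect ratio used for the MP4

display_w = storage_w * par       # 352 * 24/11 = 768, a square-pixel 4:3 width
print(int(display_w), storage_h)  # 768 576

# Square-pixel 4:3 inside a 1080-line HD frame:
hd_h = 1080
hd_w = hd_h * 4 // 3
print(hd_w, hd_h)                 # 1440 1080
```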
(It's a bit different now with VP9 , there are slightly different versions that different clients can get served by default, but in general, you should still upscale) -
Poison (and jagabo), thank you very, very much for all the very specific help, answers and explanations, despite the fact that sometimes my own explanations are poor.
Below is the previous sample, repaired, at 25fps. Is there a utility that does TFM automatically?
I have tried my home method with Lossless Frame Rate Converter, which probably just changes the speed info without re-encoding, but the output doesn't play right in every player. Does anyone know of any utility/app to change the framerate without re-encoding? Or could I edit the video header (I know how to edit with Atom Inspector) to specify fps / sampling interval / whatever? How?
I tried in MPEGStreamclip and it drops 1 frame per second, which is bad. FCP X does reframe 24 <=> 25 without inserting or dropping frames, but it re-encodes.
Any Best Way to do this? -
It looks correct in terms of the field weaving, although there is a missing frame. This might have sync implications if you had multiple edits; probably OK if it's only a single one.
Also, something about your process (MPEGStreamclip?) alters the levels and contrast to the point where it is blown out and clipped; details are missing.
Is there a utility that does TFM automatically?
TFM is a specific Avisynth filter. It's a field matcher with post-processing capability.
Since you have Vapoursynth running, VFM is the functional equivalent of TFM (but VFM doesn't have built-in comb post-processing; that's done with another filter), or you can trim and combine fields as you have here. And you can slow it down.
But Vapoursynth does not officially support audio processing, which is a big plus in favor of Avisynth, IMO. Avisynth's TimeStretch plugin has options to adjust and resample pitch/tempo using a high-quality algorithm, just like dedicated audio editors. You could do everything, including the slowdown for both audio and video, in a single AVS script which you encode both audio and video with.
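As a back-of-the-envelope check of what such a timestretch has to correct (pure arithmetic, not tied to any particular plugin):

```python
import math

# Playing 24 fps film at 25 fps (PAL speedup) makes everything faster by 25/24.
speedup = 25 / 24
semitones = 12 * math.log2(speedup)  # pitch error if tempo changes without correction

print(round(speedup, 4))    # 1.0417 -> about 4.2% faster
print(round(semitones, 2))  # 0.71 semitones sharper
```

A slowdown back to 24 fps is just the inverse factor, 24/25.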
The benefit of using scripts is no large 2yuv intermediate files for the multiple encode-and-recombine steps, and no need to "patch" the speed afterwards, so it's faster to process without the need for lots of HDD space. It's basically one script, one step, that you encode with.
I have tried my home method with Lossless Frame Rate Converter, which probably just changes info about speed without reencoding, but the output doesn't play right in every player. Does anyone know of any utility/app to change framerate without reencoding? Or, could I edit the video header (I know how to do edit with Atom Inspector) to specify fps / sampling interval / whatever? How?
I tried in MPEGStreamclip and it drops 1 frame per second, bad. FCP X does reframe 24 <=> 25 without inserting or dropping frames. But it reencodes it.
Any Best Way to do this?
There are many ways to alter the framerate without re-encoding; not sure if any/all of them are Mac-friendly.
E.g. you can use mp4box or the L-SMASH muxer to mux the fixed audio and the elementary video stream, and change the playback framerate.
Since you are uploading to YouTube, and an "mp4" isn't strictly required, mkvtoolnix producing an MKV would work too as another option.
Another way would be to change the timecodes, e.g. using mp4fpsmod.
Timing information can be found at the video stream level, in metadata, or in container timecodes. Some methods might change one and miss fixing the other(s). That might confuse the playback hardware or software if it only looks in one location, or only expects one type. Much like AR information can be specified in several places. -
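For the mp4fpsmod route, the timecode file it reads (v2 format: a header line, then one presentation timestamp in milliseconds per frame) is easy to generate; the frame count and output filename below are just illustrative:

```python
fps = 24.0
n_frames = 259  # illustrative frame count

lines = ["# timecode format v2"]
lines += [f"{i * 1000.0 / fps:.6f}" for i in range(n_frames)]

# Hypothetical output filename; feed it to mp4fpsmod via its timecode option.
with open("timecodes_v2.txt", "w") as f:
    f.write("\n".join(lines) + "\n")

print(lines[1], lines[2])  # 0.000000 41.666667
```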
This would work in Vapoursynth:
Code:
from vapoursynth import core
input = r'F:/VideoSample1.mp4'
clip = core.lsmas.LibavSMASHSource(input)
clip = core.std.SetFrameProp(clip, prop="_FieldBased", intval=2)  # intval=1 for bff, 2 for tff, 0 for frame based
fields = core.std.SeparateFields(clip)[1:]  # separate fields and cut off first field
clip = core.std.DoubleWeave(fields, tff=True)[::2]  # doubleweave and delete every other frame
clip = clip[:-1]  # cut off last weird frame
clip.set_output()
-
View fields stacked instead of woven (both same sharpness?):
Code:
LSmashVideoSource("VideoSample1.mp4")
AssumeTFF()
SeparateFields()
StackVertical(SelectEven(), SelectOdd())
Code:
# Drop the first field, then re-weave:
LSmashVideoSource("VideoSample1.mp4")
AssumeTFF()
SeparateFields()
Trim(1,0)
Weave()
Code:
# Or let TFM field match automatically:
LSmashVideoSource("VideoSample1.mp4")
AssumeTFF()
TFM()
-
Oh, it looks like Vapoursynth's VFM:
Code:
clip = core.vivtc.VFM(clip, 1)
So, to use Avisynth's TFM within Vapoursynth:
Code:
from vapoursynth import core
input = r'F:/VideoSample1.mp4'
clip = core.lsmas.LibavSMASHSource(input)
clip = core.avsw.Eval(
    'AssumeTFF()\n'
    'TFM()',
    clips=[clip], clip_names=["last"])
clip.set_output()
-
Note that TFM's behavior can differ a bit depending on the settings you use. TFM(field=0, pp=0) is identical to the manual recombination I did except for a few places where the phase changed or there were orphaned fields.
-
Thank you both, _Al_ and jagabo, for the help and the samples. It seems that whatever I did to recombine (converting to 2yuv) flattened the color a lot, but that's another issue. In terms of definition, it's hard to tell if there is any difference. And yes, in this case it seems that both fields are equally sharp, but in other material I have from an NTSC tape it's clear that one field is sharper. That was captured as MPEG2.
Now, where do I input this code? I mean, I installed Hybrid, and when I open a video file and choose plugins etc. it generates a script, which I can edit. So, shall I just open it without any file, define the file path, and paste all that? (the TFM code, I mean). Or shall I open the file and choose the options in Hybrid? Are they all available?
One big problem I have is that I just don't understand what all these scripts are; that is, whether they are commands that can be entered from the terminal (I am able to do that) or only from a given piece of software. And I don't understand what core means, etc., so to me all this is like copying Chinese characters. Anyway, I'll explore all this. -
Yes, sorry, those scripts just materialize what poisondeathray suggested.
Those scripts can be used outside of GUIs, together with command lines. I think Avisynth cannot be used on Mac, only Vapoursynth. In that case, think of a Vapoursynth script as a Python program. It's just that: Vapoursynth script = Python script. So you give that script the extension *.vpy (which marks it as a Vapoursynth script rather than a plain Python script) and it will run. Of course it would not output anything on its own, because the output, clip.set_output(), needs to be loaded somewhere; in the usual terminology, the script's stdout needs to be piped as stdin into something else. That is done with command lines. For example, the executable vspipe comes with Vapoursynth, and running:
Code:
vspipe --y4m "my_script.vpy" - | x264 - --demuxer y4m --crf 18 --profile high --level 4.1 --preset slow --output "my_video.264"
This way you'd get just an H.264 stream; you'd need to mux it into MP4 with audio later, or you can use ffmpeg to process the audio as well.
For example, getting ProRes video, without audio, using ffmpeg:
Code:
vspipe --progress --y4m "my_script.vpy" - | ffmpeg -f yuv4mpegpipe -i - -c:v prores -an -y "my_video.mov"
Etc. Not sure if you want to go this way.
I'm betting that with Hybrid you can rewrite its generated script. I don't have Hybrid here now. The way you'd go about it is to figure out where the beginning of the script ends, with your video loaded as a clip: a script usually starts with import lines that load modules (not sure about the Mac library extensions right now), then the video is loaded, and then you'd insert the above-mentioned parts of the scripts that just process that clip. At the end you'd include clip.set_output(), which specifies what you want to output for encoding. Note that the name "clip" could be anything; it is just a programming script, so any name is fine. "clip" is used because it has become the settled name in manuals. For example, your script could be:
Code:
from vapoursynth import core
sample_from_Mexico = r'F:/VideoSample1.mp4'
doc_from_Mexico = core.lsmas.LibavSMASHSource(sample_from_Mexico)
doc_from_Mexico.set_output()
Last edited by _Al_; 28th Apr 2019 at 21:23.
-
Well, thanks again. I don't think I can learn to do all that by hand. I tried Hybrid and (1) yes, I figured out how to change a script once the video is loaded, though (2) no, the TFM option is not there; I'll add it by copying it, but (3) it gives me errors for stuff it doesn't find (D2V Witch, though I have that in a directory, it can't find it). So, no problem, I'll figure all this out little by little.
BTW, it's not a doc_from_Mexico, but a doc_from_Galiza,_Spain_in_1964, "Así es Galicia". The version I already did and uploaded is here:
https://www.youtube.com/watch?v=8uA-006HpP8
(the sample segment starts at min. 21'16'' or so), but I want to see if I can improve it now that I understand better all this stuff about interlacing etc. -
Spain, ok
Yes, that TFM part needs avsproxy.dll, which needs to be in the Vapoursynth plugin directory or be loaded manually in the script. I have Avisynth on this PC; not sure if it works at all without Avisynth installed, on your Mac.
It should not need D2V Witch, which indexes MPEG-2 files (and some other stuff); an MP4 file can be loaded with the L-SMASH source plugin:
clip = core.lsmas.LibavSMASHSource(r'F:/VideoSample1.mp4')
If your file is MPEG-2, D2V Witch creates a d2v file that contains the indexed data; not sure where Hybrid needs it to be. The working directory, or somewhere in the PATH, would perhaps guarantee that it works. -
If you don't have the inclination to learn Vapoursynth, it's possible in ffmpeg too, if you just want to copy/paste.
But you have fewer options, it is more difficult to preview while adjusting options, and it can be buggy if you're using the field match option.
It has filters, some derived from Avisynth: separatefields, weave, trim. Or fieldmatch, which is a direct TFM clone, but the latter can be especially buggy in ffmpeg. I think it partially has to do with Avisynth's and Vapoursynth's source filter indexing being much more accurate, or with the way ffmpeg handles timecodes in some files, especially MPEG-2 (which you have). I wouldn't use fieldmatch in ffmpeg; I posted examples demonstrating the issues before.
The trim method does not "adjust on the fly"; it only works if the field shift is constant . But TFM/VFM do adjust on the fly
Fewer options matters because you might want to filter selectively. When you field match or weave fields, there is often some residual combing (the fields don't "fit" perfectly) on these lower-quality transfers, for various reasons. There are more options for handling combing in Avisynth and Vapoursynth.
A workaround for the ffmpeg timecode/framerate issues is to pipe a raw video stream into a second ffmpeg instance. It's slower, but it gets rid of the timecodes. This way you can reset everything and interpret the framerate as 24.0 (or whatever framerate you want) in the second instance.
I notice some methods drop a frame, such as the one you uploaded, and FFmpeg does too. There is an orphan field when you trim one field, and it gets dropped. But that field represents half a frame and should be kept as a "placeholder" (it should be deinterlaced to make up a frame). If you look at the durations of the files you uploaded, it is 10s 360ms vs. 10s 320ms; 40ms is one 25fps frame duration (1/25 s = 40 ms, or 41.6667 ms if you interpret it as 24.0 fps). The frame count is 259 vs. 258. That has potential implications if you had multiple sections/edits to combine: each section would be 1 frame shorter, e.g. 20 edits could be 20 frames shorter.
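The duration arithmetic above checks out (frame counts taken from the samples discussed):

```python
fps = 25
frame_ms = 1000 // fps                 # one frame at 25 fps lasts 40 ms
print(259 * frame_ms, 258 * frame_ms)  # 10360 10320 -> exactly one frame apart

# Reinterpreted as 24.0 fps, each frame lasts a bit longer:
print(round(1000 / 24, 4))             # 41.6667 ms
```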
Anyway, the command line for the separate-fields-and-trim method would look like this for 352x576, with AR signalled as full frame, interpreted as 24.0 fps:
Code:
ffmpeg -i INPUT.mp4 -vf setfield=tff,separatefields,trim=start_frame=1,weave=first_field=bottom -f rawvideo - | ffmpeg -f rawvideo -pix_fmt yuv420p -s 352x576 -r 24 -i - -c:v libx264 -crf 18 -x264opts force-cfr:colorprim=smpte170m:transfer=smpte170m:colormatrix=smpte170m -aspect 4/3 -movflags faststart OUTPUT.mp4