Suppose you want to convert 720p to 1080i? Yes, I know it will have artifacts. You will need the -s.
I took out -r and -s because they are redundant.
You definitely need the range=limited to keep the colors true. I found that out the hard way.
What was the reason for scale=out_color_matrix=bt709:out_range=limited? I'm asking because that might be affected depending on where you place it in the filter chain. A "normal" YUV video should not require that.
Is the chroma messed up with this iteration? Hint: my original source is 4:2:0, so I'm upconverting. Maybe try it without converting to 4:2:2?
I took out one of your two redundant -vf statements.
Code:
ffmpeg -y -i "C0008.MP4" -s 1920x1080 -vcodec mpeg2video -vf format=yuv422p,scale=out_color_matrix=bt709:out_range=limited,interlace -acodec mp2 -f mpegts output.ts
-
Last edited by chris319; 27th Jun 2019 at 23:54.
-
Then you should move it back into the linear filter chain. You already have scale specified there; just add the width=1920:height=1080 arguments. Then you can control how you do it.
It was the same idea with the -pix_fmt issue with interlaced chroma artifacts. Moving it into the filter chain as "format" enables you to specify how it's done, and in what order. That 4:2:0 to 4:2:2 conversion is scaling the chroma channels, so you can get the same sorts of problems here.
What was the reason for scale=out_color_matrix=bt709:out_range=limited? I'm asking because that might be affected depending on where you place it in the filter chain. A "normal" YUV video should not require that.
Is the chroma messed up with this iteration? Hint: my original source is 4:2:0, so I'm upconverting. Maybe try it without converting to 4:2:2?
Code:
ffmpeg -y -i "C0008.MP4" -s 1920x1080 -vcodec mpeg2video -vf format=yuv422p,interlace,scale=out_color_matrix=bt709:out_range=limited -flags +ilme+ildct -r 29.97 -color_primaries bt709 -color_trc bt709 -colorspace bt709 -acodec mp2 -f mpegts output.ts
That works... for my source; I don't know about yours. But I would put interlace at the end, so all operations are progressive until the end.
format=yuv422p is converting to 4:2:2, progressively, because it occurs before the interlace filter.
If you don't need 4:2:2, leave it out -
How about this?
Code:
ffmpeg -y -i "C0008.MP4" -vcodec mpeg2video -vf format=yuv422p,scale=out_color_matrix=bt709:width=1920:height=1080:out_range=limited,interlace -acodec mp2 -f mpegts output.ts
-
Filter chain looks ok; don't forget to add back the flags and encoding settings
-
-
I don't
You should specify things like bitrate, GOP size, buffer size, maxrate, etc., also the metadata flags for 709, etc.
What do you use to determine if the chroma samples are messed up?
It looks like either colored horizontal lines on colored object edges, or sometimes "ghosting" like an echo image. It is most visible with colors like red.
It occurs when you resize the chroma channels incorrectly (resizing in an interlaced manner while progressive, or vice versa). When you convert 4:2:0 to 4:2:2 you're resizing the chroma channels. -
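To make the chroma-resize point concrete, here is a minimal Python sketch (illustrative only, not any ffmpeg API) of what 4:2:0 to 4:2:2 does to the chroma plane geometry of a 1920x1080 frame:

```python
# Chroma plane size per subsampling scheme: converting 4:2:0 to 4:2:2
# doubles the chroma height, i.e. the chroma planes are vertically resized.
# For interlaced content that vertical resize must be done per field, or
# the two fields' chroma gets blended together, producing the artifacts above.

def chroma_plane_size(width, height, subsampling):
    """Return (w, h) of each chroma plane for a given luma size."""
    factors = {"4:2:0": (2, 2), "4:2:2": (2, 1), "4:4:4": (1, 1)}
    sx, sy = factors[subsampling]
    return width // sx, height // sy

print(chroma_plane_size(1920, 1080, "4:2:0"))  # (960, 540)
print(chroma_plane_size(1920, 1080, "4:2:2"))  # (960, 1080)
```

The vertical factor (2 vs 1) is the whole story: 4:2:0 to 4:2:2 doubles the chroma height, and that resize is where interlaced/progressive handling matters.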
Yes, that's what I see
But those aren't colorimetry flags; those are scale settings. Remember, earlier you had -color_primaries bt709 -color_trc bt709 -colorspace bt709? Those are metadata that will show up in things like MediaInfo; some media players might read them, some programs, etc.
This is interlaced content with progressive encoding. Remember, earlier you had -flags +ilme+ildct to encode interlaced?
The audio and video codecs are specified, but no encoding settings are specified. When you do it for real, you'd probably want to include those. -
Shoot, those flags got lost when I was trying to incorporate your little code bits instead of the full script.
Take a look at this version, but I don't want to have another debate about fixed vs. variable bit rate.
Code:
ffmpeg -y -i "C0008.MP4" -vcodec mpeg2video -vb 5.5M -minrate 5.5M -maxrate 5.5M -bufsize 5.5M -muxrate 6.0M -vf format=yuv422p,scale=out_color_matrix=bt709:width=1920:height=1080:out_range=limited,tinterlace=4:vlpf -flags +ilme+ildct -r 29.97 -color_primaries bt709 -color_trc bt709 -colorspace bt709 -acodec mp2 -ab 192k -f mpegts output.ts
Some of those color flags give me this in MediaInfo:
Code:
-color_primaries bt709 -color_trc bt709 -colorspace bt709
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709
Last edited by chris319; 28th Jun 2019 at 08:22.
-
1. The avisynth documentation I've found neglects to mention that an avisynth script takes the file extension ".avs".
2. VirtualDub cannot open mp4 files; it is instead necessary to use VirtualDub2.
3. Documentation also neglects to mention that in VirtualDub2 you need to open a video file (with ".avs" extension) rather than "run script".
4. Armed with the above knowledge I attempted to run my .avs script containing the above avisynth code and it did not make it past the first line before quitting with an error message about not finding "ffvideosource".
WTF? Four strikes and we're starting off with a bang. -
Looks ok, but 5.5 Mb/s CBR MPEG-2 4:2:2 isn't going to look very good. Typically 50 Mb/s is used.
For generic MPEG-2 streams, typically -g 15 for the NTSC GOP size and -bf 2 for 2 B-frames.
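As a rough sanity check on the rates in the command above, a quick back-of-the-envelope in Python (the 2% transport-stream packetization overhead is an assumed figure for illustration, not a measured value):

```python
video_bps = 5_500_000  # -vb / -minrate / -maxrate
audio_bps = 192_000    # -ab
muxrate = 6_000_000    # -muxrate ceiling

# Assume roughly 2% MPEG-TS packetization overhead on the elementary streams.
payload = (video_bps + audio_bps) * 102 // 100
print(payload)             # 5805840
print(payload <= muxrate)  # True: the 6.0 Mb/s mux has headroom
```

So the 5.5M video + 192k audio choice leaves a little slack under the 6.0M mux, which is why the command doesn't stall the muxer.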
http://avisynth.nl/index.php/Getting_started
For testing create a file called test.avs and put the following single line of text in it:
Version()
http://avisynth.nl/index.php/FFmpegSource
Otherwise you have to use LoadPlugin("PATH\ffms2.dll") in the script. x86 version for x86 avisynth, x64 version for x64 avisynth
You're probably going to have many "attempts" before getting avisynth working. Lots of little quirks or specific ways of doing things. All programs have them. FFmpeg has lots too: look how many "strikes" it took to make a proper interlaced stream. None of the problems are documented either; you only figure those out from testing, looking at the output, retesting, rinse, repeat. -
The first three lines from the very first introductory link at the AviSynth site:
After downloading and installing AviSynth, let's run the most basic of AviSynth scripts:
Version()
Type the above in your text editor and save it as "version.avs".
http://avisynth.nl/index.php/First_script
2. VirtualDub cannot open mp4 files; it is instead necessary to use VirtualDub2.
3. Documentation also neglects to mention that in VirtualDub2 you need to open a video file (with ".avs" extension) rather than "run script".
You now have a script that can be opened by most AVI players in your machine: Windows Media Player 6.4 (or higher) will play it; so will Media Player Classic, VirtualDub, VirtualDubMod and many others. This is done through the magic of frameserving. -
Here is the documentation I found:
http://avisynth.nl/index.php/Getting_started
Basically, AviSynth works like this:
First, you create a simple text document with special commands, called a script. These commands make references to one or more videos and the filters you wish to run on them. Then, you run a video application, such as VirtualDub, and open the script file. This is when AviSynth takes action. It opens the videos you referenced in the script, runs the specified filters, and feeds the output to video application. The application, however, is not aware that AviSynth is working in the background. Instead, the application thinks that it is directly opening a filtered AVI file that resides on your hard drive.
You're probably going to have many "attempts" before getting avisynth working. Lots of little quirks or specific ways of doing things. All programs have them. FFmpeg has lots too: look how many "strikes" it took to make a proper interlaced stream. None of the problems are documented either; you only figure those out from testing, looking at the output, retesting, rinse, repeat.
I still don't see how this is a preferable solution to VLC which has been dependable in my experience. At my work, a broadcast TV station, we use VLC to privately stream our programming with no issues.
Once I get these fields split I'm not sure what I'm looking for and if it's a foolproof solution, IOW, can any monkey use it to evaluate interlace without applying subjective judgement? With VLC interlacing "jumps out" at you and you can't miss it, either by turning the deinterlacer off or using the "Phosphor" deinterlacer. -
Avisynth is a frame server:
it serves uncompressed frames; it has to be loaded somewhere, it cannot just run itself. You load it into something. A Run button (F5) in a console would not work. The only console that Avisynth can be run in is AvsPmod (or I do not know any other). That is a dedicated Avisynth console/creator/player, but just to get video on screen. If you need to encode an avs script, you'd need to load it somewhere: VirtualDub2, avfs.exe (creates a virtual avi), or other encoding software that accepts avs as input; for example, the x264 encoder can also load an avs script.
A Vapoursynth script has the extension *.vpy or just *.py (like a general Python script). If loaded into VirtualDub2, it needs to have the *.vpy extension, and Python and Vapoursynth need to be installed. If you work with portable versions of Vapoursynth and Python, you cannot load *.vpy into VirtualDub2; you'd need to fix the registry (I think ChaosKing posted something on doom9, not sure where it is). Now there is a major difference from Avisynth. Because it is a Python script, it can be run in any Python console. Of course, you'd need to add a couple of lines of code at the end that bring a clip on screen using other Python modules like opencv, Qt, PIL. I have no problem using opencv or Qt. Or you can just request frames in a loop for troubleshooting without actual preview. But of course, code for that visual can get technical, so there is something similar to Avisynth's AvsPmod: Vapoursynth also has a dedicated console/designer/editor called Vapoursynth Editor (VSEditor, VSEdit), made by Mystery Keeper. It could be called differently because it just has a general name.
VirtualDub could never load mp4 files. There is a guy with the nick shekh who adds new functions to that beautiful VD legacy, naming it VirtualDub2. But VirtualDub2 is Windows only, so if using Vapoursynth, I'd focus on using things that are also crossplatform. Vapoursynth is crossplatform.
Videos can be loaded into Avisynth using different plugins. AviSource is included; other formats can use ffmpegsource, lsmash source, mpeg2source (to load indexed mpeg2 files that you make with DGIndex), or directshow source.
AviSource("avi_video", some args)
audio = FFAudioSource("video", some args)
video = FFVideoSource("video", some args)
LSMASHAudioSource("video", some args)
LSMASHVideoSource("video", some args)
LWLibavAudioSource("video", some args)
LWLibavVideoSource("video", some args)
MPEG2Source("indexed_d2v_file", some args)
DirectShowSource("video", some args)
Vapoursynth has it similar, but remember, no audio support. So it has avisource (included, loads avi or avs), ffms2.Source, LibavSMASH, and d2v.Source (to load indexed d2v files that you make with d2vwitch or DGIndex). d2vwitch is crossplatform as well, so it can be used instead of DGIndex.
import vapoursynth as vs
clip = vs.core.avisource.AVISource('avi_video', some args)
clip = vs.core.lsmas.LibavSMASHSource('video', some args) #general ISO files: mp4, mov
clip = vs.core.lsmas.LWLibavSource('video') #transport streams, ts etc
clip = vs.core.ffms2.Source('video', some args)
clip = vs.core.d2v.Source('indexed_d2v_file', some args)
clip.set_output()
#output must be specified in Python to designate the output clip; in Avisynth it is the last clip if not specified,
#because it does video only, so that comes naturally, but in Vapoursynth you can output more items, like:
# clip1.set_output(0), clip2.set_output(1), and then request them later in any part of the script, or another script:
# clip = vs.get_output(index=0), clip = vs.get_output(index=1), etc.
# so you might not know the clip's name from the importing script, but you can still get its output
Only avisource is included; other source plugins need to be downloaded. For Windows you need the DLL, put into the Avisynth or Vapoursynth plugin directory. Always make sure you have the 64-bit version if Vapoursynth or Avisynth is 64-bit, and the same for 32-bit.
For Linux (no Avisynth, just Vapoursynth), you might download the whole package from a repository; djcj has a vapoursynth plugins package that installs all source plugins (except avisource, but ffms2 can load avi as well).
Audio is supported only in a limited way in Vapoursynth so far: if you edit video (cut it), you have to provide a wav file and use the damb plugin, which will generate a new edited wav. And you'd have to deal with that audio separately, encoding it and muxing it into the video.
Last edited by _Al_; 28th Jun 2019 at 16:08.
-
Typically 50Mb/s is used
https://en.wikipedia.org/wiki/KPBS_(TV)#Digital_channels
Under the ATSC spec we could be broadcasting H.264 but broadcasters are unwilling to do that for fear there will be TV receivers "out there" that can't handle the new signal and won't be able to receive our station. This leads to reception complaints from viewers and the station would likely revert to MPEG-2. It's not like the www where Chrome or Firefox can update itself behind the user's back. -
If you think the knowledgeable pdr's experience counts as being unfounded (because it's just one person saying it?), I don't even have VLC installed on my computer as I find it useless in my work. I much prefer to use MPC-HC to check my video work or to play videos and AviSynth scripts. Sure, plenty of people love and use VLC player. But I'm not one of them.
-
If you think the knowledgeable pdr's experience counts as being unfounded (because it's just one person saying it?)
-
Sure you can use it for streaming, for quick and dirty simple viewing, but if you cannot examine fields (or double-rate deinterlace) reliably, you will misdiagnose some.
Once I get these fields split I'm not sure what I'm looking for and if it's a foolproof solution, IOW, can any monkey use it to evaluate interlace without applying subjective judgement? With VLC interlacing "jumps out" at you and you can't miss it, either by turning the deinterlacer off or using the "Phosphor" deinterlacer.
If every field is different during movement (accounting for the slight field offset, if not using a smart bobber), it's true 59.94 fields/s content. Duplicates = 29.97p. A 3:2 pattern = 23.976. There are other patterns, but those are the basic and commonly used ones.
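Those rules can be sketched as a toy classifier (plain Python; any hashable value stands in for a field's pixel data, and the thresholds are illustrative assumptions, not what a real detector would use):

```python
def guess_cadence(fields):
    """Classify one second's worth of 59.94 fields by how many are distinct."""
    ratio = len(set(fields)) / len(fields)
    if ratio > 0.9:
        return "true 59.94 fields/s"          # every field differs
    if ratio > 0.45:
        return "29.97p (duplicated fields)"   # pairs of identical fields
    return "23.976p (3:2 pulldown)"           # 3:2:3:2 repeat pattern

print(guess_cadence(list(range(60))))              # true 59.94 fields/s
print(guess_cadence([i // 2 for i in range(60)]))  # 29.97p (duplicated fields)
```

Real footage needs fuzzy matching (fields are never bit-identical after lossy encoding), but the counting logic is the same.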
I'll upload an example later where it "looks" interlaced according to your proposed method, but is actually progressive. It's not common, but it happens in real life. You don't want to be "that guy" who misses things this simple.
I'm talking about the submission format. It will get encoded properly for the distribution streams later. You should be submitting something resembling XDCAM HD422. This is the universal currency in terrestrial and sat broadcast, even in Europe. -
Yes, avisynth or vapoursynth; it just separates fields and nothing else.
Nothing going on behind the scenes, no upscale to bob or whatever.
Anyway, in MPC-HC I guess it also depends on what renderer you use and whether it's possible; with something like ffdshow, if you do not check deinterlace, it will not deinterlace.
Last edited by _Al_; 28th Jun 2019 at 17:55.
-
That's not the right question;
you should be asking if you can ENable a double-rate deinterlacer and have it work reliably/correctly. Either that, or the ability to view individual, separate fields.
And the ability to navigate/step through field (frame) accurately. That's a big "ask" for some types of streams, especially ones with very long GOPs, open GOPs, many B-frames. Accurate seeking can be an issue for FFmpeg-based libraries (this means almost all common media players, and most certainly free media players), unless it's using some indexed method (none of the "players" do; it's too slow).
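A toy model (not FFmpeg's actual seek logic) of why non-indexed seeking on long-GOP streams is inaccurate: a decoder can only start at a keyframe, so a naive seek snaps back to the previous I-frame and has to decode forward from there.

```python
def seek_snap(target_frame, gop_size):
    """Return (keyframe a naive seek lands on, frames to decode forward)."""
    keyframe = (target_frame // gop_size) * gop_size
    return keyframe, target_frame - keyframe

print(seek_snap(100, 15))   # (90, 10): short GOP, snap to I-frame 90
print(seek_snap(100, 300))  # (0, 100): long GOP, far more work per seek
```

Open GOPs and B-frame reordering make the real picture worse still, which is why frame-accurate tools build an index first.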
Media players are not optimized for this; it's not their goal to accurately analyze. Their main goal is playback smoothness with a nice UI. -
it "looks" interlaced according to your proposed method, but is actually progressive
Is there anything besides ffms2.dll that I need to get this avisynth/virtualdub setup running? -
yes avisynth or vapoursynth
What exactly am I looking for?
Given moving video, in progressive scan there's going to be a slight change in the image every 1/59.94 second.
In interlaced video there is likewise going to be a slight change in the image every 1/59.94 second, only you're going to see odd or even scan lines depending on which field you're looking at.
So what am I looking for? Am I looking at the spacing between lines in the individual fields? -
That is the correct answer in this case, but only because VLC was buggy for you.
If VLC has the deinterlacer turned off and you're calling it progressive (you don't see what you're calling the "line pairing"), then that suggests that the VLC version or setup on your computer has either a decoding bug or some other bug (deinterlacing somewhere in the chain when it shouldn't be).
If you decode it correctly, and if deinterlacing turned off actually works, you should see the "line pairing" or the horizontal lines.
If you decode it correctly, and if deinterlacing turned on actually works, you should not see those.
The version I'm using works differently: I see the lines for on/off/auto or any choice (deinterlacing doesn't work correctly).
Either way, VLC is buggy for both of us.
Is there anything besides ffms2.dll that I need to get this avisynth/virtualdub setup running? -
Exactly. A 59.94p source converted to 59.94 fields/s (or 29.97i) will have 59.94 moments in time represented.
A "smart" bobber like yadif compensates for that up/down even/odd scan-line offset. Bob does not. Once you ignore that offset, you're looking at the actual motion of objects.
You look at an object, e.g. the train you mentioned in your prior example. 29.97i content will have 59.94 different pictures/s.
29.97p content will have duplicates, so only 29.97 different pictures/s. The temporal resolution is cut in half.
23.976p will have triplicate, duplicate, so 3:2:3:2. -
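Going the other direction, that 3:2:3:2 cadence can be generated in a few lines (an illustrative sketch of the repeat pattern only, ignoring field parity and real pulldown details):

```python
def pulldown_32(frames):
    """Map film frames to fields with a 3:2 repeat cadence (23.976p to 59.94 fields/s)."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

print(pulldown_32(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']: 4 frames become 10 fields
```

The 3+2 average of 2.5 fields per frame is exactly the 23.976 to 59.94 ratio, which is why stepping through fields shows the triplicate/duplicate pattern.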
The version I'm using works differently
Update to the latest version, which I'm using. "Line pairing" was a bad choice of words — call it "scalloping" if you will. If you're doing it as I described on properly-interlaced video, you can't miss it. If you can't reproduce my results then you're not doing the same experiment and have no basis to bash my method. Yes, I saw the line pairing in the belle-nuit video. It was not the same scalloping I see.
You look at an object, e.g. the train you mentioned in your prior example. 29.97i content will have 59.94 different pictures/s.
Again, I'm looking for spacing between scan lines in individual fields, correct?
I'll try to get avisynth/virtualdub2 working and see how well it reveals interlacing. -
Not everything, but this one can be described as buggy for this, according to what I see and what you describe.
And since what we see are both different, it's not only buggy, but also inconsistent.
Maybe there's some other difference, e.g. GPU driver (I tested on Win 8, etc.), but VLC is known to be buggy for many things.
Update to the latest version, which I'm using. "Line pairing" was a bad choice of words — call it "scalloping" if you will. If you're doing it as I described on properly-interlaced video, you can't miss it. If you can't reproduce my results then you're not doing the same experiment and have no basis to bash my method. Yes, I saw the line pairing in the belle-nuit video. It was not the same scalloping I see.
Can you describe it better or take a screenshot? I have no idea what "scalloping" is.
I'm just reporting what I see here. You reported what you saw. And they are different. Maybe it's some configuration difference, maybe a GPU driver setting. Either way, both of our VLCs are not working correctly.
I'm not "bashing" your method. And it's not "your" method; this has been described many years ago. And this method of looking at weaved fields as a single frame will misdiagnose some streams, like this one and others, even if VLC (or any player or program) is working correctly. If you do not believe this, you clearly do not understand what interlace really is... Again, it's not common to have these sorts of streams, but it does happen in real life. But I'd rather be 100% certain than 99.9%.
You look at an object, e.g. the train you mentioned in your prior example. 29.97i content will have 59.94 different pictures/s.
Again, I'm looking for spacing between scan lines in individual fields, correct?
If you're looking at separated individual fields instead, they are now organized as half-height frames. There are no scan lines when it's arranged this way. You're looking at alternate even/odd fields when you frame advance. And you're looking for the same thing: object motion.
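The half-height arrangement described above is easy to picture with a tiny sketch (Python lists of strings standing in for rows of pixels):

```python
def separate_fields(frame_rows):
    """Split a woven frame into fields: top = even scan lines, bottom = odd."""
    return frame_rows[0::2], frame_rows[1::2]

rows = ["line%d" % r for r in range(8)]  # an 8-line toy "frame"
top, bottom = separate_fields(rows)
print(top)     # ['line0', 'line2', 'line4', 'line6']: a half-height frame
print(bottom)  # ['line1', 'line3', 'line5', 'line7']
```

This is all a SeparateFields-style filter does conceptually: no scaling, no interpolation, just de-weaving the two fields into alternating half-height frames.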
Try ffplay
ffplay -i "what_am_i.mp4"
That's what it should look like with deinterlacing turned off. Hit the spacebar to pause. Hit the "s" key to frame advance.
ffplay -i "what_am_i.mp4" -vf yadif=mode=1
That's what it should look like with yadif 2x deinterlacing turned on. Hit the spacebar to pause. Hit the "s" key to frame advance. -
This will give you an idea of what scalloping looks like. Note the wavy pattern. Picture scalloping was a common malady in quad videotape machines when the guide height was set incorrectly.
https://upload.wikimedia.org/wikipedia/commons/0/09/Argopecten_irradians.jpg