BlindPP relies on the block boundaries being properly aligned, that is, still in the block boundary positions of the coded stream. That means no cropping and no resizing before you apply BlindPP(). So you can pad extra space to the bottom, but don't pad the top. I'm a bit surprised that BlindPP() doesn't accept such frames, because it is very common to have this difference between the coded and display sizes.
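In script terms that just means BlindPP goes right after the source filter, with any cropping or resizing afterwards; a minimal sketch (the file name, crop values and resize target are only placeholders):
Code:
MPEG2Source("file.d2v")
BlindPP()                # deblock while the frame is still block-aligned with the coded stream
Crop(8, 0, -8, 0)        # crop/resize only after the deblocking
Spline36Resize(640, 480)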
Also ... is it better to use the Groucho2004 SSE optimized DGD build on my PIV XP laptops?
It may be a bit faster but I have not tested it. Functionally it should be the same.
-
Guest34343
I can't answer that unless you post a link to a source sample with a lot of motion. You can cut the VOB directly with DGSplit or open the VOB in DGIndex and set a range and then do Save project and demux video, which gives you an M2V. Upload to mediafire.com and post the link here.
What exactly does "see any blended pictures" mean?
The picture you see is a blend of two pictures. For instance, a ball is thrown. In the blended picture you see two balls, because the picture blends images showing the ball in two different positions.
I'm not sure I can correctly distinguish between abcdef & aaabbcccdd.
That's where a sample comes in handy, because we can look at the same thing and explain it to you. "aaa" means the same picture 3 times in a row, though it may move up and down a tiny bit due to the field offset.
I'm guessing that I am seeing a 3:2 video.
If the video is not interlaced & not purely progressive, what else might it be?
3:2 or ?
It could also be 3:2 pulled down, field-blended (see above), have irregular pulldown, be one of several other pathological but rare cases, or be a hybrid of all of them. Experience helps you to identify things. Again, a sample would help.
I have (so far) no (proper) IVTC/deinterlace experience.
If this is a 3:2 video, what if I do not deinterlace & just ignore the IVTC?
Then the result will be crappy. See below.
What would be the difference in output between applying/ignoring the IVTC?
If you do nothing then you will code hard pulldown into your final product. That is fine if your display device is interlaced, but you probably want progressive output, as you talk about deinterlacing. If you deinterlace, it depends on your deinterlace algorithm. If it blends, you will create blended pictures on two fifths of your frames. If you interpolate, you will lose about half the vertical resolution on two fifths of your pictures, and have ugly stairstepping on them as well. It's not a realistic option.
I could/would set the DivX codec to progressive source in either case?
For deinterlacing or IVTC, yes.
(I wonder what the DivX codec actually does when deinterlace is selected and there is no true interlacing in the video ...?)
Best case (unlikely, although I don't know what algorithm they use), nothing. Worst case, all your pictures will be degraded. Anyway, Avisynth deinterlacing is way better. And if it is 3:2 material, then IVTC is way, way better. IVTC is not hard to do. You have to learn about it to process the video correctly.
How would/should I approach applying an IVTC?
Again, it depends on the specific video, so a sample would be helpful. It could be as simple as setting the Forced Film option in DGIndex under Video/Field Operation.
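If it does turn out to be ordinary 3:2 telecined film and you decide to do the IVTC in the script rather than with Forced Film, a minimal sketch using the Decomb plugin would be (the file name is a placeholder):
Code:
MPEG2Source("file.d2v")   # d2v created with Field Operation -> Honor Pulldown Flags
Telecide(guide=1)         # match fields back into the original film frames
Decimate(cycle=5)         # drop the duplicate frame in each group of 5 -> 23.976 fps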
-
That IS eyeballing it.
Like the FAQ said, find a portion with significant motion, like a horizontal panning shot, a car passing in front of the camera, someone walking by in the foreground, etc., then step through the frames after SeparateFields() or Bob(). If you see an image repeated 3 times*, then the next image repeated 2 times, then the next image repeated 3 times, etc., you have telecined film. If each image is unique you have fully interlaced video.
* Ignore a single-line up-and-down bounce. Remember, you are looking at fields, so each is only half a picture, and the two fields differ in location by 1 line vertically.
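The script for that inspection is nothing more than the source plus a field separator (assuming a DGIndex project; any source filter works):
Code:
MPEG2Source("file.d2v")
SeparateFields()   # or Bob() for full-height fields; step through and count how often each image repeats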
-
There are a number of deblocking filters which don't have that restriction:
http://avisynth.org/mediawiki/External_filters#Deblocking
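For instance, a typical Deblock_QED call is just the following (a sketch; it assumes the Deblock_QED script and MaskTools2 are loaded, and the strength values are the script's documented defaults, so check your copy):
Code:
MPEG2Source("file.d2v")
Deblock_QED(quant1=24, quant2=26)   # raise quant1/quant2 for stronger deblocking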
And Deblock_QED, which has already been recommended to you several times, doesn't have to be Mod16.
-
For whatever it is worth, I submit the following»
in general redubbing a high bitrate source to a rather low bitrate introduces blocking/quilting.
I have found that using a deblocker is effective against that & allows me to redub at a lower bitrate
than would be possible without a deblocker ...
No ... before neuron2 pointed me to those FAQs,
the only "eyeballing it" explanations I had seen were to look for horizontal line separation -
not anything "scientific" like using AssumeTFF()/AssumeBFF() ...
Well, I had already tested QED & knew it did not have a problem with the 480x360 dimensions.
I can't say for sure why you jumped to the conclusions that you did.
QED, in any case, DOES have the same problem with Mod16.
The Deblock_QED_MT2.avs has AddBorders in several places,
so QED just automates the AddBorders step. BlindPP does not.
That is what I am trying to do ... without annoying anyone (including myself ...)
I actually came across this option before reading your post.
Is this FieldDeinterlace post-processing (p-p) command specific to the plugin of the same name?
"If the film percentage is low but still mostly film, you can try using Force Film and see what you get. You may find some stray combed frames in the output. You can fix those by post-processing with FieldDeinterlace(full=false). If the result is satisfactory to you, then fine."
[EDIT: I guess it is from here» FD & N2]
Where in my/an avs script would I put that?
... never-ending morass ...
-
Nice to see that neuron2's still around.
-
# Changes 2010-05-25:
# - Explicitly specified parameters of mt_LutSpa()
# (required due to position of new 'biased' parameter, starting from MaskTools 2.0a43)
# - Non mod 16 input is now padded with borders internally
Code:
# add borders if clp is not mod 16
-
What's your point? Deblock_QED requires only Mod8 input. The borders, if required, are added to the right and below and don't affect the deblocking at all. If you prefer using Blind PP (why?), add your own borders to the bottom and then remove them later on.
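For example, with a 480x360 clip you could pad the bottom up to a mod16 height for BlindPP and crop the padding off again afterwards (a sketch, not tested; the file name is a placeholder):
Code:
MPEG2Source("file.d2v")   # 480x360: the height is not mod16
AddBorders(0, 0, 0, 8)    # pad the bottom only, so block alignment at the top is preserved
BlindPP()
Crop(0, 0, 0, -8)         # remove the padding again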
-
Guest34343
"Is this FieldDeinterlace post-processing command specific to the plugin of the same name? [...] Where in my/an avs script would I put that?"
You really should give us a sample or at least tell us the film percentage reported by DGIndex. Then we can advise you without having to guess.
Your script could look like this:
mpeg2source("file.d2v")
fielddeinterlace(full=false)
But you may not need to do that at all. It depends on your sample. You're not giving us the information we need to help properly.
-
4 June 2012
Greetings.
So, FieldDeinterlace(full=false) should always come immediately after mpeg2Source?
I am also wondering about the relationship/interaction of/between cpu & cpu2 settings.
Do the first 2 cpu2 settings have any effect on the cpu settings?
or are they independent?
Does this cpu2 setting cancel the cpu setting?
Code:
mpeg2Source("vobFile.d2v", cpu=4, cpu2="ooooxx")
The Film % was around 75%.
I'm trying to reach some level of knowledge/experience
where I can dub video files as correctly/properly as possible;
so I need to be able to know/decide what to do without asking
power users to look at the source video & tell me what to do ...
I do not expect to dub these non purely progressive files with absolute perfection.
I have no idea how to do that, and it does not seem worth the effort.
"If the result is satisfactory to you, then fine."
It seems I've been dubbing non purely progressive videos incorrectly for years;
however, the final outputs have not been "crappy" - I would not add crappy outputs
to my collection - except maybe in the case of these crappy mp4/flv YT downloads,
which I'm not sure if I'll ever be able to make "materially" less crappy ...
ForceFilm changes the d2v file fps to 23.976, for example.
I always dub my videos with the output fps to be 23.976.
I'm just curious if reducing the fps is coincidentally similar in end result to an IVTC?
-
Guest34343
Not necessarily. It should come before any resizing and before any filters that may require progressive video. Putting it immediately after the source filter guarantees those conditions are satisfied. One case where it might be better later is if you are cropping: if you crop first, then FieldDeinterlace has smaller frames to work with and will be faster.
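For example (the crop values are only placeholders):
Code:
MPEG2Source("file.d2v")
Crop(8, 0, -8, 0)             # crop first so the deinterlacer works on smaller frames
FieldDeinterlace(full=false)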
I am also wondering about the relationship/interaction of/between cpu & cpu2 settings.
You use one or the other. cpu is a set of presets for the postprocessing; cpu2 gives full control.
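Roughly (this is my reading of the DGDecode documentation, so verify the exact flag meanings there):
Code:
# Preset: cpu=4 enables horizontal and vertical deblocking of luma and chroma
MPEG2Source("file.d2v", cpu=4)
# Explicit control: six characters, each x (on) or o (off), in this order:
# luma-h deblock, luma-v deblock, chroma-h deblock, chroma-v deblock, luma dering, chroma dering
# MPEG2Source("file.d2v", cpu2="xxxxoo")   # roughly the same as cpu=4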
The Film % was around 75%.
Then for best results you should use Honor Pulldown and do the IVTC in your script. But Forced Film plus FieldDeinterlace() may be acceptable if you don't mind some degradation. It's also possible you have blended fields, which would change things. But you haven't provided us a video sample and haven't told us if you saw blended pictures.
I'm trying to reach some level of knowledge/experience where I can dub video files as correctly/properly as possible; so I need to be able to know/decide what to do without asking power users to look at the source video & tell me what to do ...
We're trying to teach you how to analyze and process video. That necessarily involves guiding you through your first attempts. If I am teaching you to be an editor for a newspaper, I am going to want to see the original text and how you changed it. When you become a competent editor, then you're on your own.
I do not expect to dub these non purely progressive files with absolute perfection. I have no idea how to do that, and it does not seem worth the effort.
That doesn't mean you don't do the best job you can given your time. You speak as if doing an IVTC is some time-consuming complex thing, but it isn't.
It seems I've been dubbing non purely progressive videos incorrectly for years; however, the final outputs have not been "crappy" - I would not add crappy outputs to my collection - except maybe in the case of these crappy mp4/flv YT downloads, which I'm not sure if I'll ever be able to make "materially" less crappy ...
It's somewhat subjective. If one knows that one has unnecessarily degraded the video, then one may be unhappy with it.
ForceFilm changes the d2v file fps to 23.976, for example. I always dub my videos with the output fps to be 23.976. I'm just curious if reducing the fps is coincidentally similar in end result to an IVTC?
I don't know what you mean by "dubbing" here, i.e., how you reduce the frame rate if not doing IVTC. IVTC throws away duplicate fields; nothing is really lost. Any other method would presumably throw away (or blend) non-duplicate fields -- it is a degradation.
-
Well ... as far as I know except for when using DirectStreamCopy ...
there is ALWAYS some degradation when (re)dubbing a movie.
Following the right tips & tricks can sufficiently/acceptably hide/mask/minimize the degradation,
so I appreciate what I am learning in this thread.
I ALWAYS only watch my movies on my laptops,
so I don't have to worry about how they will look on a big screen ...
an external monitor is about as big a screen as I ever have occasion to use.
No, I didn't see any blended images but also didn't look at every frame ...
I am reading your DeComb manual ...
... and again wondering about Full Processing versus Fast Recompress.
"Use Fast Recompress If Possible: If you are serving into VirtualDub for transcoding, and you don't need to do any filtering or other processing in VirtualDub, then use VirtualDub's Fast Recompress mode."
1) If I do filtering only via mpeg2Source & use some DeComb feature, could/should I use FR?
2) If I use something(s) like QED and so use YV not RGB colorspace, could/should I use FR?
The default Telecide post setting is 2.
If I am using Telecide, then the Film % was too low for Force Film;
so is this equivalent to using ForceFilm & FieldDeinterlace(Full=False)?
I am not sure how to know if/when to use Decimate(cycle=5) as opposed to some other cycle value ...
Using FF+FD means DGD already picked/applied a Decimate cycle value & FD cleans up leftovers?
I am not sure if I will be able to discern any output difference
between FF+FD versus Telecide+Decimate ... but I'll "see" ...
Tonight I am running some tests on those crappy YT downloads using SmoothD -
a defunct unfinished filter that nonetheless really has some effect on the output
(unlike anything I have yet encountered). SmoothD is not the fastest but
is certainly much faster than many other much slower / much less effective filters
(that I have tried) ... I am not sure, however, if I can get the result I want from SmoothD ...
These are "hand/held/made" concert videos and so often have a sizeable black backdrop
which seemingly always shows a blocking pattern ... unlike closeups on the musicians ...
Anyone know something about removing blocking from a black backdrop?
-
It doesn't matter if you use Fast Recompress or Full Processing mode.
Again, it doesn't matter.
When VirtualDub sees you're not filtering it will use Fast Recompress mode even if you have Full Processing mode selected. As long as you don't force a colorspace via Video -> Color Depth.
Regarding IVTC, you need to figure out what your source video is, then handle it properly.
-
Well, how do I not force a colorspace?
When FR is selected, Color Depth is grayed-out.
When FP is selected, there are always Color Depth settings ...
These filters always require some certain color space ...
I could/should select FR, so Color Depth will be grayed-out.
If I only use mpeg2source or YV filters, ViDub will use FR?
YES ... SmoothD2 seems like a miracle drug ...
The blackhole blocking is now hardly perceptible &
slightly tweaking the SmD2 settings doubled the dub fps/speed ...
-
Of course you can't select a colorspace in Fast Recompress mode. That's the whole point of that mode. The video always goes out (to the encoder) in the same colorspace it came in. In Full Processing mode you can force a colorspace if you need to. Otherwise leave it at Autoselect.
-
Hello,
Please help me (I cannot make a new thread) with one quick problem: I'm getting an error: "Splice: one clip has audio and the other doesn't".
This is my script:
There is one picture among the two videos that is causing the problem, because it obviously doesn't have an audio track. What is the workaround? Please help, and sorry for hijacking this thread!
Code:
A=DirectShowSource("E:\mio\Fotky\1.MOV", fps=29.970, audio=true, convertfps=true).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
B=ImageSource("E:\mio\Fotky\2.JPG", end=61, fps=29.970).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
C=DirectShowSource("E:\mio\Fotky\3.MOV", fps=29.970, audio=true, convertfps=true).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
A++B++C
LoadPlugin("C:\Program Files\MeGUI\tools\avisynth_plugin\UnDot.dll")
Undot() # Minimal Noise
-
Guest34343
Something like this?
A=DirectShowSource("E:\mio\Fotky\1.MOV", fps=29.970, audio=true, convertfps=true).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
B=ImageSource("E:\mio\Fotky\2.JPG", end=61, fps=29.970).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
B=AudioDub(B,BlankClip(A))
C=DirectShowSource("E:\mio\Fotky\3.MOV", fps=29.970, audio=true, convertfps=true).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
A++B++C
-
neuron2, you are my hero! Thank you so much!!! I've spent so many hours on the internet and couldn't find a simple solution! One question: in the BlankClip function, could it also be C and would it work the same?
-
I don't think you did.
Your solution is actually quite subtle, and made me think a little.
At first glance, it appears that B=AudioDub(B,BlankClip(A)) is wrong, since it gives B an audio track the same length as A. However, by using the ++ (AlignedSplice) operator in A++B++C, B's (silent) audio track is extended or truncated as required to keep C in sync in the final result.
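If the implicit length adjustment feels too clever, a hypothetical variant that sizes the silent track to B explicitly would be:
Code:
B = AudioDub(B, BlankClip(A, length=B.FrameCount))   # silent audio in A's format, exactly B's duration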