VideoHelp Forum




  1. Guest34343
    Guest
    Originally Posted by AEN007 View Post
    I am using an mp4 YT download with 480x360 dims for testing how to cleanup YT downloads.
    It seems I cannot use BlindPP()»
    Need mod16 height
    Is there any work around? other than resizing to 320x240 before deblocking?
    Maybe letterbox it somehow? I've not yet ever tried that ...
    BlindPP() relies on the block boundaries being properly aligned, that is, still in the positions they occupied in the coded stream. That means no cropping and no resizing before you apply BlindPP(). You can pad extra space onto the bottom, but don't pad the top. I'm a bit surprised that BlindPP() doesn't accept such frames, because a difference between the coded and display sizes is very common.
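    Something along these lines might work (just a sketch, untested; the source filter and filename are placeholders):

    Code:
    # Pad only the bottom so the coded block grid stays aligned with the top-left,
    # deblock, then crop the padding back off.
    DirectShowSource("youtube_clip.mp4").ConvertToYV12()
    AddBorders(0, 0, 0, 8)    # 480x360 -> 480x368, which is mod16
    BlindPP()                 # needs DGDecode.dll loaded
    Crop(0, 0, 0, -8)         # remove the 8 padded lines again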

    Also ... is it better to use the Groucho2004 SSE optimized DGD build on my PIV XP laptops?
    It may be a bit faster but I have not tested it. Functionally it should be the same.
    Last edited by Guest34343; 2nd Jun 2012 at 07:05.
  2. Guest34343
    Guest
    Originally Posted by AEN007 View Post
    Originally Posted by neuron2 View Post
    The first two questions here tell you how to use your eyes: http://neuron2.net/faq.html
    Applying the steps in FAQ#2 to a vob file seems to indicate TFF.
    (This procedure is sure BETTER than just eyeballing it!)
    So this vob file is NOT progressive, ja?
    It might be purely interlaced or (probably) 3:2 pulled-down progressive?
    I can't answer that unless you post a link to a source sample with a lot of motion. You can cut the VOB directly with DGSplit, or open the VOB in DGIndex, set a range, and then do Save Project and Demux Video, which gives you an M2V. Upload it to mediafire.com and post the link here.

    What exactly does "see any blended pictures" mean?
    The picture you see is a blend of two different pictures. For instance, a ball is thrown: in the blended picture you see two balls, because it combines pictures showing the ball in two different positions.

    I'm not sure I can correctly distinguish between abcdef & aaabbcccdd.
    That's where a sample comes in handy, because we can look at the same thing and explain it to you. aaa means the same picture 3 times in a row, though it may move up and down a tiny bit due to the field offset.

    I'm guessing that I am seeing a 3:2 video.
    If the video is not interlaced & not purely progressive, what else might it be?
    3:2 or ?
    It could be 3:2 pulled down, it could be field-blended (see above), it could have irregular pulldown, it could be one of several other pathological but rare cases, or it could be a hybrid of all of them. Experience helps you identify these things. Again, a sample would help.

    I have (so far) no (proper) IVTC/deinterlace experience.
    If this is a 3:2 video, what if I do not deinterlace & just ignore the IVTC?
    Then the result will be crappy. See below.

    What would be the difference in output between applying/ignoring the IVTC?
    If you do nothing, then you will encode hard pulldown into your final product. That is fine if your display device is interlaced, but you probably want progressive output since you talk about deinterlacing. If you deinterlace, the result depends on the deinterlacing algorithm. If it blends, you will create blended pictures on two fifths of your frames. If it interpolates, you will lose about half the vertical resolution on two fifths of your pictures, and have ugly stairstepping on them as well. It's not a realistic option.

    I could/would set the DivX codec to progressive source in either case?
    For deinterlacing or IVTC, yes.

    (I wonder what the DivX codec actually does
    when deinterlace is selected and there is no true interlacing in the video ...?)
    Best case (though unlikely; I don't know what algorithm they use), nothing happens. Worst case, all your pictures will be degraded. Anyway, Avisynth deinterlacing is way better, and if it is 3:2 material, then IVTC is better still. IVTC is not hard to do; you just have to learn about it to process the video correctly.

    How would/should I approach applying an IVTC?
    Again it depends on the specific video, so a sample would be helpful. It could be as simple as setting the Forced Film option in DGIndex using Video/Field Operation.
    Last edited by Guest34343; 2nd Jun 2012 at 07:09.
  3. Originally Posted by AEN007 View Post
    Originally Posted by neuron2 View Post
    The first two questions here tell you how to use your eyes: http://neuron2.net/faq.html
    Applying the steps in FAQ#2 to a vob file seems to indicate TFF.
    (This procedure is sure BETTER than just eyeballing it!)
    That IS eyeballing it.

    Originally Posted by AEN007 View Post
    I'm not sure I can correctly distinguish between abcdef & aaabbcccdd.
    Like the FAQ said, find a portion with significant motion (a horizontal panning shot, a car passing in front of the camera, someone walking by in the foreground, etc.), then step through the frames after SeparateFields() or Bob(). If you see an image repeated 3 times*, then the next image repeated 2 times, then the next image repeated 3 times, etc., you have telecined film. If each image is unique you have fully interlaced video.

    * Ignore a single-line up-and-down bounce. Remember, you are looking at fields, so each is only half a picture, and the two fields differ in position by 1 line vertically.
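    For example, a minimal inspection script (just a sketch; the d2v name is a placeholder) could be opened in VirtualDub and stepped through at a high-motion scene:

    Code:
    Mpeg2Source("movie.d2v")
    AssumeTFF()    # or AssumeBFF(), whichever the FAQ test indicated
    Bob()          # every field becomes a full frame you can step through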
    Last edited by jagabo; 2nd Jun 2012 at 07:37.
  4. Originally Posted by AEN007 View Post
    It seems I cannot use BlindPP()»
    Need mod16 height
    There are a number of deblocking filters which don't have that restriction:

    http://avisynth.org/mediawiki/External_filters#Deblocking

    And Deblock_QED, which has already been recommended to you several times, doesn't have to be Mod16.
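    For what it's worth, a typical call might look something like this (only a sketch; the source filter, filename, and quant values are placeholders/defaults, and raising the quants gives stronger deblocking):

    Code:
    DirectShowSource("youtube_clip.mp4")
    ConvertToYV12()                      # Deblock_QED expects YV12
    Deblock_QED(quant1=24, quant2=26)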
  5. Member AEN007's Avatar
    Join Date
    Mar 2009
    Location
    Croatia
    Originally Posted by jagabo View Post
    Originally Posted by AEN007 View Post
    1) using a deblocker has allowed me to use a lower bitrate
    than without using a deblocker;
    Deblocking replaces blocks (easily compressed) with smooth gradients (difficult to compress)
    so deblocked video requires more bitrate, not less.
    For whatever it is worth, I submit the following»
    in general, redubbing a high-bitrate source to a rather low bitrate introduces blocking/quilting.
    I have found that using a deblocker is effective against that and allows me to redub at a lower bitrate
    than would be possible without a deblocker ...

    Originally Posted by jagabo View Post
    Originally Posted by AEN007 View Post
    Originally Posted by neuron2 View Post
    The first two questions here tell you how to use your eyes: http://neuron2.net/faq.html
    Applying the steps in FAQ#2 to a vob file seems to indicate TFF.
    (This procedure is sure BETTER than just eyeballing it!)
    That IS eyeballing it.
    No ... before neuron2 pointed me to those FAQs,
    the only "eyeballing it" explanations I had seen were to look for horizontal line separation -
    not anything "scientific" like using AssumeTFF()/AssumeBFF() ...


    Originally Posted by manono View Post
    Originally Posted by AEN007 View Post
    It seems I cannot use BlindPP()»
    Need mod16 height
    ... Deblock_QED, which has already been recommended to you several times, doesn't have to be Mod16.
    Well, I had already tested QED & knew it did not have a problem with the 480x360 dimensions.
    I can't say for sure why you jumped to the conclusions that you did.
    QED, in any case, DOES have the same problem with Mod16.
    The Deblock_QED_MT2.avs has AddBorders in several places,
    so QED just automates the AddBorders step. BlindPP does not.


    Originally Posted by neuron2 View Post
    You have to learn about it to process the video correctly.
    That is what I am trying to do ... without annoying anyone (including myself ...)

    Originally Posted by neuron2 View Post
    It could be as simple as setting the Forced Film option in DGIndex using Video/Field Operation.
    I actually came across this option before reading your post.
    If the film percentage is low but still mostly film,
    you can try using Force Film and see what you get.
    You may find some stray combed frames in the output.
    You can fix those by post-processing with FieldDeinterlace(full=false).
    If the result is satisfactory to you, then fine.
    Is this FieldDeinterlace post-processing command specific to the plugin of the same name?
    [EDIT: I guess it is from here» FD & N2]
    Where in my/an avs script would I put that?

    ... never-ending morass ...
    Last edited by AEN007; 3rd Jun 2012 at 01:05.
  6. Video Restorer lordsmurf's Avatar
    Join Date
    Jun 2003
    Location
    dFAQ.us/lordsmurf
    Nice to see that neuron2's still around.
  7. Either I'm not understanding you or you're making no sense at all:
    Originally Posted by AEN007 View Post
    Well, I had already tested QED & knew it did not have a problem with the 480x360 dimensions.
    ...
    QED, in any case, DOES have the same problem with Mod16.
    What? I also tested on a 480x360 video and got no error message. Therefore Deblock_QED doesn't require a Mod16 height.
  8. Member AEN007's Avatar
    Join Date
    Mar 2009
    Location
    Croatia
    # Changes 2010-05-25:
    # - Explicitly specified parameters of mt_LutSpa()
    # (required due to position of new 'biased' parameter, starting from MaskTools 2.0a43)
    # - Non mod 16 input is now padded with borders internally
    Code:
    # add borders if clp is not mod 16
    What's your point? Deblock_QED requires only Mod8 input. The borders, if required, are added to the right and bottom and don't affect the deblocking at all. If you prefer using BlindPP (why?), add your own borders to the bottom and then remove them later on.
  10. Guest34343
    Guest
    Originally Posted by lordsmurf View Post
    Nice to see that neuron2's still around.
    Thanks, lordsmurf!
  11. Guest34343
    Guest
    Originally Posted by neuron2 View Post
    It could be as simple as setting the Forced Film option in DGIndex using Video/Field Operation.
    I actually came across this option before reading your post.
    If the film percentage is low but still mostly film,
    you can try using Force Film and see what you get.
    You may find some stray combed frames in the output.
    You can fix those by post-processing with FieldDeinterlace(full=false).
    If the result is satisfactory to you, then fine.
    Is this FieldDeinterlace p-p command specific to the plugin of the same name?
    [EDIT: I guess it is from here» FD & N2]
    Where in my/an avs script would I put that?

    ... never-ending morass ...
    You really should give us a sample or at least tell us the film percentage reported by DGIndex. Then we can advise you without having to guess.

    Your script could look like this:

    mpeg2source("file.d2v")
    fielddeinterlace(full=false)

    But you may not need to do that at all. It depends on your sample. You're not giving us the information we need to help properly.
  12. Member AEN007's Avatar
    Join Date
    Mar 2009
    Location
    Croatia
    4June2012
    Greetings.
    Originally Posted by neuron2 View Post
    Your script could look like this:
    mpeg2source("file.d2v")
    fielddeinterlace(full=false)
    So, FieldDeinterlace(full=false) should always come immediately after mpeg2Source?

    I am also wondering about the relationship/interaction of/between cpu & cpu2 settings.
    Do the first 2 cpu2 settings have any effect on the cpu settings?
    or are they independent?
    Code:
    mpeg2Source("vobFile.d2v", cpu=4, cpu2="ooooxx")
    Does this cpu2 setting cancel the cpu setting?

    The Film % was around 75%.
    I'm trying to reach some level of knowledge/experience
    where I can dub video files as correctly/properly as possible;
    so I need to be able to know/decide what to do without asking
    power users to look at the source video & tell me what to do ...

    I do not expect to dub these non purely progressive files with absolute perfection.
    I have no idea how to do that, and it does not seem worth the effort.
    If the result is satisfactory to you, then fine.
    It seems I've been dubbing non purely progressive videos incorrectly for years;
    however, the final outputs have not been "crappy" - I would not add crappy outputs
    to my collection - except maybe in the case of these crappy mp4/flv YT downloads,
    which I'm not sure if I'll ever be able to make "materially" less crappy ...

    ForceFilm changes the d2v file fps to 23.976, for example.
    I always dub my videos with the output fps to be 23.976.
    I'm just curious if reducing the fps is coincidentally similar in end result to an IVTC?
  13. Guest34343
    Guest
    Originally Posted by AEN007 View Post
    So, FieldDeinterlace(full=false) should always come immediately after mpeg2Source?
    Not necessarily. It should come before any resizing and before any filters that require progressive video; putting it immediately after the source filter guarantees those conditions are satisfied. One case where later might be better is if you are cropping: if you crop first, then fielddeinterlace has smaller frames to work with and will be faster.
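    For example (a sketch only; the filename and crop values are placeholders):

    Code:
    Mpeg2Source("file.d2v")
    Crop(8, 0, -8, 0)               # e.g. trim 8-pixel side bars first
    FieldDeinterlace(full=false)    # now works on the smaller frames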

    I am also wondering about the relationship/interaction of/between cpu & cpu2 settings.
    You use one or the other. cpu selects a preset level of postprocessing; cpu2 gives you full control over the individual operations.
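    To illustrate with your own example (use one line or the other, not both):

    Code:
    Mpeg2Source("vobFile.d2v", cpu=4)            # a preset level of postprocessing
    # Mpeg2Source("vobFile.d2v", cpu2="ooooxx")  # or set the six flags yourself ('x' on, 'o' off)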

    The Film % was around 75%.
    Then for best results you should use Honor Pulldown and do the IVTC in your script. But Forced Film plus fielddeinterlace() may be acceptable if you don't mind some degradation. It's also possible you have blended fields, which would change things. But you haven't provided a video sample and haven't told us whether you saw blended pictures.
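    A bare-bones sketch of that route (assuming the d2v is remade with Field Operation set to Honor Pulldown Flags and the Decomb plugin is loaded; the guide value is just a hint for 3:2 material):

    Code:
    Mpeg2Source("movie.d2v")
    Telecide(guide=1)     # reassemble the original film frames
    Decimate(cycle=5)     # drop the duplicate frame in each group of five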

    I'm trying to reach some level of knowledge/experience
    where I can dub video files as correctly/properly as possible;
    so I need to be able to know/decide what to do without asking
    power users to look at the source video & tell me what to do ...
    We're trying to teach you how to analyze and process video. That necessarily involves guiding you through your first attempts. If I am teaching you to be an editor for a newspaper I am going to want to see the original text and how you changed it. When you become a competent editor then you're on your own.

    I do not expect to dub these non purely progressive files with absolute perfection.
    I have no idea how to do that, and it does not seem worth the effort.
    That doesn't mean you don't do the best job you can given your time. You speak as if doing an IVTC is some time-consuming complex thing, but it isn't.

    It seems I've been dubbing non purely progressive videos incorrectly for years;
    however, the final outputs have not been "crappy" - I would not add crappy outputs
    to my collection - except maybe in the case of these crappy mp4/flv YT downloads,
    which I'm not sure if I'll ever be able to make "materially" less crappy ...
    It's somewhat subjective. If one knows that one has unnecessarily degraded the video, then one may be unhappy with it.

    ForceFilm changes the d2v file fps to 23.976, for example.
    I always dub my videos with the output fps to be 23.976.
    I'm just curious if reducing the fps is coincidentally similar in end result to an IVTC?
    I don't know what you mean by "dubbing" here, i.e., how you reduce the frame rate if not by doing IVTC. IVTC throws away only duplicate fields, so nothing is really lost. Any other method would presumably throw away (or blend) non-duplicate fields, which is a degradation.
    Last edited by Guest34343; 4th Jun 2012 at 08:09.
  14. Member AEN007's Avatar
    Join Date
    Mar 2009
    Location
    Croatia
    Originally Posted by neuron2 View Post
    But forced film plus fielddeinterlace() may be acceptable if you don't mind some degradation.
    Well ... as far as I know except for when using DirectStreamCopy ...
    there is ALWAYS some degradation when (re)dubbing a movie.
    Following the right tips & tricks can sufficiently/acceptably hide/mask/minimize the degradation,
    so I appreciate what I am learning in this thread.

    I ALWAYS only watch my movies on my laptops,
    so I don't have to worry about how they will look on a big screen ...
    an external monitor is about as big a screen as I ever have occasion to use.

    No, I didn't see any blended images but also didn't look at every frame ...

    I am reading your DeComb manual ...
    Use Fast Recompress If Possible: If you are serving into VirtualDub for transcoding, and you don't need to do any filtering or other processing in VirtualDub, then use VirtualDub's Fast Recompress mode.
    ... and again wondering about FullProcessing versus FastRecompress.

    Originally Posted by jagabo View Post
    Everything you do in AviSynth is the equivalent of full processing mode in VirtualDub.
    So, if you're doing all the filtering in AviSynth, and just using VirtualDub as a front end to the Divx codec,
    it doesn't matter if you select full processing mode or fast recompress mode.
    1) If I do filtering only via mpeg2Source & use some DeComb feature, could/should I use FR?
    2) If I use something(s) like QED and so use a YV (not RGB) colorspace, could/should I use FR?

    The default post Telecide setting is 2.
    If I am using Telecide, then %Film was too low for ForceFilm;
    so this is equivalent to using ForceFilm & FieldDeinterlace(Full=False)?

    I am not sure how to know if/when to use Decimate(cycle=5) as opposed to some other cycle value ...
    Using FF+FD means DGD already picked/applied a Decimate cycle value & FD cleans up leftovers?

    I am not sure if I will be able to discern any output difference
    between FF+FD versus Telecide+Decimate ... but I'll "see" ...

    Tonight I am running some tests on those crappy YT downloads using SmoothD -
    a defunct unfinished filter that nonetheless really has some effect on the output
    (unlike anything I have yet encountered). SmoothD is not the fastest but
    is certainly much faster than many other much slower / much less effective filters
    (that I have tried) ... I am not sure, however, if I can get the result I want from SmoothD ...
    These are "hand/held/made" concert videos and so often have a sizeable black backdrop
    which seemingly always shows a blocking pattern ... unlike closeups on the musicians ...
    Anyone know something about removing blocking from a black backdrop?
  15. Guest34343
    Guest
    [never mind, posted in error]
  16. Member AEN007's Avatar
    Join Date
    Mar 2009
    Location
    Croatia
    5June2012
    ... turns out there is at this very moment a SmoothD2 under development ...
    seems like it might be the most effective yet ... although seemingly rather slow ...
  17. Member AEN007's Avatar
    Join Date
    Mar 2009
    Location
    Croatia
    is this a blended image?
  18. Originally Posted by AEN007 View Post
    is this a blended image?
    Yes.
  19. Originally Posted by AEN007 View Post
    1)If I do filtering only via mpeg2Source & use some DeComb feature, could/should I use FR?
    It doesn't matter if you use Fast Recompress or Full Processing mode.

    Originally Posted by AEN007 View Post
    2) If I use something(s) like QED and so use YV not RGB colorspace, could/should I use FR?
    Again, it doesn't matter.

    When VirtualDub sees you're not filtering, it will use Fast Recompress mode even if you have Full Processing mode selected, as long as you don't force a colorspace via Video -> Color Depth.

    Regarding IVTC, you need to figure out what your source video is then handle it properly.
  20. Member AEN007's Avatar
    Join Date
    Mar 2009
    Location
    Croatia
    Originally Posted by jagabo View Post
    When ViDub sees you're not filtering it will use Fast Recompress mode
    even if you have Full Processing mode selected.
    As long as you don't force a colorspace via Video -> Color Depth.
    Well, how do I not force a colorspace?
    When FR is selected, Color Depth is grayed-out.
    When FP is selected, there are always Color Depth settings ...

    These filters always require some certain color space ...

    I could/should select FR, so Color Depth will be grayed-out.
    If I only use mpeg2source or YV filters, ViDub will use FR?

    YES ... SmoothD2 seems like a miracle drug ...
    The blackhole blocking is now hardly perceptible &
    slightly tweaking the SmD2 settings doubled the dub fps/speed ...
    Of course you can't select a colorspace in Fast Recompress mode; that's the whole point of that mode. The video always goes out (to the encoder) in the same colorspace it came in. In Full Processing mode you can force a colorspace if you need to. Otherwise leave it at Autoselect.
  22. Member AEN007's Avatar
    Join Date
    Mar 2009
    Location
    Croatia
    Originally Posted by jagabo View Post
    Otherwise leave it at Autoselect.
    That addresses only half of the Color Depth settings. "Output format to compressor/display" has to be set too, so
    "Same as decompression format"?
    I don't know why that counts as "not forcing a colorspace", but one of the given options has to be selected ...
  23. Originally Posted by AEN007 View Post
    Originally Posted by jagabo View Post
    Otherwise leave it at Autoselect.
    that addresses only half of the Color Depth settings. "Output format to compressor/display" has to be set, so
    "Same as decompression format"?
    Yes.
  24. Hello,

    Please help me (I cannot make a new thread) with one quick problem. I'm getting an error: "Splice: one clip has audio and the other doesn't"
    This is my script:

    Code:
    A=DirectShowSource("E:\mio\Fotky\1.MOV", fps=29.970, audio=true, convertfps=true).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
    B=ImageSource("E:\mio\Fotky\2.JPG",end=61,fps=29.970).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
    C=DirectShowSource("E:\mio\Fotky\3.MOV", fps=29.970, audio=true, convertfps=true).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
    A++B++C
    
    LoadPlugin("C:\Program Files\MeGUI\tools\avisynth_plugin\UnDot.dll")
    Undot() # Minimal Noise
    There is one picture between the two videos that is causing the problem, because it obviously doesn't have any audio. What is the workaround? Please help, and sorry for hijacking this thread!
  25. Guest34343
    Guest
    Something like this?

    Code:
    A=DirectShowSource("E:\mio\Fotky\1.MOV", fps=29.970, audio=true, convertfps=true).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
    B=ImageSource("E:\mio\Fotky\2.JPG",end=61,fps=29.970).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
    B=AudioDub(B,BlankClip(A))
    C=DirectShowSource("E:\mio\Fotky\3.MOV", fps=29.970, audio=true, convertfps=true).AssumeFPS(30000,1001).fadeio(10,$ffffff).ConvertToYV12.BilinearResize(1920,1080)
    A++B++C
    neuron2, you are my hero! Thank you so much! I've spent so many hours on the internet and couldn't find a simple solution. One question: could I also use C in the BlankClip function, and would it work the same?
  27. Member
    Join Date
    Jul 2009
    Location
    Spain
    Originally Posted by Morphos View Post
    could I also use C in the BlankClip function, and would it work the same?
    B=AudioDub(B,BlankClip(C))
    would also work, but it would have to be moved to follow the definition of C, since C is not yet defined at that point in the script:

    C=DirectShowSource(...)
    B = AudioDub(B,BlankClip(C))
  28. Guest34343
    Guest
    Of course, Gavino is correct. I probably stole it from him anyway, because he's the Avisynth meister.
  29. OK, Thank you neuron2 and Gavino, you really helped me a lot!
  30. Member
    Join Date
    Jul 2009
    Location
    Spain
    Originally Posted by neuron2 View Post
    Gavino is correct. I probably stole it from him anyway
    I don't think you did.
    Your solution is actually quite subtle, and made me think a little.

    At first glance, it appears that B=AudioDub(B,BlankClip(A)) is wrong, since it gives B an audio track the same length as A. However, by using the ++ (AlignedSplice) operator in A++B++C, B's (silent) audio track is extended or truncated as required to keep C in sync in the final result.
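    If one wanted to be explicit about the length instead of relying on that, something like this should also work (untested):

    Code:
    B = AudioDub(B, BlankClip(A, length=FrameCount(B)))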


