VideoHelp Forum

  1. Hello. As usual, the first thing I did was to call separatefields() but I'm not sure what to make of the result. Even with assumetff()/assumebff() there's a 'picture wobble' I've not encountered before. Anyone willing to have a look and tell me how to handle it has my thanks.
    Image Attached Files
  2. What do you mean by picture wobble? I see some jerky camerawork but otherwise it's normal interlaced tff video.

    Code:
    Mpeg2Source("sample.d2v") 
    QTGMC(preset="fast")
  3. Hi and thanks, jagabo. I can't explain my 'wobble' in words so here's a pic of 4 consecutive frames with separatefields() called. See how the wall is more or less straight in the first two frames yet appears bent one way then the other in frames 3 and 4...
[Attachment: wobble.jpg]
  4. Those are just compression artifacts. SMDegrain() might be able to reduce that.
    Last edited by jagabo; 21st Sep 2021 at 10:17.
  5. OK, thanks.
6. Skiller (Germany, joined Oct 2013):
    If you are talking about the edge of the blue wall on the right bending, that is the infamous rolling shutter effect. It is caused by the CMOS sensor in the camera and there is pretty much nothing you can do about it at this stage.
  7. I thought he was talking about the yellowish wall just behind the woman's knee. But I don't see those artifacts when I use SeparateFields() and step through the video. I think he has some temporal filtering issue.

    Code:
    Mpeg2Source("sample.d2v") 
    AssumeTFF()
    Bob(0.0, 1.0)
    ShowFrameNumber(x=20, y=20)
    StackHorizontal(last, last.Trim(1,0), last.Trim(2,0), last.Trim(4,0))
[Attachment 60855]
    Last edited by jagabo; 21st Sep 2021 at 19:09.
Yes, it was the blue wall on the right I meant. There's an obvious bend (one way then the other) in the last two frames (post #3) which isn't in jagabo's attachment. I've just loaded the script again with SeparateFields() and the bend has gone this time! Very weird...

    Jagabo, in your post, you say you've used SeparateFields() but it's not part of your script. Is 'Bob' an equivalent?
  9. SeparateFields() leaves you with half height images. Bob() separates the fields then resizes each field to the full height of the original frame. I don't know what you used to make your image but it wasn't simply SeparateFields() because the images aren't half height.
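The height relationship described above can be sketched with a toy Python model, treating a frame as a list of scan lines (everything here is illustrative, not real AviSynth behaviour): SeparateFields() splits a frame into two half-height fields, while Bob() separates the fields and then resizes each back to full height (real Bob() interpolates the missing lines; naive line-doubling is shown for simplicity).

```python
# Toy illustration: a "frame" is a list of scan lines (rows).
frame = [f"line {i}" for i in range(8)]  # an 8-line interlaced frame

# SeparateFields(): split into top field (even lines) and bottom field
# (odd lines), each half the height of the original frame.
top_field = frame[0::2]
bottom_field = frame[1::2]
print(len(frame), len(top_field), len(bottom_field))  # 8 4 4

# Bob(): separate the fields, then bring each back to full height.
# Real Bob() interpolates the missing lines; naive doubling shown here.
bobbed = [line for line in top_field for _ in range(2)]
print(len(bobbed))  # 8
```

This is why a SeparateFields() preview looks squashed at half height while a Bob() preview keeps the original frame dimensions.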
10. Skiller:
    @ pooksahib
Well, you should have mentioned that your first screenshots weren't unprocessed (that you'd applied more than just SeparateFields()).
    I did not look at the video sample because it seemed quite logical to assume it's a rolling shutter issue.
With a new DVD, I index it using MeGUI. If I see mouse teeth, I run the script with SeparateFields() and, yes, the preview is half height, but MeGUI's preview window has a 'Preview DAR' tickbox. I tick it simply to better see whether there's movement every frame or every 2nd frame. Perhaps I should be making that check while the preview is still half-height? Or with SeparateFields() followed by Bob()? Or just Bob() and forget about SeparateFields()?
    Last edited by pooksahib; 22nd Sep 2021 at 10:33.
12. Skiller:
    Having SeparateFields() followed by Bob() is wrong in any case. Use either one but not both. Pick the one you like better. Personally I would use Bob() because I find it easier to see stuff with full height frames.
  13. Danke, Skiller. I suspected that Bob and Separatefields were interchangeable but left every possibility in my question.
  14. I've been trying to find out why jagabo has used "Bob(0.0, 1.0)" rather than just Bob() but I've failed miserably. So, why the 0.0, 1.0?

    Similarly, I've experimented and it seems that "StackHorizontal(last, last.Trim(1,0), last.Trim(2,0), last.Trim(4,0))" has the same effect as simply saying "StackHorizontal(last)". Why the longer command?

    Thanks!
    Last edited by pooksahib; 22nd Sep 2021 at 15:54. Reason: additional query
  15. http://avisynth.nl/index.php/Bob

    Bob(0.0, 1.0) preserves the original fields for RGB and YUY2 and preserves the Luma but not the Chroma for YV12.
    It doesn't matter for the purposes discussed here.
  16. Noted, thanks. What would you say to my StackHorizontal observation?
17. Originally Posted by pooksahib:
    Noted, thanks. What would you say to my StackHorizontal observation?
    It's wrong.

    StackHorizontal(last, last.Trim(1,0), last.Trim(2,0), last.Trim(3,0)) stacks four consecutive frames side by side.

    StackHorizontal(last) stacks the same frame side by side.

    Code:
    BlankClip(width=320, height=240)
    AssumeTFF()
    ShowFrameNumber(x=40, y=150, size=100)
    
    v0 = Addborders(0,0,960,0).Subtitle("sequentially numbered frames:")
    v1 = StackHorizontal(last, last.Trim(1,0), last.Trim(2,0), last.Trim(3,0)).Subtitle("StackHorizontal(last, last.Trim(1,0), last.Trim(2,0), last.Trim(3,0)):")
    
    StackHorizontal(last)
    StackHorizontal(last)
    v2 = Subtitle(last, "StackHorizontal(last) StackHorizontal(last):")
    
    StackVertical(v0, v1, v2)
[Attachment 60918]


    Oops, I noticed that the last Trim() in post #7 was Trim(4,0) -- it was supposed to be Trim(3,0).
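The Trim offsets above can be sketched with a rough Python analogue (purely illustrative: a "clip" here is just a list of frame numbers). Trim(n, 0) returns the clip starting from frame n, so stacking the original next to the trimmed copies puts consecutive frames side by side, whereas stacking the untrimmed clip with itself just repeats the same frame.

```python
# Toy analogue: a "clip" is a list of frame numbers.
clip = list(range(10))

def trim(c, first):
    # Like AviSynth Trim(first, 0): keep frames from `first` to the end.
    return c[first:]

# Frame 0 of each stacked column: original, Trim(1,0), Trim(2,0), Trim(3,0)
columns = [clip, trim(clip, 1), trim(clip, 2), trim(clip, 3)]
print([c[0] for c in columns])  # [0, 1, 2, 3] -- four consecutive frames

# Stacking the untrimmed clip with itself shows the same frame twice:
print([c[0] for c in [clip, clip]])  # [0, 0]
```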
  18. Got it, thanks.
19. Since I'm currently testing clips to get a feel for when to use BasicVSR++, here's the script I used:
    Code:
    # Imports
    import os
    import sys
    import ctypes
    # Loading Support Files
    Dllref = ctypes.windll.LoadLibrary("I:/Hybrid/64bit/vsfilters/Support/libfftw3f-3.dll")
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Import scripts folder
    scriptPath = 'I:/Hybrid/64bit/vsscripts'
    sys.path.insert(0, os.path.abspath(scriptPath))
    # Loading Plugins
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/MiscFilter/EdgeFixer/EdgeFixer.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/GrainFilter/AddGrain/AddGrain.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DenoiseFilter/NEO_FFT3DFilter/neo-fft3d.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DenoiseFilter/DFTTest/DFTTest.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/EEDI3.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/vsznedi3.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/libmvtools.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/temporalsoften.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/scenechange.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # Import scripts
    import havsfunc
    # source: 'C:\Users\Selur\Desktop\sample.mpg'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x576, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: top field first
    # Loading C:\Users\Selur\Desktop\sample.mpg using D2VSource
    clip = core.d2v.Source(input="E:/Temp/mpg_54f90a4a45a95fd36fd7cd2830232176_853323747.d2v", rff=False)
    # making sure input color matrix is set as 470bg
    clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
    # making sure frame rate is set to 25
    clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # setting field order to what QTGMC should assume (top field first)
    clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=2)
    # Deinterlacing using QTGMC
    clip = havsfunc.QTGMC(Input=clip, Preset="Fast", TFF=True) # new fps: 25
# make sure content is perceived as frame based
clip = core.std.SetFieldBased(clip, 0)
# keep every second frame of the deinterlaced output
clip = clip[::2]
    # adjusting color space from YUV420P8 to RGBS for vsBasicVSRPPFilter
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # Quality enhancement using BasicVSR++
    from vsbasicvsrpp import BasicVSRPP
    clip = BasicVSRPP(clip=clip, model=3, tile_x=360, tile_y=288)
    # adjusting color space from RGBS to YUV444P16 for vsEdgeFixer
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV444P16, matrix_s="470bg", range_s="limited")
    # Fix bright and dark line artifacts near the border of an image using EdgeFixer
    clip = core.edgefixer.Continuity(clip=clip,left=8,top=4,right=4,bottom=4,radius=10)
    # cropping the video to 704x568
    clip = core.std.CropRel(clip=clip, left=4, right=12, top=2, bottom=6)
    # adjusting output color from: YUV444P16 to YUV420P10 for x265Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, range_s="limited")
    # set output frame rate to 25.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
    # Output
    clip.set_output()
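A note on the `clip[::2]` step in the script above: a double-rate deinterlacer emits two frames per input frame, and the slice keeps every second frame to return to the source rate. A minimal illustration (plain Python lists standing in for a VapourSynth clip, which supports the same slicing syntax):

```python
# Toy illustration of clip[::2]: keep every second frame, halving a
# double-rate deinterlaced stream back to the source frame rate.
double_rate = list(range(100))   # e.g. 100 frames at 50 fps
single_rate = double_rate[::2]
print(len(single_rate))  # 50 frames, i.e. 25 fps
print(single_rate[:4])   # [0, 2, 4, 6]
```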


    Cu Selur
    Image Attached Files