VideoHelp Forum




  1. I'm a newbie around here. Before creating this thread I searched for and tested some Avisynth techniques I found here, but nothing I tested worked as I expected.

    The problem is: when I deinterlace the video (see attached .zip file here), it leaves some "pink" shadows/ghosting on the red pants of the fighter and these shadows/ghosting are always created in the ODD fields, never in EVEN fields. With that said, I believe that my original interlaced video is TFF dominant.

    The only deinterlace method I tested that leaves no pink shadows at all was deinterlacing with Handbrake, setting the filter as TFF and using Bob. But all it did was remove the odd fields from the video, where the pink shadows/ghosting are, leaving only the even fields. This also removes parts of the fighter's movement, since frames were removed. And that's one thing I don't want to happen at all.

    I also tried several deinterlacing plugins with Avisynth, but the pink shadow/ghosting side effect is still there, so I don't know if those shadows/ghost effects come from a bad deinterlace method, a color problem, or something else.

    Please helpppp!!!
    Last edited by rcoltrane; 8th Feb 2022 at 14:08.
  2. Member DB83 (United Kingdom, joined Jul 2007)
    I am no expert in de-interlacing but that 'original' is hardly so.

    Strange frame size. 60 fps and 'odd' codec. Also created by vdub2 with no indication of field-order.
  3. Member (United States, joined Mar 2008)
    It's not an interlaced file; that's why you can't deinterlace it. Somewhere along the line it was mishandled, and it's now a progressive file with those blends/combing baked in.
  4. The video samples I provided here are just a reference to show the problem. The original footage is copyrighted and therefore I cannot put a "full" piece of it here. So I cropped a small piece of the video frame with vdub2 to make the file smaller to upload. The original video is a 720x480 60fps MP4 file.

    I would like someone to tell me what could be the origin of the ghosting artifact seen in the deinterlaced sample I provided.
  5. Member (United States, joined Mar 2008)
    It would be better to post a few seconds of the actual source.
  6. Member DB83 (United Kingdom, joined Jul 2007)
    Nonsense.

    If you want assistance, you upload a true unfiltered, uncropped, stream-copy sample. And again, no original would be 60 fps uncompressed?


    And the sample is too short for proper analysis. We don't need the 'after', just the 'before'.
  7. This type of artifact is commonly caused by upsampling the YUV chroma for the RGB conversion in a progressive manner while the video is still interlaced (in fields), instead of using interlaced-aware upsampling. This results in "chroma ghosting" during motion. But you would expect both fields to be affected; the even fields are less affected, so this can't be the entire explanation.

    What format is the original video ? Examine the original fields to see if the artifact is present.

    It looks like there are duplicate frames in the "deinterlaced" video (12, 13). If you rotate the video and deinterlace in avisynth, they are unique frames, not duplicates. Something went wrong in your process, maybe with timestamps.

    It would be better to upload a stream copy of the original source to rule out issues that you might have inadvertently caused, e.g. vdub converting to RGB in a progressive manner.
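    To make the progressive-vs-interlaced chroma handling concrete, here is a plain-Python toy sketch (no video libraries; not any real codec's resampler) of why progressive 4:2:0 vertical chroma subsampling mixes the two fields' chroma together during motion:

    ```python
    # Each list element is the chroma value of one video line; even indices
    # belong to the top field, odd indices to the bottom field.

    def subsample_progressive(chroma_lines):
        """Average vertically adjacent line pairs (0+1, 2+3, ...) --
        this mixes top-field and bottom-field chroma together."""
        return [(chroma_lines[i] + chroma_lines[i + 1]) / 2
                for i in range(0, len(chroma_lines), 2)]

    def subsample_interlaced(chroma_lines):
        """Average line pairs within each field separately, so the two
        fields' chroma never mix."""
        top = chroma_lines[0::2]   # top-field lines
        bot = chroma_lines[1::2]   # bottom-field lines
        top_sub = [(top[i] + top[i + 1]) / 2 for i in range(0, len(top), 2)]
        bot_sub = [(bot[i] + bot[i + 1]) / 2 for i in range(0, len(bot), 2)]
        out = []                   # re-interleave the subsampled fields
        for t, b in zip(top_sub, bot_sub):
            out.extend([t, b])
        return out

    # Top field is saturated red chroma (100); the bottom field, captured
    # 1/60 s later after the object moved, is neutral (0):
    lines = [100, 0, 100, 0, 100, 0, 100, 0]
    print(subsample_progressive(lines))  # [50.0, 50.0, 50.0, 50.0] -- fields blended
    print(subsample_interlaced(lines))   # [100.0, 0.0, 100.0, 0.0] -- fields kept apart
    ```

    In the progressive case the red chroma bleeds into lines where the object is no longer present, which is exactly what a pink "shadow" trailing a moving red object looks like.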
  8. Originally Posted by rcoltrane View Post
    The original footage is copyrighted and therefore i cannot put a "full" piece of it here.
    Sure you can. A short sample is considered fair use.

    Start VirtualDub. Open your source video. Select Video -> Direct Stream Copy, mark in, mark out, File -> Save As AVI (or File -> Save Video if using VirtualDub2). Upload the resulting AVI.
  9. Member DB83 (United Kingdom, joined Jul 2007)
    Reading another topic on here today gives me some insight as to how one can get a 60 fps 'original' (from a source that was not 60 fps)

    But if the OP really does want assistance, 'he' really does need to provide that extract and elaborate on the workflow of how it got there. I see all sorts of youtube vids that describe ways to capture from SD to PC, one of which is VHS >> upscale(HDMI) >> usb (the method employed in the topic I read today). If that really is the method employed, I seriously doubt that the ultimate 'source' is interlaced (even though the original source was).
  10. Ok, here's a quick sample I did with mkvtoolnix-gui. It's exactly what I have here: I extracted it from the original MP4 file as .mkv without any re-encoding; all settings are exactly the same. If these video settings are incorrect, then the guys who did the conversion from u-matic tapes to mp4 did it the wrong way and all 21 of my video files are messed up already :/

    In this video you can see the chroma ghosting, even before deinterlacing it. If I try to deinterlace, it gets worse. The "ghosting" is the pink areas projected by the red pants through quick movements, but it also occurs in other places as well, such as on his torso/arms, but in these areas it's less noticeable.

    The chroma ghosting is more prominent in the odd fields for some reason; the even fields seem to be fine, without any ghosting issues.

    I hope there's a way to improve the quality of these videos without having to pay for a new transfer, since it's quite expensive for me :/
    Last edited by rcoltrane; 22nd Feb 2022 at 13:11. Reason: Had to remove my video sample since the museum requested it to be removed!
  11. Member DB83 (United Kingdom, joined Jul 2007)
    Even that sample is reported as progressive.

    u-matic would have been 29.97 fps (assuming NTSC), so the transfer to mp4 appears to have been de-interlaced with double frame-rate (and possibly screwed into the bargain)


    But then 60 fps is not double 29.97, so that may well be part of the issue.
    Last edited by DB83; 8th Feb 2022 at 14:41. Reason: addit info
  12. Originally Posted by DB83 View Post
    Even that sample is reported as progressive.
    Ok, so that means I can't do anything to soften that chroma ghosting side effect? If it's progressive, why does deinterlacing work to remove the "interlaced" lines? And why do I get different results when deinterlacing as TFF vs BFF if it's not an interlaced video?

    Please remember that I'm a noob here... be nice
  13. Member DB83 (United Kingdom, joined Jul 2007)
    It may be 'fixable' in avisynth but I would expect anyone who attempts this would require a clip lasting more than a few seconds.
  14. You should get your money back. The luma is interlaced, the chroma isn't. The chroma of the two fields doesn't appear to be blended together (the more common mistake). It looks like the chroma for the top field is duplicated for the bottom field. So there is motion in the luma for every field but the chroma only moves every other field -- so all the odd fields have the colors in the wrong location.

    I don't have time now but later today I'll try synthesizing the missing chroma via motion interpolation of the even field's chroma. But motion interpolation usually doesn't work well with complex motions (like rotations, motion blur, etc).
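    A plain-Python toy model (hypothetical names, not any real capture software) of the defect described above: per-field luma is stored correctly, but each bottom field's chroma is a stale copy of the preceding top field's:

    ```python
    def capture_with_defect(true_fields):
        """true_fields: list of (luma, chroma) at each field time.
        Returns what the broken transfer stores: luma is correct per field,
        but each odd (bottom) field reuses the previous even field's chroma."""
        out = []
        for i, (luma, chroma) in enumerate(true_fields):
            if i % 2 == 1:                      # bottom field
                chroma = true_fields[i - 1][1]  # stale chroma from the top field
            out.append((luma, chroma))
        return out

    # A red object moving right; its position serves as both the luma and
    # the chroma location:
    true = [(0, 0), (1, 1), (2, 2), (3, 3)]
    stored = capture_with_defect(true)
    # stored == [(0, 0), (1, 0), (2, 2), (3, 2)]
    # Odd fields (indices 1, 3) have luma at the new position but chroma
    # left behind at the old one -- the pink shadow trailing the red pants.
    ```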
  15. Originally Posted by jagabo View Post
    You should get your money back. The luma is interlaced, the chroma isn't. The chroma of the two fields doesn't appear to be blended together (the more common mistake). It looks like the chroma for the top field is duplicated for the bottom field. So there is motion in the luma for every field but the chroma only moves every other field -- so all the odd fields have the colors in the wrong location.

    I don't have time now but later today I'll try synthesizing the missing chroma via motion interpolation of the even field's chroma. But motion interpolation usually doesn't work well with complex motions (like rotations, motion blur, etc).
    Thanks for the valuable explanations, people! I will try to reach the museum staff to explain this and see if they would accept redoing some of the u-matic tape transfers at no cost, since it was their mistake. I highly doubt they will because it's a museum, but I'll try anyway. Fortunately, not all of my videos have fast movements, just 5 or 6 tapes.

    But if there's a technique to reduce or to get rid of this nasty ghosting artifact, I would like to try.

    I'm attaching a slightly bigger video sample here in case someone wants to take a closer look. If it needs to be bigger, please let me know.
    Last edited by rcoltrane; 22nd Feb 2022 at 13:11. Reason: Had to remove my video sample since the museum requested it to be removed!
  16. Yes, I would try to get a proper transfer first. That is your best course of action.


    Here is a comparison of interlacedsample.mkv using RIFE in VapourSynth. Left is the duplicate frames discarded (to 30fps) plus QTGMC. Right is the odd fields discarded and interpolated/replaced using RIFE model 2.3 in VapourSynth.

    - "Good" interpolation requires 2 good source frames to interpolate from , and the generated frame is in the middle. If the source frames have problems, the problems will be propogated into the synthesized frames. Sometimes it's a good idea to fix the source frames as best as you can before interpolating. There are various "deblurring" algorithms that can help reduce motion blur , it might help with interpolation accuracy

    - often there are errors in interpolation, such as edge morphing artifacts. But RIFE is generally one of the better algorithms usable today.

    - some motions are interpolated slightly differently from the actual motion (e.g. a hand might bend a few degrees more), because the midpoint between two frames is interpolated, but in reality real motion is not perfectly linear.

    - some models might be better or worse on certain frames. You can mix and match whole frames between models, or even between different algorithms (e.g. mvtools2), with RemapFrames.

    - you can mix/match and fix isolated frame-specific problems with masking and compositing, such as in After Effects, HitFilm, Natron, etc.
    E.g. look at the dirt on the mat by the right foot, around frames 115-117. I would mask that out to fix it. I use interpolation as a starting point, and refine it later with other tools.



    If you can't get a proper transfer and you want more information on using motion interpolation methods (bit of a learning curve), or compositing, just ask and I'll try to walk you through it. But I would seriously try to get a better transfer first.

    EDIT: removed by request
    Last edited by poisondeathray; 22nd Feb 2022 at 15:13.
  17. This looks a little better in some frames:

    Code:
    LWLibavVideoSource("sample2video.mkv", cache=false, prefer_hw=2)
    AssumeTFF()
    SelectEven()  # drop the duplicate frames (60 fps -> 30 fps, still interlaced)
    QTGMC()       # double rate deinterlace (back to 60 fps)
    
    even = SelectEven()                            # frames with good chroma
    odd = SelectOdd()                              # frames with stale chroma
    interp = even.FrameRateConverter().SelectOdd() # motion interpolated in-betweens
    odd = MergeChroma(odd, interp)                 # keep odd's luma, take interpolated chroma
    fixed = Interleave(even, odd)                  # restore original frame order
    
    StackHorizontal(last, fixed)                   # side-by-side comparison
    FrameRateConverter() doubles the frame rate by creating motion interpolated frames between each pair of existing frames (the even frames, which have the good chroma). We then take only the motion interpolated frames with SelectOdd() and merge their chroma into the odd frames.
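    The SelectEven/SelectOdd/Interleave bookkeeping can be sketched in plain Python, with integers standing in for frames (a toy model of the frame ordering only, not of the real filters):

    ```python
    frames = list(range(8))        # 0..7: deinterlaced frames after QTGMC
    even = frames[0::2]            # SelectEven(): good-chroma frames [0, 2, 4, 6]
    odd  = frames[1::2]            # SelectOdd():  bad-chroma frames [1, 3, 5, 7]

    # FrameRateConverter() synthesizes an in-between frame for each pair.
    # Model the doubled clip as [orig0, interp0, orig1, interp1, ...] where
    # f + 1 stands in for the frame interpolated after f:
    doubled = []
    for f in even:
        doubled.append(f)          # original frame
        doubled.append(f + 1)      # motion-interpolated stand-in
    interp = doubled[1::2]         # SelectOdd(): only the interpolated frames

    # MergeChroma(odd, interp) keeps odd's luma and takes interp's chroma;
    # Interleave(even, odd) then restores the original frame order:
    fixed = [f for pair in zip(even, odd) for f in pair]
    print(interp)                  # [1, 3, 5, 7] -- lands exactly at the odd positions
    print(fixed)                   # [0, 1, 2, 3, 4, 5, 6, 7]
    ```

    The point is that the interpolated frames line up one-for-one with the odd frames, so their (clean) chroma can be merged straight in.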

    Regular double frame rate deinterlace with QTGMC on the left, plus chroma interpolation of odd fields (now frames) on the right.

    <samples removed on request>
    Last edited by jagabo; 22nd Feb 2022 at 15:32.
  18. WOOOWWWW!!! That looks SO MUCH BETTER!! I will try to run that script tomorrow morning here asap!!

    Thank you very much jagabo / poisondeathray for the quick replies with great solutions to my problem!! And thank you to everyone else who replied as well!!

    I will try to convince the museum to redo the transfers properly. Any suggestion on what I should say to them so they avoid making the same mistakes again?
    Last edited by rcoltrane; 22nd Feb 2022 at 13:24.
  19. For comparison on sample2video, here is the rife processed version




    I don't know how picky you are, but for the RIFE 2.3 version:

    -Arm shadow looks thin, weird on frame 1
    -R.arm should be more visible on frame 59 (body occlusion mask blurs it away)
    -L.foot on 69 looks weird
    -angle of R leg/foot on 89 makes arc look improbable
    -Foot looks weird on frame 93
    -Belt Tie looks weird around frames 104-108
    -text "strong" on 167

    These issues are "fixable" by switching the model for a specific frame, or with a combination of rotowork/masking, but there is no automatic way; it takes a human eye at minimum to identify them.



    Originally Posted by rcoltrane View Post

    I will try to convince the museum to redo the transfers properly. Any suggestion on what I should say to them so they avoid making the same mistakes again?
    Need more details on their setup, hardware, connections, software, etc., but they definitely should not encode progressive; e.g. check with MediaInfo (View => Text) for "Scan Type: Progressive".

    EDIT: oops uploaded wrong video


    EDIT: removed by request
    Last edited by poisondeathray; 22nd Feb 2022 at 15:13.
  20. RE: vapoursynth + rife

    I will post in this thread instead of PM, so other people can participate learning from examples, or to help answer questions

    Hopefully you can get a better transfer and won't need this, but they are nice tools to have just in case:

    Code:
    import vapoursynth as vs
    core = vs.core
    import havsfunc as haf
    from vsrife import RIFE as RF3
    
    clip = core.lsmas.LWLibavSource(r'PATH\sample2video.mkv')
    
    #mark clip matrix as 170m
    clip = core.std.SetFrameProp(clip, prop="_Matrix", intval=6) 
    
    #select even frames (drop duplicates)
    clip = clip[::2]
    
    #double rate deinterlace 
    q = haf.QTGMC(clip, Preset='slower', Sharpness=0.5, Border=True, TFF=True)
    
    #select even frames (drop odd frames)
    qe = q[::2]
    
    #convert to float RGB for rife
    r = core.resize.Bicubic(qe, format=vs.RGBS, matrix_in_s="170m")
    
    #synthesize the discarded odd frames using vsrife using model 2.3
    r23 = RF3(r, model_ver=2.3)
    
    #convert back to 8bit YUV 4:2:0
    r23 = core.resize.Bicubic(r23, format=vs.YUV420P8, matrix_s="170m")
    
    #mark r23 matrix as 170m
    r23 = core.std.SetFrameProp(r23, prop="_Matrix", intval=6) 
    
    #trim off last frame, because it will be duplicate (no later frames to interpolate from)
    r23 = core.std.Trim(r23, length=len(r23)-1)
    
    #get last QTGMC frame before dropping odd frames
    ql = core.std.Trim(q, first=len(q)-1)
    
    #tack on original last (odd) QTGMC frame
    final = r23 + ql
    
    final.set_output()
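    As a sanity check on the frame bookkeeping in the script above, here is the same slicing and trimming in plain Python, with integers standing in for frames (a toy model; `x + 100` and `(x, x)` are stand-ins for QTGMC and RIFE output, not real filter behavior):

    ```python
    clip = list(range(12))        # 12 frames as delivered (duplicates at odd indices)
    clip = clip[::2]              # drop duplicates -> 6 frames
    q = [f for x in clip for f in (x, x + 100)]   # QTGMC doubles the rate (stand-in)
    qe = q[::2]                   # keep even frames only (drop odd frames)

    # RIFE doubles qe back to full rate; its last frame is a duplicate
    # (nothing after it to interpolate toward), so trim it off and tack on
    # QTGMC's real last (odd) frame instead:
    r = [f for x in qe for f in (x, x)]           # stand-in for RIFE output
    r = r[:len(r) - 1]            # core.std.Trim(length=len-1)
    final = r + [q[-1]]           # append the last QTGMC frame

    assert len(final) == len(q)   # same frame count as the full QTGMC clip
    ```

    The trim-and-append at the end is what keeps the output the same length as the straight QTGMC clip, which matters if you want to compare them side by side.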
    IMO, it's more difficult to get VapourSynth running than AviSynth. Many of the machine learning projects and filters are Python-based, and unfortunately not available in AviSynth.

    There is a "vapoursynth portable fatpack" assembled by ChaosKing that bundles almost everything.
    https://forum.doom9.org/showthread.php?t=175529

    There is also vsrepo, a plugin manager:
    https://forum.doom9.org/showthread.php?t=175590

    And vs database
    https://vsdb.top/

    and there are some GUI's like Selur's Hybrid that can help with scripts

    If it's too complicated, there are various GUIs that can run RIFE separately, such as FlowFrames, RIFE_GUI, or Waifu2x-Extension-GUI. So you might preprocess with avisynth (discard the duplicates, apply QTGMC, discard the odd frames), then use that as input into one of those GUIs. It's just nice to be able to preview scripts, make adjustments, and perform manipulations before having to render/encode; e.g. maybe you want to use a different model for a specific frame, or preview several different versions in a grid so you can pick the "best" model (StackHorizontal, StackVertical, etc.). Those GUIs are quite limited compared to the flexibility of VapourSynth.


    To preview scripts you can use vsedit, but you need the mod version if you are using API4 (VapourSynth r57 or newer). Vdub2 also works for .vpy scripts, but some people might have issues with API4.
    https://github.com/YomikoR/VapourSynth-Editor/releases

    For VapourSynth RIFE, there are 2 versions. One uses CUDA (vsrife) and is faster if you have a recent Nvidia card. The newest vs-rife v2.0 only uses model 4.0 and is based on Practical-RIFE; it's significantly faster, but the quality is slightly worse than the 2.3 or 2.4 models on average for many types of content. In general, 2.3 is probably the best model overall for most types of live action content. To access older models, you need to use older releases. The way I organize it is 2 different folders, using a namespace to call different models in scripts: the older vs-rife 1.3 folder I named "vsrife", and vs-rife 2.0 I named "vsrife4". "from vsrife import RIFE as RF3" uses RF3 as the namespace for the older models; this is the one I used in the script.
    https://github.com/HolyWu/vs-rife

    The other uses Vulkan (VapourSynth-RIFE-ncnn-Vulkan) and can run on most GPUs that support Vulkan (Nvidia can use it too). This version is limited to 3 models and uses a .dll, "RIFE.dll". If you don't have an Nvidia card, it's an easy substitute. Unfortunately the 2.3 model (widely regarded as the best overall) is not available, but the 2.4 model is fairly close.
    https://github.com/HomeOfVapourSynthEvolution/VapourSynth-RIFE-ncnn-Vulkan
  21. Thank you poisondeathray for the VapourSynth/RIFE guide!! It's quite complex, but it's valuable information that I will definitely try to use here!

    I've sent an e-mail to the museum today about doing new transfers; let's see what their answer will be. I will let you guys know here when they reply.


