VideoHelp Forum
  1. Hello,

I had the ballet below in color, but it was in low resolution. I tried to upscale it, but with bad results (garbage in -> garbage out). Luckily, I found a rare DVD of the same performance from 1977, so I was excited and bought it. When the DVD arrived, the quality was stellar, but whoever was in charge decided to apply a huge red tint all over the video (from diamond back to garbage).
Anyhow, is there a way to somehow combine both videos: keep the stellar quality of the DVD rip, but apply the color from the LQ video? I tried adjusting the U and V planes, and it didn't remove the tint properly. I also saw some colorization scripts, but they only work if you paint color smudges for each scene...
So I was wondering if there is an easier way, please.

    thank you
[Attached files]
2. lollo:
    Extensive color correction needed.

    Or give GamMatch https://forum.doom9.org/showthread.php?t=176004 a try:

    Code:
    # plugins directory
    plugins_dir="C:\Users\giuse\Documents\VideoSoft\MPEG\AviSynth\extFilters\"
    
# FFmpegSource
loadPlugin(plugins_dir + "ffms2_87bae19\x86\ffms2.dll")

# GamMatch
loadPlugin(plugins_dir + "GamMatch_25&26_x86_x64_dll_v0.05_20190106\Avisynth+_x86\GamMatch_x86.dll")
    
    video_org=FFmpegSource2("DVD but red tint.mpg", atrack=-1)
    
    video_ref=FFmpegSource2("Not so DVD quality but in color.avi", atrack=-1).trim(74,0)
    
    /*
    stackhorizontal(\
    subtitle(video_org,"video_org",size=20,align=2),\
    subtitle(video_ref,"video_ref",size=20,align=2)\
    )
    */
    
    # GamMatch
    video_rest=GamMatch(video_org.convertToRGB(),video_ref.convertToRGB(),RedMul=0.4,GrnMul=0.8,BluMul=0.8,Show=true).converttoyv12()
    
    stackhorizontal(\
    subtitle(video_org,"video_org",size=20,align=2),\
    subtitle(video_rest,"video_rest",size=20,align=2),\
    subtitle(video_ref,"video_ref",size=20,align=2)\
    )
    
    
    #stackhorizontal(\
    #subtitle(video_ref.histogram("levels"),"video_ref",size=20,align=2),\
    #subtitle(video_rest.histogram("levels"),"video_rest",size=20,align=2)\
    #)
    A quick attempt here: x.avi

    Play a lot with the parameters for optimal results
3. Maybe just use grayworld (https://github.com/Asd-g/AviSynthPlus-grayworld):
Images: https://imgsli.com/MTk0MTg5, https://imgsli.com/MTk0MTkx
or some preset LUT like https://www.freepresets.com/product/free-luts-forest-film-stock/, image: https://imgsli.com/MTk0MTk1 (in AviSynth, LUTs can be applied through AVSCube or DGCube)
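
For anyone who wants to try this route, a minimal sketch (untested; the required input formats and the Cube() loader arguments are assumptions - check each plugin's README):

Code:
# hedged sketch: auto white balance via grayworld, or a preset .cube LUT
LoadPlugin("grayworld.dll")
LoadPlugin("AVSCube.dll")

v = FFmpegSource2("DVD but red tint.mpg")

# grayworld assumes the average color of each frame should be neutral gray
balanced = v.ConvertToPlanarRGB().grayworld().ConvertToYV12()

# alternative: apply a downloaded LUT instead (file name hypothetical)
# graded = v.ConvertToPlanarRGB().Cube("forest_film_stock.cube").ConvertToYV12()

return balanced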

    Cu Selur
  4. I agree with Lollo: just use some heavy-handed color correction. Here's the result of a few minutes with the color tools in Vegas:

    Color Corrected

    One "trick" to getting the DVD to look like the other video is to make the same mistake they made: blow out the highlights. You'll see that I did that in my "corrected" video. I initially didn't clip the highlights, but I couldn't wash out the pink to the extent of the other clip until I did.
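
In AviSynth terms (johnmeyer worked in Vegas, so this is only a rough equivalent, and the 180 threshold is a made-up value to tune by eye), clipping the highlights could look like:

Code:
source = FFmpegSource2("DVD but red tint.mpg")
# remap 180..255 to 0..255 so the brightest areas blow out to white,
# washing out the pink there; 180 is a made-up threshold to tune by eye
washed = source.Levels(0, 1.0, 180, 0, 255, coring=false)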

    BTW, since this is on stage, you do have to allow for the possibility that it was lit with red gels and therefore is supposed to look somewhat red.

Oh yes, I've posted this before, but whenever I hear that music I always think of Ravel's quote: “I've written only one masterpiece – Boléro. Unfortunately, there's no music in it”
  6. This:

    Originally Posted by lollo View Post
    Extensive color correction needed.
There are non-linear differences along each channel between the reference and the DVD.

I have the BorisFX plugins for Adobe, and RE:VisionFX ReMatch - they do not match this very accurately automatically.

For the DVD, each scene camera has slightly different characteristics in terms of levels and RGB curves. You would need to grade each camera view differently - in theory it should be consistent across the whole production for each camera. Your sample had 3 cameras (but there might be more). If you apply one set of filters to the whole thing, one camera view might match, but the others will be off slightly: maybe a pink or green tint, maybe crushed shadows on one, or clipped highlights on another.

I "baked" the transforms into 3 cube LUTs and you can apply them in avisynth/vapoursynth/some NLE, but you would have to divide up the camera shots to apply the separate LUTs. I applied them in avisynth for the demos below, and needed to apply an additional black level adjustment post-LUT (some minor issues with the LUT translation). The background black level isn't fully black on purpose, because otherwise you start crushing too much detail. But you can make adjustments pre/post LUT as you see fit. Let me know if you want more details.

The NTSC DVD is 25p content => field-duplicated => telecined to 59.94 fields/s interlaced. I inverse telecined back to "25p", and included a RIFE version for fun (interpolated to 59.94p).
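
As an illustration of the per-camera idea (poisondeathray didn't post his exact script, so the frame ranges, LUT file names and the Cube() loader are all placeholders):

Code:
# hedged sketch: split by camera, apply each baked LUT, splice back together,
# then a post-LUT black level touch-up; real shots alternate, so expect many more Trims
src  = FFmpegSource2("DVD but red tint.mpg")
camA = src.Trim(0, 899).ConvertToRGB().Cube("camA.cube")
camB = src.Trim(900, 1799).ConvertToRGB().Cube("camB.cube")
camC = src.Trim(1800, 0).ConvertToRGB().Cube("camC.cube")
spliced = (camA ++ camB ++ camC).ConvertToYV12()
return spliced.Levels(8, 1.0, 255, 0, 255, coring=false)   # pull the blacks down slightly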
[Attached files]
  7. I too applied different color corrections to the opening, the background chorus, and the closeup.
8. Originally Posted by lollo View Post
Extensive color correction needed. Or give GamMatch a try: [script and sample quoted above]
Hello,

agreed on the extensive color adjustment. Initially, I was playing in TMPGEnc, working with the U and V planes. It got rid of the tint and gave nice beige tones, but the issue is that it also changed the color of the podium from red to sepia. This ballet is famous for the red color of its podium, as it is inspired by Spain and toreadors, so changing the color of the podium is like changing the color of the sky from blue to green. [attached video: sepia]
https://www.google.com/search?q=bejart+bolero&tbm=isch&source=lnms&sa=X&sqi=2&ved=2ahU...&bih=929&dpr=1

I was thinking that maybe I could do it in DaVinci. Every time I watch movie studios' restoration videos, they all use DaVinci for coloring. I'm able to split the video into scenes and then group the scenes based on camera angle (all shots from the front camera go into one bucket, which you can then edit all at once). I could then use a mask and apply a different color to the podium (something like the sketch below). There are many tutorials on YouTube; I've had no time to go through them yet, but I would like to learn coloring.
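
A hedged AviSynth analogue of that masking idea: Tweak can restrict its adjustment to a hue range, so a correction can skip (or target) the reds of the podium. The hue window and amounts below are guesses to tune on the actual footage:

Code:
# only pixels inside the startHue..endHue window are touched; wrapping the
# window from 138 around to 105 leaves the red band alone (values are guesses)
src = FFmpegSource2("DVD but red tint.mpg")
masked = src.Tweak(hue=15, sat=0.7, startHue=138, endHue=105, interp=16)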

I'm trying GamMatch and I'm blown away. I need to fine-tune it as you say, but so far the results are very close! I'm very impressed. I have to play with the red tint in the shadows, but it looks very promising. As always, you save the day
[video gam match]

    Code:
    video_org=FFmpegSource2("1977 - Bolero - Maya Plisetskaya (red).mpg", atrack=-1).Spline64Resize(640,480)
    
    video_ref=FFmpegSource2("1977 - Bolero - Maya Plisetskaya (color).avi", atrack=-2).trim(85,0)
    
    
    /*
    stackhorizontal(\
    subtitle(video_org,"video_org",size=20,align=2),\
    subtitle(video_ref,"video_ref",size=20,align=2)\
    )
    */
    
    # GamMatch
    video_rest=GamMatch(video_org.convertToRGB(),video_ref.convertToRGB(),RedMul=0.4,GrnMul=0.7,BluMul=0.8,Show=true).converttoyv12()
    
    stackhorizontal(\
    subtitle(video_org,"video_org",size=20,align=2),\
    subtitle(video_rest,"video_rest",size=20,align=2),\
    subtitle(video_ref,"video_ref",size=20,align=2)\
    )
[Attached files]
  9. Originally Posted by Selur View Post
This is interesting, but it seems it doesn't fully get rid of the tint. In the shadows the color is still a very vibrant magenta, and the highlights have a poisonous green tint.
Instead of natural results, it gives this 70s neon ambience... like Dario Argento's Suspiria, with its neon red/green or red/blue lighting... (fun fact: both this ballet and Suspiria are from the same year, 1977. Now this makes me wonder )
[Attached images: 1344941.jpeg, suspiria1977_red1.jpg]
10. Originally Posted by johnmeyer View Post
I too applied different color corrections to the opening, the background chorus, and the closeup.
Originally Posted by poisondeathray View Post
[per-camera LUT and telecine post quoted above]
oh wow, this one has the closest results, poisondeathray/John.. I've never worked with LUTs, but it seems similar to DaVinci's scene detection and grouping into buckets by camera angle.. After you split the video into buckets, do you then use the BorisFX plugin (at the beginning of the post you mention it does not match automatically, so I'm not sure if you ended up using it after your cube LUT split)?

I like the interpolation. I usually go for DAIN, but it takes forever to process. I interpolated another Bolero from 1962 I had, and this one was next (after the color processing), so thanks for giving me an idea of how it will look <3
[Attached files]
  11. Originally Posted by JadHC View Post
I've never worked with LUTs, but it seems similar to DaVinci's scene detection and grouping into buckets by camera angle.. After you split the video into buckets, do you then use the BorisFX plugin? [...]
No, I mentioned the LUT as a mechanism for transferring a grade or filters between programs. i.e. after you do the manual correction, you can export the end effect as a LUT, for example if you want to use it in another program. Let's say you want to make some final adjustments in Premiere, or avisynth, or whatever program - you can apply the LUT

BorisFX and several other "auto" color matching plugins all have problems with this example. I didn't play with them too much, but it's pretty clear right away that they all fail. Maybe if you pre-corrected the input a bit closer, they might have better results

I like the interpolation. I usually go for DAIN, but it takes forever to process. [...]
I used to use DAIN. RIFE produces similar results, but is 10-50x faster. Occasionally there might be a scene that RIFE fails on (with all the model versions) that DAIN might be able to solve. It's like that with machine learning models: you have to experiment with versions to get the "best" results, e.g. maybe model 4.0 fails but 2.4 works OK, etc... RIFE can also achieve any fps with the 4.x models; 25=>60 would be difficult for DAIN. You'd need multiple DAIN interpolations up to the lowest power-of-2 common multiple, then select every nth frame. It would take forever many times over
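
For reference, a minimal sketch of a 25p -> 60p conversion in AviSynth, using the same RIFE plugin syntax that appears later in this thread (the model number is only an example; RIFE wants float planar RGB):

Code:
src = FFmpegSource2("ballet_25p.mkv")   # hypothetical 25 fps progressive source
out = src.z_ConvertFormat(pixel_type="RGBPS") \
      .RIFE(model=22, factor_num=12, factor_den=5) \
      .z_ConvertFormat(pixel_type="YV12")
return out                              # 25 * 12/5 = 60 fps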
12. Originally Posted by JadHC View Post
This is interesting, but it seems it doesn't fully get rid of the tint. In the shadows the color is still a very vibrant magenta, and the highlights have a poisonous green tint.
Are you referring to the LUT or the grayworld approach?
13. lollo:
    Whatever techniques you'll apply, remember to frame-synchronize the videos for optimal results. I did not with my quick attempt. Good luck!
  14. I forgot about GamMac. When it works, it can do miracles. You do sometimes need to play around with which channel to use as the reference. With my film scripts, these are my starting parameters:

    Code:
    #GamMac Parameters
    LockChan   = 1                                     #(0=red channel)
    LockVal    = 128.0                                 #default 128 -- Used when LockChan = -1 (for flicker)
    Scale      = 2                                     #Fred recommended 2 instead of 1
    RedMul     = 1.0
    GrnMul     = 1.0
    BluMul     = 1.0
    Th         = 0.1
    GMx        = 0
    GMy        = 0
    GMw        = 0
    GMh        = 0
    LOTH = 0.2
    HITH = 0.2
    OMIN =   0                                         #limiting the output a little bit makes it a little 'softer' to look at
    OMAX = 255
    Al2  =  20
    autolev_bord1 = 50
16. Originally Posted by poisondeathray View Post
No, I mentioned the LUT as a mechanism for transferring a grade or filters between programs. [...]
oh, I see what you mean. I also ChatGPT-ed "cube LUT" and now it's all clear. I also agree with themaster1's article: if the saturation/tint is pushed too far out of gamut, it becomes difficult to recover the original values. So I see why the auto color plugins are failing in this case.
I think I will try to convert the video to sepia (to remove those extreme magenta out-of-gamut colors) and then try to bring it from neutral colors to the target colors; going all the way to grayscale would most likely be overkill. A sketch of the idea follows.
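
A hedged sketch of that neutralize-then-regrade idea, reusing the GamMatch setup from earlier in the thread (the ColorYUV offsets are made-up starting values):

Code:
# neutralize the out-of-gamut magenta, re-tint mildly warm, then let
# GamMatch pull the neutralized clip toward the color reference
flat  = video_org.Tweak(sat=0)                 # drop all chroma
sepia = flat.ColorYUV(off_u=-12, off_v=10)     # gentle sepia bias
video_rest = GamMatch(sepia.ConvertToRGB(), video_ref.ConvertToRGB()).ConvertToYV12()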

    Code:
    In video editing and color grading, a LUT (Lookup Table) is a tool used to map one set of colors to another. Specifically, a cube LUT, also known as a 3D LUT or RGB LUT, is a type of LUT that operates on three color channels: red, green, and blue. It takes input RGB values and converts them to new RGB values based on a predefined color transformation.
    
    Cube LUTs are widely used in the post-production process to achieve specific color grading looks or to match footage from different cameras. They allow editors and colorists to apply consistent color adjustments to their video footage quickly and accurately.
    
    Here's how a cube LUT works:
    
    Input RGB Values: The cube LUT takes the RGB values of each pixel in the video footage as input. These RGB values represent the color information of the footage in the form of red, green, and blue color channels.
    
    Transformation: Each RGB value is then transformed based on the color adjustments defined in the LUT. These adjustments are typically created by colorists to achieve a particular aesthetic, mood, or look for the video.
    
    Output RGB Values: After the transformation, the LUT generates new RGB values for each pixel, resulting in the desired color grading applied to the video.

Originally Posted by poisondeathray View Post
I used to use DAIN. RIFE produces similar results, but is 10-50x faster. [...]
I recently started using FlowFrames. It is a bit buggy, although the latest version works fine. It always crashes with DAIN, but I have zero issues with RIFE; the coder took the scripts and adjusted them for Vulkan (NCNN/VS), which speeds up the process even further. A 20-min 720x480 video took 1-2 hours to interpolate with RIFE; now, with Vulkan/CUDA FlowFrames, it takes about 10 minutes, which is very impressive. Sometimes the software does not properly recognize the input frame rate, but that's like 1 case out of 20; otherwise the RIFE code works very well.
I'm now trying to find code that can replace duplicate frames using RIFE (NOT by blending the frames before/after). There are so many videos that people record online where the recordings skip frames (shocking that people still screen-record videos when you can get most of them with CocoCut or Video DownloadHelper at the perfect frame rate). I've seen some VapourSynth scripts on this forum, but they only blend the frames before/after (not true interpolation), and I don't use VapourSynth (yet). I used to decimate the duplicates and then RIFE the result, but if the duplicates don't occur with the same periodicity, one gets odd results after interpolation....
17. Originally Posted by Selur View Post
Are you referring to the LUT or the grayworld approach?
the grayworld approach. The whites are tinted green, but the mid-range values are still magenta. I really don't think it addresses the magenta tint in this case; the magenta is still popping through. In addition, green is the complementary color of red, so keeping the magenta (even at a weaker tone) while adding green to the whites and shadows just creates another red-green contrast [vs the typical black-white]. Somehow the video is more contrasty than before, when it was simply shifted toward magenta hues. My screen is set to vivid colors, so it could just be my monitor, tbh. I still appreciate your time and inputs. I learnt something new!

On the LUT approach, this is helpful. As poisondeathray said, it might be best if I build a custom LUT using the original reference and the final tinted video [or a version of it adjusted to more neutral colors] and apply that. I think it would be a pity to use some Google LUT preset when I have the original reference/colors available.
  18. Originally Posted by lollo View Post
    Whatever techniques you'll apply, remember to frame-synchronize the videos for optimal results. I did not with my quick attempt. Good luck!
yes, thanks! I spotted that in the longer version. The video starts in sync and then gradually drifts out of sync, so I will have to play with it on multiple levels.


Originally Posted by johnmeyer View Post
I forgot about GamMac. When it works, it can do miracles. [parameters quoted above]
thanks! This is great. I was only playing with the Mul options, so knowing about the additional settings gives me more to work with. I'll play with it and post some results
  19. Originally Posted by JadHC View Post
If the saturation/tint is pushed too far out of gamut, it becomes difficult to recover the original values. So I see why the auto color plugins are failing in this case.
Yes, but it's not a massive amount of out-of-gamut values (negative RGB values, or values >255 in 8-bit RGB), so you can come close. You can also minimize the clipping by working in float ("out of gamut" values, i.e. negative RGB values and values >1 on the float 0-1 scale, are kept, so you can still correct them), and/or by using a wide-gamut colorspace. sRGB (integer) is quite limited and will clip out-of-gamut values.
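
A minimal sketch of the float idea in AviSynth (RGBAdjust stands in for whatever correction is actually used, and the multiplier is illustrative):

Code:
# do the correction in 32-bit float planar RGB so out-of-gamut values
# survive the intermediate steps; clamping only happens at the final convert
f   = video_org.z_ConvertFormat(pixel_type="RGBPS")
f   = f.RGBAdjust(r=0.6, g=1.0, b=1.0)         # example red reduction only
out = f.z_ConvertFormat(pixel_type="YV12")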

Sometimes the software does not properly recognize the input frame rate, but that's like 1 case out of 20; otherwise the RIFE code works very well.
    That's FlowFrames. If you use avisynth/vapoursynth, you have full control

I'm now trying to find code that can replace duplicate frames using RIFE (NOT by blending the frames before/after). [...]
If you skip a frame (dropped), do you mean an inserted compensatory duplicate frame?

Duplicates only, or triplicates, 4x, etc.?

Did you mean interpolate "over" the 2nd duplicate, so it becomes an in-between frame? "Replicate" would give you the same thing:

Original: ABC
Bad recording: AAC
Interpolated: A(ac)C

where "ac" is a 50/50 mix (not a blend, an interpolation) using reference points A and C?

If so, you want FillDropsRife; there are several variations posted. The 2nd duplicate, depending on a detection threshold, gets interpolated over.

There are also "manual" interpolation functions where you specify the frames or ranges to interpolate, using 2 reference endpoints. "Auto" detection algorithms and syntax get tricky with more than 3 identical frames



Originally Posted by JadHC View Post
On the LUT approach, this is helpful. As poisondeathray said, it might be best if I build a custom LUT using the original reference and the final tinted video [...]

Actually, it's not the best way, if you mean a 3D cube LUT (which is what most people in image/video editing mean by "LUT"). 3D cube LUTs interpolate between values - they are generally less accurate than doing it manually. Sometimes you lose a bit of accuracy, or there are translation issues when importing or exporting a LUT. The interpolation algorithm matters too - tetrahedral is generally more accurate, whereas trilinear sometimes causes weird gradients. Think of a 3D cube LUT as all the filters "baked" or "flattened" into one set of transforms, so you can use them in other programs - but it's still an approximation.

The only reason I even mentioned LUTs was for use in other programs - to share with others if they wanted to play with them on this video. I can upload them if anyone is interested, and post how I used them in avisynth with the adjustments.

There is a "calculation" way that uses HaldCLUTs (a different kind of LUT than a 3D cube LUT) and GIMP, and you can convert them to cube LUTs in a later step to apply in other programs. The accuracy is very high - it's probably the most accurate "auto" method - but it requires almost identical source and destination video (except for color), perfectly aligned/overlapping. If there is some noise, or the clips are a few pixels off, it doesn't work. It won't work for your example unless you align the clips spatially so they superimpose. You also need matching frames (not blended ones, and no deinterlacing artifacts, unless the artifacts appear in both).
20. Originally Posted by poisondeathray View Post
If so, you want FillDropsRife; there are several variations posted. [full reply quoted above]
Yes, precisely. Let's say I want to interpolate a 25 fps video to 50 fps, sampling 5 frames below:

Before interpolation: ABCDE
After interpolation (2x): A(ab)B(bc)C(cd)D(de)E

If the frame rate is messed up, I then run a decimation, counting the average duplicated frames per cycle, and then interpolate (for instance, if every 3rd frame is a dupe, 25 fps is decimated to ~16.7 fps (cycle 3) and then interpolated 3x back to ~50 fps):

Before interpolation: AAADE (BC corrupted and replaced with A)
After decimation: ADE
After interpolation (3x): A(ad)(ad)D(de)(de)E

Usually I get okay results, but if the original video has too many dupes (especially 3-4 in a row), then I get serious jumps in the video (as you can see in the decimation example above, frame D took the place of frame C and became the middle frame, which affects the flow of the video).
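
For reference, a sketch of that decimate-then-interpolate pipeline in AviSynth (cycle and model values are illustrative, and the file name is hypothetical):

Code:
# drop 1 duplicate per cycle of 3, then interpolate 3x with RIFE
src = FFmpegSource2("recording_with_dupes.mp4")
dec = src.TDecimate(mode=0, cycle=3, cycleR=1)     # 25 -> ~16.7 fps
out = dec.z_ConvertFormat(pixel_type="RGBPS") \
      .RIFE(model=22, factor_num=3, factor_den=1) \
      .z_ConvertFormat(pixel_type="YV12")          # -> ~50 fps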

I never knew there was a script for RIFE. I am looking at the AviSynth external filters list but don't see any interpolation plugins. Is it a Python script, please? I did a Google search and only found filldrops, not FillDropsRife.

    http://forum.doom9.net/showthread.php?p=1960741

    Thanks a lot in advance
21. Originally Posted by poisondeathray View Post
Actually, it's not the best way, if you mean a 3D cube LUT... [quoted above]
So I was trying to see how I could do the LUT over this weekend. After a couple of failed attempts, something came over me... and I did something very naughty... something dirty, vile and very unconscionable.....
I had a funny idea: simply use MergeChroma instead of trying to change the colors of the original video. Yes, I know the chroma from the DVD has more resolution and better information, but since the luma is kept, I thought it could work... and I guess it did, somehow?? (see attached). The reference video has its chroma compressed, so I could run the reference through Topaz to deblock the chroma, but otherwise it seems to do the trick?

A couple of issues. I am okay with the decimate function, but I have never run an inverse telecine... when I did tfm().tdecimate(), the reference video still came out interlaced for some reason. Therefore, I ended up running QTGMC and a regular decimate. This way I converted the frame rate from NTSC (29.97) to PAL (25) instead of 23.976, which I actually prefer.
However, as the reference video was not properly decimated, some frames are still blended, and after MergeChroma you can see some color bleeding (especially around the 1:40 mark). Is there a way to fix the decimation, please? Otherwise, I presume I would add the FixChromaBleeding function to adjust for the bleeding. My code is below:

Is this an okay solution, or do you think adjusting the original chroma (e.g. with GamMatch) would be better, please?

    Code:
    video_org=FFmpegSource2("1977 - Bolero - Maya Plisetskaya (red).mpg", atrack=-1).ConvertToYV12(interlaced = true).AssumeTFF().QTGMC(preset="Slow", FPSDivisor=2, sharpness=0.1, EdiThreads=3).Prefetch(10).DuplicateFrame(1).Spline64Resize(640,480).tweak(cont=1.7,bright=-0.5).tdecimate(mode=1,cycle=6)
    
    video_ref=FFmpegSource2("1977 - Bolero - Maya Plisetskaya (color) HQ.avi", atrack=-2).ConvertToYV12(interlaced = true).AssumeTFF().QTGMC(preset="Slow", FPSDivisor=2, sharpness=0.1, EdiThreads=3).Prefetch(10).trim(83,0).crop(24,22,-12,-6).AddBorders(0,0,12,12).Spline64Resize(640,480).tdecimate(mode=1,cycle=6)
    
    
    video_rest=MergeChroma(video_org,video_ref,1.0)
    #video_rest2=MergeLuma(video_rest,video_ref,1.0)
    
    stackhorizontal(\
    subtitle(video_org,"video_org",size=20,align=2),\
    subtitle(video_rest,"video_rest",size=20,align=2),\
    subtitle(video_ref,"video_ref",size=20,align=2)\
    )
[Attached files]
22. Are you looking for the avs version of filldrops(rife)? It's just a modified filldrops with RIFE. This is the 1-duplicate version.

I uncomment the levels() call as a debug aid when adjusting the threshold value, to see which frames are being affected:

    Code:
function filldropsrife (clip c, int "Model")
{
Model = Default(Model, 5) #2.3
# r: frame n = RIFE midpoint of frames n and n+1 (when frame n is a dup of n-1, that is the wanted in-between)
r=c.z_convertformat(pixel_type="RGBPS").rife(model=Model).SelectOdd().z_ConvertFormat(pixel_type=c.PixelType).trim(0,c.framecount-1)#.levels(0,2,255,0,255,false)
# replace a frame with the interpolation when it barely differs from its predecessor
ConditionalFilter(c, r, c, "YDifferenceFromPrevious()", "lessthan", "0.3")
}

    This is a duplicate/triplicate version
    Code:
    function filldropsrife2 (clip c, int "Model", float "thresh")
    {
    Model = Default(Model, 5) #2.3
    thresh = default(thresh, 0.3)
    L1 = c.loop(2,0,0)
    global r1 = c.z_convertformat(pixel_type="RGBPS").rife(model=Model).SelectOdd().z_ConvertFormat(pixel_type=c.PixelType).trim(0,c.framecount-1)#.levels(0,3,255,0,255,false)
    global c = c
    global L1 = L1
    global thresh = thresh
    c.scriptclip("""(YDifferenceFromPrevious<thresh) && (YDifferenceToNext<thresh) ? Rhelper1of2(c, current_frame) \
    : (YDifferenceFromPrevious(L1)<thresh) && (YDifferenceToNext(L1)<thresh) ? Rhelper2of2(c, current_frame) \
    : (YDifferenceFromPrevious<thresh) && (YDifferenceFromPrevious(L1)>thresh) ? r1 \
    : c""")
    }
    
    function Rhelper1of2(clip Source, int "FirstFrame", int "Model")
    {
    Model = Default(Model, 22)
    start=Source.Trim(FirstFrame-1,-1)
    end=Source.Trim(FirstFrame+2,-1)
    clip1 = start ++ end ++ end
    r = clip1.z_ConvertFormat(pixel_type="RGBPS")
    r = r.RIFE(model=Model, factor_num=8, factor_den=3)
    r = r.z_ConvertFormat(pixel_type=Source.PixelType)
    r = r.Trim(1,-1)
    r = r.AssumeFPS(FrameRate(Source))
    return r
    }
    
    function Rhelper2of2(clip Source, int "FirstFrame", int "Model")
    {
    Model = Default(Model, 22)
    start=Source.Trim(FirstFrame-2,-1)
    end=Source.Trim(FirstFrame+1,-1)
    clip1 = start ++ end ++ end
    r = clip1.z_ConvertFormat(pixel_type="RGBPS")
    r = r.RIFE(model=Model, factor_num=9, factor_den=3)
    r = r.z_ConvertFormat(pixel_type=Source.PixelType)
    r = r.Trim(2,-1)
    r = r.AssumeFPS(FrameRate(Source))
    return r
    }
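
A usage sketch for the functions above (the source file name is hypothetical; the RIFE and avsresize plugins must be loaded):

Code:
src = FFmpegSource2("screen_recording.mp4")
fixed = src.filldropsrife(Model=5)                   # single duplicates only
# fixed = src.filldropsrife2(Model=5, thresh=0.3)    # duplicates + triplicates
return fixed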
23. Originally Posted by JadHC View Post
However, as the reference video was not properly decimated, some frames are still blended, and after MergeChroma you can see some color bleeding [...]
Surprisingly, it works OK for many frames

It's a blended-chroma issue, not a decimation issue. You're merging in the blended chroma - that's the underlying problem you'd have to fix
24. Just noticed there is an ESRGAN model named RedImage: https://openmodeldb.info/models/1x-RedImage
Out of curiosity, I tested it: https://imgsli.com/MTk3NjM5
I attached what happens if you apply it to the whole clip.

    Cu Selur
[Attached files]


