You should post your final script, or at least parts of it to show what you did. It will help others in the future.
-
Unfortunately, I don't have the original script anymore, but I can rewrite it.
I created two different versions: one with the combing interpolated away, and one with the combing interpolated plus the half-height method.
I interpolated the combed frames, but I also interpolated the occasional dupe frame.
Code:
FFmpegSource2("LikeToySoldiers.mkv", atrack=1) # import video file with audio
tdecimate()

function ReplaceFramesSVPFlow(clip Source, int N, int X)
{
    # N is the number of the 1st frame in Source that needs replacing.
    # X is the total number of frames to replace.
    # e.g. ReplaceFramesSVPFlow(101, 5) would replace 101,102,103,104,105,
    # using 100 and 106 as reference points for the SVPFlow interpolation.
    start = Source.trim(N-1,-1) # one good frame before, used as interpolation reference point
    end = Source.trim(N+X,-1)   # one good frame after, used as interpolation reference point
    start + end
    AssumeFPS(1) # temporarily FPS=1 for the interpolation step
    super = SVSuper("{gpu:1}")
    vectors = SVAnalyse(super, "{}")
    SVSmoothFps(super, vectors, "{rate:{num:"+String(X+1)+", den:1}}", url="www.svp-team.com", mt=1)
    AssumeFPS(FrameRate(Source)) # return to the normal source framerate for joining
    Trim(1, framecount-1) # trim the ends, leaving the replacement frames
    Source.trim(0,-N) ++ last ++ Source.trim(N+X+1,0)
}

ReplaceFramesSVPFlow(????,?) # interpolate frame
ReplaceFramesSVPFlow(????,?) # interpolate frame
ReplaceFramesSVPFlow(????,?) # interpolate frame
# ...and repeat for each bad frame
For the second version, I did the same thing, but resized to half height. This was used only for several shots where interpolation did not suffice.
Code:
FFmpegSource2("LikeToySoldiers.mkv", atrack=1) # import video file with audio
tdecimate()

function ReplaceFramesSVPFlow(clip Source, int N, int X)
{
    # N is the number of the 1st frame in Source that needs replacing.
    # X is the total number of frames to replace.
    # e.g. ReplaceFramesSVPFlow(101, 5) would replace 101,102,103,104,105,
    # using 100 and 106 as reference points for the SVPFlow interpolation.
    start = Source.trim(N-1,-1) # one good frame before, used as interpolation reference point
    end = Source.trim(N+X,-1)   # one good frame after, used as interpolation reference point
    start + end
    AssumeFPS(1) # temporarily FPS=1 for the interpolation step
    super = SVSuper("{gpu:1}")
    vectors = SVAnalyse(super, "{}")
    SVSmoothFps(super, vectors, "{rate:{num:"+String(X+1)+", den:1}}", url="www.svp-team.com", mt=1)
    AssumeFPS(FrameRate(Source)) # return to the normal source framerate for joining
    Trim(1, framecount-1) # trim the ends, leaving the replacement frames
    Source.trim(0,-N) ++ last ++ Source.trim(N+X+1,0)
}

ReplaceFramesSVPFlow(????,?) # interpolate frame
ReplaceFramesSVPFlow(????,?) # interpolate frame
ReplaceFramesSVPFlow(????,?) # interpolate frame
# ...and repeat for each bad frame

Lanczos4Resize(864,240) # resize to half height
nnedi3_rpow2(2)         # double both height and width
Lanczos4Resize(854,480) # stretch back to the original AR
The machine-learning algorithms add a fair amount of noise, so I imported the result back into AviSynth and ran a simple TemporalDegrain2() on it. I found that de-graining after the upscale preserves much more detail.
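As a rough sketch of that clean-up step (the filename here is a placeholder, and TemporalDegrain2 needs its usual dependencies such as MVTools2, RGTools and MaskTools2 installed):

Code:
# Sketch: de-grain AFTER the neural-net upscale, as described above.
FFmpegSource2("upscaled.mkv")   # placeholder name for the exported upscale
TemporalDegrain2()              # defaults; raise degrainTR for stronger temporal denoising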
The biggest problem left is the aliasing, which can be reduced using santiag after I resize to half height (it doesn't seem to work until I resize to half height, though). It handles most, but not all, of the aliasing issues; however, the detail loss is far too great, and I'd rather have minor aliasing.
Last edited by embis2003; 6th Sep 2020 at 09:42. Reason: Edited mistake in the script, got confused with another project that was PAL.
-
I appreciate that! I find it's still the best "quality" version of the video around, and that lets me look past some of those issues. However, if anybody in the future can propose a fix for said problems, I would not be opposed to starting the project over again. Now that I know my way around a little more, it probably wouldn't take as long as nine months.
-
1) There are sections with temporal aliasing remaining in this last version that you could probably improve with temporal AA filters, such as QTGMC in progressive mode (just splice them in with trim() to mix and match sections): the CG graffiti ~00:01:01, the couch ~00:01:11, and a few others. They can be filtered globally or limited to parts of frames with masking/roto. Then you feed the clean frames to the neural net, with or without additional filtering.
I used the MKV version. The AVS version has a slight red shift, I think from awarpsharp2, but the VapourSynth version does not. grafitti_compare.mp4 is below (this is globally filtered, i.e. entire frames).
Code:
LWLibavVideoSource("1.mkv")
tdecimate()
assumefps(24000,1001)
trim(1484,1522) # cg grafitti section
awarpsharp2(depth=4)
santiag(3,3)
qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
awarpsharp2(depth=4)
santiag(2,2)
qtgmc(preset="very slow", inputtype=2, sharpness=0.1)
2) Other sections would take some semi-manual work: motion tracking (a tracking repair), some compositing and masking (roto). For example, the TV text ~00:01:13 and ~00:02:23. Basically you redo the text and the zoom.
Just an observation, but many generic neural-net algorithms mess up text upscaling. Some of the book text ~00:00:16 is made worse than if you used a "normal" scaling algorithm. Ideally you'd redo the book text, but that is not an easy image or font to find. You need one clean frame (ideally the largest size, zoomed in with everything visible), and you do the zoom backwards with the motion-tracked data.
-
Topaz is not, and never has been, known for quality filters. This "AI" video upscaler is no different. Newbies are bamboozled by some YouTube videos and by the software's dummy-friendly GUI, but it's really pretty lousy software.
It reminds me of NeatVideo, vReveal, Super Resolution, and some others. Avisynth makes those look quaint.
-
For the sake of not having this thread turn hostile again, I will just say: I disagree. In my experience, the results the software spits out can be downright amazing; it just depends on the source material. The tech is still in its infancy, and I don't consider it "AI" either, but its noise reduction and the detail it seems to create are very impressive to me.
Last edited by embis2003; 27th Mar 2021 at 07:21.
-
There are progressive video clips with limited "short combing" that QTGMC works well on!
Regarding the OP's first clip.
I have seen the exact same combing in progressive MP4 clips.
Example shows original and 500% size for easy viewing.
[Attachment 76975]
This suggests this exact combing could be the result of a particular editing practice.
I have used the VirtualDub internal filter called Field Bob,
loaded twice through VirtualDub's filter-add function.
For example:
Set the first Field Bob to smooth/down.
Set the second to smooth/smooth.
Depends on clip.
QTGMC works better as a final process after Field Bob, but this might be a result of my QTGMC settings?
------------
What was decided about the best processing for the OP's first video "combing only"?
I am not interested in manual editing, just full clip processing.
The thread was hard to follow; could you put together a basic script for repairing the OP's first sample?
It could be very handy for processing the OP's type of video.
Thanks.
Last edited by Charles-Roberts; 13th Feb 2024 at 13:15.
-
There are different types of "combing" with different causes, and therefore different solutions.
There were old threads dealing with some similar issues on porn videos; not all the treatments are the same.
Depends on clip.
QTGMC works better as a final process after Field Bob, but this might be a result of my QTGMC settings?
What was decided about the best processing for the OP's first video "combing only"?
Since that old post, there are better interpolation methods using RIFE: cleaner results and fewer artifacts for frame interpolation than with mvtools2 or svpflow. It requires one good frame before and after, and it interpolates the frame(s) in between. There are RIFE interpolation functions posted in threads here and at doom9. In general, RIFE produces better results for frame interpolation; MVTools2 and SVPflow have a higher chance of edge occlusions and "blobby" edge artifacts, as well as other problems.
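As a rough illustration, a RIFE-based replacement function could mirror the ReplaceFramesSVPFlow structure posted earlier in this thread. This is an untested sketch, not one of the posted functions: it assumes the AviSynthPlus-RIFE plugin, the factor_num/factor_den parameter names and trim arithmetic may need adjusting for your plugin version, and a colour-format conversion around the RIFE call may be required depending on what the plugin accepts:

Code:
# Hypothetical sketch: replace X bad frames starting at frame N,
# interpolating with RIFE between the good frames N-1 and N+X.
function ReplaceFramesRIFE(clip Source, int N, int X)
{
    start = Source.trim(N-1,-1)   # one good frame before
    end = Source.trim(N+X,-1)     # one good frame after
    start + end
    RIFE(factor_num=X+1, factor_den=1, gpu_id=0)  # generate the in-between frames
    AssumeFPS(FrameRate(Source))
    Trim(1, framecount-1)         # trim the ends, leaving the replacement frames
    Source.trim(0,-N) ++ last ++ Source.trim(N+X+1,0)
}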
I am not interested in manual editing, just full clip processing.
The thread was hard to follow; could you put together a basic script for repairing the OP's first sample?
It could be very handy for processing the OP's type of video.
Thanks.
If it's been upscaled from the original resolution using a progressive algorithm while still interlaced, with interleaved fields, then you might be able to "undo" it using a reverse kernel (such as Debicubic or Debilinear) to reconstruct the original fields. There were examples of code used on porn videos in another thread. But if it was downscaled, then you cannot really fix it properly, because more information has been lost.
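A minimal sketch of that reverse-kernel idea, assuming the Descale plugin; the 720x480 here is a placeholder original size, which you would have to determine by testing kernels and resolutions against the actual source:

Code:
# Sketch: the clip was upscaled progressively while the fields were still interleaved.
# Descale the full frame back to the guessed original size to reconstruct the fields.
Debilinear(720, 480)   # try Debicubic (with b/c guesses) if bilinear doesn't match
AssumeTFF()            # or AssumeBFF(), depending on the source's field order
QTGMC(preset="slow")   # then deinterlace the reconstructed interlaced frames properly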
Start a new thread if you want advice on a specific video.
-
The clip I referred to is exactly the same as the OP's video in the areas, frequency, placement, and appearance of the combing.
This is why I suggested the clips received the same treatment: they are that similar. We are talking exact, as best as can be noted.
Whatever was done is very likely identical for both videos.
The material is not suitable for the forum, and in this case the treatment required may well be exactly like the OP's,
only using the new approaches you talk of.
Not worth starting a new thread without being able to supply material.
I would just like more info about RIFE interpolation functions.
I will need to study up on this from scratch.
Perhaps a short sample of code to get started?
-
Frequency: are you describing the pattern in a particular frame (a spatial description), or among a range of frames (a temporal description)?
i.e. do you have clean reference frames to interpolate "from"? Otherwise that method won't work for you.
What is the pattern of clean vs. combed frames?
A more typical case of porn-video mishandling is simple progressive scaling with interleaved fields. That's not quite what the OP has, because there are many good frames during motion.
Not worth starting a new thread without being able to supply material.
I would just like more info about RIFE interpolation functions.
I will need to study up on this from scratch.
Perhaps a short sample of code to get started?
https://forum.videohelp.com/threads/407293-ReplaceFrameX-InsertFrameX
-
The form of combing is the same, but there are more areas where good reference frames are not available.
What you have said is correct.
I will need the versatile RIFE-based function to interpolate multiple consecutive "bad" frames!
I have assumed the fps can remain the same with this process?
I have AVISynth installed, do I need any other bits to make the function work?
Should I convert to original dimensions and aspect before, during or after this?
I have not looked into it, but I might be able to find the original SAR/DAR of what was likely down-scaled video.
Thanks.
Last edited by Charles-Roberts; 12th Feb 2024 at 13:24. Reason: Moved unread forward
-
Yes, FPS remains the same.
But once you have more than a few consecutive "bad" frames, interpolation becomes less useful as a technique, because it cannot recreate the actual missing data from the missing time samples. The "tweening" motion will look very robotic and fake. RIFE (and related methods) use linear interpolation between 2 good frames, and real-life motion is usually not linear at all.
The more typical porn case is that it started as interlaced video. The badly resized / badly deinterlaced version is at half the field rate. So for PAL areas it would be 25 fps, when it should have been double-rate deinterlaced to 50 fps (for NTSC, the analogous rates are 29.97 and 59.94 fps). Motion is smoother at 50 or 59.94 fps; that's what the video should have been, because it's usually "video", not film.
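For reference, the double-rate deinterlace that the interlaced master should have received is a one-liner with QTGMC (which outputs double rate by default):

Code:
# Sketch: proper double-rate deinterlacing, e.g. 25i -> 50p or 29.97i -> 59.94p.
AssumeTFF()            # set the correct field order first (AssumeBFF() if bottom-first)
QTGMC(preset="slow")   # double-rate output is QTGMC's default behaviour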
If you can get a clean 25p or 29.97p single rate version from processing, then you can try RIFE on the whole thing to synthesize 50p or 59.94p - to emulate what it should have been
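That whole-clip doubling might look like this; a sketch assuming the AviSynthPlus-RIFE plugin, whose factor_num/factor_den parameter names and colour-format requirements should be checked against its documentation:

Code:
# Sketch: synthesize 50p from a clean 25p clip by RIFE frame doubling.
AssumeFPS(25)                               # make sure the clip is flagged as 25p
RIFE(factor_num=2, factor_den=1, gpu_id=0)  # 25p -> 50p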
If the version you have is upscaled (larger than 720x480 "NTSC", or 720x576 for "PAL"), then I would try the inverse kernel methods, such as DeBicubic, Debilinear.
I have AVISynth installed, do I need any other bits to make the function work?
https://github.com/Asd-g/AviSynthPlus-RIFE
https://github.com/Asd-g/AviSynthPlus-RIFE/releases
Should I convert to original dimensions and aspect before, during or after this?
I have not looked into it, but might be able to find the original SAR/DAR before what was likely down-scaled video.
For frame-interpolation cases you'd usually convert after; but in the upscaled case, the inverse kernel tries to reverse the scaling method back to the original size and hopefully "fixes" the fields.
-
I will have to look into it when on desktop.
Thank you for all the information, I have copied it all to study.
Have plenty of info to get started.
Below is the QTGMC code I use for remnant interlacing on progressive video.
It works well on most material, but not on this combing.
Code:
Import("C:\Program Files (x86)\AviSynth+\plugins+\QTGMC.avsi")
DirectShowSource("C:\video.avi")
#ConvertToYV12
t = QTGMC( Preset="Placebo", InputType=2, SourceMatch=3, Lossless=2, NoiseProcess=2, GrainRestore=0.4, NoiseRestore=0.15, Sigma=1.8, NoiseDeint="Generate", StabilizeNoise=true )
b = QTGMC( Preset="Placebo", InputType=3, SourceMatch=3, Lossless=2, NoiseProcess=2, GrainRestore=0.4, NoiseRestore=0.15, Sigma=1.8, NoiseDeint="Generate", StabilizeNoise=true )
Repair( t, b, 1 )
#PrevGlobals="Reuse"
This type of combing in the OP's material and mine is not what I normally encounter!
-
The RIFE models are a big download; I might get back to this another time.
This might go in the too-hard basket.
No access to an original, but I have discovered the video was down-scaled from 1920x1080 59.94.
Sorry, I don't have details with me about the downloaded version, but it is about 1400x770, 30 fps, progressive.
I intend to try up-scaling to 1920x1080 29.97.
Retry my QTGMC progressive code.
Another experiment could be re-interlacing the up-scaled 29.97 version and re-deinterlace.
What is used to re-interlace?
-
It probably was HD interlaced 1920x1080i 29.97. Since there was downscaling, it's unlikely that you'd get any benefit from reverse-kernel methods; they can be helpful in the opposite situation, when video has been poorly upscaled.
Another experiment could be re-interlacing the up-scaled 29.97 version and re-deinterlace.
What is used to re-interlace?
Code:
# Start with 59.94p or 50p (for PAL areas).
# Takes the top field of frame N and the bottom field of frame N+1,
# weaving them into interlaced 29.97i / 25i.
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
-
Thanks, the interlacing code will be interesting.
Not sure if I should start a fresh thread?
I have been going through forum threads on related subjects and wondering if jagabo 's script would be worth trying.
I am trying to work it out, and wondering if it's worth posting the question in the old thread or making a new one?
https://forum.videohelp.com/threads/404277-Bad-interlace-lines-on-progressive-video#post2643462
The johnmeyer script to repair bad deinterlacing makes my head hurt just looking at it: http://forum.doom9.org/showthread.php?p=1686309#post1686309
Hypothetical videos with no samples can create issues.
Calling it vid.mp4: 1400x770 30fps progressive, from HD interlaced 1920x1080i 29.97.
Last edited by Charles-Roberts; 13th Feb 2024 at 12:13.
-
I think it's always worth trying all of them.
Even if they don't improve the issue for your specific video, maybe they will be beneficial for some slightly different video with similar problems, and you will have more "tools in the toolbelt".
-
Honestly, if I knew how to recreate these artifacts exactly, I could probably train a neural net to specifically tackle the issue. But the best solution I've found is interpolation of the frames (if there are enough clean frames). If there aren't, then resize to half height (for example, a 720x540 video would be resized to 720x270), which blurs the combing away, and then use some upscaling algorithm to double the height, such as the many ESRGAN models or derivatives, or even Topaz (sorry lordsmurf). Then resize back to the proper aspect ratio using the algorithm of your choice. Though, if you can handle the blur, you can skip the neural net.
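Those steps, sketched for a 720x540 clip (nnedi3_rpow2 stands in here for whichever doubler you prefer; you could instead export the half-height clip and run an ESRGAN-based upscale on it for the neural-net route):

Code:
# Sketch of the half-height workaround for a 720x540 clip.
Lanczos4Resize(720, 270)  # half height: blurs the combing away
nnedi3_rpow2(2)           # double width and height (gives 1440x540)
Lanczos4Resize(720, 540)  # back to the original frame size and aspect ratio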