Hello friends,
I'm facing a problem with artifacts when converting the frame rate to 59 fps using Twixtor Pro in Vegas. Unfortunately I don't have a video card at the moment, so I can't do the conversion with RIFE in Avisynth; for now Twixtor is my only option.
Twixtor gives me artifacts at the top of the screen, visible at 22 seconds of the video and from 27 to 29 seconds.
Is there a configurable option in Twixtor to avoid this side effect?
By the way, if anyone suggests any visual improvements for this video, I would appreciate it.
-
For Twixtor, changing the warp mode to forward instead of inverse/smart blend should help.
But you have other problems besides those artifacts: duplicate-frame stuttering every 5th/6th frame, artifacted/blended scene changes, and over-denoised, over-sharpened footage that looks like a "waterpainting" effect. Try to decimate the duplicates first, and don't denoise or sharpen so much -
It depends on what you started with before processing. Ideally you would remove duplicates very early in the workflow, because duplicates negatively affect other processes such as temporal filters and denoising.
If you perform decimation after the interpolation on the MKV, it would be TDecimate(Cycle=6), because you want 1-in-6 decimation -
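To illustrate the idea behind cycle-based decimation, here is a toy Python sketch (my own illustration, not TDecimate itself - TDecimate compares real frame differences to pick the most-duplicate frame per cycle; the "diffs" scores below are made up):

```python
# Toy sketch of cycle-based decimation: in every cycle of 6 frames,
# drop the one most similar to its predecessor (the likely duplicate).
# "diffs" are hypothetical per-frame difference scores, not real metrics.

def decimate_cycle(frames, diffs, cycle=6):
    out = []
    for start in range(0, len(frames), cycle):
        chunk = list(range(start, min(start + cycle, len(frames))))
        # index of the frame with the smallest difference = likely duplicate
        drop = min(chunk, key=lambda i: diffs[i])
        out.extend(frames[i] for i in chunk if i != drop)
    return out

frames = list(range(12))                       # 12 input frames, 2 cycles of 6
diffs = [9, 8, 7, 0, 6, 5, 9, 8, 7, 6, 0, 5]   # 0 marks a near-duplicate
print(decimate_cycle(frames, diffs))           # 10 frames remain
```

Five of every six frames survive, which is exactly the 1-in-6 decimation described above.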
-
-
Poisondeathray is exactly right: you MUST deal with the frame rate first. With any project, the very first thing you must do is make sure you have exactly one frame of video each time you step forward one frame. If you find that you have duplicates, where nothing happens when you step forward, or blends, where you see images from two adjacent frames, you have to deal with those issues before you do anything else or you will end up with a mess, like what you posted.
Also, I don't know RIFE, but it appears to generate the same artifacts as all the other motion estimation tools that I've used over the past twenty years. The classic is the garbage around the rifle when the video cuts to a closeup of the dancers.
Post the original footage of just the closeup shot (which is much easier to analyze). Make sure not to re-encode it; use a tool that simply cuts the video. Since this appears to be video and not film, I suspect this may be a PAL <--> NTSC frame rate issue. -
You're absolutely right! I started the entire project from scratch, trying to restore the frames after deinterlacing with QTGMC using ''restorefps''. In the end, I'll double the frame rate using Hybrid.
But now comes the most complicated part of this restoration: changing the colors and gamma of the video, since this raw file barely has the original colors that the show broadcast. There is a color remaster that someone did years ago, but it was heavily compressed, making the resolution look like crap, so I first tried to emulate the colors of that remaster using the script:
Code:
ColorYUV(gamma_y=60, off_y=-18)
ConvertToRGB(matrix="PC.601").RGBAdjust(r=190.0/167.0, g=163.0/126.0, b=140.0/87.0).ConvertToYV12(matrix="PC.601")
Tweak(sat=1.4)
and then doing a color finalization in Vegas Pro.
However, when increasing the levels and changing the colors, the video gives me several problems (banding, loss of detail...). Would it be possible to emulate the colors of the sample I will present using techniques that won't make the video look like my first sample, with its ''water paint'' effect?
Attached are 2 videos (the original interlaced source and a sample showing the colors I want in the video) -
Deinterlacing is NOT what you need to do. You need to do inverse telecine. They are two different things.
-
This was a blended ConvertFPS-style conversion from "NTSC" to "PAL", much like the one discussed in the other thread. Deinterlacing is the correct thing to do in this case, because you need access to all the fields in order for restorefps to "undo" the blends. The "ideal" restorefps value is different for that sample: it's closer to 0.5-0.6, different from the other thread. I would check the whole thing, or you might have to divide it up into sections for the cleanest deblending.
"3,Colors.VOB" as a "target" has the wrong levels for DVD or normal video - it uses full range instead of "normal" range, where reference black is Y=16 and white is Y=235 (i.e. in Vegas you go below 0 IRE and above 100 IRE). On most displays you will clip shadow and highlight detail. I get it - there is a subset of people who like that oversaturated, high-contrast look - but it's technically wrong, technically "illegal", and you lose detail on most displays. Are you sure this is what you want? Or maybe you want to do this on purpose, to avoid seeing the noisy shadow detail instead of dealing with it?
Last edited by poisondeathray; 8th Mar 2025 at 12:23.
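For context, the relationship between full-range and limited ("legal") 8-bit luma is simple arithmetic. A quick sketch (my own illustration, not from the thread):

```python
# Map full-range luma (0-255) into limited/"legal" range (16-235),
# where reference black is Y=16 and reference white is Y=235.

def full_to_limited(y_full):
    return round(16 + y_full * 219 / 255)

def limited_to_full(y_limited):
    return round((y_limited - 16) * 255 / 219)

print(full_to_limited(0), full_to_limited(255))   # 16 235
print(limited_to_full(16), limited_to_full(235))  # 0 255
# In a limited-range signal, anything below Y=16 or above Y=235 is
# "illegal" (below 0 IRE / above 100 IRE) and clips on most displays.
```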
-
-
So my friend,
I like this color level, because to me the raw file has very dull colors. However, I don't know how to change the levels that much without causing so many quality problems. I'm very new to Avisynth techniques, but from what I've noticed, if I'm not mistaken, converting the colors to RGB makes the image lose a lot of quality, or is it just me?
Is there a better adjustment script to get to a level close to the ''3, Colors.VOB'' sample without causing so many side effects, or would it really be impossible? -
Whenever you make color adjustments, you are usually limited by the noise and compression artifacts already in the source. The more you alter in the grade, the more artifacts and garbage get enhanced. You are increasing contrast, so the artifacts' contrast gets increased and they become more visible as well. This is the main issue in this scenario in terms of artifacts: they are already present, and in some frames quite bad.
Another factor is 8-bit color manipulation - it introduces banding. You can see gaps in the waveform/histograms as you make manipulations. You can work at a higher bit depth and dither on the downconversion, and/or add noise or grain to reduce the visible problem. But that is a relatively minor issue compared to the above.
Another factor is deblending - blurring and blending can help obscure artifacts. When you deblend and align images, the image becomes clearer, but some artifacts also become clearer.
In that DVD source, the scenes with higher motion tend to have more problems from compression artifacts - this is expected. But if you denoise/deblock/deband using one set of settings, you will degrade the "good" sections more than you have to, predisposing you to the "waterpainting" effect. Ideally you would filter different sections differently: don't denoise the cleaner sections so heavily, and apply stronger filters to the frames with heavy artifacts.
I'm very new to Avisynth techniques, but from what I've noticed, if I'm not mistaken, when converting colors to RGB the image loses a lot of quality, or is it just me?
Is there a better adjustment script to get to a level close to the ''3, Colors.VOB'' sample without causing so many side effects, or would it really be impossible?
If matching the "3,Colors.VOB" colors is your main goal, I already suggested doing it in other programs like Resolve in your other thread. But you should be aware that the "reference" is technically "wrong" because it has "illegal" levels. Also, the "3,Colors.VOB" reference does not match photos of the concert - maybe that's what you want - but it looks less authentic to me compared to the photos. The stage lighting is colored, but not as severely as in "3,Colors.VOB".
https://en.wikipedia.org/wiki/Re-Invention_World_Tour
You can export a LUT from whatever program (e.g. Resolve, the NLE of your choice, etc.) and apply it in Avisynth with full-range steps, then convert to limited range for the final output - it will be closer than trying to use Avisynth filters directly (at least I find it more difficult to match colors in Avisynth compared to other programs).
Apply this LUT before whatever you used to denoise or process the ProRes step (something happened with your ProRes sample: it shows highlight compression compared to the source), but after double-rate deinterlacing and restorefps. You might have to adjust your denoising / other filters a bit, and the end pixel format (I used YUV420P8 for the demo). If you don't have an Nvidia GPU, you can use AVSCube instead of DGCube.
Code:
. . .
z_convertformat(pixel_type="RGBP16", colorspace_op="170m:709:709:f=>rgb:709:709:f")
DGCube("PATH\roughgrade.cube", in="full", lut="full", out="full", interp="tetrahedral")
z_convertformat(pixel_type="YUV420P8", colorspace_op="rgb:709:709:f=>170m:709:709:f")
. . .
-
For what it's worth, here is a demo of what gets damaged when applying a standard 8-bit limited YUV -> full-range RGB conversion to the original .VOB source. The cyan pixels in the right panel indicate the damaged pixels, meaning those which got one or several (R,G,B) components clipped in the YUV->RGB conversion.
-
-
Absolutely. It has both some illegal YUV values violating the limited YUV range, plus many YUV values which are well within the limited range (see the histogram) but outside the inner RGB block of the YUV cube. It would be less dramatic for a limited YUV -> limited RGB conversion (for editing, all in the 8-bit integer realm).
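As a numeric sketch of that point: a YUV triplet can be entirely within the legal limited range and still fall outside the RGB cube. Using the common approximate BT.601 limited-YUV to full-RGB coefficients (my own illustration):

```python
# BT.601 limited-range YUV -> full-range RGB (approximate coefficients).
def yuv_to_rgb(y, u, v):
    c, d, e = y - 16, u - 128, v - 128
    r = 1.164 * c + 1.596 * e
    g = 1.164 * c - 0.392 * d - 0.813 * e
    b = 1.164 * c + 2.017 * d
    return r, g, b

# Y=210, U=240, V=128 is legal limited-range YUV (Y <= 235, U <= 240)...
r, g, b = yuv_to_rgb(210, 240, 128)
print(round(b, 1))            # far above 255, so B must clip
clipped = min(max(round(b), 0), 255)
print(clipped)                # 255 - detail in that channel is lost
```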
(Sometimes there is confusion about the usage and meaning of the term "legal", I think.)
Last edited by Sharc; 9th Mar 2025 at 13:52.
-
-
"Cannot init CUDA"
Do you have a supported Nvidia GPU? Maybe you need to update your drivers.
If no supported Nvidia GPU, you can use AVSCube
http://avisynth.nl/index.php/AVSCube -
-
Instead of DGCube, call it with Cube. The other default settings are the same and can be left out.
Code:
Cube("PATH\roughgrade.cube")
-
The LUT colors are incredibly close,
but the 8-bit manipulation on this source really gives me a lot of gaps in the histograms, as you said. I just don't technically understand the part where you said "You can work at a higher bit depth and dither on the downconversion". Is there any filter I can use to fill these gaps, or would it just be debanding filters like ''GradFun3()'' or grain?
There is another source that manipulated the colors and managed to even out these gaps, and even added artificial details. I don't know what technique was used in this case, especially for the added details -
Most of the problems probably occur after your other filters, like denoising.
If you can, use higher bit depth filters, and the downconversion can use dithering such as error diffusion (Floyd-Steinberg).
Depending on the other filters used, the downconversion and pixel-format conversion steps can use dithering for the bit depth conversion. For the demo I used YUV420P8, but you should use a higher bit depth for the other steps such as denoising, if those filters support it. In general, 10- or 16-bit filtering will have fewer additional problems with banding introduced by the filters and calculations. (You started with a crappy 8-bit source; working at a higher bit depth won't magically make those problems disappear, it just reduces the additional problems caused by 8-bit manipulations.)
So RGBP16 (16-bit RGB) gets converted to 10-bit 4:2:0 YUV, using error diffusion for the dithering. You might use 16-bit if your other filter steps accept that pixel format.
Code:
z_convertformat(pixel_type="YUV420P10", colorspace_op="rgb:709:709:f=>170m:709:709:f", dither_type="error_diffusion")
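The benefit of error diffusion on a bit depth downconversion can be shown with a toy 1-D Python sketch (a simplified serial diffusion, not the real 2-D Floyd-Steinberg kernel used by converters):

```python
# Quantize a flat high-precision signal to integers, carrying the
# rounding error forward (simplified 1-D error diffusion).
def diffuse(values):
    out, err = [], 0.0
    for v in values:
        t = v + err
        q = int(t + 0.5)      # round to the nearest integer step
        err = t - q           # propagate the residual error
        out.append(q)
    return out

signal = [100.4] * 10             # a flat patch with a fractional level
print(diffuse(signal))            # mix of 100s and 101s
print([int(v) for v in signal])   # plain truncation: all 100s (banding)
# Diffusion preserves the 100.4 mean; truncation shifts the level,
# which is what shows up as flat bands and histogram gaps.
```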
But yes, moderate to heavy denoising would require additional debanding, such as the GradFun family or f3kdb, or grain at the very end.
There is another source that manipulated the colors and managed to even out this gap and even added artificial details, I don't know what technique was used in this case, especially about the added details -
Sorry to be giving you so much trouble. Could you suggest a denoising and debanding method that is ideal for this situation?
I couldn't work out the syntax for the ''Neo_f3kdb'' filter, and when I try to run any of the ''GradFun'' family with YUV420P10, Avisynth doesn't seem to support it and closes by itself -
No - because all the filtering (denoising, debanding, sharpening, etc., and/or machine-learning filters) depends on subjective personal taste.
e.g. you like certain color manipulations, but I dislike them. You might like the look of some filters; I might dislike them.
The only step that I would consider mandatory is the deblending step
I couldn't work out the syntax for the ''Neo_f3kdb'' filter, and when I try to run any of the ''GradFun'' family with YUV420P10, Avisynth doesn't seem to support it and closes by itself
http://avisynth.nl/index.php/F3kdb
You call them with neo_f3kdb or f3kdb, and adjust the settings as in the description.
For GradFun, there is GradFun3DBmod, based on GradFun3, which supports high bit depth:
https://github.com/Asd-g/AviSynthPlus-Scripts/blob/master/GradFun3DBmod.avsi -
-
In Avisynth the input clip argument can use "implied last", i.e. whatever clip preceded it. Otherwise you can use "last", or some other clip variable.
Code:
. .
neo_f3kdb(range=15, Y=64)
ex_luts is found in dogway's ex_tools
https://github.com/Dogway/Avisynth-Scripts/blob/master/ExTools.avsi