Just tried converting a 40-second DV clip to HD with the modified settings from above:
- Hybrid took 30 minutes to output the file. Is this a usual sort of time?
Also, I can't get H.264 to export using any settings I've tried. It crashes every time, even when adding to the queue produces no errors. ProRes is fine, but it would be nice to need only one step to create a final file to send to clients. Any advice please?
-
Hybrid has a cut option which uses trim in Vapoursynth.
You can enable it through "Config->Internals->Cut Support" (make sure to read the tool-tips); this enables additional controls in the Base-tab.
Also, I can't get H.264 to export using any settings I've tried. It crashes every time, even when adding to the queue produces no errors. ProRes is fine, but it would be nice to need only one step to create a final file to send to clients. Any advice please?
-> read https://www.selur.de/support and either post here in the Hybrid thread or over in my own forum with details.
CU Selur -
There is no real "GUI" for VapourSynth; the closest thing would be VapourSynth Editor (vsedit), where you can preview scripts and play them. You can also benchmark scripts and determine where the bottlenecks are.
You can use Trim in the script to specify a range
e.g. frames 100 to 300 inclusive
clip = core.std.Trim(clip, first=100, last=300)
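For a complete, runnable version of that, a minimal script might look like the sketch below (the ffms2 source filter and file name are placeholders; Hybrid generates its own source line):

import vapoursynth as vs
core = vs.core
clip = core.ffms2.Source("input.dv")             # placeholder source; use whatever indexer you actually have
clip = core.std.Trim(clip, first=100, last=300)  # keep frames 100-300 inclusive
clip.set_output()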
It seems slow. What is your CPU and GPU % usage during an encode?
It might be your GPU's OpenCL performance causing a bottleneck. Is it a discrete GPU or an Intel GPU on the MacBook? If it's an Intel GPU, I suspect znedi3_rpow2 might be faster in your case than OpenCL using nnedi3cl_rpow2. When I checked on a Windows laptop, it was a few times slower using the iGPU than a discrete GPU or the CPU (in short, iGPU OpenCL performance is poor).
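If you want to test that, the swap is essentially a one-line change in the script. A sketch only - the wrapper names below are the common ones, but the module they live in may differ depending on your setup:

from edi_rpow2 import znedi3_rpow2, nnedi3cl_rpow2  # assumed module name - adjust to whatever your script imports
# clip = nnedi3cl_rpow2(clip, rfactor=2)   # OpenCL path - can be slow on Intel iGPUs
clip = znedi3_rpow2(clip, rfactor=2)       # CPU path - often faster when the only GPU is an integrated one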
You can use vsedit's benchmark function to optimize the script. You have to switch out filters, play with settings, and remeasure. Ideal settings are going to be different for different hardware setups.
I don't use Hybrid, but the ffmpeg demuxer also makes a difference: -f vapoursynth_alt is generally faster than -f vapoursynth when more than one filter is used.
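For reference, the demuxer is selected with ffmpeg's -f switch; a hedged example, assuming an ffmpeg build that actually includes these VapourSynth demuxers (the encoder settings are just placeholders):

ffmpeg -f vapoursynth_alt -i script.vpy -c:v prores_ks -profile:v 3 output.mov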
Also "DV" is usually BFF, and you set FieldBased=1 (BFF), but you have TFF=True in the QTGMC call. Usually the filter argument overrides the frame props
Also, there is no 601=>709 matrix conversion. Usually HD is "709" by convention, otherwise the colors will get shifted when playing back the HD version.
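One way to handle that is to fold the matrix conversion into the upscale itself; a sketch, assuming the SD source is SMPTE 170M/601 and using a placeholder target size:

clip = core.resize.Spline36(clip, width=1440, height=1080, matrix_in_s="170m", matrix_s="709")  # upscale and convert 601 -> 709 in one step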
I would post in another thread or Selur's forum, because this is the wrong thread to deal with those topics. -
This thread was hilarious to read
but you nailed the point right there. You either spend $200 to get 80%+ of the result with one click, or you spend countless hours doing trial and error to get 100% of the result for free. That's all there is to it. I don't think "AI" was ever meant to do anything better than a human can do manually. At the end of the day, what would I prefer to do? Spend 2 hours learning about Kubernetes deployment or spend that time producing multiple clips and mucking around with Avisynth plugins? Yep, I paid $200 and spent that extra free time deploying Plex on a Kubernetes cluster, which is more fun and interesting.
-
Usually HD is "709" by convention, otherwise the colors will get shifted when playing back the HD versionusers currently on my ignore list: deadrats, Stears555
-
Flagging does not change the actual colors in YUV, but you should do both and flag it properly too - that's best practices.
When you upscale SD=>HD, typically you perform a 601=>709 colormatrix (or similar) conversion. When you perform an HD=>SD conversion, you typically perform a 709=>601 colormatrix (or equivalent) conversion.
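In VapourSynth terms, both directions are a single resize call; a sketch with placeholder clip names and sizes, assuming the zimg-based resizer:

hd = core.resize.Spline36(sd_clip, width=1440, height=1080, matrix_in_s="170m", matrix_s="709")  # SD -> HD, convert 601 -> 709
sd = core.resize.Spline36(hd_clip, width=720, height=480, matrix_in_s="709", matrix_s="170m")    # HD -> SD, convert 709 -> 601
-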
Flagging does not change the actual colors in YUV,...
..that's best practices.
When you upscale SD=>HD, typically you perform a 601=>709 colormatrix (or similar) conversion. When you perform an HD=>SD conversion, you typically perform a 709=>601 colormatrix (or equivalent) conversion.
-> I agree that:
a. flagging according to the color characteristics is necessary
b. it is 'best practice' to convert the color characteristics (and adjust the flagging accordingly) when converting between HD and SD.
But this conversion is not needed if your playback devices/software properly honor the color flags.
Cu Selur
Ps.: no clue whether Topaz now properly adjusts the color matrix and flagging,... (an early version I tested with did not) -
Not what I said...
Flagging is best practices - it's not necessary, but ideal.
Applying the colormatrix transform is far more important, today just as it was in the past. It's critical.
If you had to choose one, the actual color change with colormatrix (or similar) is drastically more important than flags; you can get away with applying the colormatrix transform without flagging (undef) and it will look OK in 99% of scenarios. But the reverse is not true: flagging SD matrices (601/170m/470bg, etc.) at HD dimensions is problematic. It might be OK in a few specific software players. It's easy to demonstrate with colorbars.
You should cover your bases, because SD color and flags at HD resolution will cause trouble in many scenarios - web, YouTube, many portable devices, NLEs, some players... Flags today are still much less important than the "709 for HD" assumption.
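If you want to see the difference for yourself, a rough sketch of the two cases (clip names are placeholders; 6 and 1 are the _Matrix values for SMPTE 170M and BT.709):

flag_only = core.std.SetFrameProp(hd_upscaled, prop="_Matrix", intval=6)  # no conversion, just flag the (still 601) data - many players/sites assume 709 for HD and the colors shift
converted = core.resize.Spline36(sd_clip, width=1920, height=1080, matrix_in_s="170m", matrix_s="709")  # actually remap the YUV values - looks right almost everywhere, even if the flag gets lost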
Topaz fixed a bunch of bugs with the RGB conversion; it should be OK now for 601/709. I didn't check whether the interlaced chroma upscaling bug was fixed. -
Just found this interesting colab project: https://github.com/AlphaAtlas/VapourSynthColab
-
Gigapixel and Video Enhancer use settings derived from machine learning to perform the scaling of images. The difference is in how they re-implement the same process afterward.
Gigapixel works with single images by creating multiple tests and comparing, which is very similar to the original machine learning process used to generate the initial settings (statistical mappings for the tonal and shape areas that are detected). I know because I ran tests and monitored my system. It also used a DAT file, or a basic data file, which I can only assume was marking the different identified areas of the image before applying the algorithm for each detected area. You can still adjust the blur and the noise level generated in Gigapixel, which really just determines how it weights motion edges and the depth of contrast noise removal, not color noise. With two sliders, you adjust how the final output is handled, not the algorithm itself, which is handled only by the machine-learning-based process.
Video Enhancer uses similar starting points but only makes one or two comparisons before applying the adjustment. It also only applies an adjustment to the areas of the data that exist in the frame, since most frames share a little data with earlier frames and a little with the next few frames. If you set your output to an image type, it uses detected tonal areas from earlier frames to apply the adjustment to tonal areas, to keep it consistent. If the data in the frames changes rapidly, however, you may get some artifacts appearing in the motion blur.
That's why there are different whole algorithms to use in the Video app. The first part is the decoder, which affects input quality; the next is the processing style (how much weight to give the frames before and after); and the last part is how to re-encode the image data generated. Even TIFF mode gives you a compressed file, so the manner of compression must be respected.
If you have a lot of motion in your file, you might be better off processing the file in another encoder, doubling the frame rate, and using a FRAME BLENDING algorithm to create intermediate frames. For me, on videos of up to 2 minutes at 30i or 30p as input, running through Media Encoder and re-encoding with frame blending at double the frame rate did create a sad motion effect, but afterward I just re-encoded again to the original rate and there were very few artifacts, if any at all.
For interlaced media, I've found it more effective to do an old-school deinterlace by dropping half the fields, then run the app. YES, it treats all video like an interlace as part of the algorithm, but not in the same fashion as an interlace: it uses the alternate field to "guess" the data between the two for the upscale. This is why you are better off re-encoding the video at a lower resolution as progressive frame data before you put it into the app.
Apple Compressor used to do a similar comparison to track interlaced motion so that it could then REFRAME the video as progressive. It took a long time to run, but the results were good. After that, a slight edge blur in After Effects removed the aliasing edge, keeping true to the fuller resolution. Using a blend mode to apply it to the data only in the dark areas kept it to most of the edges; however, an edge detection with a black-and-white wash, turned into a mask for the same effect on an adjustment layer set to darken or multiply, also had great results.
I'm thinking of testing this app again soon, with one of those videos, if I can find them.
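For what it's worth, the "drop half the fields" step described above can be approximated in a few lines of VapourSynth - a sketch only, not the poster's actual workflow, with the source filter and field order as assumptions:

import vapoursynth as vs
core = vs.core
clip = core.ffms2.Source("input.avi")                  # placeholder source
fields = core.std.SeparateFields(clip, tff=False)      # split frames into fields (assuming BFF material)
half = fields[::2]                                     # keep every other field, discard the rest
progressive = core.resize.Spline36(half, half.width, half.height * 2)  # stretch back to full frame height
progressive.set_output()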