VideoHelp Forum


  1. Just tried a 40 second DV clip to HD with modified settings from above:
    - Hybrid took 30 minutes to output the file. Is this a usual sort of time?

    Also, I can't get H264s to export using any settings I've tried. It crashes every time, even if I manage to get no errors when adding to the queue. ProRes is fine, but would be nice to only need one step to create a final file to send to clients. Any advice please?
    Last edited by Fryball; 3rd Feb 2021 at 05:29.
  2. Hybrid has a cut option which uses Trim in Vapoursynth.
    You can enable it through "Config->Internals->Cut Support" (make sure to read the tool-tips); this enables additional controls in the Base tab.

    Also, I can't get H264s to export using any settings I've tried. It crashes every time, even if I manage to get no errors when adding to the queue. ProRes is fine, but would be nice to only need one step to create a final file to send to clients. Any advice please?
    No clue what you are doing, no clue about the error.
    -> read https://www.selur.de/support and either post here in the Hybrid thread or over in my own forum with details.

    CU Selur
    users currently on my ignore list: deadrats, Stears555
  3. Originally Posted by Fryball View Post
    While I'm testing, it would be nice not to have to process a whole clip just to see the results. Is there a straightforward way to limit the output frames or trim the base clip?
    I guess this means importing a script into Vapoursynth, so is there one that I can make work with its GUI? If loading that is a process I only need to do once, I can follow instructions well; I just want to enable easy trimming for future use, if possible.

    There is no real "GUI" for vapoursynth; the closest thing would be vapoursynth editor (vsedit), where you can preview scripts and play them. You can also benchmark scripts and determine where the bottlenecks are.

    You can use Trim in the script to specify a range, e.g. frames 100 to 300 inclusive:

    clip = core.std.Trim(clip, first=100, last=300)
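
    A minimal sketch of how that Trim call might sit in a complete script you can open and preview in vsedit - the source filter (ffms2) and file name here are just assumptions; use whatever loader and path your Hybrid-generated script already contains:

    import vapoursynth as vs
    core = vs.core

    # assumed source filter and path - substitute the loader your script already uses
    clip = core.ffms2.Source("dv_capture.avi")

    # keep only frames 100 to 300 (inclusive) so test encodes finish quickly
    clip = core.std.Trim(clip, first=100, last=300)

    clip.set_output()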


    Originally Posted by Fryball View Post
    Just tried a 40 second DV clip to HD with modified settings from above:
    - Hybrid took 30 minutes to output the file. Is this a usual sort of time?


    It seems slow. What are your CPU and GPU % usage during an encode?

    It might be your GPU's OpenCL performance causing a bottleneck. Is it a discrete GPU or an Intel GPU on the MacBook? If it's an Intel GPU, I suspect znedi3_rpow2 might be faster in your case than OpenCL using nnedi3cl_rpow2. When I checked on a Windows laptop, it was a few times slower using the iGPU than a discrete GPU or the CPU (in short, iGPU OpenCL performance is poor).

    You can use vsedit's benchmark function to optimize the script. You have to switch out filters, play with settings, and remeasure. Ideal settings are going to be different for different hardware setups.

    I don't use Hybrid, but the ffmpeg demuxer also makes a difference: -f vapoursynth_alt is generally faster than -f vapoursynth when more than one filter is used.




    Also "DV" is usually BFF, and you set FieldBased=1 (BFF), but you have TFF=True in the QTGMC call. Usually the filter argument overrides the frame props

    Also, there is no 601=>709 matrix conversion. Usually HD is "709" by convention; otherwise the colors will get shifted when playing back the HD version.
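
    As a rough illustration (the resizer, target resolution and exact matrix names are assumptions, not necessarily what Hybrid generates), the matrix conversion can be folded into the upscale itself:

    # upscale and convert Rec.601 to Rec.709 in one step
    # (use matrix_in_s="470bg" instead of "170m" for a PAL source)
    clip = core.resize.Spline36(clip, width=1440, height=1080,
                                matrix_in_s="170m", matrix_s="709")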


    I would post in another thread or Selur's forum, because this is the wrong thread to deal with those topics.
  4. Originally Posted by Fryball View Post
    I really don't want to have to learn & try a million options just to do the basics; I just don't have the time or motivation, and will never earn enough from this for it to be worth it. Again, which is what drew me to the Topaz thing.
    This thread was hilarious to read, but you nailed the point right there. You either spend $200 to get 80%+ of the result with one click, or you spend countless hours doing trial-and-error in order to get 100% of the result for free. That's all there is to it. I don't think "AI" was ever meant to do anything better than a human can do manually. At the end of the day, what would I prefer to do? Spend 2 hours learning about Kubernetes deployment, or spend that time producing multiple clips and mucking around with Avisynth plugins? Yep, I paid $200 and spent that extra free time deploying Plex on a Kubernetes cluster, which is more fun and interesting.
  5. Usually HD is "709" by convention, otherwise the colors will get shifted when playing back the HD version
    Only if you didn't flag your output correctly and/or the playback device ignores the playback flag.
    users currently on my ignore list: deadrats, Stears555
  6. Originally Posted by Selur View Post
    Usually HD is "709" by convention, otherwise the colors will get shifted when playing back the HD version
    Only if you didn't flag your output correctly and/or the playback device ignores the playback flag.
    Flagging does not change the actual colors in YUV, but you should do both and flag it properly too - that's best practice.

    When you upscale SD=>HD, you typically perform a 601=>709 colormatrix (or similar) conversion. When you perform an HD=>SD conversion, you typically perform a 709=>601 colormatrix (or equivalent) conversion.
  7. Flagging does not change the actual colors in YUV,...
    No, that's not what I wanted to imply. If your colors are correct in 601, you didn't change the colors, and the output is correctly flagged as 601, then the player should still display the colors properly.
    ..that's best practices.
    When you upscale SD=>HD, you typically perform a 601=>709 colormatrix (or similar) conversion. When you perform an HD=>SD conversion, you typically perform a 709=>601 colormatrix (or equivalent) conversion.
    Yes, it's a best practice which was introduced because a lot of players did not honor the flagging of a source but either always used 601 or 709, or, if you were lucky, at least used 601 for SD and 709 for HD. (Some players also did not always honor TV/PC scale flagging, which is why it was 'best practice' to stay in TV scale or convert to TV scale at an early stage in the processing of a source, in case it wasn't TV scale.)
    -> I agree that:
    a. flagging according to the color characteristics is necessary
    b. it is 'best practice' to convert the color characteristics (and adjust the flagging accordingly) when converting between HD and SD.
    But this conversion is not needed if your playback devices/software properly honor the color flags.

    Cu Selur

    Ps.: no clue whether Topaz now properly adjusts the color matrix and flagging... (an early version I tested with did not)
    users currently on my ignore list: deadrats, Stears555
  8. Originally Posted by Selur View Post
    Flagging does not change the actual colors in YUV,...
    No, that's not what I wanted to imply. If your colors are correct in 601, you didn't change the colors, and the output is correctly flagged as 601, then the player should still display the colors properly.
    ..that's best practices.
    When you upscale SD=>HD, you typically perform a 601=>709 colormatrix (or similar) conversion. When you perform an HD=>SD conversion, you typically perform a 709=>601 colormatrix (or equivalent) conversion.
    Yes, it's a best practice which was introduced because a lot of players did not honor the flagging of a source but either always used 601 or 709, or, if you were lucky, at least used 601 for SD and 709 for HD. (Some players also did not always honor TV/PC scale flagging, which is why it was 'best practice' to stay in TV scale or convert to TV scale at an early stage in the processing of a source, in case it wasn't TV scale.)
    -> I agree that:
    a. flagging according to the color characteristics is necessary
    b. it is 'best practice' to convert the color characteristics (and adjust the flagging accordingly) when converting between HD and SD.
    But this conversion is not needed if your playback devices/software properly honor the color flags.

    Cu Selur

    Ps.: no clue whether Topaz now properly adjusts the color matrix and flagging... (an early version I tested with did not)
    Not what I said...

    Flagging is best practice - it's not strictly necessary, but ideal.

    Applying the colormatrix transform is far more important, both today and in the past. It's critical.

    If you had to choose one, the actual color change with colormatrix (or similar) is drastically more important than flags; you can get away with applying the colormatrix transform without flagging (undef) and it will look OK in 99% of scenarios. But the reverse is not true: SD flagging (601/170m/470bg etc.) with HD dimensions is problematic. It might be OK in a few specific software players. It's easy to demonstrate with colorbars.

    You should cover your bases, because SD color and flags at HD resolution will cause trouble in many scenarios - web, YouTube, many portable devices, NLEs, some players... Flags today are still much less important than the "709 for HD" assumption.
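
    To make the "do both" point concrete: in a VapourSynth script the matrix tag lives in the _Matrix frame prop, while the flag the player actually reads is written by the encoder. A hedged sketch (1 = BT.709 in the H.273 numbering VapourSynth uses):

    # tag the clip as BT.709 if your filters left the matrix prop undefined
    clip = core.std.SetFrameProp(clip, prop="_Matrix", intval=1)
    # the stream-level flag is then set at encode time, e.g. with x264:
    #   x264 --colormatrix bt709 --colorprim bt709 --transfer bt709 ...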



    Topaz fixed a bunch of bugs with the RGB conversion, so it should be OK now for 601/709. I didn't check whether the interlaced chroma upscaling bug was fixed.
  9. Just found this interesting Colab project: https://github.com/AlphaAtlas/VapourSynthColab

    Originally Posted by VapourSynthColab
    A Google Colab notebook set up for both conventional and machine learning-based video processing. Run ESRGAN or MXNet models, OpenCL and CUDA filters, and CPU filters on video frames simultaneously in VapourSynth scripts, or use VapourSynth filters to pre/post process videos for other ML Colab projects, and do it all in the cloud for free.
    Please do not abuse.
  10. Gigapixel and Video Enhancer use settings derived from machine learning to perform the scaling of images. The difference is in how they re-implement the same process afterward.
    Gigapixel works with single images by creating multiple tests and comparing, which is very similar to the original machine learning process used to generate the initial settings (statistical mappings for the tonal and shape areas that are detected). I know because I ran tests and monitored my system. It also used a DAT file, a basic data file, which I can only assume was marking the different identified areas of the image before applying the algorithm for each detected area. You can still adjust the blur and the noise level generated in Gigapixel, which really just determines how it weights motion edges and the depth of contrast noise removal (not color noise). With the two sliders you adjust how the final output is handled, not the algorithm itself, which is handled only by the machine-learning-based process.

    Video Enhancer uses similar starting points but only makes one or two comparisons before applying the adjustment. It also only applies an adjustment to the areas of the data that exist in the frame, since most frames share a little data with earlier frames and a little with the next few frames. If you set your output to an image type, it uses detected tonal areas from earlier frames to apply the adjustment to tonal areas, to keep it consistent. If the data in the frames changes rapidly, however, you may get some artifacts appearing in the motion blur. That's why there are different whole algorithms to use in the Video app. The first part is the decoder, which affects input quality; the next is the processing style (how much weight to give to the frames before and after); and the last part is how to re-encode the image data generated. Even TIFF mode gives you a compressed file, so the manner of compression must be respected.

    If you have a lot of motion in your file, you might be better off processing the file in another encoder, doubling the frame rate, and using a FRAME BLENDING algorithm to create intermediate frames. For me, on videos of up to 2 minutes at 30i or 30p as input, running through Media Encoder and re-encoding with frame blending for double the frame rate did create a sad motion effect, but afterward I just re-encoded again at the original rate and there were very few artifacts, if any at all.

    For interlaced media, I've found it more effective to do an old-school deinterlace by dropping half the fields, then run the app. Yes, it treats all video like an interlace as part of the algorithm, but not in the same fashion as an interlace: it uses the alternate field to "guess" the data between the two for the upscale. This is why you are better off re-coding the video at a lower resolution as progressive frame data before you put it into the app. Apple Compressor used to do a similar comparison to track interlaced motion so that it could then reframe the video as progressive. It took a long time to run, but the results were good.

    After that, a slight edge blur in After Effects removed the aliasing edge while keeping true to the fuller resolution. Using a blend mode to apply it only to the data in the dark areas kept it to most of the edges; however, an edge detection with a black-and-white wash, turned into a mask for the same effect on an adjustment layer set to darken or multiply, also had great results. I'm thinking of testing this app again soon with one of those videos, if I can find them.
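
    For what it's worth, the "drop half the fields" step described above has a rough equivalent in the VapourSynth terms used earlier in the thread - a sketch only, with field order and resizer assumed, and not what the app does internally:

    # split each interlaced frame into its two fields, keep one field per frame,
    # then scale back to full frame size as a plain progressive clip
    fields = core.std.SeparateFields(clip, tff=False)
    clip = core.resize.Spline36(fields[::2], width=clip.width, height=clip.height)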
  11. This repository collects the state-of-the-art algorithms for video/image enhancement using deep learning (AI) from recent years, including super resolution, compression artifact reduction, deblocking, denoising, image/color enhancement, and HDR:

    https://github.com/jlygit/AI-video-enhance
  12. Thanks.
    From the looks of it, it only contains short summaries (in Chinese), but it also contains links to the original papers.

    Cu Selur
    users currently on my ignore list: deadrats, Stears555


