VideoHelp Forum




  1. As in:
    - install via the following steps
    - pip x, y, z to satisfy python reqs
- a start-to-finish upscale example using model A with runtime B (ideally TRT) via the vsmlrt.py wrapper
    - the same start to finish using models directly
    etc.

    If there's nothing available, anyone care to provide one?

P.S. I have read the wiki but am looking for a bit more hand-holding in this instance.
IIRC you don't really need to install any additional dependencies (assuming you have VapourSynth working) aside from the stuff on the release page.
    Download everything from one release (aside from the source code packages) and extract it into a folder.
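    The release ships one plugin DLL per runtime, so load whichever one matches the backend you plan to use. A quick sketch (paths are just examples; plugin file names as in the vs-mlrt releases):
    Code:
    import vapoursynth as vs
    core = vs.core
    # load one plugin per runtime you actually want to use:
    core.std.LoadPlugin(path="F:/Hybrid/64bit/vs-mlrt/vstrt.dll")   # TensorRT     -> Backend.TRT
    # core.std.LoadPlugin(path=".../vsort.dll")  # ONNX Runtime -> Backend.ORT_CUDA / ORT_CPU
    # core.std.LoadPlugin(path=".../vsov.dll")   # OpenVINO     -> Backend.OV_CPU / OV_GPU
    # core.std.LoadPlugin(path=".../vsncnn.dll") # ncnn/Vulkan  -> Backend.NCNN_VK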
    Then edit your VapourSynth script to:
    a. load the dll you use (for TRT it's vstrt.dll)
    b. import vsmlrt.py
    c. convert your content to RGBS
    d. use vsmlrt
    e. convert back to the color space you want.
    Here's a simple example script:
    Code:
    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    import site
    import sys
    import os
    core = vs.core
    # Import scripts folder
    scriptPath = 'F:/Hybrid/64bit/vsscripts'
    sys.path.insert(0, os.path.abspath(scriptPath))
    os.environ["CUDA_MODULE_LOADING"] = "LAZY"
    # Loading Plugins
    core.std.LoadPlugin(path="F:/Hybrid/64bit/vs-mlrt/vstrt.dll")
    core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
    core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
    # Import scripts
    from importlib.machinery import SourceFileLoader
    vsmlrt = SourceFileLoader('vsmlrt', 'F:/Hybrid/64bit/vs-mlrt/vsmlrt.py').load_module()
    # source: 'G:\TestClips&Co\test.avi'
    # current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
    # Loading G:\TestClips&Co\test.avi using LWLibavSource
    clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/test.avi", format="YUV420P8", stream_index=0, cache=0, prefer_hw=0)
    # Setting detected color matrix (470bg).
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    # Setting color transfer info (470bg), when it is not set
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    # Setting color primaries info (), when it is not set
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 25
    clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
    clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
    # changing range from limited to full range
    clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
    # Setting color range to PC (full) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=0)
    # adjusting color space from YUV420P8 to RGBS for vs-mlrt
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="full")
    # resizing using VSMLRT
    from vsmlrt import Backend
    clip = vsmlrt.inference([clip],network_path="F:/Hybrid/64bit/onnx_models/1x_BleedOut_Compact_300k_net_g.onnx", backend=Backend.TRT(fp16=True, device_id=0,num_streams=1,verbose=True,use_cuda_graph=False, workspace=1 << 30,builder_optimization_level=3))
    # resizing 640x352 to 640x352
    # adjusting resizing
    clip = core.fmtc.resample(clip=clip, w=640, h=352, kernel="spline64", interlaced=False, interlacedd=False)
    # changing range from full to limited range
    clip = core.resize.Bicubic(clip, range_in_s="full", range_s="limited")
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="full", dither_type="error_diffusion")
    # set output frame rate to 25fps (progressive)
    clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
    # Output
    clip.set_output()
    Note that using TRT means that .engine files will get built, which can take quite some time. (TRT takes ages to build the engine files, but is 'fast' after that.)
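    If you want to avoid the engine-build wait, or prefer the bundled models over pointing at an ONNX file yourself, vsmlrt.py also ships wrappers such as Waifu2x, DPIR, CUGAN and RealESRGANv2, and the backend can simply be swapped. A rough sketch only (helper and parameter names from memory of vsmlrt.py, so double-check them against the wiki/source):
    Code:
    from vsmlrt import DPIR, DPIRModel, Backend
    # same RGBS, full-range clip as in the script above
    # ORT_CUDA runs through ONNX Runtime, so no TensorRT engine gets built
    backend = Backend.ORT_CUDA(device_id=0, num_streams=1, fp16=True)
    clip = DPIR(clip, strength=5, model=DPIRModel.drunet_color, backend=backend)
    For that backend you load vsort.dll instead of vstrt.dll. With TRT the built engines are cached and reused on later runs, so the long build normally only happens the first time for a given model and settings.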


    Cu Selur
    Ps.: depending on your general video-processing understanding, try playing around with Hybrid. If it's to your liking and not too complicated, I can send you some links tomorrow to my latest dev version with the torch and vsmlrt addons, which integrate vsmlrt and a bunch of other AI tools into Hybrid.
  3. Thanks a lot. I will try that out.

    Funnily enough, I was playing around with Hybrid just now (I've also used it in the past before I got into vapoursynth)! Please do send me the links.

    P.S. what sort of footage has a 470bg color matrix?
  4. 470bg is the same matrix as bt.601, so lots of SD content.
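    In a script that just means passing the 601/470bg matrix when converting to RGB, e.g. (illustrative only):
    Code:
    # _Matrix=5 is "bt470bg" in the H.273 numbering, i.e. the usual SD/BT.601 matrix
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    rgb = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="470bg", range_s="full")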


