VideoHelp Forum

  1. Originally Posted by chris319
    Python would be too slow for pixel-by-pixel.
    NumPy is not slow, and neither is VapourSynth or Avisynth: they use C or C++ internally for frame manipulation, or load plugin DLLs written in C or C++ (I am not sure which for every function; they could be written in either).
    I have a slow PC, one of the first i5s, more than 10 years old, and at SD resolution that top code, piping from ffmpeg, runs at 220 fps; 1920x1080 video processes at 30 fps. I mostly use laptops and it is fast enough. The script I posted in the previous thread runs very fast as well. If you encode video from that script, the script's overhead is negligible, because the encoding takes far longer by comparison.
    ctypes.memmove is very fast; that is C as well. Converting a VapourSynth frame to a NumPy array is also very fast, milliseconds per frame, since these are just memory operations. Loading the video with a VapourSynth source plugin such as lsmas.LibavSMASHSource gives similar speeds, and you can seek back and forth. With just a pipe, as you do it, access is one-way only: one read, and the frame is gone.
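    As an illustration of the kind of memory operation meant here (a minimal sketch; the plane size and the simulated pipe data are hypothetical), ctypes.memmove can copy a raw 8-bit plane into a NumPy array in a single C-level call:

    ```python
    import ctypes
    import numpy as np

    # simulate a raw 8-bit luma plane as it would arrive from a pipe
    w, h = 640, 480
    raw = bytes(range(256)) * (w * h // 256)

    # copy the bytes into a NumPy array with one C-level memmove call
    plane = np.empty((h, w), dtype=np.uint8)
    ctypes.memmove(plane.ctypes.data, raw, len(raw))
    ```

    The copy is a flat memory move, so the array just has to have the same total byte count as the source buffer.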

    Let me return to that last sentence: with a pipe you cannot view the video in the sense of seeking back and forth; you get one shot at each frame. That is why I included lots of try/except blocks in that script: if you view the video, you can get all sorts of weirdness or errors, and how else would you know the pipe needs to be terminated when you simply decide to stop viewing? An ffmpeg pipe must be terminated manually if you do not read all the frames. Maybe this is why you have so many problems with this: the pipe gives you little feedback about what you are doing.
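    The one-shot, terminate-manually behaviour can be sketched without ffmpeg (here a small Python child process stands in for the ffmpeg rawvideo pipe, and the frame dimensions are made up):

    ```python
    import subprocess
    import sys

    # stand-in for an ffmpeg rawvideo pipe: a child process that writes
    # 10 "frames" of w*h bytes each to stdout
    w, h, nframes = 16, 9, 10
    child = subprocess.Popen(
        [sys.executable, "-c",
         f"import sys; [sys.stdout.buffer.write(bytes({w * h})) for _ in range({nframes})]"],
        stdout=subprocess.PIPE)

    frames_read = 0
    while frames_read < 3:               # stop early, as when a viewer quits
        data = child.stdout.read(w * h)  # one read, and the frame is gone
        if len(data) < w * h:
            break                        # pipe exhausted
        frames_read += 1

    # the pipe is one-way: if we stop before EOF, the producer must be
    # terminated manually, exactly as with an ffmpeg pipe
    child.terminate()
    child.wait()
    ```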

    I know what you are trying to do now; I read the thread you are active in and looked at the code, which is why I posted some Python code for comparison. You include something for yourself that limits RGB. Now you need the same thing, but piping 10-bit YUV rather than RGB derived from 8-bit YUV.
    All you need is to work out the arguments, starting with the lines I mentioned, and settle on a single VapourSynth YUV-to-RGB conversion line. Instead you try to come up with completely new code again just because you use 10-bit YUV. This is exactly what Avisynth and VapourSynth are for: thorough per-pixel video manipulation, piped to encoders with ease. For that matter, so are OpenCV (which uses NumPy and has calls to put frames on screen right away), PIL, or even Qt (though that is mostly for the GUI).
    Last edited by _Al_; 4th Mar 2020 at 21:43.
  2. I just tested converting YUV to RGB manually with those particular coefficients, and also the same conversion done by a single VapourSynth line. The outputs are identical.
    r = (255/219)*y + (255/112)*v*(1-Kr) - (255*16/219 + 255*128/112*(1-Kr)) 
    g = (255/219)*y - (255/112)*u*(1-Kb)*Kb/Kg - (255/112)*v*(1-Kr)*Kr/Kg
        - (255*16/219 - 255/112*128*(1-Kb)*Kb/Kg - 255/112*128*(1-Kr)*Kr/Kg) 
    b = (255/219)*y + (255/112)*u*(1-Kb) - (255*16/219 + 255*128/112*(1-Kb))
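    As a quick sanity check of these coefficients (plain Python, separate from the script below), reference black and reference white should map to 0 and 255 respectively:

    ```python
    # BT.709 luma coefficients, as in the formulas above
    Kr, Kg, Kb = 0.2126, 0.7152, 0.0722

    def yuv_to_rgb(y, u, v):
        """Limited-range YUV [16,235] to full-range RGB [0,255]."""
        r = (255/219)*y + (255/112)*v*(1-Kr) - (255*16/219 + 255*128/112*(1-Kr))
        g = ((255/219)*y - (255/112)*u*(1-Kb)*Kb/Kg - (255/112)*v*(1-Kr)*Kr/Kg
             - (255*16/219 - 255/112*128*(1-Kb)*Kb/Kg - 255/112*128*(1-Kr)*Kr/Kg))
        b = (255/219)*y + (255/112)*u*(1-Kb) - (255*16/219 + 255*128/112*(1-Kb))
        return r, g, b

    print(yuv_to_rgb(235, 128, 128))  # reference white, approximately (255, 255, 255)
    print(yuv_to_rgb(16, 128, 128))   # reference black, approximately (0, 0, 0)
    ```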
    import vapoursynth as vs
    core = vs.core
    #color bars clip, generated as YUV444P10 (converted to YUV420P8 below)
    c = core.colorbars.ColorBars(format=vs.YUV444P10)
    c = core.std.SetFrameProp(clip=c, prop="_FieldBased", intval=0) 
    c = core.std.Convolution(c,mode="h",matrix=[1,2,4,2,1])
    c = core.resize.Point(clip=c, matrix_in_s ='709', format=vs.YUV420P8)
    c = c * (30 * 30000 // 1001)
    clip = core.std.AssumeFPS(clip=c, fpsnum=30000, fpsden=1001)
    Kr = 0.2126
    Kg = 0.7152
    Kb = 0.0722
    #yuv [16,235] <-> rgb [0,255]
    R1 = 255/219
    R2 = (255/112)*(1-Kr)
    R3 = (255*16/219 + 255*128/112*(1-Kr))
    G1 = 255/219
    G2 = (255/112)*(1-Kb)*Kb/Kg
    G3 = (255/112)*(1-Kr)*Kr/Kg
    G4 = (255*16/219 - 255/112*128*(1-Kb)*Kb/Kg - 255/112*128*(1-Kr)*Kr/Kg)
    B1 = 255/219
    B2 = (255/112)*(1-Kb)
    B3 = (255*16/219 + 255*128/112*(1-Kb))
    yuv444 = core.resize.Point(clip, format = vs.YUV444P8)   #to have no subsampling for expressions
    planes = [core.std.ShufflePlanes(yuv444, planes=i,  colorfamily=vs.GRAY)  for i in range(3)]
    R = core.std.Expr(clips = planes, expr = [f"{R1} x * {R2} z * + {R3} -"])
    G = core.std.Expr(clips = planes, expr = [f"{G1} x * {G2} y * - {G3} z * - {G4} -"])
    B = core.std.Expr(clips = planes, expr = [f"{B1} x * {B2} y * + {B3} -"])
    rgb = core.std.ShufflePlanes(clips = [R,G,B], planes= [0,0,0], colorfamily=vs.RGB)
    rgb = core.std.SetFrameProp(rgb, prop='_Matrix', delete=True)
    #this is Vapoursynth conversion YUV to RGB
    rgb2 = core.resize.Point(clip, format = vs.RGB24, matrix_in_s = '709')
    #both outputs are the same
    I used color bars, but you can do it with any YUV420P8 clip by loading it like this:
    clip = core.lsmas.LibavSMASHSource(r"C:\video.mp4")
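    For reference, std.Expr takes its expressions in reverse-Polish notation, with x, y, z naming the input clips in order. A minimal stand-in evaluator (hypothetical, written only to show how the R expression above reads; it is not part of VapourSynth) reproduces the expected result for reference white:

    ```python
    def rpn_eval(expr, **clips):
        # minimal reverse-Polish evaluator mimicking std.Expr semantics
        stack = []
        for tok in expr.split():
            if tok in clips:
                stack.append(clips[tok])
            elif tok == "+":
                b, a = stack.pop(), stack.pop(); stack.append(a + b)
            elif tok == "-":
                b, a = stack.pop(), stack.pop(); stack.append(a - b)
            elif tok == "*":
                b, a = stack.pop(), stack.pop(); stack.append(a * b)
            else:
                stack.append(float(tok))
        return stack[0]

    Kr = 0.2126
    R1 = 255/219
    R2 = (255/112)*(1-Kr)
    R3 = (255*16/219 + 255*128/112*(1-Kr))
    expr = f"{R1} x * {R2} z * + {R3} -"
    # reference white: y=235, v=128 should give R of approximately 255
    print(rpn_eval(expr, x=235, z=128))
    ```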
  3. I'm not having any luck getting Vapoursynth to go. See my post in the "ffmpeg color range" thread in Video Conversion.
    Last edited by chris319; 5th Mar 2020 at 16:18.
  4. Can you describe step by step how you installed it?
    Originally Posted by _Al_
    Can you describe step by step how you installed it?
    Clicked on the installer program and ran it.
