Numpy is not slow, and neither is Vapoursynth or Avisynth: the frame manipulations and functions are done in C or C++, or in DLLs that are written in C or C++ — I am not sure which, maybe both.
I have a slow PC, one of the first i5s, 10+ years old, and that top code runs SD resolution piped from ffmpeg at 220 fps; 1920x1080 video processes at about 30 fps. I mostly use laptops, and it is fast enough. The script I posted in the previous thread runs very fast as well; if you encode video from that script, the script's cost is negligible, because the encoding takes forever in comparison.
ctypes.memmove is very fast — that is C as well. Converting a Vapoursynth frame to a numpy array is also very fast, milliseconds per frame; these are just memory operations. Loading with a Vapoursynth source plugin such as lsmas.LibavSMASHSource gives similar speeds, and you can seek back and forth. Using just a pipe, like you do, it is one way only: one read and the frame is gone.
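To illustrate why those copies are cheap, here is a minimal sketch using only numpy and ctypes — the frame data here is synthetic; in a real script the source pointer would come from a Vapoursynth frame plane:

```python
import ctypes

import numpy as np

h, w = 480, 640
src = np.full(h * w, 17, dtype=np.uint8)   # stands in for one frame plane's bytes
dst = np.empty((h, w), dtype=np.uint8)

# ctypes.memmove is a raw C memcpy; copying a whole plane is just a memory move
ctypes.memmove(dst.ctypes.data, src.ctypes.data, src.nbytes)

print(int(dst[0, 0]), int(dst[-1, -1]))   # 17 17
```

For an SD or HD plane this copy takes a fraction of a millisecond, which is why going between Vapoursynth frames and numpy arrays costs almost nothing next to decoding or encoding.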
I would stress that last point again: with a pipe you cannot view the video by seeking back and forth; you get one shot at each frame. That is why I included lots of try/except blocks in that script — if you view the video interactively you can get all sorts of weirdness or errors, and how does the pipe know it should be terminated if you simply decide you are done viewing? The ffmpeg pipe must be terminated manually if you do not read all frames. Maybe this is why you have so many problems with this: you get no feedback about what you are doing.
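A small sketch of that one-way-pipe cleanup problem — I use a stand-in producer launched via sys.executable instead of ffmpeg here, so it runs anywhere; with ffmpeg the Popen command line would differ but the terminate/wait cleanup is the same:

```python
import subprocess
import sys

# child endlessly writes fixed-size "frames" to stdout, like ffmpeg rawvideo would
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\n"
     "while True: sys.stdout.buffer.write(bytes(4096))"],
    stdout=subprocess.PIPE)
try:
    frame = child.stdout.read(4096)   # one shot: a frame read from a pipe is gone
    print(len(frame))                 # 4096
finally:
    child.terminate()                 # stopped early -> must kill the producer,
    child.wait()                      # or it blocks forever filling the pipe
```

Without the terminate/wait in the finally block, the producer would sit blocked on a full pipe buffer after you stop reading — exactly the kind of silent hang you run into when viewing frames interactively from a pipe.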
I see what you are trying to do now — I read the thread you are active in and looked at those codes; that is why I posted some Python code for comparison. You include something for yourself that limits RGB. Now you need the same thing, but piping 10-bit YUV, not RGB converted from 8-bit YUV.
But all you need is to figure out the arguments, starting with those lines I mentioned, and settle on one Vapoursynth YUV-to-RGB conversion line. Instead you try to come up with completely new code again, just because you use 10-bit YUV. That is exactly why Avisynth and Vapoursynth exist: thorough video pixel manipulation, piped with ease to encoders. Or, for that matter, OpenCV (which uses numpy and has functions to put frames on screen right away, a GUI), or PIL, or even Qt (though that is mostly for GUI).
Last edited by _Al_; 4th Mar 2020 at 21:43.
-
I just tested getting RGB the way you want: processing YUV to RGB manually with those particular coefficients, and the same conversion done by a Vapoursynth line. The results are identical.
Code:
r = (255/219)*y + (255/112)*v*(1-Kr) - (255*16/219 + 255*128/112*(1-Kr))
g = (255/219)*y - (255/112)*u*(1-Kb)*Kb/Kg - (255/112)*v*(1-Kr)*Kr/Kg - (255*16/219 - 255/112*128*(1-Kb)*Kb/Kg - 255/112*128*(1-Kr)*Kr/Kg)
b = (255/219)*y + (255/112)*u*(1-Kb) - (255*16/219 + 255*128/112*(1-Kb))
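As a quick sanity check of these limited-range BT.709 formulas in plain Python — the two test pixels (studio white and black) are my choice for illustration, not from the thread:

```python
# BT.709 luma coefficients
Kr, Kb = 0.2126, 0.0722
Kg = 1 - Kr - Kb

def yuv_to_rgb(y, u, v):
    """Limited-range [16,235] YUV -> full-range [0,255] RGB, per the formulas above."""
    r = (255/219)*y + (255/112)*v*(1-Kr) - (255*16/219 + 255*128/112*(1-Kr))
    g = ((255/219)*y - (255/112)*u*(1-Kb)*Kb/Kg - (255/112)*v*(1-Kr)*Kr/Kg
         - (255*16/219 - 255/112*128*(1-Kb)*Kb/Kg - 255/112*128*(1-Kr)*Kr/Kg))
    b = (255/219)*y + (255/112)*u*(1-Kb) - (255*16/219 + 255*128/112*(1-Kb))
    return round(r), round(g), round(b)

print(yuv_to_rgb(235, 128, 128))  # studio white -> (255, 255, 255)
print(yuv_to_rgb(16, 128, 128))   # black -> (0, 0, 0)
```

With neutral chroma (u = v = 128) the chroma terms cancel against the offsets, so Y alone maps 16→0 and 235→255, which is what the check confirms.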
Code:
import vapoursynth as vs
core = vs.core

#color bars clip, created as YUV444P10 then converted to YUV420P8
c = core.colorbars.ColorBars(format=vs.YUV444P10)
c = core.std.SetFrameProp(clip=c, prop="_FieldBased", intval=0)
c = core.std.Convolution(c, mode="h", matrix=[1,2,4,2,1])
c = core.resize.Point(clip=c, matrix_in_s='709', format=vs.YUV420P8)
c = c * (30 * 30000 // 1001)
clip = core.std.AssumeFPS(clip=c, fpsnum=30000, fpsden=1001)

#BT709
Kr = 0.2126
Kg = 0.7152
Kb = 0.0722

#yuv [16,235] <-> rgb [0,255]
R1 = 255/219
R2 = (255/112)*(1-Kr)
R3 = (255*16/219 + 255*128/112*(1-Kr))
G1 = 255/219
G2 = (255/112)*(1-Kb)*Kb/Kg
G3 = (255/112)*(1-Kr)*Kr/Kg
G4 = (255*16/219 - 255/112*128*(1-Kb)*Kb/Kg - 255/112*128*(1-Kr)*Kr/Kg)
B1 = 255/219
B2 = (255/112)*(1-Kb)
B3 = (255*16/219 + 255*128/112*(1-Kb))

yuv444 = core.resize.Point(clip, format=vs.YUV444P8)  #no subsampling, for expressions
planes = [core.std.ShufflePlanes(yuv444, planes=i, colorfamily=vs.GRAY) for i in range(3)]
R = core.std.Expr(clips=planes, expr=[f"{R1} x * {R2} z * + {R3} -"])
G = core.std.Expr(clips=planes, expr=[f"{G1} x * {G2} y * - {G3} z * - {G4} -"])
B = core.std.Expr(clips=planes, expr=[f"{B1} x * {B2} y * + {B3} -"])
rgb = core.std.ShufflePlanes(clips=[R,G,B], planes=[0,0,0], colorfamily=vs.RGB)
rgb = core.std.SetFrameProp(rgb, prop='_Matrix', delete=True)

#this is the Vapoursynth conversion from YUV to RGB
rgb2 = core.resize.Point(clip, format=vs.RGB24, matrix_in_s='709')

#both outputs are the same
rgb.set_output()
rgb2.set_output(1)
Code:
clip = core.lsmas.LibavSMASHSource(r"C:\video.mp4")
-
I'm not having any luck getting Vapoursynth to go. See my post in the "ffmpeg color range" thread in Video Conversion.
https://forum.videohelp.com/threads/395939-ffmpeg-Color-Range/page6#post2575806
Last edited by chris319; 5th Mar 2020 at 16:18.