With YV12 video each frame has a Y, U and V channel. The Y channel (greyscale) is at the full resolution of the frame. The U and V channels (color components that are added to or subtracted from the Y channel to produce the final color) are at half the width and half the height of the Y channel.
Code:
StackHorizontal(GreyScale(v).Subtitle("Y"), StackVertical(UtoY(v).Subtitle("U"), VtoY(v).Subtitle("V")))
[Attachment 60806]
VtoY() copies the V channel to the Y channel of a new video, creating a greyscale image of the V channel:
[Attachment 60809]
The runtime function AverageLuma() calculates the average luma of the entire frame:
Code:
VtoY(v)
ScriptClip(last, "Subtitle(String(AverageLuma))")
[Attachment 60812]
But since this is really the V channel from the original video, it is the average V value of that frame of the original video.
Highly saturated reds have very high V values. It's highly unlikely that other frames will have a high average V value over the entire frame (or a large part thereof).
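For example, for a pure red pixel (R=255, G=B=0), the full-range BT.601 formulas quoted later in this thread give Y = 0.299·255 ≈ 76 and, with the +128 offset used for 8-bit storage, V = 0.877·(255 - 76) + 128 ≈ 285, which clips to 255, the very top of the range; a neutral grey pixel sits at V = 128.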
Actually, I could have used AviSynth's AverageChromaV() filter directly. But I wanted to view the V channel as well as calculate the average V. Using AverageChromaV() the script could be reduced to:
Code:
LSmashVideoSource("transit1.mp4") + LSmashVideoSource("transit2.mp4")
WriteFileIf(last, "match.txt", "(AverageChromaV()>176.5) && (AverageChromaV()<176.9)", "current_frame", append = false)
I don't know VapourSynth but I suspect there's a single function like AverageChromaV() since it was inspired by AviSynth. _Al_ will probably let you know...
-
Last edited by jagabo; 19th Sep 2021 at 18:45.
-
jagabo's last script does not use the template at all; that V average is for the V channel of the searched video.
But the Python method using OpenCV works on RGB video, not YUV, so the R plane (red) is going to have different values than V in YUV.
OpenCV loads a frame with capture.read() and returns BGR arrays; a conversion happens internally, unlike a source plugin in AviSynth or VapourSynth, which naturally returns YUV arrays.
So jagabo's averageV value cannot be used if you load the video with OpenCV alone.
Though if you load the clip with a source plugin in VapourSynth, using Python, you get the YUV values as present in the video, same as AviSynth:
Code:
import vapoursynth as vs
from vapoursynth import core

clip = core.lsmas.LibavSMASHSource('transit1.mp4')
clip = core.std.PlaneStats(clip, plane=2, prop='PlaneStatsV')
# vapoursynth calculates averages for all formats as floats from 0 to 1,
# even if V values are 8-bit integers (0 to 255),
# so jagabo's values become 176.5/255 and 176.9/255
for n, frame in enumerate(clip.frames()):
    averageV = frame.props['PlaneStatsVAverage']
    if 176.5/255 < averageV < 176.9/255:
        print(n)
Code:
458
974
But you'd need to make an executive decision whether to use numpy and OpenCV only, or to go with VapourSynth. Or use all of them; numpy, OpenCV and VapourSynth complement each other wonderfully.
Using just OpenCV (which uses numpy as well) and no VapourSynth, you can calculate the average of the red channel:
Code:
import numpy as np
import cv2

def process_img(img_rgb, frame_number):
    # opencv loads video as BGR, so index 2 is R (0 would be B, 1 G);
    # the two colons mean the full resolution in y and x
    averageR = np.average(img_rgb[:,:,2])
    print('frame number = ', frame_number, ' averageR = ', averageR)

vidcap = cv2.VideoCapture('transit1.mp4')
frame_number = 0
while True:
    success, image = vidcap.read()  # here image is already a BGR numpy array
    if not success:
        break
    process_img(image, frame_number)
    frame_number += 1
Code:
frame number = 449 averageR = 84.90040268132717
frame number = 450 averageR = 89.89326834972994
frame number = 451 averageR = 93.17933485243056
frame number = 452 averageR = 96.81973982445987
frame number = 453 averageR = 104.7651035638503
frame number = 454 averageR = 117.38791232638889
frame number = 455 averageR = 125.64064007040895
frame number = 456 averageR = 125.99353780864197
frame number = 457 averageR = 122.60029658564815
frame number = 458 averageR = 122.70225393036266
frame number = 459 averageR = 124.42465699749228
frame number = 460 averageR = 122.75095546392747
frame number = 461 averageR = 118.16046670042438
frame number = 462 averageR = 110.9293800636574
frame number = 463 averageR = 88.71288158275463
frame number = 464 averageR = 81.31635199652777
The template method is more exact, flexible and universal, and setting the threshold higher (closer to 1, but not 1) gives more precise frame output, but only if the template is fairly large (like yours) and decisive. After jagabo's prep, the AviSynth or VapourSynth method is not bad either for your case. But if the search changes, you'd need to prep it again.
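For reference, the core of that template search is only a few lines of OpenCV (a sketch with hypothetical file names, not the exact script posted earlier in the thread):
Code:
import numpy as np
import cv2

# hypothetical file names: the template is a crop saved from a known matching frame
img = cv2.imread('frame.png')
template = cv2.imread('template.png')
res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
# TM_CCOEFF_NORMED scores run from -1 to 1; the closer to 1, the stronger the match
if np.max(res) > 0.9:
    print('template found')
Last edited by _Al_; 19th Sep 2021 at 20:13.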
-
For any plane in OpenCV: here the red channel is represented by index 2; 0 would be blue, 1 would be green.
OpenCV uses BGR order.
Code:
import numpy as np
import cv2

image1 = cv2.imread(r'D:\Videos\Test\0458_Center.PNG')
averageR = np.average(image1[:,:,2])
print(averageR)
Last edited by _Al_; 19th Sep 2021 at 20:25.
-
Keep in mind bright grey, yellow, and magenta pixels have high R values. So looking for high R values will find white, yellow, and magenta as well as red.
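A quick check of how V behaves for those same colors (a sketch using OpenCV's full-range BT.601 conversion): white comes out neutral (V=128) and yellow only slightly above, while red pegs V at the top; magenta scores high too, since it contains red:
Code:
import numpy as np
import cv2

# one row of BGR test pixels: white, yellow, magenta, red
pixels = np.array([[[255,255,255], [0,255,255], [255,0,255], [0,0,255]]], dtype=np.uint8)
v = cv2.cvtColor(pixels, cv2.COLOR_BGR2YUV)[0, :, 2]
print(dict(zip(['white', 'yellow', 'magenta', 'red'], v)))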
-
Hello:
I have tried to use your Python VapourSynth code; the following is my code:
Code:
import vapoursynth as vs
from vapoursynth import core

print(core.version())
clip = core.lsmas.LibavSMASHSource(source=r'D:\Video\Test\transit1.mp4')
clip = core.std.PlaneStats(clip, plane=2, prop='PlaneStatsV')
for n, frame in enumerate(clip.frames()):
    averageV = frame.props['PlaneStatsVAverage']
    if 176.5/255 < averageV < 176.9/255:
        print(n)
C:\Python\VapoursynthLoadYUVPy\VapoursynthLoadYUVPy>python VapoursynthLoadYUVPy.py
VapourSynth Video Processing Library
Copyright (c) 2012-2021 Fredrik Mellbin
Core R54
API R3.6
Options: -
Traceback (most recent call last):
  File "C:\Python\VapoursynthLoadYUVPy\VapoursynthLoadYUVPy\VapoursynthLoadYUVPy.py", line 5, in <module>
    clip = core.lsmas.LibavSMASHSource(source=r'D:\Video\Test\transit1.mp4')
  File "src\cython\vapoursynth.pyx", line 1891, in vapoursynth._CoreProxy.__getattr__
  File "src\cython\vapoursynth.pyx", line 1754, in vapoursynth.Core.__getattr__
AttributeError: No attribute with the name lsmas exists. Did you mistype a plugin namespace?

C:\Python\VapoursynthLoadYUVPy\VapoursynthLoadYUVPy>
It seems not easy to run your VapourSynth code.
However, I found that OpenCV has some functions that can convert RGB to YUV, like:
img_out = cv2.cvtColor(img_in, cv2.COLOR_BGR2YUV)
And with the following equations, we should be able to calculate the average Y for AverageLuma(), right?
Y = 0.299 R + 0.587 G + 0.114 B
U = 0.492 (B - Y)
V = 0.877 (R - Y)
I have the following Python code to convert an RGB image to YUV:
Code:
import cv2
import numpy as np

image_rgb = cv2.imread('D:/Videos/Test/458_Center.PNG')
# Convert from BGR to YUV
image_yuv = cv2.cvtColor(image_rgb, cv2.COLOR_BGR2YUV)
img_bgr_restored = cv2.cvtColor(image_yuv, cv2.COLOR_YUV2BGR)
diff = image_rgb.astype(np.int16) - img_bgr_restored
print("mean/stddev diff (BGR => YUV => BGR)", np.mean(diff), np.std(diff))
C:\Python\RGB2YUV_YUV2RGB_PY\RGB2YUV_YUV2RGB_PY>python RGB2YUV_YUV2RGB_PY.py
mean/stddev diff (BGR => YUV => BGR) 0.07414288545382482 0.5106037307280958
But I need python code to calculate the AverageLuma().
Please advise!
Thanks,
-
Traceback (most recent call last):
  File "C:\Python\VapoursynthLoadYUVPy\VapoursynthLoadYUVPy\VapoursynthLoadYUVPy.py", line 5, in <module>
    clip = core.lsmas.LibavSMASHSource(source=r'D:\Video\Test\transit1.mp4')
  File "src\cython\vapoursynth.pyx", line 1891, in vapoursynth._CoreProxy.__getattr__
  File "src\cython\vapoursynth.pyx", line 1754, in vapoursynth.Core.__getattr__
AttributeError: No attribute with the name lsmas exists. Did you mistype a plugin namespace?
download from here:
https://www.dropbox.com/sh/3i81ttxf028m1eh/AAABkQn4Y5w1k-toVhYLasmwa?dl=0
-
Hello:
I have downloaded the VapourSynth 64-bit version of the L-SMASH-Works plugin, and my code is working now.
But I found the plugin is more than 4 years old, so it seems not so good.
Anyway, do you have any idea how I can use Python to calculate the AverageLuma() value from an image?
Thanks,
-
It might be old; I use the same one, it's fine. API 4 is supposed to come out soon in a public release with audio support, but with lots of changes, so even that plugin will need an update for that release; so for now I stay with API 3.6.
The other thing: you keep saying AverageLuma(), but in jagabo's last script it is the average of the V plane. His script puts the V plane into a single-channel clip and then takes the average of that channel, which is called luma but is really the former V channel.
Or do you really need the average luma of your video?
In OpenCV this was already covered in my previous post. You have a numpy array of a frame and then you use:
Code:
average_plane_in_frame = np.average(img_rgb[:,:,X])
If you have YUV in OpenCV, then plane 0 would be Y, the luma.
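Putting that together for an image (a sketch, assuming the same BGR2YUV conversion discussed in the previous posts):
Code:
import cv2
import numpy as np

img_bgr = cv2.imread('D:/Videos/Test/458_Center.PNG')  # OpenCV loads as BGR
img_yuv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV)
averageLuma = np.average(img_yuv[:,:,0])  # plane 0 = Y, the luma
print(averageLuma)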
But I would not go that way: YUV is subsampled, so the U and V planes have different dimensions than Y, and scripts would get complicated. For YUV I'd use AviSynth or VapourSynth.
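In VapourSynth, average luma is just PlaneStats on plane 0 (a minimal sketch, mirroring the earlier scripts):
Code:
from vapoursynth import core

clip = core.lsmas.LibavSMASHSource('transit1.mp4')
clip = core.std.PlaneStats(clip, plane=0)  # plane 0 = Y, the luma
for n, frame in enumerate(clip.frames()):
    # PlaneStatsAverage is normalized 0 to 1; multiply by 255 for 8-bit values
    print(n, frame.props['PlaneStatsAverage'] * 255)
-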
Also, if you want the latest plugins for VapourSynth you might use vsrepo: https://github.com/vapoursynth/vsrepo. You download those py files and run them for whatever you need, using the syntax that web page describes. That should put the proper dll in the plugin directory. But unfortunately I never tested it.
Look in the directory called local; you can install any of those plugins if you need them. So for lsmas it would perhaps be:
Code:
vsrepo.py install lsmas
Or you can check what you can install; that should give you the proper name if you want to install something:
Code:
vsrepo.py available
Last edited by _Al_; 20th Sep 2021 at 12:26.
-
Hello:
According to your post, I need the average of the V channel, the so-called AverageLuma().
Now I have the following Python code to calculate it:
Code:
import cv2
import numpy as np

img_rgb = cv2.imread('D:/Videos/Test/458_Center.PNG')
average_plane0_in_frame = np.average(img_rgb[:,:,0])
average_plane1_in_frame = np.average(img_rgb[:,:,1])
average_plane2_in_frame = np.average(img_rgb[:,:,2])
print(average_plane0_in_frame)
print(average_plane1_in_frame)
print(average_plane2_in_frame)
C:\Python\RGB2YUV_YUV2RGB_PY>python RGB2YUV_YUV2RGB_PY.py
69.26378567181926
75.91626412009512
168.72337618906064
So, the average of the V channel should be the last one, 168.7, right?
If you have the 458_Center.PNG, you can check.
But this value, 168.7, is far away from 176.5 and 176.9.
Let me know if I can use average_plane2_in_frame to replace the average V channel.
Thanks,
-
Those are completely unrelated values. Also, the 176.5 and 176.9 values are for the whole frame, not just some cut-off crop; jagabo's last method does not involve that crop template, but you use it in your example.
A PNG is RGB, but loaded with OpenCV you get BGR, so the last plane is red, yes. But YUV and RGB are completely unrelated formats for representing the values of a video pixel. Well, yes, there are formulas for converting YUV to RGB and RGB to YUV (more than one, depending on the color space), but V in YUV before conversion has nothing to do with the R value after conversion. Those values cannot be the same.
V in YUV is a different value than R in RGB.
Note: take any gray color, for example. No color, gray, in RGB means all values are the same: (0,0,0) is black and (255,255,255) is white; whatever is in between are just shades of gray, for example (20,20,20) or (210,210,210), with higher values meaning brighter gray. But in YUV you'd get (33,128,128) or (196,128,128) for those values.
So you see, brightness in YUV lives in the Y plane, but in RGB it is "stored" in all of R, G and B.
Another thing: no color means 0.0 in video for YUV, on a scale from -0.5 to 0.5 for the U and V chroma channels, but as an 8-bit integer that 0 is stored as 128, halfway between 0 and 255. Studying gray colors is a good way to start understanding the differences between RGB and YUV.
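A quick check of those numbers (a sketch of the limited-range BT.601 mapping; U and V stay at 128 for any gray):
Code:
# limited-range BT.601: Y = 16 + 219*(gray/255); U = V = 128 for any gray
for gray in (20, 210):
    y = round(16 + 219 * gray / 255)
    print((gray, gray, gray), '->', (y, 128, 128))
# (20, 20, 20) -> (33, 128, 128)
# (210, 210, 210) -> (196, 128, 128)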
So jagabo's method, recognizing that your searched frame is mostly red, is applicable as a solution: it watches for the highest red chroma values (represented in V) in the YUV color space, not the RGB color space.
Last edited by _Al_; 20th Sep 2021 at 13:18.
-
Hello:
I understand most of what you have said, but I don't have much experience writing code for these topics.
But do you have Python code which can calculate the average V channel from the BGR data OpenCV loads?
Thanks,
-
I would not convert RGB to YUV in OpenCV (it's messy) or work with YUV in OpenCV; you can read something about it here.
There is a way in the docs to capture YUV directly using CAP_PROP_CONVERT_RGB, but it is buggy; it just does not work properly:
Code:
vidcap = cv2.VideoCapture(r'transit1.mp4')
vidcap.set(cv2.CAP_PROP_CONVERT_RGB, False)  # tried 0 or 0.0, nothing works
Code:
img_yuv = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2YUV)
averageV = np.average(img_yuv[:,:,2])  # values are off by ~20
Not sure why not to do that in VapourSynth as long as you have it going, if you need to do some plane calculations.
As for the recommended method using matchTemplate(), it does not matter how exact OpenCV's YUV-to-RGB conversion is when capturing that YUV mp4 video file, because with the threshold in that script you have wiggle room for the values.
And also, when I use conversions myself (between VapourSynth and OpenCV, even back and forth), they are strictly done by a VapourSynth function. That's how it should be done.
Last edited by _Al_; 20th Sep 2021 at 15:26.
-
The two transit samples are flagged limited range bt.709. That sounds right given they appear to be drop-field and downscaled from a 1080i25 broadcast.
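For what it's worth, ffprobe can print those flags:
Code:
ffprobe -v error -select_streams v:0 -show_entries stream=color_range,color_space transit1.mp4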
-
It's not possible to select arguments for the matrix or for full vs. limited range, so perhaps one just goes with it and preps the search again with new values:
Code:
import numpy as np
import cv2

def process_img(img_rgb, frame_number):
    img_yuv = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2YUV)
    averageV = np.average(img_yuv[:,:,2])
    if 194.85 < averageV < 196:
        print(frame_number)

vidcap = cv2.VideoCapture('transit1.mp4')
frame_number = 0
while True:
    success, image = vidcap.read()
    if not success:
        break
    process_img(image, frame_number)
    frame_number += 1
Code:
458
459
974
-
I wanted to know how OpenCV actually captures a video, and with what defaults, by loading the same clip twice, once in VapourSynth and once in OpenCV, into a viewer using this:
Code:
import vapoursynth as vs
from vapoursynth import core
import numpy as np
import cv2

def opencv_frame_to_vs_frame(n, f):
    vsFrame = f.copy()
    vidcap.set(1, n)  # set opencv to capture the n-th frame
    success, image = vidcap.read()
    # np array into vs frame; plane order does not matter here
    [np.copyto(np.asarray(vsFrame.get_write_array(i)), image[:, :, i]) for i in [2, 1, 0]]
    return vsFrame

vs_clip = core.lsmas.LibavSMASHSource('transit1.mp4')
vs_clip = core.resize.Point(vs_clip, format=vs.RGB24, matrix_in_s='709')
vidcap = cv2.VideoCapture('transit1.mp4')
opencv_clip = core.std.ModifyFrame(vs_clip, vs_clip, opencv_frame_to_vs_frame)
opencv_clip = core.std.ShufflePlanes(opencv_clip, planes=[2, 1, 0], colorfamily=vs.RGB)  # BGR to RGB
import view
view.Preview([vs_clip, opencv_clip])
vidcap.release()
So OpenCV captures using the default BT.601 matrix for the YUV to RGB conversion; otherwise the conversion is as it is supposed to be.
-
And OpenCV's correct RGB into YUV is weirder; I am not sure what they use, I could not work out what they use:
Code:
def rgb_to_yuv_in_opencv(n, f):
    vsFrame = f.copy()
    # vapoursynth frame into np array
    image_rgb = np.dstack([np.asarray(vs_clip_rgb.get_frame(n).get_read_array(i)) for i in [2, 1, 0]])
    image_yuv = cv2.cvtColor(image_rgb, cv2.COLOR_BGR2YUV)  # note: opencv makes YUV444P8
    # np array into vapoursynth frame
    [np.copyto(np.asarray(vsFrame.get_write_array(i)), image_yuv[:, :, i]) for i in [0, 1, 2]]
    return vsFrame

vs_clip_yuv = core.lsmas.LibavSMASHSource('transit1.mp4')
vs_clip_yuv = core.resize.Point(vs_clip_yuv, format=vs.YUV444P8)
vs_clip_rgb = core.resize.Point(vs_clip_yuv, format=vs.RGB24, matrix_in_s='709')
opencv_clip_yuv = core.std.ModifyFrame(vs_clip_yuv, vs_clip_yuv, rgb_to_yuv_in_opencv)
import view_output34
view_output34.Preview([vs_clip_yuv, opencv_clip_yuv])
Last edited by _Al_; 20th Sep 2021 at 21:10.
-
Hello:
I encountered some issues when running some AVS scripts:
My code:
D:\Videos\AVScripts>type FindC1AvgLum.AVS
Code:
v = LSmashVideoSource("Transit_C1.mp4")
testclip = VtoY(v)
colon = ": "
WriteFile(testclip, "Transit_C1.txt", "current_frame", "colon", "AverageLuma()", append = false)
D:\Videos\AVScripts>ffmpeg -hide_banner -loglevel error -i FindC1AvgLum.AVS -c copy -f null -
[avisynth @ 0000024d531ce0c0] Filter Error: Attempted to request a planar frame that wasn't mod2 in height!
FindC1AvgLum.AVS: Unknown error occurred
And I tried another MP4 file with a similar AVS script; I got the same error:
D:\Videos\AVScripts>type FindC2AvgLum.AVS
Code:
v = LSmashVideoSource("Transit_C2.mp4")
testclip = VtoY(v)
colon = ": "
WriteFile(testclip, "Transit_C2.txt", "current_frame", "colon", "AverageLuma()", append = false)
[avisynth @ 0000023db257e0c0] Filter Error: Attempted to request a planar frame that wasn't mod2 in height!
FindC2AvgLum.AVS: Unknown error occurred
I have uploaded both mp4 files.
Please let me know what is wrong with them?
Thanks,
-
Code:
image_yuv = cv2.cvtColor(image_rgb, cv2.COLOR_BGR2YUV)
- returns YUV444P8, which is OK to get an average from
- uses BT.601 for the conversion
- does full-range RGB to full-range YUV conversion
- if the video is changed back to limited range, colors (red) are still off
So if you want to get the average V value in OpenCV, you need to follow the approach from post #45: you get your own values for a particular frame, or in general you print your values first:
Code:
def process_img(img_rgb, frame_number):
    img_yuv = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2YUV)
    averageY = np.average(img_yuv[:,:,0])
    averageU = np.average(img_yuv[:,:,1])
    averageV = np.average(img_yuv[:,:,2])
    print(f'frame number={frame_number}\n averageY={averageY}, averageU={averageU}, averageV={averageV}')
Code:
def process_img(img_rgb, frame_number):
    img_yuv = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2YUV)
    averageV = np.average(img_yuv[:,:,2])
    if 194.85 < averageV < 196:
        print(frame_number)
-
You can also restrict attention to certain regions of the video only (or combine them with the full image).
You can make a crop like this in numpy (OpenCV):
Code:
# example for your video size: width=768, height=432
#
# x=0,y=0                 x=767,y=0
#   +---------------------+
#   |  x1,y1              |
#   |    +-----+          |
#   |    |     |          |
#   |    +-----+          |
#   |         x2,y2       |
#   |                     |
#   +---------------------+
# x=0,y=431               x=767,y=431

cropped_img = img[y1:y2, x1:x2]
For example img_rgb[100:300, 100:400], so in code:
Code:
def process_img(img_rgb, frame_number):
    img_rgb_cropped = img_rgb[100:300, 100:400]
    img_yuv = cv2.cvtColor(img_rgb_cropped, cv2.COLOR_BGR2YUV)
    averageV = np.average(img_yuv[:,:,2])
    # etc
-
Hello:
Yes, I think you are right; cropping to some area seems better.
However, I found yet another more difficult one.
Look at the transit_I1.mp4 file.
And give me some idea on how to find the start/end of the replay scenes.
Thanks,
[Attachment 60849]
Last edited by zydjohn; 21st Sep 2021 at 16:15. Reason: Wrong post mp4 file with image
-
Using the VapourSynth previewer, I manually navigated to frame 1085, where I found two clearly blue rectangles, drew boxes around them and read off the x and y values. Then I ran the script, printed those average values, read them, added the conditions, and it found those frames:
Code:
import numpy as np
import cv2

x1 = 212
y1 = 376
x2 = 212 + 84
y2 = 376 + 46
X1 = 388
Y1 = 20
X2 = 388 + 28
Y2 = 20 + 44

def process_img(img_rgb, frame_number):
    yuv = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2YUV)
    yuv1 = yuv[y1:y2, x1:x2]
    yuv2 = yuv[Y1:Y2, X1:X2]
    U1 = np.average(yuv1[:,:,1])
    U2 = np.average(yuv2[:,:,1])
    ## print(frame_number, U1, U2)
    if U1 > 180 and U2 > 160:
        print(frame_number)

vidcap = cv2.VideoCapture('transit_I1.mp4')
frame_number = 0
while True:
    success, image = vidcap.read()
    if not success:
        break
    process_img(image, frame_number)
    frame_number += 1
Code:
1085
1845
1846
-
Hello:
I don't quite understand your code; you use a lot of x1, y1, x2, y2, X1, Y1, X2, Y2, and I have no idea what they are doing.
But I can give you yet another MP4 video with the same transition scene.
You can see if your code works for this one: transit_I2.mp4
-
Yes, it works too, but I lowered the blue threshold for the first rectangle from 180 to 178:
Code:
if U1 > 178 and U2 > 160:
Code:
1232
2511
You can lower those thresholds further and come up with another small area, or even more, very small regions, for example those bright outlines etc.
Last edited by _Al_; 21st Sep 2021 at 17:30.
-
This seems more accurate for that transparent transition.
The least transparent areas were selected, the blue and purple gradients, and the U and V values are verified at the same time. So you can use a very wide range, -10 to +10, for the thresholds and it still finds all of it and only those parts.
The more conditions (in this case 4), the higher the tolerance that can be allowed for the values:
Code:
import numpy as np
import cv2

# vapoursynth crop would be: clip = clip.std.CropAbs(width=22, height=12, left=212, top=76)
x1 = 212
y1 = 76
x2 = 212 + 22
y2 = 76 + 12
# vapoursynth crop would be: clip = clip.std.CropAbs(width=22, height=14, left=580, top=316)
X1 = 580
Y1 = 316
X2 = 580 + 22
Y2 = 316 + 14

def process_img(img_rgb, frame_number):
    r1 = cv2.cvtColor(img_rgb[y1:y2, x1:x2], cv2.COLOR_BGR2YUV)
    r2 = cv2.cvtColor(img_rgb[Y1:Y2, X1:X2], cv2.COLOR_BGR2YUV)
    r1_U = np.average(r1[:,:,1])
    r1_V = np.average(r1[:,:,2])
    r2_U = np.average(r2[:,:,1])
    r2_V = np.average(r2[:,:,2])
    ## if frame_number == 1233:  # for transit_I2.mp4
    ##     print(f'{frame_number}\n {r1_U} {r1_V}\n {r2_U} {r2_V}')
    if 147 < r1_U < 167 and 44 < r1_V < 64 and 162 < r2_U < 182 and 167 < r2_V < 187:
        print(frame_number)

vidcap = cv2.VideoCapture(r'transit_I1.mp4')
frame_number = 0
while True:
    success, image = vidcap.read()
    if not success:
        break
    process_img(image, frame_number)
    frame_number += 1
Output for transit_I1.mp4:
Code:
12
1087
1847
And for transit_I2.mp4:
Code:
1233
2513
Last edited by _Al_; 21st Sep 2021 at 21:56.
-
I guess you've decided not to use AviSynth, but just to follow up, here's something similar in AviSynth:
Code:
LWLibavVideoSource("transit_I2.mp4")
x1 = 206
y1 = 74
x2 = 582
y2 = 324
cy = Crop(x1, y1, 16, 16).MaskHS(StartHue=290, EndHue=305, MinSat=45, MaxSat=55)
mg = Crop(x2, y2, 16, 16).MaskHS(StartHue=30, EndHue=40, MinSat=40, MaxSat=50)
testclip = Overlay(cy, mg, mode="multiply")
wf = WriteFileIf(testclip, "match.txt", "(AverageLuma()>128)", "current_frame", append = false)
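(Roughly: MaskHS() returns a mask that is white where pixels fall inside the given hue/saturation window, the multiply overlay keeps only pixels where both masks are white, and AverageLuma()>128 then requires both regions to match at once; the same idea as the four-condition test above.)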