Hello:
I just want to see the cropped image used in your Python code.
I have the following Python code to show just the cropped area, but I got a runtime error:
Code:
import cv2
x1 = 212
y1 = 76
x2 = 234
y2 = 88
img = cv2.imread("D:\Videos\AVScripts\I1_1088.png")
if img is not None:
    crop_img = img[x1:y1, x2:y2]
    cv2.imshow("cropped", crop_img)
    cv2.waitKey(0)
Message=OpenCV(4.5.3) C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-sn_xpupm\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
Source=C:\SoccerVideos\OpenCV\OpenCVCropImage\OpenCVCropImage\OpenCVCropImage.py
StackTrace:
File "C:\SoccerVideos\OpenCV\OpenCVCropImage\OpenCVCropImage.py", line 11, in <module> (Current frame)
cv2.imshow("Cropped", crop_img)
However, my code works to show the original image, like the following. I searched around, but I can't find any good solution for this issue.
Code:
import cv2
x1 = 212
y1 = 76
x2 = 234
y2 = 88
img = cv2.imread("D:/Videos/AVScripts/I1_1848.png", 1)
if img is not None:
    cv2.imshow("Original", img)
    cv2.waitKey(0)
Any suggestions for this issue?
Thanks,
-
Hello:
I changed to the following AVS script to just show each frame, but it is not working:
D:\Videos\AVScripts>type FindC1Frames.avs
v = LSmashVideoSource("Transit_C1.mp4")
testclip = VtoY(v)
WriteFile(testclip, "Transit_C1.txt", "current_frame", append = false)
D:\Videos\AVScripts>ffmpeg -hide_banner -loglevel error -i FindC1Frames.avs -c copy -f null -
[avisynth @ 000002350639e0c0] Filter Error: Attempted to request a planar frame that wasn't mod2 in height!
FindC1Frames.avs: Unknown error occurred
Let me know how I can modify this script to show the "current_frame". -
It cannot work with other resolutions because it is based on crop coordinates in the video. If you change the resolution, the crop values change, even for the same transition.
OK, jagabo answered that already; I did not notice the third page.
I'll respond later in the day.
Just glancing at it, you have the syntax wrong; check my previous code. NumPy indexes rows (y) first, so img[x1:y1, x2:y2] becomes img[212:76, 234:88], which selects nothing and leaves the crop empty; that is why OpenCV complains about an empty source:
Code:
cropped_img = img[y1:y2, x1:x2]
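For reference, a minimal corrected version of the first script might look like this (a sketch reusing the coordinates and file path from the original post; the raw string just keeps the Windows backslashes literal):
Code:
import cv2

# Rectangle corners: (x1, y1) = top-left, (x2, y2) = bottom-right
x1, y1, x2, y2 = 212, 76, 234, 88

# Path taken from the original post; r"..." keeps the backslashes literal
img = cv2.imread(r"D:\Videos\AVScripts\I1_1088.png")
if img is not None:
    crop_img = img[y1:y2, x1:x2]   # rows (y) first, then columns (x)
    cv2.imshow("cropped", crop_img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()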
-
The script is crashing because YV12 clips must be mod2. Your source is 720x406 YV12, so the result of VtoY() would be 360x203. YV12 can't be 203 pixels tall, so the function fails. You get around that by cropping, adding borders, or using a color format that doesn't use chroma subsampling (like YV24).
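Just to make the arithmetic concrete, a tiny Python check (the helper name is hypothetical, not from anyone's script here):
Code:
# Hypothetical helper: YV12 subsamples chroma 2x in both directions,
# so both dimensions of a YV12 clip must be even (mod2).
def yv12_ok(width, height):
    return width % 2 == 0 and height % 2 == 0

print(yv12_ok(720, 406))  # True: the source itself is legal YV12
print(yv12_ok(360, 203))  # False: VtoY() would yield 360x203, and 203 is odd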
But the rest of your script doesn't make sense. All it does is write the frame number of each frame. What exactly were you trying to accomplish? -
This script doesn't make any sense; I just wanted to show you that it is not working with YV12 clips.
What I actually want is to calculate AverageLuma() for each frame.
But since the script didn't work, I simply wanted to know first whether I can show the current frame.
However, YV12 is not accepting this video.
So how can I run a similar script using YV24?
Thanks, -
Hello:
Let me know if I understand correctly.
I think you want to select a rectangle from each of the areas indicated by the red arrows, since those two areas use solid color to cover the background objects. Therefore, if similar transition scenes appear, the selected rectangles will remain the same and thus could have the same average V values, right?
However, there is something I don't quite understand: the coordinates.
Take this image; in my collection it is named I1_1087.PNG.
It has a width of 768 pixels and a height of 432 pixels.
The tip of the upper red arrow is at (191, 91) in MS Paint, and the tip of the lower red arrow is at (592, 374).
So I don't know which coordinates you are using in the Python code.
You can see my picture:
[Attachment 60877 - Click to enlarge]
Please let me know how you find the rectangles' coordinates in your code. Do you open the image with MS Paint or other software?
Thanks, -
Here's an example using YV12:
Code:
v = LSmashVideoSource("Transit_C1.mp4")
testclip = VtoY(v.Crop(0,0,-0,-2)) # crop v to mod4 for VtoY()
space = " " # convoluted way to get a space between current_frame and AverageLuma
WriteFile(testclip, "Transit_C1.txt", "current_frame", "space", "AverageLuma", append = false)
And the same using YV24:
Code:
v = LSmashVideoSource("Transit_C1.mp4").ConvertToYV24()
testclip = VtoY(v)
space = " " # convoluted way to get a space between current_frame and AverageLuma
WriteFile(testclip, "Transit_C1.txt", "current_frame", "space", "AverageLuma", append = false)
Or skip VtoY() and use AverageChromaV() on the full frame:
Code:
vid = LSmashVideoSource("Transit_C1.mp4")
space = " " # convoluted way to get a space between current_frame and AverageLuma
WriteFile(vid, "Transit_C1.txt", "current_frame", "space", "AverageChromaV", append = false)
-
I am thinking this way; let me know if I am right or not.
In your code, you perform four logical comparisons:
if 147 < r1_U < 167 and 44 < r1_V < 64 and 162 < r2_U < 182 and 167 < r2_V < 187:
    print(frame_number)
If all four conditions are met, then print the frame_number.
If I guess correctly, most frames, or more than half of them, don't meet any of the conditions.
So try comparing the first 2 conditions first: if 147 < r1_U < 167 and 44 < r1_V < 64
If the first 2 conditions are met, then check the last 2 conditions: if 162 < r2_U < 182 and 167 < r2_V < 187
So, if you change the loop control to work like this:
If the first 2 conditions are NOT met, then get the next frame, without even doing this: r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2], cv2.COLOR_BGR2YUV)
If the first 2 conditions are met, then check the last 2 conditions; if both are met, print the frame, otherwise get the next frame.
Let's say a one-minute video has 1500 frames (25 frames/second); you could save nearly 3000 operations.
As I have many videos that are 2 hours long, about 180,000 frames each, it could save a lot of computation, right?
Let me know what you think.
Thanks, -
If the first 2 conditions are NOT met, then get the next frame, without even doing this: r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2], cv2.COLOR_BGR2YUV)
But yes, good idea to speed up the code; try this, for example:
Code:
def process_img(img_rgb, frame_number):
    r1 = cv2.cvtColor(img_rgb[y1:y2,x1:x2], cv2.COLOR_BGR2YUV)
    if 147 < np.average(r1[:,:,1]) < 167 and 44 < np.average(r1[:,:,2]) < 64:
        r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2], cv2.COLOR_BGR2YUV)
        if 162 < np.average(r2[:,:,1]) < 182 and 167 < np.average(r2[:,:,2]) < 187:
            print(frame_number)
I tried jagabo's idea of checking HUE instead. To explain: right now we check for the correct color using chroma U and V; jagabo instead checks only a color in HSV color space, which is defined by hue, saturation and value. It might be even faster, not sure; I do not know which conversion is faster, RGB to YUV or RGB to HSV.
-
This is using HSV colors (hue and saturation) instead of YUV (U and V):
Code:
def process_img(img_rgb, frame_number):
    r1 = cv2.cvtColor(img_rgb[y1:y2,x1:x2], cv2.COLOR_BGR2HSV)
    if 90 < np.average(r1[:,:,0]) < 100 and 125 < np.average(r1[:,:,1]) < 150:
        r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2], cv2.COLOR_BGR2HSV)
        if 134 < np.average(r2[:,:,0]) < 152 and 130 < np.average(r2[:,:,1]) < 150:
            print(frame_number)
But the speed is about the same, it seems.
It seems more straightforward and more intuitive, though, to look for a hue and saturation range than a U and V range. -
This is jagabo's method in OpenCV, using masks with hue and saturation, but changed a bit: if at least half of the points in a rectangle are in range, it qualifies:
Code:
def process_img(img_rgb, frame_number):
    r1 = cv2.cvtColor(img_rgb[y1:y2,x1:x2], cv2.COLOR_BGR2HSV)
    lower = np.array([92,125,0])   # hue from 92 to 98, saturation from 125 to 150, any value
    upper = np.array([98,150,255])
    mask1 = cv2.inRange(r1, lower, upper)
    if np.average(mask1) > 128:
        r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2], cv2.COLOR_BGR2HSV)
        lower = np.array([134,130,0])
        upper = np.array([152,150,255])
        mask2 = cv2.inRange(r2, lower, upper)
        if np.average(mask2) > 128:
            print(frame_number)
-
So this brings us back to checking U and V from YUV; it simplifies the code, and it is just as fast!
Code:
def process_img(img_rgb, frame_number):
    r1 = cv2.cvtColor(img_rgb[y1:y2,x1:x2], cv2.COLOR_BGR2YUV)
    lower = np.array([0,147,44])   # any luma, U from 147 to 167, V from 44 to 64
    upper = np.array([255,167,64])
    mask1 = cv2.inRange(r1, lower, upper)
    if np.average(mask1) > 160:
        r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2], cv2.COLOR_BGR2YUV)
        lower = np.array([0,162,167])
        upper = np.array([255,182,187])
        mask2 = cv2.inRange(r2, lower, upper)
        if np.average(mask2) > 160:
            print(frame_number)
But hue and saturation somehow look more intuitive for setting those values. -
Given that U,V are components of the video and H,S must be calculated from U,V, I'd expect the former to be a little faster. No doubt it's done with a lookup table, so there's not much of a speed penalty.
I see it as just two ways of addressing the colors: U,V are Cartesian coordinates, H,S are the equivalent polar coordinates. People may more intuitively understand H,S, but they are harder to get when dealing with YUV video, where you already have the U,V values and the tools are there to show them.
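To make that Cartesian/polar picture concrete, here is a minimal Python sketch (assuming 8-bit U and V centered at 128; the exact hue origin and scale differ between tools, so the numbers are only illustrative):
Code:
import math

def uv_to_hue_sat(u, v):
    # Treat (U, V) as Cartesian coordinates centered at (128, 128):
    # the angle is the hue, the radius is the saturation.
    du, dv = u - 128, v - 128
    hue = math.degrees(math.atan2(dv, du)) % 360
    sat = math.hypot(du, dv)
    return hue, sat

# Midpoints of the U/V ranges used earlier in the thread:
print(uv_to_hue_sat(157, 54))   # rectangle 1: U ~157, V ~54
print(uv_to_hue_sat(172, 177))  # rectangle 2: U ~172, V ~177
-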
Maybe some little utility that reads coordinates and values from the mouse, in whatever format is used, using OpenCV, from the video, maybe with a slider to it. I'll look into something tomorrow.
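Until then, a minimal sketch of such a utility using OpenCV's built-in selectROI() (the file name is hypothetical; point it at any grabbed frame):
Code:
import cv2

img = cv2.imread("I1_1087.png")  # hypothetical frame grab
if img is not None:
    # Drag a rectangle with the mouse, then press Enter or Space to confirm
    x, y, w, h = cv2.selectROI("pick crop", img, showCrosshair=True)
    cv2.destroyAllWindows()
    # numpy-style coordinates, ready for img[y1:y2, x1:x2]
    print(f"y1:y2, x1:x2 = {y}:{y + h}, {x}:{x + w}")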
-
Hello:
Have you two come to an agreement? Which method is both intuitive and performant, using U,V or hue and saturation?
By the way, please let me know how to get the coordinates in your code, as in my post #68.
Thanks, -
For me the speed was about the same, but perhaps those 2000-frame samples and small areas could not reveal much of a speed difference at all.
I might post a utility here that gives you the cropping area directly in numpy coordinates. -
Good to hear that.
However, please let me know how to get the coordinates in your code, as in my post #68. -
I sometimes use Animate() to vary a setting over a range of values. Here's an example with hues and saturations animated over a full UV plane:
Code:
function GreyRamp()
{
    black = BlankClip(color=$000000, width=1, height=256, pixel_type="RGB32")
    white = BlankClip(color=$010101, width=1, height=256, pixel_type="RGB32")
    StackHorizontal(black, white)
    StackHorizontal(last, last.RGBAdjust(rb=2, gb=2, bb=2))
    StackHorizontal(last, last.RGBAdjust(rb=4, gb=4, bb=4))
    StackHorizontal(last, last.RGBAdjust(rb=8, gb=8, bb=8))
    StackHorizontal(last, last.RGBAdjust(rb=16, gb=16, bb=16))
    StackHorizontal(last, last.RGBAdjust(rb=32, gb=32, bb=32))
    StackHorizontal(last, last.RGBAdjust(rb=64, gb=64, bb=64))
    StackHorizontal(last, last.RGBAdjust(rb=128, gb=128, bb=128))
}

function AnimateHS(clip v, int StartH, int EndH, int StartS, EndS)
{
    black = BlankClip(v)
    hsmask = v.MaskHS(StartHue=StartH, EndHue=EndH, MinSat=StartS, MaxSat=EndS)
    Overlay(v, black, mask=hsmask)
    Subtitle("StartHue="+String(StartH)+" EndHue="+String(EndH))
    Subtitle("StartSat="+String(StartS)+" EndSat="+String(EndS), y=20)
}

GreyRamp()
ConvertToYV24()
YtoUV(last, TurnRight(last), ColorYUV(cont_y=-256))
black = BlankClip(last)
ah = Animate(0, 180, "AnimateHS", last,0,10,0,100, last,350,360,0,100)
as = Animate(0, 180, "AnimateHS", last,0,360,0,10, last,0,360,90,100)
ab = Animate(0, 180, "AnimateHS", last,0,10,0,10, last,350,360,90,100)
StackHorizontal(ah, as, ab)
And here is a script that steps through each hue, one per frame, on frame 656 of your video:
Code:
function ShowHS(clip c, int hue)
{
    MaskHS(c, startHue=hue, endHue=hue+1)
    Subtitle(string(hue))
}

v = LWLibavVideoSource("transit_C1.mp4", cache=false, prefer_hw=2)
Trim(v, 656,-1) # frame 656 only
Loop(360,0,0) # repeat for a total of 360 frames
Animate(0,358, "ShowHS", last,0, last,358)
[Attachment 60902 - Click to enlarge] -
Hello:
I want to know how you can display the hues from 161 to 163; I will try to do the same.
By the way, could you please explain what Animate() can do, and why you use this function here?
Thanks, -
If you want to see the actual colors corresponding to each hue (rather than the mask) you could use Overlay():
Code:
function ShowHS(clip c, int hue)
{
    MaskHS(c, startHue=hue, endHue=hue+1)
    Subtitle(string(hue))
}

v = LWLibavVideoSource("transit_C1.mp4", cache=false, prefer_hw=2)
v = v.ConvertToYV24() # necessary because of the non-mod4 frame size
Trim(v, 656,-1)
Loop(360,0,0)
mask = Animate(0,358, "ShowHS", last,0, last,358)
Overlay(last, mask, mask=mask.Invert())
[Attachment 60904 - Click to enlarge]
Animate() allows you to vary a filtering parameter over a number of frames.
http://avisynth.nl/index.php/Animate
So Animate(0,358, "ShowHS", last,0, last,358) calls ShowHS() over frames 0 to 358 with the variable hue linearly interpolated from 0 to 358:
frame 0: ShowHS(last, 0)
frame 1: ShowHS(last, 1)
frame 2: ShowHS(last, 2)
...
frame 357: ShowHS(last, 357)
frame 358: ShowHS(last, 358) -
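In other words, Animate() linearly interpolates its arguments across the frame range. A Python sketch of just that arithmetic (not part of the AviSynth code above):
Code:
def animate_value(frame, first, last, start, end):
    # Before the range the start value holds, after it the end value;
    # in between the value is linearly interpolated, as Animate() does.
    if frame <= first:
        return start
    if frame >= last:
        return end
    return start + (end - start) * (frame - first) / (last - first)

for f in (0, 1, 179, 358):
    print(f, round(animate_value(f, 0, 358, 0, 358)))  # 0, 1, 179, 358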
Hello:
I tried to use your code to show the hue, but the frames go by very quickly.
Can I change the code so that there is, say, a one-second pause on each frame so I can see more clearly, or some kind of slow motion?
Thanks, -
Open the AVS script in an editor like VirtualDub2. You can step through frame by frame, scrub through with the scrollbar, or "play" the script at the video's frame rate. If you really want to play the video in a media player at 1 fps, just add AssumeFPS(1.0) to the end of the script.
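If you preview frames with OpenCV instead of a media player, the same effect can be had with waitKey; a minimal sketch (the file name is hypothetical):
Code:
import cv2

cap = cv2.VideoCapture("transit_C1.mp4")  # hypothetical file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("frame", frame)
    if cv2.waitKey(1000) == 27:  # ~1 second per frame; Esc quits
        break
cap.release()
cv2.destroyAllWindows()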
-
Hello:
Thanks for your advice; I can see the frames in slow motion.
However, I can't see anything useful; 99% of the time I see only a blank screen.
I think Transit_C1.mp4 is not very good; it seems to have been combined from two different soccer games. I have now uploaded Transit_C2.mp4, which is much better.
You can test your code to see whether it works for Transit_C2.mp4 or not.
Thanks, -
The script is a tool to help you identify the hue of different picture elements. Parts of the picture that aren't the specified hue are shown as black. Only parts of the picture that match the hue value (printed in the top left corner) are shown. In the earlier version of the script they were shown in white. In the newer version they are shown in the native color. I used it here to find the hue of the yellowish circle of the transition effect in frame 656 of transit_C1.mp4 -- mostly 161 and 162. Original frame 656 and hue 162:
[Attachment 60906 - Click to enlarge]
It works fine with both videos. For C2 you need to change the frame number to 1295 (or one of the others with the transition) if you want to see the hue of the yellowish circle in the transition effect.
-
Another way to get the hue and saturation values is to use the Histogram() filter:
Code:
v = LWLibavVideoSource("transit_C1.mp4")
Histogram(v, mode="color2")
[Attachment 60907 - Click to enlarge]
The angle around the circle in the UV plot is the hue (I added the angle labels in white) and the distance from the center is the saturation. When I do it this way I usually make a rough estimate for the hue first:
Code:
MaskHS(startHue=150, endHue=180)
[Attachment 60908 - Click to enlarge]
I then narrow or widen the range until just the area I want is covered:
Code:
MaskHS(startHue=160, endHue=164)
[Attachment 60909 - Click to enlarge]
Then estimate the parameters for saturation and adjust its range similarly. This is where a program with an AVS editor comes in handy -- like VirtualDub2 or avspmod. You can change the script and press F5 to update the preview in the editor.
-
Hello:
I have yet another, more difficult transition scene: transit_L1.mp4.
A few factors make it more difficult: first, it was raining, so the video has extra noise; second, the transition scenes use yellow shapes that are not totally opaque, so I can still see some of the background images.
Let me know how you can detect such transition scenes. Using the rectangles in the red squares does not seem very good, for the above reasons.
[Attachment 60940 - Click to enlarge]
Please advise!
Thanks, -
For transit_L1.mp4, that yellow transparent transition, you just go for the whole yellow hue range, roughly 10 to 40, with two rectangles selected for the search (you can add more if that is not enough). That will always return a short sequence of 6 frames, because the transition does not change for that while. If you want to return only one frame, then you'd need to compare the found frames and ignore them if they are less than 6 apart (see the sketch after the output below).
Code:
crops = [ [306,      #x1
           64,       #y1
           306+48,   #x2
           64+40]    #y2
          ,
          [382, 326, 382+58, 326+42] ]

def process_img(img_rgb, frame_number):
    lower = np.array([10,125,0])
    upper = np.array([40,255,255])
    for c in crops:
        r = cv2.cvtColor(img_rgb[c[1]:c[3],c[0]:c[2]], cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(r, lower, upper)
        if np.average(mask) > 140:
            continue
        else:
            return
    print(frame_number)
Code:
1095
1096
1097
1098
1099
1547
1548
1549
1550
1551
1552
1553
>>>
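And if you only want one frame per transition, a minimal sketch of the "ignore frames less than 6 apart" idea mentioned above (assuming the matched frame numbers are collected into a list rather than printed):
Code:
matches = [1095, 1096, 1097, 1098, 1099,
           1547, 1548, 1549, 1550, 1551, 1552, 1553]

transitions = []
for f in matches:
    # Keep a frame only if it starts a new run (a gap of 6 or more frames)
    if not transitions or f - transitions[-1] >= 6:
        transitions.append(f)

print(transitions)  # [1095, 1547]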