VideoHelp Forum




  1. Member
     Join Date: Sep 2021
     Location: Valencia
    Originally Posted by _Al_
    rectangles were selected roughly in those marked areas:
    Hello:
    I just want to see the cropped image used in your Python code.
    I have the following Python code, which should just show the cropped area:

    Code:
    import cv2
    
    x1=212
    y1=76
    x2=234
    y2=88
    img = cv2.imread("D:\Videos\AVScripts\I1_1088.png")
    if img is not None:
        crop_img = img[x1:y1, x2:y2]
        cv2.imshow("cropped", crop_img)
        cv2.waitKey(0)
    But I got a runtime error:
    Message=OpenCV(4.5.3) C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-sn_xpupm\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'

    Source=C:\SoccerVideos\OpenCV\OpenCVCropImage\OpenCVCropImage\OpenCVCropImage.py
    StackTrace:
    File "C:\SoccerVideos\OpenCV\OpenCVCropImage\OpenCVCropImage\OpenCVCropImage.py", line 11, in <module> (Current frame)
    cv2.imshow("Cropped", crop_img)

    However, my code to show the original image works, like the following:
    Code:
    import cv2
    
    x1=212
    y1=76
    x2=234
    y2=88
    img = cv2.imread("D:/Videos/AVScripts/I1_1848.png", 1)
    if img is not None:
        cv2.imshow("Original", img)
        cv2.waitKey(0)
    I searched around but couldn't find a good solution for this issue.
    Any suggestions?
    Thanks,
  2. Originally Posted by zydjohn
    can you tell me why the AviSynth scripts are not working for Transit_C1.mp4 and Transit_C2.mp4?
    The only difference I can see is the resolution: they both use 720px by 406px, while the others use 768px by 432px.
    Of course it doesn't work with those videos. They use a different transition effect. You have to modify the script for each transition effect, different frame sizes, caps with different brightness, etc.
  3. Member
     Join Date: Sep 2021
     Location: Valencia
    Hello:
    I changed to the following AVS script, which should just show each frame, but it is not working:
    D:\Videos\AVScripts>type FindC1Frames.avs
    v = LSmashVideoSource("Transit_C1.mp4")
    testclip = VtoY(v)
    WriteFile(testclip, "Transit_C1.txt", "current_frame", append = false)

    D:\Videos\AVScripts>ffmpeg -hide_banner -loglevel error -i FindC1Frames.avs -c copy -f null -
    [avisynth @ 000002350639e0c0] Filter Error: Attempted to request a planar frame that wasn't mod2 in height!
    FindC1Frames.avs: Unknown error occurred

    How can I modify this script to show the "current_frame"?
  5. It cannot work with other resolutions because it is based on crop coordinates in the video. If you change the resolution, the crop values change, even for the same transition.
    OK, jagabo answered that already; I did not notice the third page. I'll respond later in the day.

    Just glancing at it, you have the syntax wrong; check my previous code:
    Code:
    cropped_img = img[y1:y2, x1:x2]  # numpy indexing is [rows, cols], i.e. [y, x]
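
    Putting that fix into the snippet from the first post, a minimal corrected version might look like this (same path and coordinates as above; the None check guards against the "!_src.empty()" assertion seen earlier):
    Code:
    import cv2

    x1, y1, x2, y2 = 212, 76, 234, 88

    # A raw string avoids backslash-escape surprises in the Windows path.
    img = cv2.imread(r"D:\Videos\AVScripts\I1_1088.png")
    if img is None:
        raise FileNotFoundError("imread returned None -- check the file path")

    # numpy slices are [rows, cols] = [y, x]; img[x1:y1, ...] would be
    # img[212:76, ...], an empty slice, which is what caused the error.
    crop_img = img[y1:y2, x1:x2]
    cv2.imshow("cropped", crop_img)
    cv2.waitKey(0)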
  6. Originally Posted by zydjohn
    Code:
    v = LSmashVideoSource("Transit_C1.mp4")
    testclip = VtoY(v)
    WriteFile(testclip, "Transit_C1.txt", "current_frame",  append = false)
    How can I modify this script to show the "current_frame"?
    The script is crashing because YV12 clips must have mod2 (even) dimensions. Your source is 720x406 YV12, so the result of VtoY() would be 360x203. YV12 can't be 203 pixels tall, so the function fails. You can get around that by cropping, adding borders, or using a color format that doesn't use chroma subsampling (like YV24).
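
    Spelled out as a quick sketch (just redoing the arithmetic above in Python):
    Code:
    w, h = 720, 406
    vtoy_w, vtoy_h = w // 2, h // 2  # VtoY() returns the half-size V plane as luma
    print(vtoy_w, vtoy_h)            # 360 203
    print(vtoy_h % 2 == 0)           # False: 203 is odd, so it can't be YV12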

    But the rest of your script doesn't make sense. All it does is write the frame number of each frame. What exactly were you trying to accomplish?
  7. Member
     Join Date: Sep 2021
     Location: Valencia
    The script doesn't need to make sense; I just wanted to show you that it is not working with YV12 clips.
    I actually want to calculate AverageLuma() for each frame.
    But since the script didn't work, I simply wanted to know whether I could show the current frame.
    However, since YV12 won't accept this video, how can I run a similar script using YV24?
    Thanks,
  8. Member
     Join Date: Sep 2021
     Location: Valencia
    Hello:
    Let me know if I understand correctly.
    I think you want to select a rectangle from each of the areas indicated by the red arrows, since those two areas use a solid color to cover the background objects. Therefore, if a similar transition scene appears, the selected rectangles will remain the same, and thus could have the same average V values, right?
    However, there is something I don't quite understand: the coordinates.
    For this image from my image collection, named I1_1087.PNG:
    It has a width of 768 pixels and a height of 432 pixels.
    Inside the upper red arrow, the arrowhead's coordinates in MS Paint are (191, 91); inside the lower red arrow, the arrowhead's coordinates in MS Paint are (592, 374).
    So I don't know which coordinates you are using in the Python code.
    You can see my picture:

    [Attachment 60877]

    Please let me know how you find the rectangle's coordinates in your code. Do you open the image with MS Paint or other software?
    Thanks,
  9. Originally Posted by zydjohn
    The script doesn't need to make sense; I just wanted to show you that it is not working with YV12 clips.
    I actually want to calculate AverageLuma() for each frame.
    But since the script didn't work, I simply wanted to know whether I could show the current frame.
    However, since YV12 won't accept this video, how can I run a similar script using YV24?
    Thanks,
    Here's an example using YV12:
    Code:
    v = LSmashVideoSource("Transit_C1.mp4")
    testclip = VtoY(v.Crop(0,0,-0,-2)) # crop v to mod4 for VtoY()
    space = " " # convoluted way to get a space between current_frame and AverageLuma
    WriteFile(testclip, "Transit_C1.txt", "current_frame", "space", "AverageLuma",  append = false)
    Or using YV24:
    Code:
    v = LSmashVideoSource("Transit_C1.mp4").ConvertToYV24()
    testclip = VtoY(v)
    space = " " # convoluted way to get a space between current_frame and AverageLuma
    WriteFile(testclip, "Transit_C1.txt", "current_frame", "space", "AverageLuma",  append = false)
    And it's not necessary to use VtoY() in the first script. You could simply measure AverageChromaV directly:

    Code:
    vid = LSmashVideoSource("Transit_C1.mp4")
    space = " " # convoluted way to get a space between current_frame and AverageLuma
    WriteFile(vid, "Transit_C1.txt", "current_frame", "space", "AverageChromaV",  append = false)
    Last edited by jagabo; 22nd Sep 2021 at 11:01.
  10. Member
      Join Date: Sep 2021
      Location: Valencia
    Originally Posted by _Al_
    This seems more accurate for that transparent transition.
    The least transparent areas were selected (blue and purple gradients), and the U and V values are verified at the same time. So you can use a very wide range of -10 to +10 for the thresholds and it still finds all of it, and only those parts.

    The more conditions (in this case 4), the higher the tolerance you can allow for the values.
    Code:
    import numpy as np
    import cv2
    
    #vapoursynth crop would be: clip = clip.std.CropAbs(width=22, height=12, left=212, top=76)
    x1=212
    y1=76
    x2=212+22
    y2=76+12
    
    
    #vapoursynth crop would be: clip = clip.std.CropAbs(width=22, height=14, left=580, top=316)
    X1=580
    Y1=316
    X2=580+22
    Y2=316+14
        
    def process_img(img_rgb, frame_number):
        r1 = cv2.cvtColor(img_rgb[y1:y2,x1:x2],cv2.COLOR_BGR2YUV)
        r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2],cv2.COLOR_BGR2YUV)
        r1_U = np.average(r1[:,:,1])
        r1_V = np.average(r1[:,:,2])
        r2_U = np.average(r2[:,:,1])
        r2_V = np.average(r2[:,:,2])
    ##    if frame_number == 1233: #for transit_I2.mp4
    ##        print(f'{frame_number}\n {r1_U}  {r1_V}\n {r2_U}  {r2_V}')      
        if 147<r1_U< 167 and 44<r1_V<64 and 162<r2_U<182 and 167<r2_V<187:
            print(frame_number)
            
    vidcap = cv2.VideoCapture(r'transit_I1.mp4')
    frame_number = 0
    while True:
        success, image = vidcap.read()
        if not success:
            break
        process_img(image, frame_number)
        frame_number += 1
    For transit_I1.mp4 (there are actually three transitions in transit_I1.mp4):
    Code:
    12
    1087
    1847
    And for transit_I2.mp4:
    Code:
    1233
    2513
    I am thinking this way; let me know whether I am right:
    In your code, you perform four comparisons:
    if 147<r1_U< 167 and 44<r1_V<64 and 162<r2_U<182 and 167<r2_V<187:
        print(frame_number)
    If all four conditions are met, then print the frame_number.

    If I guess correctly, most frames (more than half) don't meet any of the conditions.
    So try the first 2 conditions first: if 147<r1_U<167 and 44<r1_V<64
    If the first 2 conditions are met, then evaluate the last 2 conditions: if 162<r2_U<182 and 167<r2_V<187
    So, you could change the loop to work like this:
    If the first 2 conditions are NOT met, get the next frame. Don't even do this: r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2],cv2.COLOR_BGR2YUV)
    If the first 2 conditions are met, check the last 2 conditions; if both are met, print the frame; otherwise, get the next frame.
    Say a one-minute video has 1500 frames (25 frames/second); you could save nearly 3000 operations.
    As I have many videos that are 2 hours long, about 180,000 frames each, it could save a lot of computation, right?
    Let me know what you think.
    Thanks,
  11. If the first 2 conditions are NOT met, get the next frame. Don't even do this: r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2],cv2.COLOR_BGR2YUV)
    Notice that only a cropped area is ever converted to YUV; the whole image is never converted.

    But yes, it's a good idea to speed up the code; try this, for example:
    Code:
    def process_img(img_rgb, frame_number):
        r1 = cv2.cvtColor(img_rgb[y1:y2,x1:x2],cv2.COLOR_BGR2YUV)
        if  147<np.average(r1[:,:,1])<167 and 44<np.average(r1[:,:,2])<64:
            r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2],cv2.COLOR_BGR2YUV)    
            if 162<np.average(r2[:,:,1])<182 and 167<np.average(r2[:,:,2])<187:  
                print(frame_number)
    I tried it and it really is faster.

    I'll try jagabo's idea of checking the hue instead. To explain: we are checking for the correct color, and right now we do that by checking chroma U and V. Jagabo instead looks for a color in the HSV color space, which is defined by hue, saturation, and value. It might be even faster, I'm not sure; I don't know which conversion is faster, RGB to YUV or RGB to HSV.
    Last edited by _Al_; 22nd Sep 2021 at 18:19.
  12. This uses HSV colors (hue and saturation) instead of YUV (U and V):
    Code:
    def process_img(img_rgb, frame_number):
        r1 = cv2.cvtColor(img_rgb[y1:y2,x1:x2], cv2.COLOR_BGR2HSV)
        if  90<np.average(r1[:,:,0])<100 and 125<np.average(r1[:,:,1])<150:
            r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2], cv2.COLOR_BGR2HSV)
            if 134<np.average(r2[:,:,0])<152 and 130<np.average(r2[:,:,1])<150:   
                print(frame_number)
    AviSynth uses different values for hue; OpenCV uses hue 0 to 179 (for 8-bit images), with saturation 0 to 255 and value 0 to 255.

    But the speed is about the same, it seems.
    Still, it seems more straightforward and intuitive to look for a hue and saturation range than a U and V range.
  13. This is jagabo's method in OpenCV, using masks with hue and saturation, but changed a bit: if at least half the points in a rectangle are in range, it qualifies:
    Code:
    def process_img(img_rgb, frame_number):
        r1 = cv2.cvtColor(img_rgb[y1:y2,x1:x2], cv2.COLOR_BGR2HSV)
        lower = np.array([92,125,0])  #hue from 92 to 98, saturation from 125 to 150, any value
        upper = np.array([98,150,255])
        mask1 = cv2.inRange(r1, lower, upper)
        if np.average(mask1) > 128:
            r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2], cv2.COLOR_BGR2HSV)
            lower = np.array([134,130,0])
            upper = np.array([152,150,255])
            mask2 = cv2.inRange(r2, lower, upper)
            if np.average(mask2) > 128:
                 print(frame_number)
    Surprisingly, the speed is fine, it looks the same, and the code is easier to read.
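
    A side note on the 128 threshold: cv2.inRange() writes 255 for in-range pixels and 0 otherwise, so the mask's average is 255 times the fraction of matching pixels, and an average above 128 means more than half of the rectangle matched. A tiny sketch with a made-up mask:
    Code:
    import numpy as np

    # Hypothetical 2x2 mask: three of four pixels were in range.
    demo_mask = np.array([[0, 255], [255, 255]], dtype=np.uint8)
    print(np.average(demo_mask))        # 191.25
    print(np.average(demo_mask) > 128)  # True: more than half in range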
    Last edited by _Al_; 22nd Sep 2021 at 20:30.
  14. So this brings us back to checking U and V from YUV; this cleans up the code, and it is just as fast!
    Code:
    def process_img(img_rgb, frame_number):
        r1 = cv2.cvtColor(img_rgb[y1:y2,x1:x2], cv2.COLOR_BGR2YUV)
        lower = np.array([0,147,44]) #any luma, U from 147 to 167, V from 44 to 64
        upper = np.array([255,167,64])
        mask1 = cv2.inRange(r1, lower, upper)
        if np.average(mask1) > 160:
            r2 = cv2.cvtColor(img_rgb[Y1:Y2,X1:X2], cv2.COLOR_BGR2YUV)
            lower = np.array([0,162,167])
            upper = np.array([255,182,187])
            mask2 = cv2.inRange(r2, lower, upper)
            if np.average(mask2) > 160:
                 print(frame_number)
    Notice that by changing the threshold value in the condition (in this case 160) you can fine-tune it if it returns too many frames from the transition area.

    But hue and saturation still look somehow more intuitive for setting those values.
  15. Originally Posted by _Al_
    U,V vs. H,S... But the speed is about the same, it seems.
    Given that U,V are components of the video and H,S must be calculated from U,V, I'd expect the former to be a little faster. No doubt it's done with a lookup table, so there's not much of a speed penalty.

    Originally Posted by _Al_
    It seems more straightforward and intuitive to look for a hue and saturation range than a U and V range.
    I see it as just two ways of addressing the colors. U,V are Cartesian coordinates; H,S are the equivalent polar coordinates. People may understand H,S more intuitively, but they are harder to get at when dealing with YUV video, where you already have the U,V values and the tools are there to show them.
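
    The Cartesian/polar relationship can be made concrete with a small sketch (hue conventions differ between tools -- AviSynth and OpenCV use different zero angles and scales -- so the exact numbers are illustrative only):
    Code:
    import math

    def uv_to_polar(u, v, center=128.0):
        # Treat (U, V) as Cartesian offsets from the neutral point (128, 128):
        # the angle is the hue, the distance from the center is the saturation.
        du, dv = u - center, v - center
        hue_deg = math.degrees(math.atan2(dv, du)) % 360
        saturation = math.hypot(du, dv)
        return hue_deg, saturation

    # Midpoints of the first rectangle's U and V ranges from the code above:
    print(uv_to_polar(157, 54))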
  16. Maybe some little utility that reads coordinates and values from the mouse, in whatever format is used, using OpenCV, from the video, maybe with a slider to it. I'll look into it tomorrow.
  17. Member
      Join Date: Sep 2021
      Location: Valencia
    Hello:
    Have you come to an agreement? Which method is both intuitive and performant: U,V or hue and saturation?
    By the way, please let me know how to get the coordinates in your code, as in my post #68.
    Thanks,
  18. For me the speed was about the same, but perhaps those 2000-frame samples and small areas could not reveal much of a speed difference at all.

    I might post a utility here that gives you the cropping area directly in numpy coordinates.
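
    In the meantime, a rough sketch of such a utility (not the promised one, just an illustration; the file name is an example): click two corners on the first frame and it prints the numpy crop slice:
    Code:
    import cv2

    points = []

    def on_mouse(event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN:
            points.append((x, y))
            print(f"clicked ({x}, {y})")
            if len(points) == 2:
                (ax, ay), (bx, by) = points
                # numpy slice is [y1:y2, x1:x2]
                print(f"img[{min(ay, by)}:{max(ay, by)}, {min(ax, bx)}:{max(ax, bx)}]")

    cap = cv2.VideoCapture("transit_I1.mp4")
    ok, frame = cap.read()
    if ok:
        cv2.namedWindow("pick")
        cv2.setMouseCallback("pick", on_mouse)
        cv2.imshow("pick", frame)
        cv2.waitKey(0)
    cap.release()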
  19. Member
      Join Date: Sep 2021
      Location: Valencia
    Good to hear.
    However, please let me know how to get the coordinates in your code, as in my post #68.
  20. I sometimes use Animate() to vary a setting over a range of values. Here's an example with hues and saturations animated over a full UV plane:

    Code:
    function GreyRamp()
    {
       black = BlankClip(color=$000000, width=1, height=256, pixel_type="RGB32")
       white = BlankClip(color=$010101, width=1, height=256, pixel_type="RGB32")
       StackHorizontal(black,white)
       StackHorizontal(last, last.RGBAdjust(rb=2, gb=2, bb=2))
       StackHorizontal(last, last.RGBAdjust(rb=4, gb=4, bb=4))
       StackHorizontal(last, last.RGBAdjust(rb=8, gb=8, bb=8))
       StackHorizontal(last, last.RGBAdjust(rb=16, gb=16, bb=16))
       StackHorizontal(last, last.RGBAdjust(rb=32, gb=32, bb=32))
       StackHorizontal(last, last.RGBAdjust(rb=64, gb=64, bb=64))
       StackHorizontal(last, last.RGBAdjust(rb=128, gb=128, bb=128))
    }
    
    
    function AnimateHS(clip v, int StartH, int EndH, int StartS, int EndS)
    {
        black = BlankClip(v)
        hsmask = v.MaskHS(StartHue=StartH, EndHue=EndH, MinSat=StartS, MaxSat=EndS)
        Overlay(v, black, mask=hsmask)
        Subtitle("StartHue="+String(StartH)+"  EndHue="+String(EndH))
        Subtitle("StartSat="+String(StartS)+"  EndSat="+String(EndS), y=20)
    }
    
    
    GreyRamp()
    ConvertToYV24()
    YtoUV(last, TurnRight(last), ColorYUV(cont_y=-256))
    black = BlankClip(last)
    
    ah = Animate(0, 180, "AnimateHS", last,0,10,0,100,  last,350,360,0,100)
    as = Animate(0, 180, "AnimateHS", last,0,360,0,10, last,0,360,90,100)
    ab = Animate(0, 180, "AnimateHS", last,0,10,0,10, last,350,360,90,100)
    
    StackHorizontal(ah, as, ab)
    Using frame 656 of transit_C1.mp4 to search for the hue of the yellowish band:

    Code:
    function ShowHS(clip c, int hue)
    {
        MaskHS(c, startHue=hue, endHue=hue+1)
        Subtitle(string(hue))
    }
    
    v = LWLibavVideoSource("transit_C1.mp4", cache=false, prefer_hw=2)
    Trim(v, 656,-1) # frame 656 only
    Loop(360,0,0) # repeat for a total of 360 frames
    Animate(0,358, "ShowHS", last,0,  last,358)
    You can see that the yellow band has a hue around 161 to 163:
    [Attachment 60902]
  21. Member
      Join Date: Sep 2021
      Location: Valencia
    Hello:
    I want to know how you display the hues from 161 to 163; I will try to do the same.
    By the way, could you please explain what Animate() does, and why you used it here?
    Thanks,
  22. Originally Posted by zydjohn
    I want to know how you display the hues from 161 to 163; I will try to do the same.
    If you want to see the actual colors corresponding to each hue (rather than the mask) you could use Overlay():

    Code:
    function ShowHS(clip c, int hue)
    {
        MaskHS(c, startHue=hue, endHue=hue+1)
        Subtitle(string(hue))
    }
    
    v = LWLibavVideoSource("transit_C1.mp4", cache=false, prefer_hw=2)
    v = v.ConvertToYV24() # necessary because of the non-mod4 frame size
    Trim(v, 656,-1)
    Loop(360,0,0)
    mask = Animate(0,358, "ShowHS", last,0,  last,358)
    Overlay(last, mask, mask=mask.Invert())
    All colors converted to black, except the current hue:
    [Attachment 60904]



    Originally Posted by zydjohn
    By the way, could you please explain what Animate() does, and why you used it here?
    Animate() allows you to vary a filtering parameter over a number of frames.

    http://avisynth.nl/index.php/Animate

    So Animate(0,358, "ShowHS", last,0, last,358) calls ShowHS() over frames 0 to 358 with the variable hue linearly interpolated from 0 to 358:

    frame 0: ShowHS(last, 0)
    frame 1: ShowHS(last, 1)
    frame 2: ShowHS(last, 2)
    ...
    frame 357: ShowHS(last, 357)
    frame 358: ShowHS(last, 358)
  23. Member
      Join Date: Sep 2021
      Location: Valencia
    Hello:
    I tried using your code to show the hue, but the frames go by very quickly.
    Can I change the code to pause for about one second on each frame so I can see it more clearly, or get some kind of slow motion?
    Thanks,
  24. Open the AVS script in an editor like VirtualDub2. You can step through frame by frame, scrub through with the scrollbar, or "play" the script at the video's frame rate. If you really want to play the video in a media player at 1 fps, just add AssumeFPS(1.0) to the end of the script.
  25. Member
      Join Date: Sep 2021
      Location: Valencia
    Hello:
    Thanks for your advice; I can see the frames in slow motion.
    However, I can't see anything useful; 99% of the time I see only a blank screen.
    I think Transit_C1.mp4 is not very good; it seems it was combined from two different soccer games. So I have uploaded Transit_C2.mp4, which is much better.
    You can test your code to see whether it works for Transit_C2.mp4.
    Thanks,
    [Attached file: Transit_C2.mp4]
  26. Originally Posted by zydjohn
    I can't see anything useful; 99% of the time I see only a blank screen.
    The script is a tool to help you identify the hue of different picture elements. Parts of the picture that aren't the specified hue are shown as black. Only parts of the picture that match the hue value (printed in the top left corner) are shown. In the earlier version of the script they were shown in white. In the newer version they are shown in the native color. I used it here to find the hue of the yellowish circle of the transition effect in frame 656 of transit_C1.mp4 -- mostly 161 and 162. Original frame 656 and hue 162:

    [Attachment 60906]


    Originally Posted by zydjohn
    I think Transit_C1.mp4 is not very good; it seems it was combined from two different soccer games. So I have uploaded Transit_C2.mp4, which is much better.
    You can test your code to see whether it works for Transit_C2.mp4.
    It works fine with both videos. For C2 you need to change the frame number to 1295 (or one of the others with the transition) if you want to see the hue of the yellowish circle in the transition effect.
    Last edited by jagabo; 23rd Sep 2021 at 16:57.
  27. Another way to get the hue and saturation values is to use the Histogram() filter:

    Code:
    v = LWLibavVideoSource("transit_C1.mp4")
    Histogram(v, mode="color2")
    [Attachment 60907]


    The angle around the circle in the UV plot is the hue (I added the angle labels in white) and the distance from the center is the saturation. When I do it this way I usually make a rough estimate for the hue first:

    Code:
    MaskHS(startHue=150, endHue=180)
    Then examine the result:

    [Attachment 60908]


    I then narrow or widen the range until just the area I want is covered:

    Code:
    MaskHS(startHue=160, endHue=164)
    [Attachment 60909]


    Then estimate the parameters for saturation and adjust its range similarly. This is where a program with an AVS editor comes in handy -- like VirtualDub2 or avspmod. You can change the script and press F5 to update the preview in the editor.
    Last edited by jagabo; 23rd Sep 2021 at 17:52.
  28. Member
      Join Date: Sep 2021
      Location: Valencia
    Hello:
    I have one more difficult transition scene: transit_L1.mp4.
    A few factors make it more difficult: first, it was raining, so the video has extra noise; second, the transition scenes use yellow shapes that are not totally opaque, so I can still see some of the background images.
    Let me know how you would detect such transition scenes. Using the rectangles in the red squares does not seem very good, for the above reasons.
    [Attachment 60940]

    Please advise!
    Thanks,
  29. Member
      Join Date: Sep 2021
      Location: Valencia
    I think transit_L1.mp4 was missing. Here it is.
    [Attached file: transit_L1.mp4]
  30. For transit_L1.mp4, that yellow transparent transition, you just go for the whole yellow hue range, roughly 10 to 40, with two rectangles selected for the search (you can add more if that's not enough). That will always return a short sequence of about 6 frames, because the transition does not change for that long. If you want to return only one frame per transition, you'd need to compare the found frames and ignore any that are less than 6 frames apart (see the sketch after the output below).
    Code:
    crops = [
        [306, 64, 306+48, 64+40],   # [x1, y1, x2, y2]
        [382, 326, 382+58, 326+42],
    ]
    
    def process_img(img_rgb, frame_number):
        lower = np.array([10,125,0])    # hue 10..40, saturation 125..255, any value
        upper = np.array([40,255,255])
        for c in crops:
            r = cv2.cvtColor(img_rgb[c[1]:c[3], c[0]:c[2]], cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(r, lower, upper)
            if np.average(mask) > 140:
                continue  # this rectangle matches; check the next one
            else:
                return    # a rectangle failed, so this frame does not qualify
        print(frame_number)  # all rectangles matched
    Code:
    1095
    1096
    1097
    1098
    1099
    1547
    1548
    1549
    1550
    1551
    1552
    1553
    >>>
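
    As for collapsing those runs into one frame per transition, a minimal sketch of the "ignore frames less than 6 apart" idea mentioned above, using the output frames as input:
    Code:
    found = [1095, 1096, 1097, 1098, 1099,
             1547, 1548, 1549, 1550, 1551, 1552, 1553]

    transitions = []
    for f in found:
        # start a new group when the gap to the previous frame is 6 or more
        if not transitions or f - transitions[-1][-1] >= 6:
            transitions.append([f])
        else:
            transitions[-1].append(f)

    print([group[0] for group in transitions])  # [1095, 1547]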
    Last edited by _Al_; 25th Sep 2021 at 10:38.


